\section{Introduction} The magnetohydrodynamic (MHD) equilibria in a smooth bounded domain $\Om\subset\RR^3$ are often described by the solutions to the system of equations \begin{subequations}\label{fixed} \begin{align}\label{Eq.MHD} B\times \curl B+\nabla P&=0 \qquad\text{ in }\Om\,,\\ \label{eq.MHD2}\Div B&=0 \qquad\text{ in }\Om\,,\\ \label{eq.MHD3}B\cdot N&=0 \qquad\text{ on } \partial\Om\,. \end{align} Here the vector field $B(x)$ is the steady magnetic field of a perfectly conducting plasma, the scalar function $P(x)$ is the plasma pressure and $N(x)$ denotes the outer unit normal at a boundary point $x\in\pd\Om$. It is also customary to assume that $P$ is constant on the boundary $\pd\Om$, so we can set \begin{equation}\label{Eq.MHD4} P=0 \qquad \text{ on } \partial\Om\,. \end{equation} \end{subequations} Equations~\eqref{fixed}, which we will refer to as the {\em fixed boundary problem}\/, model the equilibrium configurations of a plasma confined in the fixed magnetic domain~$\Om$. Because of this connection with plasma confinement, the most interesting case is when $\Om$ is a toroidal domain, that is, when the boundary of the domain is diffeomorphic to a torus. It is worth noting that these equations also describe stationary solutions to the 3D Euler equations in fluid mechanics, with $B$ playing the role of the velocity field of the fluid and $P$ being the negative of the Bernoulli function (see e.g.~\cite{AK}). In this paper we will be concerned with the problem of finding toroidal domains that admit MHD equilibria whose pressure is constant on the boundary but not in the interior. This problem can be traced back at least to the work of H.~Grad in the 1960s, who conjectured~\cite{Grad} that no smooth solutions fibring the domain with toroidal surfaces exist unless the domain is axially symmetric. An important, somewhat related result, due to Bruno and Laurence~\cite{T09} in the 1990s, is the existence of weak solutions with nonconstant stepped-pressure in nonsymmetric toroidal domains that are small perturbations of an axisymmetric solid torus. A very illuminating numerical implementation of this model suggesting the existence of stepped-pressure equilibria in toroidal domains far from axisymmetric was developed in~\cite{HHD07,HDDHMNL12}. However, Grad's influential conjecture remains wide open. A comprehensive recent account of the subject can be found in~\cite{CDG}, where the authors construct quasisymmetric smooth solutions with nonconstant pressure to the magnetohydrostatic equations, subject to a small force, on nonsymmetric toroidal domains that are also a small perturbation of the axisymmetric solid torus. Another related equation that appears in plasma physics, particularly concerning the design of plasma confinement devices for nuclear power generation, describes a free boundary steady state surrounded by vacuum with an external current $J\ext$. In terms of the interior and exterior magnetic fields, $B$ and $B\ext$, this system reads as \begin{subequations}\label{free} \begin{align} B\times \curl B+\nabla P&=0 \qquad\text{ in }\Om\,, \label{Eq.FBP1}\\ \Div B&=0 \qquad\text{ in }\Om\,, \label{Eq.FBP2}\\ \Div B\ext&=0 \qquad\text{ in }\RR^3\backslash\overline{\Om}\,, \label{Eq.FBP3}\\ \curl B\ext&=J\ext \qquad\text{ in }\RR^3\backslash\overline{\Om}\,, \label{Eq.FBP4}\\ (B-B\ext)\cdot N&=0 \qquad\text{ on } \partial\Om\,, \label{Eq.FBP5}\\ |B|^2-|B\ext|^2&=0 \qquad\text{ on } \partial\Om\,, \label{Eq.FBP6}\\ B\ext&\to0 \qquad\,\text{as } |x|\to\infty\,. 
\label{Eq.FBP7} \end{align} The jump condition~\eqref{Eq.FBP6} uses the fact that the (hydrodynamic) pressure in the vacuum is simply $\frac12|B\ext|^2$ and $P=0$ on $\pd\Om$. One usually assumes that the external current is a current sheet, i.e., \begin{equation}\label{Jext} J\ext = J\, dS \end{equation} where $dS$ is the surface measure on the boundary~$\pd\Om'$ of a certain domain $\Om'$ enclosing $\overline{\Om}$ and $J:\pd\Om'\to\RR^3$ is a tangent vector field on the surface. We will additionally impose the tangency condition \begin{equation}\label{Eq.FBP8} B\cdot N =0 \qquad\text{ on } \partial\Om \end{equation} \end{subequations} and refer to the system of equations~\eqref{free} as the {\em free boundary problem}\/. The main result of this article is the existence of piecewise smooth MHD equilibria with nonconstant stepped-pressure in a wide range of toroidal domains, which can be very different from an axisymmetric domain. The same philosophy works, with only minor modifications, for the fixed and free boundary problems. The equilibria we construct are not~$C^1$ (just like those of Bruno and Laurence for almost-axisymmetric domains), and in fact they feature singular current sheets (cf.~Remark~\ref{R:current}). The toroidal domains we consider can even be knotted in an arbitrarily complicated fashion. Specifically, the result applies to any toroidal domain with an analytic boundary satisfying a certain nondegeneracy assumption, which enables us to employ KAM-theoretic techniques in a certain step of the proof. \subsection{Nondegenerate toroidal domains} To define the nondegeneracy condition that our toroidal domains must satisfy, we need to introduce some notation. Firstly, recall that Hodge theory ensures the existence of a unique (modulo a multiplicative constant) solution to the boundary problem \[ \curl h=0\quad\text{in }\Om\,, \qquad \Div h=0\quad\text{in }\Om\,, \qquad h\cdot N=0 \quad\text{on }\pd\Om \] on a toroidal domain~$\Om\subset\RR^3$. We refer to~$h$ as the {\em harmonic field}\/ of~$\Om$. {\em Beltrami fields}\/ are solutions to the equation \begin{equation}\label{eqsB} \curl B=\lambda B\quad\text{in }\Om\,, \qquad B\cdot N=0 \quad\text{on }\pd\Om \end{equation} for some nonzero real constant~$\la$. The space of Beltrami fields on a toroidal domain~$\Om$ is infinite dimensional. Specifically, as the curl defines a self-adjoint operator with dense domain in the space $L^2_{\Div,h}(\Om)$ of square-integrable divergence-free fields that are $L^2$-orthogonal to the harmonic field~$h$, it is standard that there is an orthogonal basis of $L^2_{\Div,h}(\Om)$ consisting of Beltrami fields with zero harmonic part, i.e., solutions $\{B^n\}_{n=1}^\infty$ to Equation~\eqref{eqsB} for certain nonzero constants $\{\la^n\}_{n=1}^\infty$ that satisfy the additional constraint \[ \int_{\Om}B^n\cdot h\, dx=0\,. \] Note that $|\la^n|\to\infty$ as $n\to\infty$. Moreover, for all $\la\in \RR\backslash\{\la^n \}_{n=1}^\infty$, there is a unique Beltrami field with parameter~$\la$ and prescribed harmonic part, that is, a unique solution to the boundary problem~\eqref{eqsB} subject to the additional constraint \[ \int_{\Om}B\cdot h\, dx=1\,. \] In both cases (that is, when~$B$ is of this form and when $B=B^n$ for some~$n$), the trace of the Beltrami field on the boundary is a smooth vector field tangent to the (embedded, nonsymmetric, possibly knotted) toroidal surface~$\pd\Om$. In other words, $\pd\Om$ is an invariant torus of $B$. 
Now recall that this invariant torus is {\em Diophantine}\/ with frequency vector~$\om\in\RR^2$ if there exist global coordinates $\vp: \pd\Om\to\TT^2$, with $\TT:=\RR/2\pi\ZZ$, such that the restriction of the field~$B$ to~$\partial\Om$ reads in these coordinates as \begin{equation}\label{eq.torus} B|_{\partial\Om}=\om_1\, \pd_{\vp_1}+\om_2\, \pd_{\vp_2}\,, \end{equation} and $\om$ is a Diophantine vector. This means that there exist constants~$\ga>0$ and~$\tau>1$ such that \begin{equation}\label{eq:diof} |k\cdot \omega| \geq {\gamma}|k|^{-\tau} \end{equation} for any vector with integer components $k\in\ZZ^2\backslash\{0\}$. The ratio~$\om_1/\om_2$ modulo~$1$, which is a Diophantine number, is independent of the choice of coordinates~$\vp$. In this paper, we will say that a toroidal domain~$\Om\subset\RR^3$ is {\em nondegenerate of type I or II}\/ if there is a Beltrami field on~$\Om$ for which the boundary~$\pd\Om$ is a Diophantine invariant torus and if the determinant of certain $2\times 2$ constant matrices is not zero (type I) or not equal to a certain constant depending on the Beltrami field (type II). To streamline the Introduction, the expression of these matrices (which are the average on~$\TT^2$ of matrix-valued quantities involving the specific Beltrami field, the associated Diophantine frequency vector and the linearizing coordinates~$\vp$) is relegated to Definitions~\ref{D:torus} and~\ref{D:torusII} in the main text. To get some intuition about the meaning of this condition, recall that Beltrami fields appear in the plasma physics literature as {\em force-free fields}\/ with constant factor, so the nondegeneracy condition can be heuristically understood as the existence of a generic force-free field on the domain that is ergodic on the boundary. For concreteness, we shall refer to a Beltrami field with this property as a {\em nondegenerate Beltrami field of type I or II}\/. Some observations are in order. Firstly, since there are infinitely many curl eigenfields that do not necessarily vanish on the toroidal surface~$\pd\Om$, and since the set of Diophantine vectors has full measure, it is natural to conjecture that a ``generic'' toroidal domain should be nondegenerate in this sense. However, genericity questions for vector-valued eigenvalue problems are remarkably hard~\cite{Uhlenbeck,TAMS} and we have not been able to rigorously establish this claim. A particular case that one can study in detail is the class of thin toroidal domains, which one can understand as thickenings of an arbitrary closed curve in space. There one has a good understanding of the structure of harmonic fields, which can be used to rigorously show that thin toroidal domains are indeed nondegenerate. Details are given in Proposition~\ref{P:isot}. A concrete consequence of this fact is that there certainly are nondegenerate toroidal domains of any knot type, which are obviously not small perturbations of an axisymmetric domain. The boundary can be chosen to be smooth, and even analytic. \subsection{Statement of the main results} We are now ready to present our main result about the existence of MHD equilibria in nondegenerate toroidal domains. 
In the context of the fixed boundary problem, the result can be stated as follows: \begin{theorem}[Fixed boundary MHD equilibria]\label{T:main} Let $\Om_1\subset\RR^3$ be a nondegenerate toroidal domain of type I with analytic boundary, and let $B_1$ be a nondegenerate Beltrami field of type I with eigenvalue $\la_1$ on~$\Om_1$, in the sense of Definition~\ref{D:torus}. Then, for any $N>1$ and almost all $(\la_2,\dots,\la_N)\in\RR^{N-1}$ with $\la_j\neq\la_k$ for $j\neq k$, the following statements hold: \begin{enumerate} \item There exists a collection of ``nested'' nondegenerate toroidal domains of type I $\{\Om_k\}_{k=2}^N$ with analytic boundary, all of them diffeomorphic to~$\Om_1$, with the property that $\overline{\Om_{k-1}}\subset\Om_k$ for all $2\leq k\leq N$ (see Figure~\ref{fig:nested}). \item There is a piecewise smooth MHD equilibrium $(B,P)$ in the fixed domain~$\Om_N$, which satisfies Equations~\eqref{fixed}. \item For each $1\leq k\leq N$, the magnetic field and the pressure satisfy \[ \curl B = \la_k B \qquad\text{and}\qquad P= p_k \] in $\Om_k\backslash\overline{\Om_{k-1}}$. Here $\{p_k\}_{k=1}^{N-1}$ are distinct nonzero constants, $p_N:=0$, and we have set $\Om_0:=\emptyset$. Furthermore, $B=B_1$ in~$\Om_1$. \end{enumerate} \end{theorem} Likewise, for the free boundary problem, our main result can be stated as follows: \begin{theorem}[Free boundary MHD equilibria]\label{T:main2} Let $\Om\subset\RR^3$ be a nondegenerate toroidal domain of type II with analytic boundary, and let $B$ be a nondegenerate Beltrami field of type II with eigenvalue $\la$ on~$\Om$, in the sense of Definition~\ref{D:torusII}. Then there exists an external magnetic field $B\ext$ and a current sheet of the form $J\ext = J\, dS$, where $dS$ is the surface measure on the boundary of an analytic domain~$\Om'$ that is diffeomorphic to~$\Om$ and encloses~$\BOm$, and where $J$ is an analytic tangent vector field on~$\pd\Om'$, such that $(B,B\ext,J\ext,\Om)$ is a solution of the free boundary problem~\eqref{free} with $P=0$ in $\Om$. \end{theorem} A consequence of these theorems and of the above discussion about nondegenerate domains is the existence of piecewise smooth MHD equilibria with nonconstant pressure and (fixed or free) toroidal boundaries of any knot type. A precise statement is given in Corollary~\ref{C.main}. For the benefit of the reader, in Section~\ref{S:weak} we recall the definition of weak solutions to the system~\eqref{Eq.MHD}--\eqref{eq.MHD3}, which is required to make sense of MHD equilibria that are only piecewise smooth. \begin{figure}[t] \centering{ \fontsize{9pt}{11pt}\selectfont \def\svgwidth{3.333in} \resizebox{75mm}{!}{\input{drawing.pdf_tex}} \caption{A cross section of the nested toroidal domains.} \label{fig:nested} } \end{figure} It is worth mentioning that a minor modification of the proof of Theorem~\ref{T:main} makes it possible to prove the existence of Lipschitz-continuous force-free fields with a nonconstant proportionality factor on toroidal domains of complicated geometry. Details are provided in Theorem~\ref{T:ff}. This is interesting because, in a certain precise sense, smooth force-free fields with a nonconstant factor are rare, as discussed in~\cite{MYZ,ARMA}. \subsection{Strategy of the proof} The strategy of the proofs of Theorems~\ref{T:main} and~\ref{T:main2} is similar, so let us focus on the former. The basic idea behind Theorem~\ref{T:main} is motivated by the work of Bruno and Laurence~\cite{T09} on MHD equilibria on small perturbations of an axisymmetric toroidal domain. 
The perturbative construction they use in their proof, however, hinges strongly on having approximately axisymmetric solutions, where one can obtain very precise information about the solutions and their trajectories, and cannot be extended to deal with toroidal domains that are not approximately symmetric. To explain the gist of our approach, let us stick to the simplest case, $N=2$. The case of an arbitrary~$N\geq2$ is obtained by repeating the process $N-1$ times. Our initial observation (Lemma~\ref{L.Euler}) is that, if we have two Beltrami fields $B_1,B_2$ defined on two disjoint domains $\Om_1,\Om_2':=\Om_2\backslash\overline{\Om_1}$, with $\overline{\Om_1}\subset\Om_2$, one can define a piecewise smooth MHD equilibrium on the domain~$\Om_2$, with a certain piecewise constant pressure function~$P$, provided that the difference $|B_1|^2-|B_2|^2$ is constant on~$\pd\Om_1$. We start by choosing $B_1$ as a nondegenerate Beltrami field in the toroidal domain $\Om_1$, so that the analytic surface $\pd\Om_1$ is a Diophantine invariant torus of $B_1$. To construct a Beltrami field $B_2$ in an exterior neighborhood of $\pd\Om_1$, we use a version of the Cauchy--Kovalevskaya theorem for the curl operator~\cite{Annals} (see also Appendix~\ref{Ap1}) with a Cauchy datum given by an analytic vector field tangent to $\pd\Om_1$. A key aspect of this theorem is that one can only guarantee the existence of a local solution to the equation provided that the Cauchy datum satisfies an additional constraint. When one takes this constraint into account, showing that $|B_1|^2-|B_2|^2$ is constant on~$\pd\Om_1$ becomes equivalent to proving the existence of an analytic solution to a certain nonlinear Hamilton--Jacobi equation on~$\TT^2$. The key difficulty of the problem is that, as the toroidal domains we consider are far from the axisymmetric case, we cannot extract from the equations enough information about the trajectories of the vector fields. The first manifestation of this difficulty is that we have not found a way of effectively using trajectories to analyze the aforementioned Hamilton--Jacobi equation. Instead, we have shown that one can exploit the fact that the restriction $B_1|_{\pd\Om_1}$ is conjugate to a Diophantine rotation to regard the equation as a nonlinear perturbation of the cohomological equation which appears in KAM theory. With this approach, we eventually establish the existence of analytic solutions by means of a quadratic Newton scheme (Theorem~\ref{L:cohom}). The next step is to show that the resulting field~$B_2$ does in fact have an invariant torus enclosing a toroidal domain $\Om_2\supset\overline{\Om_1}$, which permits us to make sense of the basic geometric configuration used to construct the MHD equilibrium. To this end, we prove that $\pd\Om_1$ is a twist (in the KAM theoretic sense) invariant torus of $B_2$, so that it is accumulated by a set of Diophantine analytic invariant tori. However, once again the difficulty is that we cannot compute a good approximation for the trajectories of~$B_2$. This means that we do not have enough information to apply the existing KAM theorems for divergence-free fields (see e.g.~\cite{MW,KKP14,Acta,KKP20}), which are based on studying the Poincar\'e map of the field on a transverse section. To solve this problem, we establish a KAM theorem for divergence-free vector fields in $\RR^3$ with two key features that make it rather different from other KAM theorems in the same context~\cite{Sevr,BHT,KKP14}. 
First, it applies to vector fields which do not need to be approximately integrable or in Birkhoff normal form. Second, the twist condition is written solely in terms of the vector field and of the approximate invariant torus. An additional advantage is that the formulas take a particularly simple form when the field is Beltrami. Recall that a KAM theorem for perturbations of integrable volume-preserving diffeomorphisms was obtained in~\cite{CS,X,Y}. \subsection{Organization of the paper} After recalling the definition of weak MHD equilibria, in Section~\ref{S:weak} we prove a lemma ensuring that one can construct piecewise smooth MHD equilibria by gluing two Beltrami fields defined on non-intersecting domains with a common boundary component, provided that the boundary traces of these Beltrami fields satisfy a certain constraint. The main arguments of the proofs of Theorems~\ref{T:main} and~\ref{T:main2} are presented in Sections~\ref{S:Tmain} and~\ref{S.freebound}. For clarity of exposition, however, the two essential technical points of the proof (which are of independent interest) are relegated to Sections~\ref{S.cohom} and~\ref{sec:teo}. Specifically, in Section~\ref{S.cohom} we solve, using a cohomological equation, the Hamilton--Jacobi equation associated with the constraint that we came across in Section~\ref{S:weak}. Also, in Section~\ref{sec:teo} we prove our new KAM theorem for divergence-free vector fields in $\RR^3$. Section~\ref{S:nondeg} is devoted to rigorously proving that thin toroidal domains of any topology are generically nondegenerate (of type I and II). The existence result for Lipschitz-continuous force-free fields with a nonconstant factor is presented in Section~\ref{S:ff}. The paper concludes with two technical appendices. In the first appendix we show that Beltrami fields are analytic up to the boundary if the domain is analytic, and in the second we record certain results for Beltrami fields that we proved in~\cite{Annals,Acta,ELP} and which are relevant for the problem under consideration. \section{Construction of weak MHD equilibria from Beltrami fields}\label{S:weak} In this section we introduce the definition of a weak MHD equilibrium. We say that a pair $(B,P)$ of class, say, $L^2(\Om)$ is a {\em weak solution to the stationary MHD equations} in $\Om$ if \[ \int_{\Om} \left[(B\otimes B)\cdot \nabla w- \left(P+\frac12|B|^2\right)\Div w\right]\, dx=0\quad \text{and}\quad \int_{\Om}B\cdot \nabla\phi\,dx=0 \] for any vector field $w\in C^1_c(\Om)$ and any scalar function $\phi\in C^1(\Om)$. Of course, if $B$ and~$P$ are continuously differentiable, this is equivalent to saying that they satisfy Equations~\eqref{Eq.MHD}--\eqref{eq.MHD3} in~$\Om$. \begin{lemma}\label{L.Euler} Let $\{\Om_k\}_{k=1}^N$ be $N\geq2$ bounded domains in $\RR^3$ with smooth connected boundaries. Assume that these domains are nested in the sense that $\overline{\Om_{k-1}}\subset \Om_k$ for all $1\leq k\leq N$, with $\Om_0:=\emptyset$. With $\Om_k':=\Om_k\backslash\overline{\Om_{k-1}}$, suppose furthermore that the vector field $B_k$ satisfies the equation $\curl B_k=\la_kB_k$ in~$\Om_k'$ for some nonzero constant $\la_k$. Assume that $B_k$ is tangent to the boundary of $\Om_k'$ and that \begin{equation}\label{Eq.weak} |B_{k+1}|^2-|B_k|^2=2c_k \text{ on } \pd\Om_k \end{equation} for all $1\leq k\leq N-1$, where the $c_k$ are constants. 
Then \[ B(x):=\sum_{k=1}^N B_k(x)\, 1_{\Om_k'}(x) \] is a piecewise smooth MHD equilibrium on~$\Om_N$ with piecewise constant pressure \[ P(x):=c_0 -\sum_{k=1}^{N-1}c_k\, 1_{\Om_N\backslash\overline{\Om_k}}(x)\,. \] Here $c_0$ is any real constant (in particular, it can be chosen so that $P(x)=0$ if $x\in\pd\Om_N$). \end{lemma} \begin{remark} As $B_k$ is a Beltrami field defined on a smooth domain and tangent to its boundary, it is standard~\cite{BS} that $B_k$ is smooth up to the boundary. Therefore, the constraint~\eqref{Eq.weak} makes sense pointwise. A related analytic regularity result up to the boundary, which will be needed later on, is proved in Appendix~\ref{A.analytic}. \end{remark} \begin{remark} The result and the proof remain valid when $\la_k=0$ if the corresponding vector field $B_k$ is additionally assumed to be divergence-free in $\Om_k'$. \end{remark} \begin{proof} To keep the formulas as simple as possible, we will prove the result for $N=2$; the general case is analogous. We start by noticing that, for all $\phi\in C^1(\Om_2)$, \begin{align*} \int_{\Om_2}B\cdot \nabla \phi\, dx&=\int_{\Om_1} B_1\cdot \nabla \phi\, dx+\int_{\Om_2'} B_2\cdot \nabla \phi\, dx \\ &= \int_{\pd \Om_1} \phi\, B_1\cdot N\, dS+\int_{\pd (\Om_2')} \phi\, B_2\cdot N\, dS=0\,, \end{align*} where we have used that $\Div B_1=\Div B_2=0$ in their respective domains and $B_1\cdot N=0$ on~$\pd\Om_1$ and $B_2\cdot N=0$ on~$\partial\Om_2'$. Hence $\Div B=0$ in the sense of distributions. Let us now take an arbitrary vector field $w\in C^\infty_c(\Om_2)$. We can write \begin{align*} I:=&\int_{\Om_2} \left[ (B\otimes B)\cdot \nabla w- \left(P+\frac12|B|^2\right)\Div w\right]\, dx\\ &=\int_{\Om_1} \left[ (B_1\otimes B_1)\cdot \nabla w- \left(c_0+\frac12|B_1|^2\right)\Div w\right]\, dx\\ &\qquad +\int_{\Om_2'} \left[ (B_2\otimes B_2)\cdot \nabla w- \left(c_0-c_1+\frac12|B_2|^2\right)\Div w\right]\, dx=:I_1+I_2\,. \end{align*} Integrating by parts, and using that $B_1\cdot N=0$ on $\pd\Om_1$, we easily obtain \begin{align*} I_1&=\int_{\Om_1}\left[(B_1\otimes B_1)\cdot\nabla w + \frac12\nabla|B_1|^2\cdot w\right]\,dx-\int_{\pd\Om_1}\left (c_0+\frac12|B_1|^2\right)w\cdot N\,dS\\ &=-\int_{\Om_1}\left[\Div(B_1\otimes B_1)-\frac12\nabla|B_1|^2\right]\cdot w\,dx-\int_{\pd\Om_1}\left (c_0+\frac12|B_1|^2\right)w\cdot N\,dS\\ &=-\int_{\pd\Om_1}\left (c_0+\frac12|B_1|^2\right)w\cdot N\,dS\,. \end{align*} To pass to the last equation we have used the well-known identity for Beltrami fields \[ \Div(B_1\otimes B_1)=\frac12\nabla|B_1|^2\,. \] Analogously, using the same identity for $B_2$ on $\Om_2'$ and that $B_2\cdot N=0$ on the boundary, we can compute the term $I_2$. Notice that by the connectedness of the boundaries of $\Om_1$ and $\Om_2$ we have $\partial\Om_2'=\pd\Om_1\cup\pd\Om_2$, and that the outward pointing normal vector of $\Om_2'$ on $\pd\Om_1$ is $-N$. We thus obtain \begin{align*} I_2&=\int_{\Om_2'}\Big[(B_2\otimes B_2)\cdot\nabla w + \frac12\nabla|B_2|^2\cdot w\Big]\,dx+\int_{\pd\Om_1}\Big(c_0-c_1+\frac12|B_2|^2\Big)w\cdot N\,dS\\ &=\int_{\pd\Om_1}\Big(c_0-c_1+\frac12|B_2|^2\Big)w\cdot N\,dS\,. \end{align*} The surface integral is taken only on $\pd\Om_1$ because $w=0$ on $\pd\Om_2$. Putting together these computations and using the boundary condition~\eqref{Eq.weak}, we finally conclude that \[ I=\int_{\pd\Om_1}\left[\frac12\left(|B_2|^2-|B_1|^2\right)-c_1\right]w\cdot N\,dS=0\,, \] for all $w\in C^1_c(\Om_2)$. It then follows that $(B,P)$ is a weak solution of the MHD equations in $\Om_2$, as claimed. 
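For completeness, let us also record the one-line derivation of this identity: since $\Div B_1=0$, we have $\Div(B_1\otimes B_1)=(B_1\cdot\nabla)B_1$, and the vector calculus identity $(B_1\cdot\nabla)B_1=\frac12\nabla|B_1|^2-B_1\times\curl B_1$ together with the Beltrami equation $\curl B_1=\la_1B_1$ shows that $B_1\times\curl B_1=\la_1\,B_1\times B_1=0$. 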
\end{proof} \begin{remark}\label{R:current} It is easy to check that the plasma current $J:=\curl B$ of the solution constructed in Lemma~\ref{L.Euler} is the vector-valued distribution \[ J =\sum_{k=1}^N\la_k B_k\, 1_{\Om_k'}+\sum_{k=1}^{N-1}(B_k-B_{k+1})\times N_k\, dS_k\,, \] where $dS_k$ and $N_k$ are the area measure and outer unit normal on $\pd\Om_k$. The current sheet terms appearing in this formula are a consequence of the discontinuity of the magnetic field across the surfaces $\pd\Om_k$. \end{remark} \section{Fixed boundary equilibria: proof of Theorem~\ref{T:main}}\label{S:Tmain} In this section we show how to implement the strategy discussed in the Introduction to prove the first main result of this paper, modulo some technical results that will be presented in later sections. \subsection{Nondegenerate toroidal domains of type I} Let us begin by defining the class of toroidal domains that we consider in Theorem~\ref{T:main}. \begin{definition}\label{D:torus} A toroidal domain~$\Om\subset\RR^3$ is {\em nondegenerate of type I}\/ if there exists a Beltrami field $B$ on~$\Om$ such that the following conditions hold: \begin{enumerate} \item $\pd\Om$ is a Diophantine invariant torus of the field. \item The $2\times 2$ constant matrix \begin{equation}\label{eq.gen} M:=\int_{\TT^2}G(\vp)^{-1}\,\left( \begin{array}{cc} 1-\om_1\partial_{\vp_1}\mathcal R & -\om_2\partial_{\vp_1}\mathcal R \\ -\om_1\partial_{\vp_2}\mathcal R & 1-\om_2\partial_{\vp_2}\mathcal R \\ \end{array} \right)\,d\vp \end{equation} is invertible. Here $\om$ is the frequency vector of $B$ on $\pd\Om$, $\vp$ are the linearizing coordinates in Equation~\eqref{eq.torus}, $G$ is the metric matrix (or first fundamental form) of the surface $\pd\Om$ in the coordinates~$\vp$, and $\mathcal R(\vp)$ is the unique zero mean solution to the equation on the torus \[ \om_1\pd_{\vp_1}\mathcal R+\om_2\pd_{\vp_2}\mathcal R=\varkappa-|X|^2\,, \] where $X:= B|_{\pd\Om}$ and the constant~$\varkappa$ is chosen so that $\int_{\TT^2}(\varkappa-|X|^2)\,d\vp=0$. \end{enumerate} \end{definition} By rescaling the Beltrami field, we can henceforth assume that the above constant is $\varkappa=1$. The Beltrami field and the analytic toroidal domain satisfying the nondegeneracy assumption are $B_1$ and $\Om_1$, and the corresponding eigenvalue is~$\la_1$. Here and in what follows, \[ [f]_{\TT^2}:= \frac1{4\pi^2}\int_{\TT^2} f\, d\vp \] denotes the average of a function on~$\TT^2$ (in $\vp$-coordinates) and we set $\om^\perp:= (\om_2,-\om_1)$ for each two-component vector $\om =(\om_1,\om_2)$. \subsection{Construction of the first layer}\label{S.pt} As $\pd\Om_1$ is an invariant torus of $B_1$ with Diophantine frequency vector $\om^{(1)}$, it is standard that one can parametrize the invariant torus by an embedding $K_1 : \TT^2 \rightarrow \RR^3$ satisfying the equation \[ L_{\om^{(1)}} K_1 = B_1 \circ K_1\,, \] where $L_{\om^{(1)}}K_1:=DK_1\om^{(1)}$ is the pointwise derivative of~$K_1$ in the direction of~$\om^{(1)}$. In this picture, of course, the invariant torus is the image $\pd\Om_1=K_1(\TT^2)$. Let us emphasize from the beginning that parametrizing the invariant tori by embeddings is essential for the KAM theorem that we will prove in Section~\ref{sec:teo} (Theorem~\ref{teo:kam:div}), which will play a key role in this proof later on. Since the boundary of the domain $\Om_1$ is analytic, Theorem~\ref{T.analytic} implies that $B_1$ is analytic up to the boundary. 
A theorem of Herman~\cite{Y,Yoccoz} then ensures that the linearizing parametrization $K_1$ is analytic, or in other words, there are analytic coordinates $\vp:\pd\Om_1\to\TT^2$ in which $X_1:=B_1|_{\pd\Om_1}$ takes the form \[ X_1=G_{K_1}^{-1}DK_1^\top B_1 \circ K_1 = \om_1^{(1)}\pd_{\vp_1}+\om_2^{(1)}\pd_{\vp_2}\,. \] Here $G_{K_1}:=DK_1^\top DK_1$ is the matrix representation of the pullback of the Euclidean metric to the surface $\pd\Om_1$ obtained using the embedding $K_1$, so it is a positive definite $2\times 2$ symmetric (nonconstant) matrix. Theorem~\ref{L:cohom} implies that, for any constant~$c_2$ that is small enough in absolute value, there is an analytic vector field $X_2$ on $\pd\Om_1$ such that: \begin{enumerate} \item The $1$-form that is dual to~$X_2$ with respect to the metric on~$\pd\Om_1$ induced by the Euclidean metric in~$\RR^3$ is closed. \item The pointwise norm of~$X_2$, computed with the induced metric on~$\pd\Om_1$, satisfies the equation \[ |X_2|^2=(1+b_2)|X_1|^2+c_2 \] for some constant $b_2$ bounded as $|b_2|\leq C|c_2|$. \item The vector field $X_2$ depends continuously on the parameter $c_2$, in the $C^r$-topology of vector fields for any~$r$, and is Diophantine with frequency vector \begin{equation}\label{eq.om2} \om^{(2)}:=(1+c_2)^{1/2}\om^{(1)}\,. \end{equation} \end{enumerate} In particular, one can write \[ X_2=X_1+\mathcal O(c_2)=\om_1^{(1)}\pd_{\vp_1}+\om_2^{(1)}\pd_{\vp_2}+\cO(c_2)\,, \] where in what follows $\cO(c_2)$ stands for a quantity (which may vary from line to line) whose $C^r$ norm is bounded by $C|c_2|$, for any fixed integer $r$. Now we consider the Cauchy problem \begin{equation*} \curl B_2'=\la_2 B_2'\,, \qquad B_2'|_{\pd\Om_1}=X_2\,, \end{equation*} for some nonzero constant $\la_2\neq \la_1$ that we will fix later. Since $\pd\Om_1$ and $X_2$ are analytic, and the $1$-form dual to $X_2$ is closed, Theorem~\ref{T:CK} (which is a sort of Cauchy--Kovalevskaya theorem for the curl operator) implies that there exists a unique analytic solution to this Cauchy problem in a neighborhood of $\pd\Om_1$. Eventually, we will only be interested in the behavior of the solution outside ${\Om_1}$. By construction, $\pd\Om_1$ is a Diophantine invariant torus of the vector field $B_2'$. We claim that it is twist (in the KAM sense; see Definition~\ref{def:ndeg}) for almost all choices of the constant $\la_2$. This implies that $\pd\Om_1$ is accumulated by a set of Diophantine invariant tori of $B_2'$ contained in $\RR^3\backslash \overline{\Om_1}$. Since $B_2'$ is analytic, these tori are analytic as well. This implication follows from Corollary~\ref{Cor_KAM} to the KAM theorem for divergence-free fields that we shall prove in Section~\ref{sec:teo}. Let us denote by $K_2:\TT^2\to\RR^3$ an embedding that is a linearizing parametrization of the invariant torus $\pd\Om_1$ with frequency vector $\om^{(2)}$ of the vector field $B_2'$. Then, we can introduce coordinates (which we still denote by~$\vp$) such that $X_2=B_2'|_{\pd\Om_1}$ becomes the Diophantine linear field \[ X_2=G_{K_2}^{-1}DK_2^\top B_2'\circ K_2=\om_1^{(2)}\pd_{\vp_1}+\om_2^{(2)}\pd_{\vp_2}\,. \] In general, $K_2$ is different from the parametrization $K_1$ that linearizes $X_1$, but it follows from the previous discussion that both parametrizations differ by a higher order correction, i.e., \begin{equation}\label{eq.K2} K_2=K_1+\cO(c_2)\,. 
\end{equation} Since $B_2'$ satisfies the equation $\curl B_2'=\la_2B_2'$, an easy computation shows that the following identity holds: \begin{lemma} $DB_2'^\top+DB_2'=2DB_2'^\top+\la_2 B_2'\,\times$, where $DB_2'$ is the Jacobian matrix of $B_2'$ and $\times$ denotes the vector product, both computed in Cartesian coordinates. \end{lemma} \begin{proof} The proof is straightforward. Indeed, $$DB_2'^\top+DB_2'=2DB_2'^\top + (DB_2'-DB_2'^\top)=2DB_2'^\top + \curl B_2'\,\times\,,$$ so the claim follows from the equation $\curl B_2'=\la_2B_2'$. \end{proof} To invoke Corollary~\ref{Cor_KAM}, we must check the twist condition (cf.\ Definition~\ref{def:ndeg}). This involves computing a two-component vector field (or $2\times1$ matrix) appearing in Equation~\eqref{eq:condA}, which in this case takes the form \begin{equation*} A_2= - \frac{G_{K_2}^{-1}}{|n_2|^2} \Big[2DK_2^\top DB_2'^\top n_2+\la_2 DK_2^\top (B_2'\times n_2)\Big]\,. \end{equation*} Here $DB_2'^\top$ and $B_2'$ are evaluated at $K_2(\vp)$ and the normal vector \[ n_2(\vp):=\partial_{\vp_1}K_2(\vp) \times \partial_{\vp_2}K_2(\vp) \] is defined in terms of $K_2$ as in Definition~\ref{D:torus}. Observe that $DK_2$ is a $3\times 2$ matrix. Since the vector field $B_2'\times n_2$ is tangent to $\pd\Om_1$ and perpendicular to $B_2'$, we infer that there is a nonvanishing vector (given by a $2\times 1$ matrix) $\alpha_2$ on $\pd\Om_1$ such that \[ B_2'\times n_2=DK_2\,\alpha_2\,. \] Therefore, \[ DK_2^\top (B_2'\times n_2)=(DK_2^\top DK_2) \alpha_2=G_{K_2}\alpha_2\,. \] The matrix $A_2$ then takes the form \begin{align*} A_2&=- \frac{2G_{K_2}^{-1}}{|n_2|^2}DK_2^\top DB_2'^\top n_2-\frac{\la_2}{|n_2|^2}\,\alpha_2\\ &=\frac{2G_{K_2}^{-1}}{|n_2|^2}DK_2^\top L_{\om^{(2)}}n_2-\frac{\la_2}{|n_2|^2}\,\alpha_2=:A^{(1)}_2+A^{(2)}_2\,, \end{align*} where we have used Lemma~\ref{lem:trace} to pass to the second equality. It is clear from this expression that the vector $A_2(\vp)$ only depends on the way the torus $\pd\Om_1$ is embedded in $\RR^3$, on the Diophantine vector $\om^{(2)}$, on the parametrization $K_2$ linearizing $X_2$, and on the eigenvalue $\la_2$. According to Definition~\ref{def:ndeg}, the invariant torus $\pd\Om_1$ of $B_2'$ is {\em twist}\/ if the twist constant \begin{align} T_2:&=\big([A^{(1)}_2]_{\TT^2}+ [A^{(2)}_2]_{\TT^2}\big)\cdot (\om^{(2)})^\perp\notag\\ &=[A^{(1)}_2]_{\TT^2}\cdot (\om^{(2)})^\perp - \la_2|\om^{(2)}|\bigg[\frac{F_2}{|n_2|^2}\bigg]_{\TT^2}\label{T2} \end{align} is nonzero. Here $F_2(\vp)$ is the function defined as the projection of the field $\alpha_2$ onto the $(\om^{(2)})^\perp$ direction, i.e., \[ F_2:=\alpha_2 \cdot \frac{(\om^{(2)})^\perp}{|\om^{(2)}|}\,. \] This function is nonvanishing because the field $DK_2\,\alpha_2$ is perpendicular to $B_2'|_{\pd\Om_1}$, so $\alpha_2$ and $\om^{(2)}$ cannot be proportional at any point of $\TT^2$. Arguing in the same way, we obtain an analogous expression for the twist constant $T_1$ of the invariant torus $\pd\Om_1$ for the vector field $B_1$. 
As $\pd\Om_1$ is an invariant torus for $B_1$ and $B_2'$, and as the corresponding parametrizations $K_1$ and $K_2$ and Diophantine vectors $\om^{(1)}$ and $\om^{(2)}$ differ just by an error of order $c_2$ by Equations~\eqref{eq.om2}-\eqref{eq.K2}, we conclude that \begin{align*} T_2&=T_1-\la_2\Big(|\om^{(1)}|\bigg[\frac{F_1}{|n_1|^2}\bigg]_{\TT^2}+O(c_2)\Big)+\la_1|\om^{(1)}|\bigg[\frac{F_1}{|n_1|^2}\bigg]_{\TT^2}+O(c_2)\\ &=: T_1-a\la_2+b\,, \end{align*} where $n_1:=\partial_{\vp_1}K_1 \times \partial_{\vp_2}K_1$ and $F_1$ is also nonvanishing. The constants $a,b$ are therefore nonzero if $|c_2|$ is small enough, so $T_2\neq 0$ provided that \[ \la_2\neq \frac{b+T_1}{a}=\la_1+\frac{T_1}{|\om^{(1)}|\Big[\frac{F_1}{|n_1|^2}\Big]_{\TT^2}}+O(c_2)\,. \] This shows that $\pd\Om_1$ is a twist invariant torus of $B_2'$ for almost all choices of $\la_2$. Hence, we can take a Diophantine analytic invariant torus $\Si_2$ of the vector field $B_2'$, lying outside $\overline{\Om_1}$, which is $\eta$-close to $\pd\Om_1$. By this we mean that, for any fixed~$r$, there is a diffeomorphism $\Psi_1$ of~$\RR^3$ which maps $\pd\Om_1$ into $\Si_2$ and which is close to the identity in the sense that $\|\Psi_1-\id\|_{C^r}<\eta$. The invariant torus~$\Si_2$ is then the boundary of a toroidal domain $\Om_2\supset\overline{\Om_1}$. It is easy to check that the matrix $M_2$ in Equation~\eqref{eq.gen}, associated with the vector field $B_2'|_{\pd\Om_2}$, is related to the matrix $M_1$ of $B_1|_{\pd\Om_1}$ as \[ M_2=M_1+O(\eta+|c_2|)\,. \] As $M_1$ is invertible and $\pd\Om_1$ is accumulated by Diophantine invariant tori of $B_2'$, we can therefore take $\eta$ (and $|c_2|$) small enough so that $M_2$ is invertible too. We then conclude that $\Om_2$ is a nondegenerate toroidal domain of type I. \subsection{Conclusion of the proof} As $\Om_2$ is another nondegenerate toroidal domain of type I, we can repeat the argument to construct a vector field $B_3'$ in a neighborhood of $\pd\Om_2$ that solves the equation \[ \curl B_3'=\la_3 B_3'\,, \qquad B_3'|_{\pd\Om_2}=X_3\,, \] for some constant $\la_3\neq \la_2$, where the Cauchy datum $X_3$ satisfies \[ |X_3|^2=(1+b_3)|\widetilde X_2|^2+c_3 \] with arbitrarily small constants $c_3$ and $b_3=O(c_3)$. Here $\widetilde X_2:=B_2'|_{\pd\Om_2}$. Again, one can pick $c_3$ and $\la_3$ so that $\pd\Om_2$ is a twist Diophantine invariant torus of $B_3'$. Therefore there is an analytic Diophantine invariant torus of $B_3'$, which is the boundary of another nondegenerate toroidal domain of type I $\Om_3\supset \overline{\Om_2}$. This process can be iterated $N-1$ times to obtain a family of (analytic) nested tubes $\{\Om_k\}_{k=1}^N$, different constants $\la_k$, small constants $c_k,b_k$ and vector fields $B_k'$ satisfying $\curl B_k'=\la_kB_k'$ in $\Om_k':=\Om_k\backslash \overline{\Om_{k-1}}$ for all $2\leq k\leq N$. To construct a weak solution $(B,P)$ of the MHD equations in the toroidal domain $\Om_N$, we set \begin{align*} B(x)&:=B_1(x)\,1_{\Om_1}(x)+ \sum_{k=2}^N B_k'(x)\, 1_{\Om_k'}(x)\prod_{j=2}^k(1+b_j)^{-1/2}\\ P(x)&:= p_1 \,1_{\Om_1}(x)+ \sum_{k=2}^N p_k \, 1_{\Om_k'}(x)\,. \end{align*} The constant $p_1$ is arbitrary, and the constants $p_k$ are defined in terms of $c_j,b_j$ as \[ p_k:=p_1-\frac12\sum_{l=2}^{k}\prod_{j=2}^l(1+b_j)^{-1}c_l\,. \] Note that, generically, $p_k\neq p_j$ if $k\neq j$. A straightforward application of Lemma~\ref{L.Euler} shows that $(B,P)$ is a piecewise smooth MHD equilibrium with all the properties stated in Theorem~\ref{T:main}. 
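\begin{remark} For the reader's convenience, let us record the elementary computation showing that the hypothesis~\eqref{Eq.weak} of Lemma~\ref{L.Euler} is indeed satisfied by these normalizing factors. Set $\widetilde X_1:=B_1|_{\pd\Om_1}$ and $\widetilde X_k:=B_k'|_{\pd\Om_k}$ for $k\geq2$, so that by construction $|X_{k+1}|^2=(1+b_{k+1})|\widetilde X_k|^2+c_{k+1}$ on~$\pd\Om_k$. The one-sided boundary values of~$B$ on~$\pd\Om_k$ are $\prod_{j=2}^{k}(1+b_j)^{-1/2}\,\widetilde X_k$ from the inside and $\prod_{j=2}^{k+1}(1+b_j)^{-1/2}\,X_{k+1}$ from the outside (with the convention that an empty product equals~$1$), so the jump of $|B|^2$ across~$\pd\Om_k$ is \[ \prod_{j=2}^{k+1}(1+b_j)^{-1}\,c_{k+1}=2\,(p_k-p_{k+1})\,, \] which is precisely the condition~\eqref{Eq.weak} with the constants determined by the pressure~$P$. \end{remark} 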
\section{Free boundary equilibria: proof of Theorem~\ref{T:main2}}\label{S.freebound} We first introduce the class of toroidal domains that we consider in Theorem~\ref{T:main2}. \begin{definition}\label{D:torusII} A toroidal domain~$\Om\subset\RR^3$ is {\em nondegenerate of type II}\/ if there exists a Beltrami field $B$ on~$\Om$ (with eigenvalue $\la$) such that the following conditions hold: \begin{enumerate} \item $\pd\Om$ is an invariant torus of the field with Diophantine frequency vector $\om$. \item The {\em twist constant}\/ $T$ (see Definition~\ref{def:ndeg}) satisfies \[ T+\la\,\bigg[\frac{\al\cdot \om^\perp}{|n|^2}\bigg]_{\TT^2}\neq0\,, \] where $\al\cdot\om^\perp\equiv \al_1\om_2-\al_2\om_1$, $K:\TT^2\to\RR^3$ is the linearizing embedding of~$\pd\Om$ in coordinates~$\vp$ (so, in particular, $\pd\Om=K(\TT^2)$), $n:=\pd_{\vp_1}K\times \pd_{\vp_2}K$ is a normal vector and the $\RR^2$-valued function $\al(\vp)$ is defined as \[ (B\circ K)\times n=: DK\, \al\,. \] \end{enumerate} \end{definition} The proof of Theorem~\ref{T:main2} follows the same strategy as the proof of Theorem~\ref{T:main}, although it is easier because it does not make use of the Hamilton--Jacobi equation we study in Section~\ref{S.cohom}. Therefore, to simplify the presentation, we will use the same notation as in the previous section without further mention. As the analytic toroidal domain $\Om$ is nondegenerate in the sense of Definition~\ref{D:torusII}, there exists a Beltrami field $B$ in $\Om$, satisfying the equation \[ \curl B =\la B \] for some nonzero constant $\la$ and the boundary condition $B\cdot N=0$, such that $\pd\Om$ is a Diophantine invariant torus of $B$ with frequency vector $\om$ and its twist constant satisfies \begin{equation}\label{nondegh} T+\la\bigg[\frac{\al\cdot \om^\perp}{|n|^2}\bigg]_{\TT^2}\neq 0\,. \end{equation} As $P=0$ for a Beltrami field, it is clear that $(B,B\ext,\Om)$ is a solution to the equations for a free boundary MHD equilibrium with external current $J\ext$ if the external magnetic field and the external current satisfy \begin{subequations}\label{external} \begin{align} \curl B\ext &=J\ext \!\quad \text{ in } \RR^3\backslash \overline{\Om}\,,\label{Bext1}\\ \Div B\ext &=0 \qquad \text{ in } \RR^3\backslash \overline{\Om}\,,\\ B\ext \cdot N&=0 \qquad \text{ on } \pd\Om\,, \label{Bext3}\\ |B\ext |^2-|B|^2&=0 \qquad \text{ on } \pd\Om \label{Bext4}\,. \end{align} To ensure that $J\ext$ is a current sheet, we also aim to construct a toroidal domain $\Om'\supset \BOm$ and a tangent vector field $J$ on~$\pd\Om'$ such that \begin{align} J\ext &=J\, dS \,,\\ B\ext &=0 \qquad \text{ in } \RR^3\backslash \overline{\Om'}\,,\label{Bext6} \end{align} \end{subequations} where $dS$ is the surface measure on $\pd\Om'$. Note that the tangent vector field $J$ must be divergence-free with respect to the induced metric on~$\pd\Om'$ because Equation~\eqref{Bext1} implies that, in the sense of distributions, \[ 0= \Div J\ext=(\Div_{\pd\Om'}J)\, dS\,. \] Thus proving Theorem~\ref{T:main2} boils down to constructing a domain $\Om'$ and an analytic divergence-free tangent vector field~$J$ on~$\pd\Om'$ such that the solution to the exterior div-curl problem~\eqref{Bext1}--\eqref{Bext3} on $\RR^3\backslash\BOm$ satisfies the conditions~\eqref{Bext4}--\eqref{Bext6}. To construct solutions to this overdetermined system, we follow the same philosophy as in Section~\ref{S:Tmain}. Let $X:=B|_{\pd\Om}$ be the restriction of the Beltrami field $B$ to the boundary of the domain~$\Om$. 
Observe that $X$ is analytic in view of Theorem~\ref{T.analytic} and the associated $1$-form $X^\flat$ is closed on $\pd\Om$ by Theorem~\ref{T:CK}. Therefore, the Cauchy problem \[ \curl h =0\,, \qquad \Div h=0\,, \qquad h|_{\pd\Om}=X\,, \] has a unique analytic solution in a small tubular neighborhood~$U$ of $\pd\Om$ as a consequence of Theorem~\ref{T:CK}. By construction, $h\cdot N=0$ and $|h|^2=|B|^2$ on $\pd\Om$. Lemma~\ref{L.Euler} then ensures that the field \[ B\, 1_\Om + h \, 1_{U'}\,, \] with $U':= U\backslash\BOm$, is a weak solution to the stationary MHD equations in the toroidal domain $\Om_2:=U\cup \Om$. Proceeding just as in Subsection~\ref{S.pt}, the twist constant $T_h$ of the invariant torus $\pd\Om$ of the harmonic field $h$ can be readily shown to be \[ T_h=T+\la |\om|\bigg[\frac{F}{|n|^2}\bigg]_{\TT^2}= T+\la \bigg[\frac{\al\cdot\om^\perp}{|n|^2}\bigg]_{\TT^2}\,, \] where $T$ is the twist constant of~$B$ (cf. Definition~\ref{def:ndeg}). This is simply Equation~\eqref{T2}, where we have set $\la_2=0$ because the field~$h$ is harmonic and $c_2=0$ because $|h|^2=|B|^2$ on $\pd\Om$. The nondegeneracy assumption of type II, i.e., Equation~\eqref{nondegh}, ensures that $T_h\neq 0$. Thus Corollary~\ref{Cor_KAM} implies that $\pd\Om$ is accumulated (in both components of its complement) by analytic Diophantine invariant tori of $h$. We can therefore choose an analytic domain $\Om'\supset\BOm$ whose boundary is one of these invariant tori. To conclude, let us now define the vector field $$ B\ext(x) :=h(x)\,1_{\Om'\backslash \BOm}(x) $$ for $x\in \RR^3\backslash \BOm$. As $h$ is divergence-free in $\Om'\backslash \BOm$ and tangent to $\pd\Om$ and $\pd\Om'$, an elementary computation shows that $\Div B\ext =0$ in the sense of distributions. Furthermore, the corresponding current is readily computed using that $$ \langle \curl B\ext, v\rangle =\int_{\pd\Om'} v\cdot J\, dS $$ for any $v\in C^\infty_c(\RR^3\backslash \BOm,\RR^3)$, where \[ J:= h\times N' \] and $N'$ is the outer unit normal on $\pd\Om'$. Therefore, $(B\ext,J,\Om')$ satisfies the system~\eqref{external}, so Theorem~\ref{T:main2} follows. \begin{remark} Quantitative versions of the Cauchy--Kovalevskaya theorem (Theorem~\ref{T:CK}) and the KAM theorem (Theorem~\ref{teo:kam:div}) provide an estimate for the separation that we can obtain between the current sheet $\pd\Om'$ and the domain $\Om$ in terms of the Diophantine constants of the frequency vector~$\om$ and of the analyticity radii and the analytic norms of the different objects that appear in the construction (namely, the Beltrami field $B|_{\pd\Om}$ and the linearizing embedding~$K$ of the invariant torus~$\pd\Om$). \end{remark} \section{Solving a Hamilton--Jacobi problem via the cohomological equation}\label{S.cohom} Let $\Om$ be an analytic nondegenerate toroidal domain of type I in $\RR^3$. By definition, there exists a Beltrami field $B$ in $\Om$ that satisfies the equation \[ \curl B=\la B \] for some constant $\la$, $\partial \Om$ is a Diophantine invariant torus of $B$, and the corresponding matrix $M$ defined in Equation~\eqref{eq.gen} is invertible. Arguing as in the beginning of Section~\ref{S.pt}, we infer that $B$ is analytic up to the boundary and there are analytic coordinates $\vp:\partial\Om\to \TT^2$ such that \[ Y:=B|_{\pd\Om}=\om_1\pd_{\vp_1}+\om_2\pd_{\vp_2}\,, \] where $\om\in\RR^2$ is a Diophantine frequency vector. 
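\begin{remark} All the linear steps in this section ultimately reduce to the cohomological equation $L_\om \mathcal R=f$ on $\TT^2$, which is diagonal in the Fourier basis: the zero-mean solution is given by $\widehat{\mathcal R}_k=\widehat f_k/(i\,k\cdot\om)$ for $k\neq0$, and the Diophantine condition~\eqref{eq:diof} controls the small divisors $k\cdot\om$. The following minimal Python sketch (purely illustrative and not part of the argument; the frequency vector and the test function are arbitrary choices, not objects from this paper) solves the equation spectrally and verifies the result:
\begin{verbatim}
import numpy as np

# Solve L_om R = f on T^2 by Fourier series:
# R_hat(k) = f_hat(k) / (i k.om) for k != 0.
n = 64
om = np.array([1.0, (1.0 + np.sqrt(5.0)) / 2.0])   # golden mean: Diophantine
phi = 2.0 * np.pi * np.arange(n) / n
P1, P2 = np.meshgrid(phi, phi, indexing="ij")
f = np.sin(P1 + 2.0 * P2) + np.cos(3.0 * P1 - P2)  # smooth, zero mean

fhat = np.fft.fft2(f)
k = np.fft.fftfreq(n, d=1.0 / n)                   # integer wave numbers
K1, K2 = np.meshgrid(k, k, indexing="ij")
kdotom = K1 * om[0] + K2 * om[1]                   # the small divisors
Rhat = np.zeros_like(fhat)
mask = (K1 != 0) | (K2 != 0)
Rhat[mask] = fhat[mask] / (1j * kdotom[mask])
R = np.real(np.fft.ifft2(Rhat))

# Check L_om R = f by differentiating spectrally (error ~ machine precision).
LR = np.real(np.fft.ifft2(1j * kdotom * Rhat))
print(np.max(np.abs(LR - f)))
\end{verbatim}
\end{remark} 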
With a slight abuse of notation, in this section we will use the same name for a quantity on $\pd\Om$ (a function or a vector field) and for its expression in these coordinates. In this section, if $Z_1,Z_2$ are two vector fields on $\TT^2$, $|Z_1|$ and $Z_1\cdot Z_2$ denote the norm of $Z_1$ and the scalar product of $Z_1$ and $Z_2$, respectively, computed with respect to the metric on $\pd\Om$ induced by the Euclidean metric, which we write in the coordinates~$\vp$. As before, $[f]_{\TT^2}$ will denote the mean of a function $f$ in $\vp$-coordinates, i.e., \[ [f]_{\TT^2}:=\frac{1}{4\pi^2}\int_{\TT^2}f(\vp)\,d\vp \] with $d\vp:=d\vp_1\,d\vp_2$. The main result of this section is the following theorem. This result is used in the proof of Theorem~\ref{T:main} to construct Cauchy data which satisfy the constraint equation~\eqref{Eq.weak} and the hypotheses of the Cauchy--Kovalevskaya theorem for the curl operator. \begin{theorem}\label{L:cohom} Let $\Om$ and $Y$ be as above. Then, for any constant~$c$ with small enough absolute value, there is a nonnegative constant~$b$ and an analytic vector field $X$ on $\pd\Om$ of the form \[ X=(1+c)^{1/2}Y+\nabla H+a_1\nabla\vp_1+a_2\nabla\vp_2\,, \] where $\nabla$ denotes the gradient operator on~$\pd\Om$ associated with the induced metric, such that: \begin{enumerate} \item $|X|^2=(1+b)|Y|^2+c$. \item $X$ is analytically conjugate to a linear field with Diophantine frequency vector $\om'=(1+c)^{1/2}\om$. \item For any fixed~$r$, the scalar analytic function~$H$ on~$\pd\Om$ and the constants $b,a_1,a_2$ are bounded as \[ \|H\|_{C^r(\pd\Om)}+|b|+|a_1|+|a_2|\leq C|c|\,. \] \end{enumerate} Moreover, $X$ depends continuously on the parameter $c$ in the $C^r$-topology of vector fields, for any fixed~$r$. \end{theorem} \begin{remark} We are working in the analytic category because we need analytic solutions to apply the Cauchy--Kovalevskaya theorem in the proof of Theorem~\ref{T:main}. In this category, we can prove Theorem~\ref{L:cohom} using a quadratic Newton scheme. Using instead a Nash--Moser iteration, one can prove a completely analogous result in the $C^r$ setting, for large enough $r$. \end{remark} \begin{remark} The expression of~$X$ guarantees that the dual $1$-form of $X$ (computed with respect to the metric induced from $\RR^3$), which we denote by $X^\flat$, is closed: $d X^\flat=0$. This condition is also required to apply the Cauchy--Kovalevskaya theorem. \end{remark} \begin{proof} As in Section~\ref{S:Tmain}, we can rescale~$B$ so that \begin{equation}\label{eqnorm} [|Y|^2]_{\TT^2}=1\,. \end{equation} In coordinates $\vp$, condition (i) for the vector field $X$ is equivalent to picking the analytic function $H:\TT^2\to\RR$ and the constants $a=(a_1,a_2)$ so that the equation \begin{equation}\label{eqHJ} 2(1+c)^{1/2}L_\om H+|\nabla H|^2+2\nabla H\cdot (a\nabla\vp)+f=b|Y|^2+c(1-|Y|^2) \end{equation} is satisfied. Here $L_\om\equiv \om_1\partial_{\vp_1}+\om_2\partial_{\vp_2}$ is the derivative in the $\om$~direction and we have set \begin{align*} f&:=(a\nabla\vp)^2+2(1+c)^{1/2}(a\om)\,,\\ b&:=\Big[|\nabla H|^2+2\nabla H\cdot (a\nabla\vp)+f\Big]_{\TT^2}\,. \end{align*} Throughout this proof, we use the shorthand notation \begin{align*} a\nabla \vp&:= a_1\nabla \vp_1+a_2\nabla \vp_2\,,\\ a\om&:=a_1\om_1+a_2\om_2\,. \end{align*} To study the existence of solutions to Equation~\eqref{eqHJ} it is convenient to define the nonlinear operator $$ T_c(H,a):=2(1+c)^{1/2}L_\om H+|\nabla H|^2+2\nabla H\cdot (a\nabla\vp)+f-b|Y|^2-c(1-|Y|^2)\,. 
$$ The definition of the constant $b$ ensures that $[T_c(H,a)]_{\TT^2}=0$. To study the operator $T_c$ we will make use of the Banach space $\dot\cH_\rho$ of holomorphic functions~$H$ on the complex strip \begin{equation}\label{Derho} \Delta(\rho):=\{\vp : \text{Re}\,\vp\in\TT^2\,,\; |\text{Im}\,\vp|<\rho\} \end{equation} that have zero mean on $\{\text{Im}\,\vp=0\}$ (i.e., $[H]_{\TT^2}=0$). This space is endowed with the supremum norm $\|H\|_\rho := \sup_{\vp\in \Delta(\rho)} |H(\vp)|$. We will also denote by $\dot\cH_\rho$ an analogous space of vector or matrix-valued functions. With some abuse of notation, we still use the notation $\vp$ for the complexification of the toroidal coordinates~$\vp$. Since the induced metric on $\pd\Om$ is analytic and $Y$ is also an analytic vector field, it then follows that there is some $\rho_0>0$ such that $T_c$ defines a map \[ T_c: \dot\cH_\rho \to \dot\cH_\rho \] for all $\rho<\rho_0$. To solve the equation \begin{equation}\label{eqTcH} T_c(H,a)=0\,, \end{equation} we will crucially use the additional requirement that $X$ is analytically conjugate to the linear field $(1+c)^{1/2}\om$. More precisely, let us consider the equation \begin{equation}\label{eqreduc} \Phi^*\Big((1+c)^{1/2}\om+\nabla H+a\nabla\vp\Big)-(1+c)^{1/2}\om=0\,, \end{equation} where $\Phi(\vp):=\vp+v(\vp)$ is a diffeomorphism of $\TT^2$ and $v:\TT^2\to \RR^2$. We denote the LHS of this equation by $R_c(H,v,a)$. Our goal is to find analytic solutions $(H,v,a)$ to Equations~\eqref{eqTcH} and~\eqref{eqreduc} when $|c|$ is small enough. Notice that~\eqref{eqreduc} automatically guarantees that condition~(ii) is satisfied. To solve this equation, we apply Lemma~\ref{L:Newton} to the approximate solution $$(H_0,v_0,a_0):=(0,0,0)\,.$$ We shall now use the notation of this lemma without further notice. As $T_c(H_0,a_0)=-c(1-|Y|^2)$ and $R_c(H_0,v_0,a_0)=0$, it is clear that \[ E_0=\|1-|Y|^2\|_{\rho}|c|\,. \] It is obvious that there is $c_0>0$ small enough such that the assumption~\eqref{eqE} holds for all $|c|<c_0$ (of course, the smallness assumption on $v_0$ is also satisfied because $\|v_0\|_{\rho}=0$). It remains to check the generic condition on the matrix $M^{(0)}$, cf. Equation~\eqref{eq.genM}. Since $\Phi_0=\id$, an easy computation shows that the columns of the $2\times2$ matrix $M^{(0)}$ are given by the vectors \[ [\nabla \vp_i-\om_i\nabla L_\om^{-1}(1-|Y|^2)]_{\TT^2}\,, \] with $i=1,2$. In terms of the positive definite symmetric matrix $G$ describing the metric on~$\pd\Om$ in the $\vp$-coordinates, it is immediate to check that Equation~\eqref{eq.genM} is equivalent to \[ \det\Bigg[G(\vp)^{-1}\cdot\left( \begin{array}{cc} 1-\om_1\partial_{\vp_1}\mathcal R & -\om_2\partial_{\vp_1}\mathcal R \\ -\om_1\partial_{\vp_2}\mathcal R & 1-\om_2\partial_{\vp_2}\mathcal R \\ \end{array} \right)\Bigg]_{\TT^2}\neq 0 \] where $\mathcal R(\vp)$ is the unique zero mean solution to the equation \[ \om_1\pd_{\vp_1}\mathcal R+\om_2\pd_{\vp_2}\mathcal R=1-|Y|^2\,. \] This condition is immediately satisfied, by Definition~\ref{D:torus}, if $B$ is a nondegenerate Beltrami field of type I on the toroidal domain $\Om$. For any $\rho'\in(0,\rho)$, we can then conclude from Lemma~\ref{L:Newton} that there exists a unique triple $(H,v,a)\in \dot\cH_{\rho'} \times\dot\cH_{\rho'}\times\RR^2$ in a neighborhood of~$(0,0,0)$ such that Equations~\eqref{eqTcH} and~\eqref{eqreduc} hold, provided that $|c|$ is small enough. It is clear that $(H,v,a)$ depends continuously on~$c$. 
The bound~(iii) then follows from the estimate~\eqref{eqbound} below, the usual Cauchy estimate \[ \|H\|_{C^r(\TT^2)}+|a|\leq C_r \|H\|_{\rho'}+|a|\leq C|c|\,, \] and the obvious bound \[ |b|\leq C(|a|+|a|^2+|a|\|H\|_{C^1(\TT^2)}+\|H\|_{C^1(\TT^2)}^2)\leq C|c| \] for~$|c|<1$. \end{proof} \subsection{Existence of solutions of the Hamilton--Jacobi equation}\label{SS.prop} In this section we prove the basic lemma used to establish the existence of analytic solutions to Equations~\eqref{eqTcH}-\eqref{eqreduc}. To this end, note that Equation~\eqref{eqreduc} reads as \begin{equation}\label{eqred2} R_c(H,v,a):=(1+c)^{1/2}L_\om v-\nabla H\circ(\id+v)-(a\cdot\nabla\vp)\circ(\id+v)=0\,. \end{equation} Here and in what follows, when the operator $L_\om$ acts on vector or matrix-valued functions, its action is understood componentwise. To solve the system \begin{equation}\label{TR} T_c(H,a)=0\,, \qquad R_c(H,v,a)=0\,, \end{equation} we shall use Newton's quadratic scheme and the R{\"u}ssmann estimates for analytic solutions to cohomological equations. We recall that the constants $\ga>0$ and $\tau>1$ in the proof appear in the definition of the Diophantine vector $\om$, and that one can assume $\ga\leq1$ without any loss of generality. \begin{lemma}\label{L:Newton} Let us take $c\in[-\frac12,\frac12]$ and consider a triple $(H_0,v_0,a_0)\in\dot\cH_\rho\times \dot\cH_\rho\times\RR^2$. For any $\rho'\in(0,\rho)$, if $\|v_0\|_{\rho}$ and \begin{equation}\label{eqE} E_0:=\|T_c(H_0,a_0)\|_{\rho}+\|R_c(H_0,v_0,a_0)\|_{\rho} \end{equation} are smaller than a certain constant $\ep_0>0$ that depends on~$\rho'$ but not on~$c$, and if the approximate solution $(H_0,v_0,a_0)$ satisfies the generic assumption given by Equation~\eqref{eq.genM} below, then there exists a unique solution $(H,v,a)\in \dot\cH_{\rho'} \times\dot\cH_{\rho'}\times \RR^2$ to Equations~\eqref{eqTcH} and~\eqref{eqred2} (or, equivalently, \eqref{TR}) bounded as \begin{equation}\label{eqbound} \|H-H_0\|_{\rho'}+\|v-v_0\|_{\rho'}+|a-a_0|<CE_0\,. \end{equation} \end{lemma} \begin{proof} To set up a quadratic Newton iteration, we introduce corrections $(\xi,\eta,\al)$ to the approximate solution so that \[ (H_1,v_1,a_1):=(H_0,v_0,a_0)+ (\xi,\eta,\al) \] is a solution to the equations modulo a quadratic error, which is bounded by $CE_0^2$ (precise estimates will be shown later). We also take a constant $b_0$ ensuring that $[T_c(H_0,a_0)]_{\TT^2}=0$, and introduce the correction \[ b_1:=b_0+\beta\,, \] where the constant $\beta$ will be fixed later. Setting $E_H^0:=T_c(H_0,a_0)$ and $E_v^0:=R_c(H_0,v_0,a_0)$, we then obtain $(\xi,\eta,\al)$ as solutions to the linearized equations \begin{multline}\label{eqlin1} 2X_0\cdot\nabla\xi +2\nabla H_0\cdot (\alpha\nabla\vp)+2(a_0\nabla\vp)\cdot(\alpha\nabla\vp)\\ +2(1+c)^{1/2}(\alpha\om)-\be|Y|^2=-E_H^0 \end{multline} and \begin{multline}\label{eqlin2} (1+c)^{1/2}L_\om\eta-\Big(D(\nabla H_0)\circ(\id+v_0)\Big)\eta-(\al\nabla\vp)\circ(\id+v_0)\\ -\Big(D(a_0\nabla\vp)\circ(\id+v_0)\Big)\eta=-E_v^0+\nabla\xi\circ(\id+v_0)\,. \end{multline} In Equation~\eqref{eqlin1}, the vector field $X_0$ is defined as \[ X_0:=(1+c)^{1/2}\om+\nabla H_0+a_0\nabla\vp\,, \] and the constant $\beta$ will be chosen later to ensure the solvability of the equation (that is, so that a certain zero mean condition holds). In Equation~\eqref{eqlin2}, the symbol $D$ is used to denote the Jacobian matrix of a vector field. 
Taking the pullback of Equation~\eqref{eqlin1} with the diffeomorphism $\Phi_0:=\id+v_0$, defining the function $\hat\xi:=\xi\circ\Phi_0$, and using that \[ \Phi^*_0X_0=(1+c)^{1/2}\om-(I+Dv_0)^{-1}E_v^0\,, \] we can rewrite~\eqref{eqlin1} as \begin{multline*} 2(1+c)^{1/2}L_\om\hat\xi-2\Big((I+Dv_0)^{-1}E_v^0\Big)\hat\xi\\ =\Phi_0^*\Big(-E_H^0+\be|Y|^2-2\nabla H_0\cdot (\alpha\nabla\vp)-2(a_0\nabla\vp)\cdot(\alpha\nabla\vp)-2(1+c)^{1/2}(\alpha\om)\Big)\,. \end{multline*} In this equation, $I$ denotes the $2\times 2$ identity matrix. We also observe that if $\|v_0\|_\rho$ is small enough, the matrix $I+Dv_0$ is invertible. The second summand, which denotes the action of the vector field $-2(I+Dv_0)^{-1}E_v^0$ (understood as a first order differential operator) on the function $\hat\xi$, is in fact a quadratic term; precise estimates will be given below. Thus, we can drop this term and consider the following equation: \begin{multline}\label{eqlin1b} 2(1+c)^{1/2}L_\om\hat\xi\\ =\Phi_0^*\Big(-E_H^0+\be|Y|^2-2\nabla H_0\cdot (\alpha\nabla\vp)-2(a_0\nabla\vp)\cdot(\alpha\nabla\vp)-2(1+c)^{1/2}(\alpha\om)\Big)\,. \end{multline} Following Zehnder~\cite{Zehnder}, to study Equation~\eqref{eqlin2} we define a new function $\widetilde\eta$ as \[ \eta=:(I+Dv_0)\widetilde\eta\,. \] Computing the Jacobian matrix of the equation that defines $E_v^0$, one obtains the identity \begin{multline*} DE_v^0=(1+c)^{1/2}L_\om(Dv_0)-\Big(D(\nabla H_0)\circ(\id+v_0)\Big)(I+Dv_0)\\ -\Big(D(a_0\nabla\vp)\circ(\id+v_0)\Big)(I+Dv_0)\,. \end{multline*} Plugging this expression into Equation~\eqref{eqlin2}, and dropping the term $(DE_v^0)\widetilde\eta$, which is quadratic, we can write \begin{align}\label{eqlin2b} L_\om\widetilde\eta=\frac{1}{(1+c)^{1/2}}(I+Dv_0)^{-1}\Big((\al\nabla\vp+\nabla\xi)\circ(\id+v_0)-E_v^0\Big)\,. \end{align} Summarizing, we have replaced the original linearized system of equations with the equivalent linear cohomological Equations~\eqref{eqlin1b} and~\eqref{eqlin2b}. Choosing the constant $\beta$ in Equation~\eqref{eqlin1b} so that \[ \Big[\Phi_0^*\Big(-E_H^0+\be|Y|^2-2\nabla H_0\cdot (\alpha\nabla\vp)-2(a_0\nabla\vp)\cdot(\alpha\nabla\vp)-2(1+c)^{1/2}(\alpha\om)\Big)\Big]_{\TT^2}=0\,, \] Equation~\eqref{eqlin1b} admits a unique zero-mean solution $\xi$, depending on the constant vector $\alpha$, which is of the form \begin{equation*} \xi=\xi^E+(\xi_1^H+\xi_1^a+\xi_1^\om)\alpha_1+(\xi_2^H+\xi_2^a+\xi_2^\om)\alpha_2\,. \end{equation*} Here \begin{align*} \xi^E&:=\Phi_{0*}L_\om^{-1}\Phi_0^*\Bigg(\frac{\beta_{0}|Y|^2-E_H^0}{2(1+c)^{1/2}}\Bigg)\,,\\ \xi^H_i&:=\Phi_{0*}L_\om^{-1}\Phi_0^*\Bigg(\frac{\beta_{1}^{(i)}|Y|^2-\nabla H_0\cdot\nabla\vp_i}{(1+c)^{1/2}}\Bigg)\,,\\ \xi^a_i&:=\Phi_{0*}L_\om^{-1}\Phi_0^*\Bigg(\frac{\beta_{2}^{(i)}|Y|^2-(a_0\nabla\vp)\cdot\nabla\vp_i}{(1+c)^{1/2}}\Bigg)\,,\\ \xi^\om_i&:=\Phi_{0*}L_\om^{-1}\Phi_0^*\Bigg(\beta_{3}^{(i)}|Y|^2-\om_i\Bigg)\,, \end{align*} and the constants $\beta_{0},\beta_{1}^{(i)},\beta_{2}^{(i)},\beta_{3}^{(i)}$ (with $i=1,2$) guarantee that all the above functions of the form $\Phi_0^*\big(\cdots\big)$ have zero mean. This ensures that the action of the operator $L_\om^{-1}$ (mapping functions of zero mean to functions of zero mean) is well defined. Note, in particular, that \[ \beta=\beta_{0}+(\beta_{1}^{(1)}+\beta_{2}^{(1)}+\beta_{3}^{(1)})\alpha_1+(\beta_{1}^{(2)}+\beta_{2}^{(2)}+\beta_{3}^{(2)})\alpha_2\,. 
\]
Next, let us plug the expression for $\xi$ into Equation~\eqref{eqlin2b} and consider the $2\times 2$ matrix-valued function $M^{(0)}$ whose columns are the vector fields
\[
\nabla\vp_i+\nabla(\xi^H_i+\xi^a_i+\xi^\om_i)
\]
with $i=1,2$. Equation~\eqref{eqlin2b} is then solvable if and only if one can pick a vector $\alpha\in\RR^2$ such that
\begin{equation}\label{eqalpha}
\Big[(I+Dv_0)^{-1}M^{(0)}\circ(\id+v_0)\Big]_{\TT^2}\alpha=\Big[(I+Dv_0)^{-1}\big(E_v^0-\nabla \xi^E\circ(\id+v_0)\big)\Big]_{\TT^2}\,.
\end{equation}
This linear equation has a solution if and only if the matrix $M^{(0)}$ satisfies the invertibility condition
\begin{equation}\label{eq.genM}
\det\Big[(I+Dv_0)^{-1}M^{(0)}\circ(\id+v_0)\Big]_{\TT^2}\neq0\,.
\end{equation}
This is the generic assumption appearing in the statement of the lemma. Note that Equation~\eqref{eq.genM} only depends on $(H_0,v_0,a_0)$, on the vector field $Y$ and on the domain $\Om$. We have then proved that, fixing the constants $\alpha$ and $\beta$ as above, there is a unique solution $(\xi,\widetilde\eta)$ to the linearized equations~\eqref{eqlin1b}-\eqref{eqlin2b}.

Now let us estimate the analytic norms of these solutions to show that this scheme is indeed quadratic, and that it can be iterated because the norms of the corrected approximate solutions are uniformly bounded. For this we use R\"ussmann estimates~\cite{Rus}: if $A\in \dot\cH_\rho$ and $\om$ is a Diophantine vector, for each~$\de>0$ the cohomological equation $L_\om B =A$ has a unique solution $B\in \dot\cH_{\rho-\delta}$ that can be bounded as
$$\|B\|_{\rho-\delta} \leq C \gamma^{-1} \delta^{-\tau} \|A\|_\rho\,.$$
The constant $C$ is independent of~$\de$.

In what follows, let us fix a small constant $0<\delta<\rho$ (which will measure the loss of the band of analyticity in the iteration) and assume that $E_0<\ep_0$ (cf. Equation~\eqref{eqE}), where $\ep_0$ satisfies the smallness condition
\begin{equation}\label{eqE2}
{\ep_0}\ll \gamma^6\delta^{9+6\tau}\,.
\end{equation}
By assumption, $\|v_0\|_\rho<\ep_0$, and hence~\eqref{eqE2} implies that $\|v_0\|_\rho\ll\de$, which guarantees, by the Cauchy estimate, that $I+Dv_0$ is close to the unit matrix. A straightforward computation using R\"ussmann estimates and the Cauchy estimate for derivatives then implies
\begin{align*}
\|\xi^E\|_{\rho-\de}&\leq C\ga^{-1}\de^{-\tau}\|E_H^0\|_{\rho}\,,\\
\|\xi_i^H\|_{\rho-2\de}&\leq C\ga^{-1}\de^{-\tau-1}\|H_0\|_\rho\,,\\
\|\xi_i^a\|_{\rho-\de}&\leq C\ga^{-1}\de^{-\tau}|a_0|\,,\\
\|\xi_i^\om\|_{\rho-\de}&\leq C\ga^{-1}\de^{-\tau}\,.
\end{align*}
Here we have used that the condition $\|v_0\|_\rho\ll\de$ ensures that the diffeomorphism $\Phi_0$ is close to the identity, and hence the constant $C$ can be taken independent of $v_0$. The constant does depend on $Y$ and $\Om$, though. Solving for $\alpha$ in Equation~\eqref{eqalpha}, one then concludes
\begin{align*}
&|\alpha|\leq C\Big(\|E_v^0\|_{\rho}+\ga^{-1}\de^{-1-\tau}\|E_H^0\|_{\rho}\Big)\leq C\ga^{-1}\de^{-1-\tau}E_0\,,\\
&\|\xi\|_{\rho-2\de}\leq C\Big(\ga^{-1}\de^{-\tau}\|E_H^0\|_{\rho}+\ga^{-2}\de^{-2-2\tau}E_0\Big)\leq C\ga^{-2}\de^{-2-2\tau}E_0\,.
\end{align*}
Now, solving for $\widetilde\eta$ in Equation~\eqref{eqlin2b}, and using again that $\|v_0\|_{\rho}\ll\de$, we readily estimate~$\eta$ as
\begin{equation*}
\|\eta\|_{\rho-4\de}\leq C\ga^{-3}\de^{-3-3\tau}E_0\,.
\end{equation*}
Analogously, the constant $\beta$ in Equation~\eqref{eqlin1b} can be bounded as
\[
|\beta|\leq C\Big(\|E_H^0\|_{\rho}+\de^{-1}|\al|\Big)\leq C\ga^{-1}\de^{-2-2\tau}E_0\,.
\]
In these bounds, the constant $C$ only depends on $\|H_0\|_\rho$, $|a_0|$, $Y$ and $\Om$.

In order to show that this scheme can be iterated, let us check that the norms of the corrected approximate solution $(H_1,v_1,a_1)$ and the constant $b_1$ remain uniformly bounded. Indeed, if $\|H_0\|_\rho+|a_0|+|b_0|<\varkappa_0$ for some positive constant $\varkappa_0$, we can easily derive that
\begin{align*}
&\|H_1\|_{\rho-2\de}+|a_1|+|b_1|\leq \|H_0\|_{\rho}+|a_0|+|b_0|+C\ga^{-2}\de^{-2-2\tau}E_0<\varkappa_0\,,
\end{align*}
where in the last inequality we have used the assumptions~\eqref{eqE} and~\eqref{eqE2}. Analogously, if $E_0$ is small enough (cf. Equation~\eqref{eqE2}),
\begin{align*}
\|v_1\|_{\rho-4\de} &\leq \|v_0\|_{\rho}+C\ga^{-3}\de^{-4-3\tau}E_0\ll \de\,,\\
\|(I+Dv_1)-(I+Dv_0)\|_{\rho-5\de}&=\|D\eta\|_{\rho-5\de}\leq C\ga^{-3}\de^{-4-3\tau}E_0\ll 1\,.
\end{align*}
The generic assumption~\eqref{eq.genM} is then satisfied in the iteration because the difference $M^{(1)}-M^{(0)}$ is bounded as
\[
\|M^{(1)}-M^{(0)}\|_{\rho-5\de}\leq C\ga^{-3}\de^{-4-3\tau}E_0\ll 1\,.
\]
Therefore, if $\de$ is small enough,
\begin{multline*}
\Big|\det\Big[(I+Dv_1)^{-1}M^{(1)}\circ(\id+v_1)\Big]_{\TT^2}\Big|\\
\geq \Big|\det\Big[(I+Dv_0)^{-1}M^{(0)}\circ(\id+v_0)\Big]_{\TT^2}\Big|-C\de>0
\end{multline*}
because $M^{(0)}$ satisfies the invertibility condition~\eqref{eq.genM} by hypothesis.

To complete the proof of the lemma, we have to check that the new errors $E_H^1:=T_c(H_1,a_1)$ and $E_v^1:=R_c(H_1,v_1,a_1)$ are quadratic with respect to the errors $E_H^0$ and $E_v^0$. This follows from the fact that
\begin{align*}
E_H^1&=(\nabla\xi)^2+2\nabla\xi\cdot(\alpha\nabla\vp)+(\alpha\nabla\vp)^2-2\Big((I+Dv_0)^{-1}E_v^0\Big)\xi\circ\Phi_0\,,\\
E_v^1&=\Big(D^2(\nabla H_0+a_0\nabla\vp)\circ\Phi_0\Big) \eta\otimes\eta\\
&\qquad\qquad+\Big(D(\nabla\xi-\al\nabla\vp)\circ\Phi_0\Big)\eta+\Big(DE_v^0(I+Dv_0)^{-1}\Big)\eta\,.
\end{align*}
A straightforward computation using the previous estimates shows
\begin{align*}
\|E_H^1\|_{\rho-3\de}&\leq C\ga^{-4}\de^{-6-4\tau}E_0^2\,,\\
\|E_v^1\|_{\rho-4\de}&\leq C\ga^{-6}\de^{-9-6\tau}E_0^2\,,
\end{align*}
so the scheme is indeed quadratic. In particular,
\[
E_1:=\|E_H^1\|_{\rho-3\de}+\|E_v^1\|_{\rho-4\de}\leq C\ga^{-6}\de^{-9-6\tau}E_0^2\,,
\]
so the new error $E_1$ is smaller than~$\ep_0$ because of the smallness condition~\eqref{eqE2}. It is now standard that the scheme can be iterated to yield a unique solution $(H,v,a)\in \dot\cH_{\rho'}\times \dot\cH_{\rho'} \times \RR^2$ to Equations~\eqref{eqTcH} and~\eqref{eqred2} that is bounded as in Equation~\eqref{eqbound}, with $\rho':=\rho-8\de$. This is an easy consequence of the previous estimates and the following well known lemma:
\begin{lemma}\label{lem:conv2}
Let $\{E_n\}_{n=0}^\infty$ be a sequence of positive real numbers such that
\[
E_{n+1}\leq C\ga^{-a}\de_n^{-b-a\tau}E_n^2
\]
for some constant $C>0$, positive reals $a,b$, and small constant $\de_0$, with $\de_{n+1}:={\de_n}/{2}$. Then $E_n\to0$ as $n\to\infty$ provided that ${E_0}\ll {\gamma^a \de_0^{b+a\tau}} $.
\end{lemma}
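For completeness, we sketch the standard argument behind Lemma~\ref{lem:conv2}. Setting $D:=C\ga^{-a}\de_0^{-b-a\tau}$ and $\kappa:=2^{b+a\tau}>1$, the recursion reads $E_{n+1}\leq D\kappa^{n}E_n^2$, so the quantities $F_n:=D\kappa^{n+1}E_n$ satisfy $F_{n+1}\leq F_n^2$. Hence
\[
E_n\leq \frac{(D\kappa E_0)^{2^n}}{D\kappa^{n+1}}\to 0
\]
as $n\to\infty$ whenever $D\kappa E_0<1$, which is precisely the smallness condition on~$E_0$. The result is then proven.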
\end{proof}

\section{A KAM theorem adapted to divergence-free vector fields}\label{sec:teo}

In this section we prove a KAM theorem that applies to divergence-free vector fields in $\RR^3$ that are not necessarily close to integrable ones or in Birkhoff normal form. Another key technical advantage of this result is that the twist condition is written in terms of the (approximately) invariant torus and the vector field itself, so one does not need any fine information about the trajectories of the vector field. The proof follows the parametrization method as presented in~\cite{LGJV} in the context of Hamiltonian systems. It is convenient to study the reducibility properties of the invariant tori using embeddings $\TT^2\to\RR^3$, denoting by~$\vp$ the natural coordinates on the torus. As before, in this section $D$ denotes the Jacobian matrix of a vector-valued function and $|\cdot|$ stands for the norm computed with the induced metric.

\subsection{Statement of results}
Let $B$ be an analytic divergence-free vector field in $\RR^3$ and let $\omega \in \RR^2$ be a frequency vector satisfying the Diophantine condition~\eqref{eq:diof}. Let $K:\TT^2 \to\RR^3$ be an analytic embedding, which parametrizes a torus $\cT:=K(\TT^2)\subset\RR^3$ that we will eventually assume to be approximately invariant under $B$ with frequency $\omega$. The associated error, or defect, of invariance is measured using the function
\begin{equation}\label{eq:error}
E(\vp):=L_\omega K(\vp) -B(K(\vp))\,,
\end{equation}
where, as before, $L_\omega K(\vp):= DK(\vp) \omega$ is the derivative of~$K$ in the direction of~$\om$. We still denote by
\begin{equation}
G_K(\vp):=DK(\vp)^\top DK(\vp)
\end{equation}
the matrix representation of the pull-back to $\TT^2$ of the Euclidean metric. It is obviously a nonsingular $2\times2$ matrix because $K$ is an embedding. As $K$ and~$B$ are analytic, they can be analytically continued to a complex strip $\Delta(\rho)$ of the form~\eqref{Derho}.

An important ingredient of a KAM theorem is its twist condition, which in this case we define in terms of the embedding $K$ and the vector field $B$ as follows:
\begin{definition}\label{def:ndeg}
The embedding $K:\TT^2\to\RR^3$ is {\em twist}\/ for the vector field $B$ if the twist constant
\begin{equation*}
T:=[A_1]_{\TT^2}\om_2-[A_2]_{\TT^2}\om_1\equiv[A]_{\TT^2}\cdot \omega^\perp
\end{equation*}
is nonzero. Here the $2\times 1$ matrix $A$ is defined as
\begin{equation}\label{eq:condA}
A(\vp) := - \frac{G_K(\vp)^{-1} DK(\vp)^\top [DB(K(\vp))^\top + DB(K(\vp))]\, n(\vp)}{|n(\vp)|^2}
\end{equation}
and $n$ is the vector field normal to the torus $\cT$ given by
\begin{equation}\label{eq:n}
n(\vp):=\partial_{\vp_1}K(\vp) \times \partial_{\vp_2}K(\vp)\,.
\end{equation}
\end{definition}

The main result of this section, which is of interest in itself, is the following KAM theorem. We recall that two tori are said to be {\em $\eta$-close}\/ if there is a diffeomorphism $\Psi$ of~$\RR^3$ mapping one into the other and such that $\|\Psi-\id\|_{C^r}<\eta$, where $r\geq4$ is any fixed integer.
\begin{theorem}\label{teo:kam:div}
Let $B$, $\omega$, $\rho$ and $K$ be as above. Assume that the embedding $K$ is twist with respect to the vector field $B$. If the invariance error $\|E\|_\rho$ is small enough, there is a constant $\lambda_*$ and an analytic Diophantine invariant torus $\cT_*$ of~$B$ with frequency vector $\omega_* := (1+\lambda_*)\,\omega$.
Furthermore, $|\la_*|\leq C \|E\|_\rho$ and the torus $\cT_*$ is $(C \|E\|_\rho)$-close to~$\cT:=K(\TT^2)$.
\end{theorem}

The following corollary, which is a standard consequence of Theorem~\ref{teo:kam:div}, is the result we employed in the proof of Theorem~\ref{T:main}.
\begin{corollary}\label{Cor_KAM}
Assume that the (analytic, divergence-free) vector field $B$ has an invariant torus $\cT$ with Diophantine frequency vector $\om$. If $\cT$ is twist (in the sense of Definition~\ref{def:ndeg}), then it is accumulated (in both connected components of the complement $\RR^3\backslash\cT$) by a set of Diophantine analytic invariant tori of~$B$.
\end{corollary}
\begin{proof}
Simply apply Theorem~\ref{teo:kam:div} to the triple $(B,\om',K)$, with a Diophantine frequency vector $\om'\neq \om$ which is very close to $\om$ and has the same Diophantine constants $(\ga,\tau)$. This ensures that the invariance error is as small as one wishes. The accumulation property of the statement follows from the fact that the set of Diophantine numbers with fixed $(\ga,\tau)$ has positive measure in any neighborhood of~$\om$. The invariant torus $\cT'$ with frequency vector $(1+\la')\om'$ that one obtains with Theorem~\ref{teo:kam:div} lies in the exterior component of $\RR^3\backslash\cT$ or in the interior one depending on whether $\frac{\om'_2}{\om'_1}$ is smaller or bigger than $\frac{\om_2}{\om_1}$ (because of the twist condition).
\end{proof}

To prove Theorem~\ref{teo:kam:div} we will iteratively correct the embedding and the frequency vector by means of the Newton method. Denoting the corrected quantities by
\begin{align*}
\bar K(\vp)&:=K(\vp)+\Delta_K(\vp)\,,\\
\bar \omega&:=\omega+\delta_\omega\,,
\end{align*}
one is led to choosing $\Delta_K(\vp)$ as a solution of the linearized equation
\begin{equation}\label{eq:cR}
\cR(\Delta_K(\vp)) := L_\omega \Delta_K(\vp) - DB(K(\vp)) \Delta_K(\vp) = - E(\vp)-DK(\vp) \delta_\omega.
\end{equation}
Our next goal is to analyze this equation, which will involve developing a geometric setting in which the analytic properties of the equation are laid bare. The most efficient way to do this is by first considering the (trivial) case where $\cT$ is an actual invariant torus of~$B$. This case will be considered in the next subsection. Subsequently, in Subsection~\ref{sec:app:inv} we will refine this approach to deal with the approximately invariant case, which will enable us to prove Theorem~\ref{teo:kam:div}.

\subsection{Geometric study of the invariant case}\label{sec:inv}
In this subsection we shall assume that $\calT$ is an invariant torus of the (divergence-free) vector field $B$ with frequency vector $\omega \in \RR^2$, which we parametrize by the map $K: \TT^2 \to \RR^3$. The invariance equation reads as
\begin{equation}\label{eq:inv1}
L_\omega K(\vp) = B(K(\vp))\,.
\end{equation}
The columns of the matrix $DK$ are obviously a basis of the tangent space $T_{K(\vp)} \calT$. With $n(\vp)$ being the normal vector~\eqref{eq:n}, the key geometric observation is that the frame
\begin{equation}\label{eq:basis1}
\left(DK(\vp), \frac{n(\vp)}{|n(\vp)|^2} \right)
\end{equation}
greatly simplifies the analysis of the operator~$\cR$. More precisely, in this frame~\eqref{eq:basis1}, the linear operator~$\cR$ has a triangular structure that reduces the study of Equation~\eqref{eq:cR} to that of two cohomological equations with constant coefficients. To prove this, we start by computing the action of the operator $\cR$ on the frame~\eqref{eq:basis1}.
First, by taking derivatives on both sides of Equation~\eqref{eq:inv1}, we obtain
\[
L_\omega DK(\vp) = DB(K(\vp)) DK(\vp)\,,
\]
which implies that $\cR(DK)=0$. To compute $\cR (\frac{n(\vp)}{|n(\vp)|^2})$, we use the following lemma:
\begin{lemma}\label{lem:trace}
If Equation~\eqref{eq:inv1} is satisfied,
\[
L_\omega n(\vp)=-DB(K(\vp))^\top n(\vp).
\]
\end{lemma}
\begin{proof}
A direct computation yields
\begin{align*}
L_\omega n(\vp) = {} & L_\omega (\partial_{\vp_1} K(\vp)) \times \partial_{\vp_2} K(\vp) + \partial_{\vp_1} K(\vp) \times L_\omega (\partial_{\vp_2} K(\vp)) \\
= {} & \big(DB(K(\vp)) \partial_{\vp_1} K(\vp)\big) \times \partial_{\vp_2} K(\vp) + \partial_{\vp_1} K(\vp) \times \big(DB(K(\vp)) \partial_{\vp_2} K(\vp)\big)\,.
\end{align*}
The lemma then follows from the elementary identity
\[
(U v_1) \times v_2 + v_1 \times (U v_2) + U^\top (v_1 \times v_2) = 0\,,
\]
which holds whenever $U$ is a $3\times 3$ traceless matrix and $v_1$, $v_2$ are $3\times 1$ vectors: it suffices to take $U=DB(K(\vp))$, which is traceless because $B$ is divergence-free.
\end{proof}

We are ready to compute the action of the linear operator $\cR$ on the normal vector:
\begin{align}
\cR\left( \frac{n(\vp)}{|n(\vp)|^2} \right) &= L_\omega \left( \frac{n(\vp)}{|n(\vp)|^2} \right) - \frac{DB(K(\vp)) n(\vp)}{|n(\vp)|^2} \nonumber \\
&= \frac{L_\omega n(\vp)}{|n(\vp)|^2} + n(\vp) L_\omega [ (|n(\vp)|^2)^{-1} ] - \frac{DB(K(\vp)) n(\vp)}{|n(\vp)|^2} \nonumber\\
&= \frac{L_\omega n(\vp)-DB(K(\vp))n(\vp)}{|n(\vp)|^2} - \frac{n(\vp) \big((L_\omega n(\vp))^\top n(\vp)+n(\vp)^\top L_\omega n(\vp)\big)}{|n(\vp)|^4}\nonumber\\
& = -\frac{[DB(K(\vp))^\top+DB(K(\vp))]\,n(\vp)}{|n(\vp)|^2}\nonumber\\
&\qquad\qquad+\frac{n(\vp)^\top[DB(K(\vp))^\top+DB(K(\vp))]\,n(\vp)}{|n(\vp)|^4}\,n(\vp)\,. \label{eq:Rn_exp}
\end{align}
Here we have used Lemma~\ref{lem:trace} to pass to the last line. This expression can be written in the frame~\eqref{eq:basis1} as
\begin{equation}\label{eq:Rn1}
\cR\left(\frac{n(\vp)}{|n(\vp)|^2} \right) = DK(\vp) A(\vp) \,,
\end{equation}
where the $2 \times 1$ matrix $A(\vp)$ is
\begin{equation}\label{eq:twist}
A(\vp) = - \frac{G_K(\vp)^{-1} DK(\vp)^\top [DB(K(\vp))^\top + DB(K(\vp))]\, n(\vp)}{|n(\vp)|^2}\,.
\end{equation}
Observe that $A(\vp)$ is precisely the matrix appearing in Equation~\eqref{eq:condA}.

\subsection{Proof of Theorem~\ref{teo:kam:div}}\label{sec:app:inv}
Our goal now is to use the frame introduced in the previous subsection to analyze Equation~\eqref{eq:cR}, which determines the small corrections $\Delta_K$ and $\delta_\omega$. The problem ultimately boils down to studying the inverse of the operator $\cR$. Taking derivatives in Equation~\eqref{eq:error}, we obtain
\begin{equation}\label{eq:aprox}
L_\omega DK(\vp)=DB(K(\vp))DK(\vp)+DE(\vp)\,.
\end{equation}
Arguing as in the proof of Lemma~\ref{lem:trace} but using this equation instead of~\eqref{eq:inv1}, we prove the following:
\begin{lemma}\label{L:aprox}
The normal vector $n(\vp)$ satisfies the equation
\[
L_\omega n(\vp)=-DB(K(\vp))^\top n(\vp) + \partial_{\vp_1} E(\vp) \times \partial_{\vp_2} K(\vp) + \partial_{\vp_1} K(\vp) \times \partial_{\vp_2} E(\vp)\,.
\]
\end{lemma}

This allows us to compute the quantity $\cR (\frac{n(\vp)}{|n(\vp)|^2})$ as in~\eqref{eq:Rn_exp}:
\begin{equation}\label{eq:cR:normal}
\cR\left(\frac{n(\vp)}{|n(\vp)|^2} \right) = DK(\vp) (A(\vp)+B(\vp)) + \frac{n(\vp)}{|n(\vp)|^2} b(\vp)\,,
\end{equation}
where $A(\vp)$ is the vector~\eqref{eq:twist} and
\begin{align}\label{eq:B}
B(\vp)&:=\frac{G_K(\vp)^{-1} DK(\vp)^\top( \partial_{\vp_1} E(\vp) \times \partial_{\vp_2} K(\vp) + \partial_{\vp_1} K(\vp) \times \partial_{\vp_2} E(\vp))}{|n(\vp)|^2}\,,\\
\label{eq:b}
b(\vp) &:= -\frac{(\partial_{\vp_1} E(\vp) \times \partial_{\vp_2} K(\vp) + \partial_{\vp_1} K(\vp) \times \partial_{\vp_2} E(\vp))^\top n(\vp)}{|n(\vp)|^2}\,.
\end{align}
The following lemma shows how to get an approximate solution of Equation~\eqref{eq:cR}, modulo a quadratic error, using solutions to a pair of cohomological equations:
\begin{lemma}\label{lem:corr}
Suppose that the functions $\xi_1,\xi_2$ on~$\TT^2$ satisfy the cohomological equations
\begin{align}
L_\omega \xi_1(\vp) + A(\vp) \xi_2(\vp) = {} & -G_K(\vp)^{-1} DK(\vp)^\top E(\vp) -\delta_\omega\,, \label{eq:xi1}\\
L_\omega \xi_2(\vp) = {} & -n(\vp)^\top E(\vp)\,. \label{eq:xi2}
\end{align}
Then
\begin{equation}\label{eq:Delta}
\Delta_K(\vp) := DK(\vp) \xi_1(\vp)+\frac{n(\vp)}{|n(\vp)|^2} \,\xi_2(\vp)
\end{equation}
solves Equation~\eqref{eq:cR} modulo a quadratic error. More precisely,
\begin{equation}\label{eq:newerror}
\cR(\Delta_K(\vp))+E(\vp)+DK(\vp)\delta_\omega=DK(\vp) E_1(\vp) +\frac{n(\vp)}{|n(\vp)|^2} E_2(\vp)
\end{equation}
with
\begin{align}
E_1(\vp) := {} & G_K(\vp)^{-1} DK(\vp)^\top DE(\vp) \xi_1(\vp) +B(\vp) \xi_2(\vp)\,, \label{eq:E1}\\
E_2(\vp) := {} & n(\vp)^\top DE(\vp) \xi_1(\vp)+b(\vp) \xi_2(\vp)\,. \label{eq:E2}
\end{align}
\end{lemma}
\begin{remark}
We observe that, at least formally, the quantities $E_1(\vp)$ and $E_2(\vp)$ are quadratic in $E(\vp)$.
\end{remark}
\begin{proof}
First we compute
\begin{align*}
\cR(\Delta_K(\vp)) &= \cR(DK(\vp)) \xi_1(\vp) + DK(\vp) L_\omega \xi_1(\vp) + \cR\left(\frac{n(\vp)}{|n(\vp)|^2} \right) \xi_2(\vp) + \frac{n(\vp) L_\omega \xi_2(\vp)}{|n(\vp)|^2}\\[1mm]
&= DE(\vp) \xi_1 (\vp) + DK(\vp) \big(L_\omega \xi_1(\vp) + (A(\vp)+B(\vp))\xi_2(\vp)\big)\\
&\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad+ \frac{n(\vp)(L_\omega \xi_2(\vp)+b(\vp) \xi_2(\vp))}{|n(\vp)|^2}\,.
\end{align*}
Next, we plug this expression into Equation~\eqref{eq:newerror} and read off the errors $E_1(\vp)$ and $E_2(\vp)$:
\begin{align*}
E_1(\vp) &= G_K(\vp)^{-1}DK(\vp)^\top DE(\vp) \xi_1(\vp) + L_\omega \xi_1(\vp)+(A(\vp)+B(\vp)) \xi_2(\vp) \\
&\qquad\qquad\qquad\qquad\qquad+ G_K(\vp)^{-1} DK(\vp)^\top E(\vp) + \delta_\omega\,,\\
E_2(\vp) &= n(\vp)^\top DE(\vp) \xi_1(\vp) + b(\vp) \xi_2(\vp) + L_\omega \xi_2(\vp) + n(\vp)^\top E(\vp)\,.
\end{align*}
Then expressions~\eqref{eq:E1} and~\eqref{eq:E2} follow from Equations~\eqref{eq:xi1} and~\eqref{eq:xi2}.
\end{proof}

To solve the cohomological equations~\eqref{eq:xi1} and~\eqref{eq:xi2}, we need to ensure that the right-hand sides of these equations have zero mean. In the case of Equation~\eqref{eq:xi2}, a simple computation shows that
\begin{align*}
[n^\top E]_{\TT^2} &= {} \frac{1}{4\pi^2} \int_{\TT^2} n(\vp)^\top E(\vp) \,d\vp\\
&= \frac{1}{4\pi^2} \int_{\TT^2} n(\vp)^\top [L_\omega K(\vp)-B(K(\vp))] \,d\vp \\
&= {} -\frac{1}{4\pi^2} \int_{\TT^2} n(\vp)^\top B(K(\vp)) \,d\vp\,,
\end{align*}
where we have used that $n(\vp)^\top L_\omega K(\vp)=0$.
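Indeed, $L_\omega K(\vp)=DK(\vp)\,\omega$ is a linear combination of the tangent vectors $\partial_{\vp_1}K(\vp)$ and $\partial_{\vp_2}K(\vp)$, which are orthogonal to the normal vector $n(\vp)$.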
The last integral can be equivalently written as
\begin{align*}
\int_{\TT^2} n(\vp)^\top B(K(\vp)) \,d\vp&= \int_{\cT} N^\top B\, dS =\int_{\Omega} \Div B\, dx= 0\,,
\end{align*}
where $dS:=|n|\,d\vp$ is the induced area form on $\cT$, $N:=\frac{n}{|n|}$ is the unit normal and $\Omega$ is the domain in $\RR^3$ bounded by $\cT$. To obtain the last equality, we have also used that $B$ is divergence-free. We thus conclude that $[n^\top E]_{\TT^2} =0$, so there exists a unique zero mean solution~$\tilde\xi_2$ to Equation~\eqref{eq:xi2}. The general solution to this equation is
\[
\xi_2(\vp)= \tilde \xi_2(\vp)+[\xi_2]_{\TT^2}
\]
with any constant $[\xi_2]_{\TT^2}\in\RR$.

Plugging the expression for~$\xi_2$ in Equation~\eqref{eq:xi1}, we obtain that the average of the RHS of this equation is
\begin{equation}\label{eq:aver}
[A \tilde \xi_2]_{\TT^2} + [A]_{\TT^2} [\xi_2]_{\TT^2} + [G_K ^{-1} DK^\top E]_{\TT^2} + \delta_\omega\,.
\end{equation}
We now make the additional assumption that $\de_\om$ and $\om$ are collinear, so
\[
\delta_\omega:=\Lambda \omega
\]
for some $\La\in\RR$. This is crucial to preserve the Diophantine properties of the frequency vector. The twist condition in Definition~\ref{def:ndeg} then ensures that there is a unique pair $([\xi_2]_{\TT^2}, \La)\in\RR^2$ for which the quantity~\eqref{eq:aver} is~0.

We thus obtain an expression for the corrected embedding $\bar K(\vp):=K(\vp)+\Delta_K(\vp)$. The invariance error associated with the corrected embedding can be easily computed:
\begin{align*}
\bar E(\vp) := {} & L_{\bar\omega} \bar K(\vp)-B(\bar K(\vp)) = \Big(\cR(\Delta_K(\vp))+E(\vp) + DK(\vp)\delta_\omega\Big) \\
& +\Big( B(K(\vp))+DB(K(\vp))\Delta_K(\vp)-B(K(\vp)+\Delta_K(\vp))\Big)+O(E^2)\\
= {} & \Big(DK(\vp) E_1(\vp) +\frac{n(\vp)}{|n(\vp)|^2} E_2(\vp)\Big) + O(E^2)\,.
\end{align*}
Therefore, the new error is quadratic in $E(\vp)$ by Lemma~\ref{lem:corr}.

In view of the above estimates, it is now well known that, if a certain smallness assumption is satisfied, the associated quadratic scheme starting with $(K_0,\La_0):=(K,1)\in \dot\cH_\rho\times\RR^+$ converges to some $(K_*,\La_*)\in\dot\cH_{\rho'}\times\RR^+$. The embedded torus $\cT_*:=K_*(\TT^2)$ is then a Diophantine invariant torus of~$B$ with frequency vector $\om_*:=\La_*\om$. The details are standard (see e.g.~\cite{LGJV}), and go essentially as in Section~\ref{SS.prop}, so we will just sketch the argument. R\"ussmann estimates for the solutions $\xi_1,\xi_2$ of the cohomological equations~\eqref{eq:xi1}-\eqref{eq:xi2} ensure that, for any small $\de>0$,
\begin{align*}
\|\xi_2\|_{\rho-\delta} &\leq C\gamma^{-1} \delta^{-\tau} \|E\|_\rho\,,\\
\|\xi_1\|_{\rho-2\delta}& \leq C\gamma^{-2} \delta^{-2\tau} \|E\|_\rho\,.
\end{align*}
The Cauchy estimate then allows us to estimate $B,b,E_1,E_2$ and $\bar E$; in particular,
$$\|\bar E\|_{\rho-2\de} \leq C \gamma^{-4} \delta^{-4\tau} \|E\|^2_\rho\,.$$
These are the estimates for each step of the Newton method, which goes as follows:
\begin{enumerate}
\item Initialize the scheme with $(K_0,\La_0):=(K,1)$.
\item Given $(K_n,\La_n)$, set $\om_n:=\La_n\om$ and compute the invariance error $E_n:=L_{\omega_n} K_n-B\circ K_n$.
\item Construct the adapted frame by computing $DK_n$ and $n_n$.
\item Compute the new $2\times 1$ matrix $A_n$ as in Equation~\eqref{eq:twist}.
\item Solve the cohomological equations~\eqref{eq:xi1} and~\eqref{eq:xi2}, thus obtaining $\xi_{1,n}$ and $\xi_{2,n}$.
The new constant vector $\Lambda_{n+1} \omega_n$ is then obtained from Equation~\eqref{eq:aver}.
\item Compute $\Delta_{K_n}$ from Equation~\eqref{eq:Delta} and set $K_{n+1}:=K_n + \Delta_{K_n}$.
\item Repeat the iteration step with $\de_{n+1}:=\de_n/2$.
\end{enumerate}
Lemma~\ref{lem:conv2} ensures the existence of an invariant torus of~$B$ given by
\[
(K_*,\om_*):=\lim_{n\to\infty}(K_n,\La_n\om)\in \dot\cH_{\rho-4\de}\times\RR^2
\]
provided that ${\|E_0\|_{\rho}}\ll{\ga^4\delta^{4\tau}}$, so the theorem follows.

\section{Thin toroidal domains are generically nondegenerate}\label{S:nondeg}

As discussed in the Introduction, it is reasonable to expect that a ``generic'' toroidal domain satisfies both nondegeneracy conditions (cf. Definitions~\ref{D:torus} and~\ref{D:torusII}), but proving generic results for vectorial problems is often extremely hard. Our objective in this section is to prove an analog of this result in the class of thin toroidal domains, where one can analyze the harmonic field (and other Beltrami fields) in detail.

Given a closed smooth curve~$\ga:\TT\to\RR^3$, let us denote by $\Om(\ga,\ep)$ the toroidal domain (or tube) of thickness~$\ep$ defined by this curve:
\begin{equation}\label{Tgaep}
\Om(\ga,\ep):=\{x\in\RR^3: \text{dist}(x,\ga(\TT))<\ep\}\,.
\end{equation}
The main result of this section can then be stated as follows. When we say that the result holds for almost all small enough~$\ep$, we mean that there exists some $\ep_0>0$ and a subset $Z\subset[0,\ep_0]$ of measure zero such that the result holds for all $\ep\in (0,\ep_0]\backslash Z$.

\begin{proposition}\label{P:isot}
For a generic curve~$\ga$ (in the sense of a dense subset in the $C^r$~topology, for any $r\geq3$), the toroidal domain $\Om(\ga,\ep)$ is analytic and nondegenerate of type I and II for almost all small enough $\ep>0$.
\end{proposition}
\begin{proof}
Theorem~\ref{P:tubes} implies that, for a generic curve $\ga$, the toroidal domain $\Om_1:=\Om(\ga,\ep)$ admits a Beltrami field $B$ satisfying $\curl B=\la_1B$, for some constant $\la_1=O(\ep^3)$, and $\pd\Om_1$ is an analytic Diophantine invariant torus of $B$. This immediately establishes property~(i) in Definitions~\ref{D:torus} and~\ref{D:torusII}.

Let us next prove condition~(ii) in Definition~\ref{D:torus}. In the following computations we shall use the formulas and results obtained in~\cite{Acta} without further mention. Consider the coordinates $(\alpha,r,\theta):\Om_1\to \TT \times (0,1) \times \TT$ that parametrize $\Om_1$ (we are assuming, without any loss of generality, that the length of the core curve $\ga$ is $2\pi$); in particular, $\pd\Om_1$ corresponds to the surface $\{r=1\}$. In these coordinates, $Y:=B|_{r=1}$ takes the form
\[
Y=\pd_\al-\tau(\al)\pd_\theta+\cO(\ep)\,,
\]
where $\tau$ is the torsion of $\ga$, and $\cO(\ep)$ stands for a quantity whose $C^m$ norm is bounded by $C\ep$, for any fixed integer $m$. Using the expression of the metric on $\pd\Om_1$ induced from $\RR^3$, in coordinates $(\al,\theta)$, we can readily compute $|Y|^2$, which leads to
\begin{equation}\label{eq.Y}
|Y|^2=1+\cO(\ep)\,.
\end{equation}
Moreover, we can compute the linearizing coordinates $(\vp_1,\vp_2)$ in terms of $(\al,\theta)$ as
\[
\vp_1=\al+\cO(\ep)\,, \qquad \vp_2=\theta+\int_{0}^{\al}\tau(s)ds-[\tau]_{\TT^1}\al +\cO(\ep)\,,
\]
and hence $Y$ takes the following form in these coordinates:
\[
Y=\pd_{\vp_1}-[\tau]_{\TT^1}\pd_{\vp_2}+\cO(\ep)\,.
\]
This implies that the frequency vector of $Y$ (which is Diophantine) is given by
\[
\om_1=1+O(\ep)\,, \qquad \om_2=-[\tau]_{\TT^1}+O(\ep)\,.
\]
Accordingly, we infer from Equation~\eqref{eq.Y} that the function $\cR$ in Definition~\ref{D:torus} solves the cohomological equation
\[
(1+O(\ep))\pd_{\vp_1}\cR+(-[\tau]_{\TT^1}+O(\ep))\pd_{\vp_2}\cR=(1+O(\ep))-|Y|^2=\cO(\ep)\,.
\]
It then follows that $\cR=\cO(\ep)$. Putting together all these computations, we obtain a matrix $M$, cf.~Equation~\eqref{eq.gen}, of the form
\begin{equation*}
M=\int_{\TT^2}G^{-1}_\ep\cdot\left(
\begin{array}{cc}
1+\cO(\ep) & \cO(\ep) \\
\cO(\ep) & 1+\cO(\ep) \\
\end{array}
\right)d\vp\,,
\end{equation*}
where $G_\ep$ is the matrix of the metric (which depends on $\ep$) in $\vp$-coordinates. It is obvious that $M$ is invertible because the (inverse) metric matrix $G^{-1}_\ep$ is positive definite, which shows that $\Om_1$ is nondegenerate of type I. In fact, \cite[Theorem 7.8]{Acta} ensures that the twist constant $T$ of the invariant torus $\pd \Om_1$ of $B$ (see Definition~\ref{def:ndeg}) is
\begin{equation}\label{twistfinal}
T=c_\ga \ep^2+O(\ep^3)\,,
\end{equation}
where $c_\ga$ is a certain explicit constant that depends on the curve~$\ga$ through its curvature and torsion but neither on~$\ep$ nor on~$\la$. This constant is nonzero for a generic curve~$\ga$~\cite[Lemma 7.9]{Acta}.

To conclude, let us establish property~(ii) in Definition~\ref{D:torusII}. Straightforward computations using the formulas for thin tubes derived in~\cite{Acta} show that
\[
|\al\cdot \om^\perp |\leq C\,,\qquad |n|\geq C\ep\,,
\]
for some $\ep$-independent constant $C$. Here $\al$ is the $\RR^2$-valued function introduced in Definition~\ref{D:torusII}, and not an angular coordinate. One can then use~\eqref{twistfinal} to see that
\[
T+\la\,\bigg[\frac{\al\cdot \om^\perp}{|n|^2}\bigg]_{\TT^2}= c_\ga \ep^2+ O\bigg(\frac{\la}{\ep^2}\bigg)+O(\ep^3)\,.
\]
Therefore,
\[
T+\la\,\bigg[\frac{\al\cdot \om^\perp}{|n|^2}\bigg]_{\TT^2}\neq0
\]
provided that the nonzero constant~$\la$ satisfies $|\la|< C|c_\ga|\ep^4$ for a certain positive constant~$C$ independent of~$\ep$.
\end{proof}

\begin{remark}
The nondegenerate Beltrami field $B$ in the proof of Proposition~\ref{P:isot} has eigenvalue $\la_1=O(\ep^3)$. In particular, the plasma current $\curl B$ in the toroidal domain $\Om(\ga,\ep)$ is small, which is a desirable feature for stellarator design~\cite{Bo}.
\end{remark}

As any toroidal domain is isotopic to a thin tube, we infer that there are MHD equilibria of the kind described in Theorems~\ref{T:main} and~\ref{T:main2} that have arbitrary topology:
\begin{corollary}\label{C.main}
There exist piecewise smooth MHD equilibria with fixed or free toroidal boundaries of arbitrary topology. More precisely, given any toroidal domain $\Om_0\subset\RR^3$, one can assume that the domains $\Om_k$ of Theorem~\ref{T:main} and $\Om,\Om'$ of Theorem~\ref{T:main2} are diffeomorphic to~$\Om_0$.
\end{corollary}
\begin{proof}
It is an immediate consequence of Theorems~\ref{T:main} and~\ref{T:main2} and of the fact that, by Proposition~\ref{P:isot}, a generic thin tube is nondegenerate of type I and II.
\end{proof}

\section{Lipschitz continuous force-free fields with nonconstant factor}\label{S:ff}

A {\em force-free field}\/ in a domain $\Om$ is a vector field $B$ that satisfies the equations
\[
\curl B=f\,B\quad\text{in }\Om\,, \qquad \Div B=0\quad\text{in }\Om\,, \qquad B\cdot N=0\quad\text{on }\pd\Om
\]
for some scalar function~$f$.
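Note, in passing, that taking the divergence in the equation $\curl B=f\,B$ and using that $\Div B=0$ yields $B\cdot\nabla f=0$, so the proportionality factor~$f$ is necessarily a first integral of~$B$.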
In the context of hydrodynamics, these fields are called Beltrami fields with nonconstant factor~\cite{MYZ,ARMA}. It is obvious that a force-free field satisfies the MHD equations in $\Om$ with $P=0$. This connection is used to define force-free fields of low regularity. Specifically, one says that a vector field $B\in L^2(\Om)$ is {\em force-free}\/ if
\[
\int_{\Om} \left[(B\otimes B)\cdot \nabla w- \frac12|B|^2\Div w\right]\, dx=0\quad \text{and}\quad \int_{\Om}B\cdot \nabla\phi\,dx=0
\]
for any vector field $w\in C^1_c(\Om)$ and any scalar function $\phi\in C^1(\Om)$.

The strategy of the proof of Theorem~\ref{T:main} can be readily adapted to show the existence of Lipschitz-continuous force-free fields with nonconstant factor on toroidal domains of any topology. More precisely, one can prove the following:

\begin{theorem}\label{T:ff}
Let $B_1$ be a nondegenerate Beltrami field of type I with eigenvalue~$\la_1$ on an analytic toroidal domain~$\Om_1$. For any $N\geq2$ and almost all distinct constants $\{\la_k\}_{k=2}^N$, there exists a family of nested analytic toroidal domains $\{\Om_k\}_{k=1}^N$ as in Theorem~\ref{T:main} and a Lipschitz continuous vector field~$B$ satisfying the equation
\[
\curl B=f\,B\,,\qquad \Div B=0
\]
in~$\Om$, where the factor
\[
f:= \sum_{k=1}^N \la_k\,{1}_{\Om_k\backslash \overline{\Om_{k-1}}}
\]
is not constant in~$\Om_N$.
\end{theorem}
\begin{proof}
We use the same construction as in the proof of Theorem~\ref{T:main}, but we consider the particular case where $c_k:=0$ for all $k=2,\ldots,N$. This obviously implies that $b_k=0$ as well. The effect of this choice of constants is that the vector field $B_k$ in a neighborhood of $\pd\Om_{k-1}$ is constructed using the Cauchy--Kovalevskaya theorem with Cauchy datum given by $B_{k-1}|_{\pd\Om_{k-1}}$, so $\pd\Om_{k-1}$ is no longer a discontinuity surface of the magnetic field, and we do not need to use Theorem~\ref{L:cohom}. For almost all choices of the constants $\la_k$, the Diophantine invariant torus $\pd\Om_{k-1}$ one obtains is twist. This ensures the existence of a family of nested toroidal domains $\{\Om_k\}_{k=1}^N$ as above and of a weak solution $(B,P)$ to the MHD equations with constant pressure $P=0$. In each set $\Om_k\backslash\overline{\Om_{k-1}}$, $B$ satisfies the equation $\curl B=\la_kB$ and is analytic up to the boundary; in particular, $B$ is Lipschitz continuous on~$\Om_N$. The plasma current $\curl B$ is given by Remark~\ref{R:current}, and the singular terms supported on the toroidal surfaces vanish because~$B$ is continuous. This formula shows that $B$ is in fact a force-free field with piecewise constant factor~$f$ as above. The theorem is then proven.
\end{proof}

\begin{remark}
The assumption, appearing in the definition of nondegenerate toroidal domains of type I, that the matrix $M$ is invertible is not used in the proof of Theorem~\ref{T:ff} because here we do not need Theorem~\ref{L:cohom}. Thus, the theorem holds under the slightly weaker assumption on the domain~$\Om_1$ that it admits a Beltrami field for which the boundary is a Diophantine invariant torus.
\end{remark}
\begin{remark}
In view of Proposition~\ref{P:isot}, Theorem~\ref{T:ff} implies the existence of Lipschitz-continuous force-free fields with nonconstant factor on thin tubes of any topology.
\end{remark}

\section*{Acknowledgements}
A.E.\ is supported by the ERC Consolidator Grant~862342. A.L.\ is supported by the Swedish Research Council VR grant~2019-04591. D.P.-S.
is supported by the grants MTM PID2019-106715GB-C21 (MICINN) and Europa Excelencia EUR2019-103821 (MCIU). This work is supported in part by the ICMAT--Severo Ochoa grant CEX2019-000904-S.
\section{Introduction}
An essential assumption for the deployment of machine learning models in real-world applications is the alignment of training and testing data distributions. Under this condition, models are expected to generalize, yet real-world applications often fail to meet this assumption. Instead, continual distribution shift is widely observed in a range of applications. For example, satellite images of buildings and lands change over time due to city development \citep{christie2018functional}; self-driving cars receive data with quality degrading towards nightfall \citep{bobu2018adapting,wu2019ace}. Although this problem can be mitigated by collecting training data that covers a wide range of distributions, it is often impossible to obtain such a large volume of labeled data in many scenarios. On the other hand, neglecting the shifts between domains also leads to suboptimal performance.

Motivated by this commonly observed phenomenon of gradually shifting distributions, we study supervised gradual domain adaptation in this work. Supervised gradual domain adaptation models the training data as a sequence of batched data with underlying changing distributions, where the ultimate goal of learning is to obtain an effective classifier on the target domain at the last step. This relaxation of the data alignment assumption thus equips gradual domain adaptation with applicability in a wide range of scenarios. Compared with unsupervised gradual domain adaptation, where only unlabeled data is available along the sequence, in supervised gradual domain adaptation the learner also has access to labeled data from the intermediate domains. Note that this distinction in terms of problem setting is essential, as it allows for more flexible model adaptation and algorithm designs in supervised gradual domain adaptation.

The mismatch between training and testing data distributions has long been observed, and it has been addressed with conventional domain adaptation and multiple source domain adaptation \citep{duan2012learning,hoffman2013efficient,hoffman2018cycada,hoffman2018algorithms,zhao2018adversarial,wen2020domain,mansour2021theory} in the literature. Compared with the existing paradigms, supervised gradual domain adaptation poses new challenges for these methods, as it involves more than one training domain, and the training domains arrive in sequence. For example, in the existing setting of multiple-source domain adaptation~\citep{zhao2018adversarial,hoffman2018algorithms}, the learning algorithms try to adapt to the target domain in a one-off fashion. Supervised gradual domain adaptation, however, is more realistic, and allows the learner to take advantage of the temporal structure among the gradually changing training domains, which can lead to potentially better generalization due to the smaller distributional shift between each consecutive pair of domains.

Various empirically successful algorithms have been proposed for gradual domain adaptation \citep{hoffman14,gadermayr2018gradual,wulfmeier2018incremental,bobu2018adapting}. Nevertheless, we still lack a theoretical understanding of their limits and strengths. \citet{kumar2020understanding} provides the first algorithm-specific theoretical guarantee for unsupervised gradual domain adaptation. However, the given upper bound of the learning error on the target domain suffers from exponential dependency (in terms of the length of the trajectory) on the initial learning error on the source domain.
Such a dependency is often prohibitive in practice, and it was left open whether it can be alleviated in supervised gradual domain adaptation.

In this paper, we study the problem of gradual domain adaptation under a supervised setting where labels of training domains are available. We prove that the learning error on the target domain is only linearly dependent on the averaged error over the training domains, showing a significant improvement compared to the unsupervised case. We show that our results are comparable with the learning bound for multiple source training, and can be better in certain cases, while relaxing the requirement of access to all training domains upfront simultaneously. Further, our analysis is algorithm and loss function independent. Compared to previous theoretical results on domain adaptation, which used the $l_1$ distance \citep{mansour2009domain} or the $W_\infty$ distance to capture shifts between data distributions, our results are obtained under milder assumptions. We use the $W_p$ Wasserstein distance to describe the gradual shifts between domains, enabling our results to hold in a wider range of real applications. Our bound features two important ingredients to depict the problem structure: the sequential Rademacher complexity \cite{rakhlin2015online} is used to characterize the sequential structure of gradual domain adaptation, while the discrepancy measure \cite{kuznetsov2017generalization} is used to quantify the non-stationarity of the sequence.

Our theoretical results provide insights into empirical methods on gradual domain adaptation. Specifically, our bound highlights the following two observations:
\begin{itemize}
\item Effective representations in which the data drift is ``small'' help. Our theoretical results highlight an explicit term showing that representation learning can directly optimize the learning bound.
\item There exists an optimal time horizon (number of training domains) for supervised gradual domain adaptation. Our results highlight a trade-off between the time horizon and the learning bound.
\end{itemize}
Based on the first observation, we propose a min-max learning objective to learn representations concurrently with the classifier. Optimizing this objective, however, requires simultaneous access to all training domains. In light of this challenge, we relax the requirement of simultaneous access with temporal models that encode knowledge of past training domains. To verify our observations and the proposed objectives, we conduct experiments on both semi-synthetic datasets built on MNIST and large-scale real datasets such as FMOW \citep{christie2018functional}. Comprehensive experimental results validate our theoretical findings and confirm the effectiveness of our proposed objective.

\section{Related Work}
\paragraph{(Multiple source) domain adaptation}
Learning with shifting distributions appears in many learning problems. Formally referred to as domain adaptation, this has been extensively studied in a variety of scenarios, including computer vision \citep{hoffman14,venkateswara2017deep,zhao2019madan}, natural language processing \citep{blitzer2006domain,blitzer2007biographies,axelrod2011domain}, and speech recognition \citep{sun2017unsupervised,sim2018domain}.
When the data labels of the target domain are available during training, known as supervised domain adaptation, several parameter regularization based methods \citep{yang2007adapting,aytar2011tabula}, feature transformation based methods \citep{saenko2010adapting,kulis2011you}, and combinations of the two \citep{duan2012learning,hoffman2013efficient} have been proposed. The theoretical limits of domain adaptation have also been extensively studied \citep{david2010impossibility,zhao2019learning,wu2019domain,zhao2020fundamental}. The problem of adapting with multiple training domains, referred to as multiple source domain adaptation (MDA), has also been studied extensively. \citet{hoffman2018algorithms} first studied the asymptotic learning bounds for MDA. \citet{zhao2018adversarial} provides the first generalization bounds and proposed efficient adversarial neural networks to demonstrate empirical superiority. The theoretical results are further explored by \citet{wen2020domain} with a generalized notion of distance measure, and by \citet{mansour2021theory} when only limited labeled target data are available.

\paragraph{Gradual domain adaptation}~~ Many real-world applications involve data that come in sequence and are continuously shifting. \citet{hoffman14} addresses data from a continuously evolving distribution with a novel unsupervised manifold-based adaptation method. Follow-up works \citep{gadermayr2018gradual,wulfmeier2018incremental,bobu2018adapting} also proposed unsupervised approaches for this variant of gradual domain adaptation. \citet{kumar2020understanding} studied the problem of adapting to an unseen target domain with shifting training domains. Their result features the first theoretical guarantee for unsupervised gradual domain adaptation with a self-training algorithm, and highlights that learning with a gradually shifting domain can be potentially much more beneficial than direct adaptation. The work provides a theoretical understanding of the effectiveness of empirical tricks such as regularization and label sharpening. However, their results are obtained under rather stringent assumptions: the label distribution is assumed to remain unchanged, while the varying class conditional probability between any two consecutive domains must have bounded $W_\infty$ Wasserstein distance, which only covers a limited number of cases. Moreover, the loss functions are restricted to be the hinge loss and ramp loss, while the classifier is restricted to be linear. This result is later extended by \citet{chen2020self} with linear classifiers and Gaussian spurious features. The theoretical advances are complemented by recent empirical success in gradual domain adaptation. A recent work \citep{chen2021gradual} extends the unsupervised gradual domain adaptation problem to the case where intermediate domains are not already available. \citet{abnar2021gradual} and \citet{sagawa2021extending} provide the first comprehensive benchmarks and datasets for both supervised and unsupervised gradual domain adaptation.

\section{Preliminaries}
\subsection{Problem Formulation}
The problem of gradual domain adaptation proceeds sequentially through a finite time horizon $\{1, \dots, T\}$ with evolving data domains. A data distribution $P_t$ over ${\mathbb{R}}^d \times {\mathbb{R}}^k$ is realized at each time step, with the features denoted as $X \in {\mathbb{R}}^d$ and the labels as $Y \in {\mathbb{R}}^k$.
With a given loss function $\ell( \cdot, \cdot) $, we are interested in obtaining an effective classifier $h \in \mathcal{H}$, with $h: {\mathbb{R}}^d \rightarrow {\mathbb{R}}^k$, that minimizes the loss on the target domain $P_T$, which is also the last domain. With access to only $n$ samples from each intermediate domain $P_1, \ldots, P_{T-1}$, we seek to design algorithms that output a classifier at each time step, such that the final classifier performs well on the target domain.

Following the prior work~\citep{kumar2020understanding}, we assume the shift is gradual and the label distribution remains unchanged. To capture such a gradual shift, we use the Wasserstein distance to measure the change between any two consecutive domains. The Wasserstein distance covers a large range of cases, including the case where the two measures of the data domains are not on the same probability space~\citep{cai2020distances}.
\begin{defn}{(Wasserstein distance)}
The $p$-th Wasserstein distance, denoted as the $W_p$ distance, between two probability distributions $P, Q$ is defined as
\begin{align*}
W_p(P, Q) = \left( \inf_{\gamma \in \Gamma(P,Q)} \int \| x - y\|^p d \gamma (x,y)\right)^{1/p} \,,
\end{align*}
where $\Gamma(P,Q)$ denotes the set of all joint distributions $\gamma$ over $(X, Y)$ such that $X \sim P$, $Y \sim Q$.
\end{defn}
Intuitively, the Wasserstein distance measures the minimum cost needed to move one distribution to another. The flexibility of the Wasserstein distance enables us to derive tight theoretical results for a wider range of practical applications. In comparison, previous results leverage the $l_1$ distance \cite{mansour2009domain} or the Wasserstein-infinity $W_\infty$ distance \citep{kumar2020understanding} to capture non-stationarity. By the monotonicity of the $W_p$ distances in $p$, an assumption stated in terms of $W_1$ is weaker than one stated in terms of $W_\infty$, and the $W_1$ distance is also the one most commonly employed in practice due to its low computational cost. Previous literature hence offers limited insight into this more general scenario, which our results cover.

\subsection{Assumptions}
We formally describe the assumptions below.
\begin{asmp}\label{asmp:gradual}
For all $1 \leq t \leq T-1$ and some constant $\Delta > 0$, the $p$-th Wasserstein distance between the class conditional distributions $P_{t, X\mid Y =y}$ and $P_{t+1, X\mid Y = y}$ is bounded as
\begin{align*}
W_p(P_{t, X\mid Y = y}, P_{t+1, X\mid Y = y}) \leq \Delta,\quad\forall y\in\mathcal{Y} \,.
\end{align*}
\end{asmp}
\begin{asmp}\label{asmp:label_unchanged}
The label distribution remains unchanged throughout the time horizon, i.e., $\forall t \in [T]$ and $y\in\mathcal{Y}$, $P_t(Y =y) = P_{t+1}(Y =y)$.
\end{asmp}
We study the problem without restrictions on the specific form of the loss function; we only assume that the loss function is bounded and Lipschitz continuous. This covers a rich class of loss functions, including the logistic loss/binary cross-entropy and the hinge loss. Formally, let $\ell_h$ be the loss function, $\ell_h = \ell(h(x), y):\mathcal{X}\times\mathcal{Y}\to \mathbb{R}$. We have the following assumption.
\begin{asmp}\label{asmp:lip}
The loss function $\ell_h $ is $\rho$-Lipschitz continuous and bounded such that $\|\ell_h \|_\infty \leq M $.
\end{asmp}
This assumption is mild, as it holds for common losses when the input domain is compact. When the input domain fails to be compact, the assumption can be ensured by normalizing the data.
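In practice, one may also wish to check how large the drift $\Delta$ in Assumption~\ref{asmp:gradual} actually is on a given dataset. The following minimal sketch estimates the per-class empirical $W_1$ distance between samples from two consecutive domains; it assumes the POT optimal-transport library (\texttt{pip install pot}), and all variable names are illustrative.
\begin{verbatim}
import numpy as np
import ot  # POT: Python Optimal Transport

def classwise_w1(X_prev, y_prev, X_cur, y_cur):
    # Estimate max_y W1(P_t(X|Y=y), P_{t+1}(X|Y=y)) from samples;
    # assumes every class appears in both sample sets.
    drifts = []
    for y in np.unique(y_prev):
        A = X_prev[y_prev == y]
        B = X_cur[y_cur == y]
        M = ot.dist(A, B, metric='euclidean')  # pairwise ground costs
        a = np.full(len(A), 1.0 / len(A))      # uniform sample weights
        b = np.full(len(B), 1.0 / len(B))
        drifts.append(ot.emd2(a, b, M))        # exact OT cost = empirical W1
    return max(drifts)
\end{verbatim}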
Moreover, we note that Assumption~\ref{asmp:lip} is mainly for the convenience of technical analysis and is common in the literature \citep{mansour2009domain,cortes2011domain,kumar2020understanding}. Before we present the main theoretical result, we first define a few notations and definitions needed for the theorem and the proof sketch in the next section.

\paragraph{Notation} To simplify the notation, we let $Z = (X,Y)$ and use the shorthand $Z_1^T$ for $Z_1, \ldots, Z_T$.

\subsection{Other Technical Definitions}
Our first tool helps us characterize the structure of sequential domain adaptation. Under the statistical learning scenario with i.i.d.\ data, the Rademacher complexity serves as a well-known complexity notion to capture the richness of the underlying hypothesis space. In the presence of sequential dependence, however, classical notions of complexity are insufficient to describe the problem. To capture the difficulty of sequential domain adaptation, we use the sequential Rademacher complexity, which was originally proposed for online learning, where data arrive one by one in sequence \citep{rakhlin2015online}. To formally define the sequential Rademacher complexity, we need to first introduce the $\mathcal{Z}$-valued tree.
\begin{defn}[$\mathcal{Z}$-valued tree \citep{rakhlin2015online}]\label{def:zvalue}
A path of length $T$ is defined by a sequence $\epsilon=\left(\epsilon_{1}, \ldots, \epsilon_{T}\right) \in\{\pm 1\}^{T}$. Then, a $\mathcal{Z}$-valued tree $\mathbf{z}$ refers to a complete rooted binary tree where nodes are labeled with elements of $\mathcal{Z}$. The tree $\mathbf{z}$ can be identified with a sequence $\left(\mathbf{z}_{1}, \ldots, \mathbf{z}_{T}\right)$ of labeling functions $\mathbf{z}_{i}:\{\pm 1\}^{i-1} \rightarrow \mathcal{Z}$. Specifically, the root of the tree is labeled as $\mathbf{z}_{1} \in \mathcal{Z}$, and $\mathbf{z}_{i}$ for $i>1$ is the label of the node reached by following a path of length $i-1$ from the root, with $+1$ denoting ``right'' and $-1$ denoting ``left''.
\end{defn}
To lighten the notation, we use $\mathbf{z}_t(\epsilon)$ to denote $\mathbf{z}_t$, which depends only on $(\epsilon_1, \ldots, \epsilon_{t-1})$. When $(\epsilon_1, \ldots, \epsilon_{T})$ are Rademacher random variables, we can define the sequential Rademacher complexity of a function class $\mathcal{F}$ as follows.
\begin{defn}[Sequential Rademacher Complexity \citep{rakhlin2015online}]\label{def:seq_complexity}
For a function class $\mathcal{F}$, the sequential Rademacher complexity is defined as
\begin{align}
\mathfrak{R}_{T}^{\mathrm{seq}}(\mathcal{F})=\sup_{\mathbf{z}} \mathbb{E}\left[\sup _{f \in \mathcal{F}} \frac{1}{T}\sum_{t=1}^{T} \epsilon_{t} f\left(\mathbf{z}_{t}(\epsilon)\right)\right] \,,
\end{align}
where the supremum is taken over all $\mathcal{Z}$-valued trees of depth $T$.
\end{defn}
We next introduce the discrepancy measure, a key ingredient that helps us characterize the non-stationarity resulting from the shifting data domains. It can be used to bridge the shift in data distributions with the shift in errors incurred by the classifier.
\begin{defn}[Discrepancy measure \citep{kuznetsov2020discrepancy}] \label{def:discrepancy}
\begin{align}
\operatorname{disc}_T = &\sup _{h \in \mathcal{H}}\left(\mathbb{E}\left[\ell_h\left(X_{T}, Y_T\right) \mid Z_{1}^{T-1}\right] - \frac{1}{T-1}\sum_{t=1}^{T-1} \mathbb{E}\left[ \ell_h\left(X_{t}, Y_t\right) \mid Z_{1}^{t-1}\right]\right) \,.
\end{align}
\end{defn}
We will later show that the discrepancy measure can be directly upper bounded when the shift in the class conditional distributions is gradual. We also note that this notion is general and can feasibly be estimated from data in practice \cite{kuznetsov2020discrepancy}. Similar notions have also been used extensively in the analysis of non-stationary time series and mixing processes \cite{kuznetsov2014generalization,kuznetsov2017generalization}.

\section{Theoretical Results}
In this section, we provide our theoretical guarantees for the performance of the final classifier learned in the setting described above. Our result is algorithm agnostic and applies to any loss function that satisfies Assumption \ref{asmp:lip}. We then discuss the implications of our results and give a proof sketch to illustrate the main ideas. The following theorem gives an upper bound on the expected loss of the learned classifier on the last domain in terms of the shift $\Delta$, the sequential Rademacher complexity, and related quantities.
\begin{restatable}{thm}{mainthm} \label{thm:main}
Under Assumptions \ref{asmp:gradual}, \ref{asmp:lip}, with access to $n$ data points from each data distribution $P_t$, $t \in \{1, \ldots, T\}$, and loss function $\ell_h = \ell(h(x), y):\mathcal{X}\times\mathcal{Y}\to \mathbb{R}$, the loss on the last distribution incurred by the learned classifier $h_T \in \mathcal{H}$ can be upper bounded as
\begin{align}\label{eq:thm}
&\mathbb{E}\left[\ell_{h_T}\left(X_T, Y_{T}\right) \mid Z_{1}^{T-1}\right] \nonumber \\
&\leq \mathbb{E}\left[\ell_{h_0}\left(X_T, Y_T\right) \mid Z_{1}^{T-1}\right] + \underbrace{\frac{3}{T} +\frac{3M}{T} \sqrt{8 \log \frac{1}{\delta}}}_{E_1} + \underbrace{\frac{1}{T}\sqrt{\frac{\text{VCdim}(\mathcal{H}) + \log (2/\delta)}{2n}} + O\left(\frac{1}{\sqrt{nT}}\right)}_{E_2} \nonumber \\
&+ \underbrace{18 M \sqrt{4 \pi \log T} \mathfrak{R}_{T-1}^{s e q}(\mathcal{F})+ 3T\rho\Delta}_{E_3}\,,
\end{align}
where $\ell_h \in \mathcal{F}$, $\mathfrak{R}_{T}^{s e q}(\mathcal{F})$ is the sequential Rademacher complexity of $\mathcal{F}$, $\text{VCdim}(\mathcal{H})$ is the VC dimension of $\mathcal{H}$, and $h_{0}=\operatorname{argmin}_{h \in \mathcal{H}} \frac{1}{T}\sum_{t=1}^{T} \ell\left(h(X_t), Y_{t}\right)$.
\end{restatable}
When $\ell_h \in \mathcal{F}$ is bounded and convex, the sequential Rademacher complexity term is upper bounded by $O(\sqrt{1/nT})$ \citep{rakhlin2015online}. Some more complicated function classes, such as multi-layer neural networks, also enjoy a sequential Rademacher complexity of order $O(\sqrt{1/nT})$ \citep{rakhlin2015online}. Before we present a proof sketch of Theorem \ref{thm:main}, we first discuss the implications of our theorem.
\begin{rem}\label{rem:1}
There exists a non-trivial trade-off between $E_1 + E_2$ and $E_3$ through the length $T$. When $T$ is larger, all terms except for the terms in $E_3$ will be smaller, while the terms in $E_3$ will be larger. Hence, it is not always beneficial to have a longer trajectory.
\end{rem}
\begin{rem}\label{rem:2}
All terms in (\ref{eq:thm}) except for the last term $3T\rho\Delta$ are determined regardless of the algorithm. The last term depends on $\Delta$, which bounds the class conditional distance between any two consecutive domains. This distance can potentially be minimized through learning an effective representation of the data.
\end{rem}

\paragraph{Comparison with unsupervised gradual domain adaptation} \label{rem:linear}
Our result is only linear with respect to the average loss $\mathbb{E}\left[\ell_{h_0}\left(X_T, Y_T\right) \mid Z_{1}^{T-1}\right]$, where $h_{0}=\operatorname{argmin}_{h \in \mathcal{H}} \frac{1}{T}\sum_{t=1}^{T} \ell\left(h(X_t), Y_{t}\right)$. In contrast, the previous upper bound given by \cite{kumar2020understanding}, which is for unsupervised gradual domain adaptation, is exponential with respect to the initial loss on the first data domain. It remains unclear, however, if the exponential cost is unavoidable when labels are missing during training, as the result of \cite{kumar2020understanding} is algorithm specific.

\paragraph{Comparison with multiple source domain adaptation}
The setting of multiple source domain adaptation neglects the temporal structure between training domains. Our results are comparable while dropping the requirement of simultaneous access to all training domains. Our result has the same order of error, with respect to both the Rademacher complexity term and the term from the VC inequality, as supervised multiple source domain adaptation (MDA) \citep{wen2020domain}. However, for MDA, the error of a classifier $h$ on the target domain also relies on the average error of $h$ on the training domains. We note that, in comparison, our result scales with the averaged error of the best classifier on the training domains.

While we defer the full proof to the appendix, we now present a sketch of the proof.
\begin{ps}
With Assumption \ref{asmp:gradual}, we first show that when the Wasserstein distance between two consecutive class conditional distributions is bounded, the discrepancy measure is also bounded.
\begin{restatable}{lem}{losswpbound}\label{lem:loss_wp_bound}
Under Assumption \ref{asmp:lip}, the expected losses on two consecutive domains satisfy
\begin{equation*}
\mathbb{E}_\mu[\ell_h(X, Y)] - \mathbb{E}_\nu[\ell_h(X^\prime, Y^\prime)] \leq \rho\Delta \,,
\end{equation*}
where $\mu, \nu$ are the probability measures of $P_t, P_{t+1}$, $(X, Y) \sim P_t$, and $(X^\prime, Y^\prime) \sim P_{t+1}$.
\end{restatable}
Then we leverage this result to bound the loss incurred in expectation by the same classifier on two consecutive data distributions. We start by decomposing the discrepancy measure with an adjustable summation term as
\begin{align*}
\operatorname{disc}_T \leq & \sup_{h \in \mathcal{H}}\left(\frac{1}{s} \sum_{t=T-s+1}^{T} \mathbb{E}\left[\ell_h\left(X_{t}, Y_t\right) \mid Z_{1}^{t-1}\right] - \frac{1}{T} \sum_{t=1}^{T} \mathbb{E}\left[ \ell_h\left(X_{t}, Y_t\right) \mid Z_{1}^{t-1}\right]\right) \\
&+ \sup _{h \in \mathcal{H}}\left(\mathbb{E}\left[\ell_h\left(X_{T}, Y_{T}\right) \mid Z_{1}^{T-1}\right] - \frac{1}{s} \sum_{t=T-s+1}^{T} \mathbb{E}\left[ \ell_h \left(X_{t}, Y_t\right) \mid Z_{1}^{t-1}\right]\right) \,.
\end{align*}
We show that, by manipulating this adjustable summation, a bound on the discrepancy measure can indeed be directly obtained through an application of Lemma \ref{lem:loss_wp_bound}.
We now bound the learning error of interest by decomposing \begin{align*} &\mathbb{E}\left[\ell_{h_{T}}\left(X_{T}, Y_{T}\right)\mid Z_{1}^{T-1}\right]- \mathbb{E}\left[\ell_{h_0}\left(X_T, Y_T\right) \mid Z_{1}^{T-1}\right] \\ \leq & 2\Phi(Z_1^T) + \left( \frac{1}{T}\sum^{T-1}_{t=1} \ell_{h_{T}}\left(X_{t}, Y_{t}\right) - \frac{1}{T}\sum^{T-1}_{t=1}\ell_{h_0}\left(X_t, Y_t\right) \right)\,, \end{align*} where $\Phi\left(Z_{1}^{T}\right)=\sup _{h \in \mathcal{H}}\left(\mathbb{E}\left[\ell_h\left(X_{T}, Y_{T}\right) \mid Z_{1}^{T-1}\right] - \frac{1}{T}\sum_{t=1}^{T} \ell_h\left(X_{t}, Y_{t}\right)\right)$. The term $\Phi\left(Z_{1}^{T}\right)$ can be upper bounded by Lemma \ref{lem:cor2} \cite{kuznetsov2020discrepancy}, and thus it is left to bound the remaining term $\frac{1}{T}\sum^{T-1}_{t=1} \ell_{h_{T}}\left(X_{t}, Y_{t}\right) - \frac{1}{T}\sum^{T-1}_{t=1}\ell_{h_0}\left(X_t, Y_t\right)$. To upper bound this difference of average losses, we first compare the loss incurred by a classifier learned by an optimal online learning algorithm to that of $h_0$. By classic online learning theory results, the difference is upper bounded by $O\left(\frac{1}{\sqrt{nT}}\right)$. Then we compare the optimal online learning classifier to our final classifier $h_T$ and upper bound the difference through the VC inequality \cite{bousquet2004introduction}. Lastly, we leverage results from the literature on non-stationary time series, restated in the following lemma. \begin{restatable}{lem}{corthree}[Corollary 3 of \cite{kuznetsov2020discrepancy}]\label{lem:cor3} Let $q = (q_1, \ldots, q_T)$ be a vector of real-valued weights. For any $\delta>0$, with probability at least $1-\delta$, \begin{align} \mathbb{E}\left[\ell_{h_0}\left(X_{T}, Y_T\right) \mid Z_{1}^{T}\right] \leq &\inf _{h \in \mathcal{H}} \mathbb{E}\left[\ell_h\left(X_{T}, Y_T\right) \mid Z_{1}^{T-1}\right]+\operatorname{disc}(\mathbf{q}) \nonumber\\ &+\|\mathbf{q}\|_{2}+6 M \sqrt{4 \pi \log T} \mathfrak{R}_{T}^{s e q}(\mathcal{F}) +M\|\mathbf{q}\|_{2} \sqrt{8 \log \frac{1}{\delta}} \,, \end{align} where $\operatorname{disc}(\mathbf{q})$ is the discrepancy measure of Definition~\ref{def:discrepancy} with weights $\mathbf{q}$ and $\mathfrak{R}_{T}^{s e q}(\mathcal{F})$ is the sequential Rademacher complexity. \end{restatable} We take $q = (q_1, \ldots, q_T)$ to be the uniform weights, $q_t = 1/T$, for the final bound. \end{ps} \section{Insights for Practice} The key insight indicated by Theorem \ref{thm:main} and Remark \ref{rem:2} is that the bottleneck of supervised gradual domain adaptation is not entirely predetermined by the setup of the problem: it also relies heavily on $\rho \Delta$, where $\Delta$ is the upper bound on the Wasserstein class conditional distance between two consecutive data domains and $\rho$ is the Lipschitz constant of the loss function. In practice, the loss function is often chosen beforehand and remains unchanged throughout the learning process. Therefore, the only term available to be optimized is $\Delta$, which can be effectively reduced if a good representation of the data can be learned for classification. We give a feasible primal-dual objective that learns a mapping function from input to feature space concurrently with the original classification objective. \paragraph{A primal-dual objective formulation} Define $g$ to be a mapping from the input $X \in \mathbb{R}^d$ to some feature space.
We propose to learn a classifier $h$ simultaneously with the mapping function $g$, given exposure to the historical data $Z_1^{T-1}$. With the feature $g(X)$ from the target domain, our learning objective becomes \begin{align}\label{eq:originalobj} &\mathbb{E}\left[\ell_h(g(X_{T}), Y_T)|Z_1^{T-1} \right] - \inf_{h^\ast,g^\ast} \mathbb{E}\left[\ell_{h^\ast} (g^\ast(X_{T}), Y_T)|Z_1^{T-1} \right] \,. \end{align} Intuitively, this can be viewed as a combination of two optimization problems where both $\Delta$ and the learning loss are minimized. The objective (\ref{eq:originalobj}) is hard to evaluate without further assumptions. Thus we restrict our study to the case where both $g$ and $h$ are parameterizable. Specifically, we assume $g$ is parameterized by $\omega$ and $h$ is parameterized by $\theta$. We then leverage the dual representation of the Wasserstein-$1$ distance to derive a primal-dual formulation that is computationally feasible to evaluate: \begin{align}\label{eq:optobj} &\min_\theta \max_\omega \mathbb{E}\left[\ell_{h_{\theta,T}}\left(g_\omega(X_{T}), Y_T\right) \mid Z_{1}^{T-1}\right] + \lambda L_D \,, \end{align} where $L_D = \max_{t}\left(\mathbb{E}_{P_t}\left[g_\omega(X_{t})\right] - \mathbb{E}_{P_{t+1}}\left[g_\omega(X_{t+1})\right]\right)$ and $\lambda$ is a tunable parameter. \paragraph{One-step and temporal variants} Notice that $L_D$ relies on the maximum distance across all domains. It is thus hard to evaluate $L_D$ directly without simultaneous access to all domains. With access only to the current and past domains, we can instead optimize the following one-step primal-dual loss at time $t$: \begin{align} \label{eq:onesteploss} &\min_\theta \max_\omega \mathbb{E}\left[\ell_{h_{\theta,t}}\left( g_\omega(X_{t}), Y_t\right) \mid Z_{1}^{t}\right] + \lambda L_{D_t} \,, \end{align} where $L_{D_t} = \mathbb{E}_{P_t}\left[g_\omega(X_{t})\right] - \mathbb{E}_{P_{t-1}}\left[g_\omega(X_{t-1})\right]$. Compared to the objective (\ref{eq:optobj}), the one-step loss (\ref{eq:onesteploss}) only gives us partial information, and directly optimizing it may often lead to suboptimal performance. While some loss of information is inevitable under this problem setup, we use a temporal model (such as an LSTM) to help preserve information about historical data in the process of learning the mapping function $g$. In particular, in the temporal variant, we use the hidden states of an LSTM to dynamically summarize the features from all past domains. We then align the feature distribution computed from the LSTM hidden state with the feature distribution at the current time step. To practically implement these objectives, we can use neural networks to learn the representation and the classifier. To approximate the Wasserstein distance, another neural network is used as a critic to judge the quality of the learned representations. To minimize the distance between representations of different domains, one can use the $W_1$ distance as an empirical metric: the gap in the critic's output across domains is then minimized to encourage the learning of similar representations. We note that the use of the $W_1$ distance, which is easy to evaluate empirically, to guide representation learning has been practiced before~\cite{shen2018wasserstein}. We take this approach further to the problem of gradual domain adaptation; a minimal sketch of the resulting update is given below.
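To make the one-step objective (\ref{eq:onesteploss}) concrete, the following is a minimal PyTorch-style sketch of a single adaptation step. The architectures, the weight-clipping Lipschitz constraint (WGAN-style), and all hyperparameter values are illustrative assumptions, not the exact configuration used in our experiments.
\begin{verbatim}
import torch
import torch.nn as nn

# g_omega: feature extractor; h_theta: classifier; critic: dual potential for W1.
feat   = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 64))
clf    = nn.Linear(64, 10)
critic = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 1))

opt_model  = torch.optim.Adam(list(feat.parameters()) + list(clf.parameters()),
                              lr=1e-4)
opt_critic = torch.optim.Adam(critic.parameters(), lr=1e-4)
ce, lam = nn.CrossEntropyLoss(), 0.1  # lam plays the role of lambda above

def adapt_step(x_prev, x_cur, y_cur, n_critic=5):
    # Inner maximisation: fit the critic to estimate W1 between the feature
    # distributions of domains t-1 and t (Kantorovich-Rubinstein dual).
    for _ in range(n_critic):
        gap = (critic(feat(x_cur).detach()).mean()
               - critic(feat(x_prev).detach()).mean())
        opt_critic.zero_grad(); (-gap).backward(); opt_critic.step()
        for p in critic.parameters():      # crude 1-Lipschitz constraint
            p.data.clamp_(-0.01, 0.01)     # via weight clipping
    # Outer minimisation: classification loss on domain t plus lam * L_{D_t}.
    gap  = critic(feat(x_cur)).mean() - critic(feat(x_prev)).mean()
    loss = ce(clf(feat(x_cur)), y_cur) + lam * gap
    opt_model.zero_grad(); loss.backward(); opt_model.step()
    return float(loss)
\end{verbatim}
In the temporal variant, the previous-domain features \texttt{feat(x\_prev)} would be replaced by an LSTM hidden-state summary of the features from all past domains.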
\section{Empirical Results} In this section, we perform experiments to demonstrate the effectiveness of supervised gradual domain adaptation and compare our algorithm with No Adaptation, Direct Adaptation, and Multiple Source Domain Adaptation (MDA) on different datasets. We also verify the insights obtained in the previous section by answering the following three questions: \begin{enumerate} \item \textbf{How helpful is representation learning in gradual domain adaptation?} Theoretically, an effective representation under which the data drift is ``small'' helps algorithms gradually adapt to the evolving domains. This corresponds to minimizing the $\rho \Delta$ term in our Theorem \ref{thm:main}. We show that our algorithm with objective (\ref{eq:onesteploss}) outperforms plain empirical risk minimization (No Adaptation). \item \textbf{Can the one-step primal-dual loss (\ref{eq:onesteploss}) act as a substitute for the optimization objective (\ref{eq:optobj})?} Inspired by our theoretical results (Theorem \ref{thm:main}), the primal-dual optimization objective (\ref{eq:optobj}) should guide the adaptation process. However, optimizing this objective requires simultaneous access to all data domains. We use a temporal encoding (through a temporal model such as an LSTM) of historical data to demonstrate the importance of the information from past data domains. We compare this to results obtained with a convolutional network (CNN)-based model to verify that optimizing the one-step loss (\ref{eq:onesteploss}) with a temporal model can largely mitigate the information loss. \item \textbf{Does the length of gradual domain adaptation affect the model's ability to adapt?} Our theoretical results suggest that there exists an optimal length $T$ for gradual domain adaptation. Our empirical results corroborate this: once the time horizon passes a certain threshold, model performance saturates. \end{enumerate} \subsection{Experimental Setting} We briefly introduce our experimental setting here; more details and additional experimental results can be found in the appendix. We repeat each experiment over 5 random seeds and report the mean with one standard deviation. \subsubsection{Dataset} \textbf{Rotating MNIST}~~ This is a semi-synthetic dataset built from the MNIST dataset. We rotate the images continuously from $0$-$30$, $0$-$60$, and $0$-$120$ degrees across the time horizon, forming 3 datasets. The degree of rotation increases linearly as $t$ increases. \textbf{Portraits}~~This dataset contains portraits of high school seniors across years \cite{ginosar2015century}. The data is split into domains by year. \textbf{FMOW}~~This dataset is composed of over 1 million satellite images and their building/land use labels from 62 categories \cite{christie2018functional,koh2021wilds}, spanning $2002$-$2017$. The input is an RGB image of $224 \times 224$ pixels and each image comes with metadata indicating the year it was taken. The data is split into domains chronologically and the target domain is the year $2017$. Our work is the first study of gradual domain adaptation with FMOW. \subsubsection{Algorithms and model architecture} \textbf{No Adaptation}~~ For this method, we perform empirical risk minimization with cross-entropy loss on the initial domain and then test on the target domain. We use VGG16 \citep{simonyan2014very} for MNIST/Portraits, and ResNet18 \citep{he2016deep} for FMOW.
\textbf{Direct Adaptation}~~ We group the training domains $t = 1, \ldots, T - 1$ and let the algorithm learn to adapt from the grouped domain to the target domain. We use cross-entropy loss with objective (\ref{eq:onesteploss}) and VGG16 \citep{simonyan2014very} for MNIST/Portraits and ResNet18 \citep{he2016deep} for FMOW. To test the effectiveness of encoding historical representations, we use a 2-layer GRU for MNIST/Portraits and a 1-layer LSTM for FMOW. \textbf{Multiple Source Domain Adaptation (MDA)}~~ We compare with algorithms designed for MDA, where the algorithm has simultaneous access to multiple labeled domains $t = 1, \ldots, T - 1$ and learns to adapt to the target domain. We use the \textit{maxmin} and \textit{dynamic} variants of MDAN \citep{zhao2018adversarial}, Fish \cite{shi2022gradient}, and DARN \cite{wen2020} as our baseline algorithms. We also provide comparisons where the MDA algorithm only has access to the last two source domains ($T - 2, T-1$). \textbf{Gradual Adaptation}~~ Our proposed approach trains the algorithm to sequentially adapt from the initial domain $t = 1$ to the last domain $t=T$. At each time step, the algorithm only has access to two consecutive domains. We use cross-entropy loss with objective (\ref{eq:onesteploss}) to perform successive adaptations with gradually changing domains. The rest of the setup is the same as for Direct Adaptation. \subsection{Experimental Results} \begin{table*}[h]\centering \caption{Results on rotating MNIST dataset with Gradual Adaptation on 5 domains, Direct Adaptation, and No Adaptation.}\label{table:rotate} \begin{tabular}{@{}cccccc@{}} \toprule \multicolumn{6}{c}{Rotating MNIST} \\ \midrule \multicolumn{1}{c|}{} & \multicolumn{2}{c|}{Gradual Adaptation with 5 domains} & \multicolumn{2}{c|}{Direct Adaptation} & No Adaptation \\ \midrule \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{CNN} & \multicolumn{1}{c|}{LSTM} & \multicolumn{1}{c|}{CNN} & \multicolumn{1}{c|}{LSTM} & CNN \\ \midrule \multicolumn{1}{c|}{0-30 degree} & \multicolumn{1}{c|}{90.21 $\pm$ 0.48} & \multicolumn{1}{c|}{\textbf{94.83} $\pm$ 0.49} & \multicolumn{1}{c|}{77.97 $\pm$ 0.99} & \multicolumn{1}{c|}{89.72 $\pm$ 0.73} & 79.76 $\pm$ 3.20 \\ \midrule \multicolumn{1}{c|}{0-60 degree} & \multicolumn{1}{c|}{87.35 $\pm$ 1.02} & \multicolumn{1}{c|}{\textbf{92.52} $\pm$ 0.25} & \multicolumn{1}{c|}{73.27 $\pm$ 1.51} & \multicolumn{1}{c|}{88.53 $\pm$ 0.76} & 58.36 $\pm$ 2.59 \\ \midrule \multicolumn{1}{c|}{0-120 degree} & \multicolumn{1}{c|}{82.38 $\pm$ 0.57} & \multicolumn{1}{c|}{\textbf{89.72} $\pm$ 0.35} & \multicolumn{1}{c|}{62.52 $\pm$ 1.06} & \multicolumn{1}{c|}{84.30 $\pm$ 2.60} & 38.25 $\pm$ 0.61 \\ \bottomrule \end{tabular} \end{table*} \paragraph{Learning representations further helps in gradual adaptation}~~ On rotating MNIST, the performance of the model is better in most cases when adaptation is considered (Table \ref{table:rotate}), which demonstrates the benefit of learning proper representations. With a CNN architecture, the only exception is when the shift in the domain is relatively small ($0$ to $30$ degrees), where the No Adaptation method achieves higher accuracy than the Direct Adaptation method by $2\%$. However, when the shift in domains is relatively large, adaptation methods are more successful and this subtle advantage of No Adaptation no longer holds. Furthermore, Gradual Adaptation enhances this advantage significantly.
This observation shows the advantage of sequential adaptation over direct adaptation. We further show that the performance of the algorithm monotonically increases as it progresses, adapting to each domain and learning a cross-domain representation. Figure \ref{fig:gradual} shows the trend in algorithm performance on rotating MNIST and FMOW. \begin{table*}[h]\centering \caption{Results on rotating MNIST dataset with Gradual Adaptation on 5 domains and MDA (MDAN)~\citep{zhao2018adversarial}.}\label{table:rotate2} \scalebox{0.9}{\begin{tabular}{@{}cccccc@{}}\toprule\multicolumn{6}{c}{Rotating MNIST} \\ \midrule\multicolumn{1}{c|}{} & \multicolumn{2}{c|}{\begin{tabular}[c]{@{}c@{}}Gradual Adaptation with 5 domains\end{tabular}} & \multicolumn{3}{c}{MDAN} \\ \cmidrule(l){2-6} \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{CNN} & \multicolumn{1}{c|}{LSTM} & \multicolumn{1}{c|}{Maxmin} & \multicolumn{1}{c|}{Dynamic} & \begin{tabular}[c]{@{}c@{}}Dynamic\\ with last 2 domains\end{tabular} \\ \midrule\multicolumn{1}{c|}{0-30 degree} & \multicolumn{1}{c|}{90.21 $\pm$ 0.48} & \multicolumn{1}{c|}{94.83 $\pm$ 0.49} & \multicolumn{1}{c|}{93.62 $\pm$ 0.87} & \multicolumn{1}{c|}{\textbf{95.79} $\pm$ 0.33} & 83.04 $\pm$ 0.29 \\ \midrule\multicolumn{1}{c|}{0-60 degree} & \multicolumn{1}{c|}{87.35 $\pm$ 1.02} & \multicolumn{1}{c|}{\textbf{92.52} $\pm$ 0.25} & \multicolumn{1}{c|}{91.99 $\pm$ 0.51} & \multicolumn{1}{c|}{92.27 $\pm$ 0.26} & 61.49 $\pm$ 0.72 \\ \midrule\multicolumn{1}{c|}{0-120 degree} & \multicolumn{1}{c|}{82.38 $\pm$ 0.57} & \multicolumn{1}{c|}{\textbf{89.72} $\pm$ 0.35} & \multicolumn{1}{c|}{87.25 $\pm$ 0.52} & \multicolumn{1}{c|}{88.57 $\pm$ 0.21} & 44.14 $\pm$ 1.77 \\ \bottomrule\end{tabular}} \end{table*} \begin{table*}[h]\centering \caption{Results on FMOW with Gradual Adaptation with 3 domains, Direct Adaptation, and No Adaptation.}\label{table:fmow} \begin{tabular}{@{}clcc@{}} \toprule \multicolumn{4}{c}{FMOW} \\ \midrule \multicolumn{1}{c|}{\begin{tabular}[c]{@{}c@{}}No Adaptation\\ with ERM\end{tabular}} & \multicolumn{1}{c|}{\begin{tabular}[c]{@{}c@{}}Direct Adaptation\\ with CNN\end{tabular}} & \multicolumn{1}{c|}{\begin{tabular}[c]{@{}c@{}}Gradual Adaptation \\ with CNN\end{tabular}} & \begin{tabular}[c]{@{}c@{}}Gradual Adaptation\\ with LSTM\end{tabular} \\ \midrule \multicolumn{1}{c|}{33.10 $\pm$ 1.94}& \multicolumn{1}{c|}{41.94 $\pm$ 2.73}& \multicolumn{1}{c|}{36.86 $\pm$ 1.91}& \textbf{43.52} $\pm$ 1.40 \\ \bottomrule \end{tabular} \end{table*} \begin{table}[h]\centering \caption{Results on Portraits with Gradual Adaptation for different lengths of horizon $T$, Direct Adaptation, and No Adaptation.
}\label{table:port} \begin{tabular}{@{}ccc@{}} \toprule \multicolumn{3}{c}{Portraits} \\ \midrule \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{CNN} & LSTM \\ \midrule \multicolumn{1}{c|}{No Adaptation} & \multicolumn{1}{c|}{76.01 $\pm$ 1.45} & N/A \\ \midrule \multicolumn{1}{c|}{Direct Adaptation} & \multicolumn{1}{c|}{86.86 $\pm$ 0.84} & N/A \\ \midrule \multicolumn{1}{c|}{Gradual - 5 Domains} & \multicolumn{1}{c|}{87.77 $\pm$ 0.98} & 87.41 $\pm$ 0.76 \\ \midrule \multicolumn{1}{c|}{Gradual - 7 Domains} & \multicolumn{1}{c|}{89.14 $\pm$ 1.64} & 89.15 $\pm$ 1.12 \\ \midrule \multicolumn{1}{c|}{Gradual - 9 Domains} & \multicolumn{1}{c|}{\textbf{90.46} $\pm$ 0.54} & \textbf{89.88} $\pm$ 0.54 \\ \midrule \multicolumn{1}{c|}{Gradual - 11 Domains} & \multicolumn{1}{c|}{\textbf{90.56} $\pm$ 1.21} & \textbf{90.93} $\pm$ 0.75 \\ \bottomrule \end{tabular} \end{table} \paragraph{One-step loss is insufficient as a substitute, but can be improved by a temporal model}~~ The inefficiency of adaptation without historical information appears on all datasets we considered, as reflected in Tables \ref{table:rotate}, \ref{table:fmow}, and \ref{table:port}. In almost all cases, we observe that learning with a temporal model (LSTM) achieves better accuracy than a convolutional model (CNN). The gap is especially large on FMOW, the large-scale dataset in our experiments. We suspect that optimizing with only partial information can lead to suboptimal performance on such a complicated task. This is reflected in the better performance achieved by Direct Adaptation with CNN when compared to Gradual Adaptation with CNN and 3 domains (Table \ref{table:fmow}). In contrast, Gradual Adaptation with LSTM surpasses the performance of Direct Adaptation, suggesting the importance of historical representations. Further evidence is provided by Figure \ref{fig:gradual}, which shows that Gradual Adaptation with a temporal model performs better at every domain index on rotating MNIST and FMOW. \begin{figure}[h] \centering \begin{subfigure}[b]{0.4\textwidth} \centering \includegraphics[width=\textwidth]{figs/gradual_mnist_new.png} \caption{Rotating MNIST}\label{fig:rotate120} \end{subfigure} ~ \begin{subfigure}[b]{0.4\textwidth} \centering \includegraphics[width=\textwidth]{figs/gradual_fmow.png} \caption{FMOW}\label{fig:gradual_fmow} \end{subfigure} \caption{Figure \ref{fig:rotate120} compares the training curves on rotating MNIST with maximum rotation of 120 degrees. Figure \ref{fig:gradual_fmow} compares the training curves on FMOW.}\label{fig:gradual} \end{figure} \paragraph{Existence of optimal time horizon}~~ With the Portraits dataset and different lengths of horizon $T$, we verify in Table \ref{table:port} that an optimal time horizon is reached, at which model performance saturates. The performance of the model increases drastically when the shifts in domains are considered, shown by the difference in performance between No Adaptation, Direct Adaptation, and Gradual Adaptation with $5$ and $7$ domains. However, this increase in performance becomes relatively negligible when $T$ is large (the difference in performance between Gradual Adaptation with $9$ and $11$ domains is very small). This rate of growth in accuracy implies that there exists an optimal number of domains.
\begin{table}[htb] \centering \caption{Results on rotating MNIST dataset with Gradual Adaptation on 5 domains and MDA methods Fish \cite{shi2022gradient} and DARN \cite{wen2020}.}\label{new_result} \begin{tabular}{@{}clcc@{}} \toprule & \multicolumn{1}{c}{Fish} & DARN & Ours \\ \midrule \multicolumn{1}{c|}{0-30 degree} & \textbf{95.83 $\pm$ 0.13} & 94.20 $\pm$ 0.27 & 94.83 $\pm$ 0.49 \\ \multicolumn{1}{c|}{0-60 degree} & 90.57 $\pm$ 0.37 & 89.50 $\pm$ 0.12 & \textbf{92.52 $\pm$ 0.25} \\ \multicolumn{1}{c|}{0-120 degree} & 83.26 $\pm$ 1.58 & 82.28 $\pm$ 2.42 & \textbf{89.72 $\pm$ 0.35} \\ \bottomrule \end{tabular} \end{table} \begin{figure}[htb] \centering \includegraphics[width=\textwidth]{pca.jpg} \caption{PCA projection plot of the learned representations.} \label{fig:pca} \end{figure} \begin{figure}[htb] \centering \includegraphics[width=0.5\textwidth]{projectionplot6.png} \caption{Euclidean distance to the target domain of the projections of learned representations.} \label{fig:eu} \end{figure} \paragraph{Comparison with MDA}~~ Lastly, we remark on the results (Tables \ref{table:rotate2} and \ref{new_result}) achieved by Gradual Adaptation in comparison with MDA methods (MDAN \cite{zhao2018adversarial}, DARN \cite{wen2020}, and Fish \cite{shi2022gradient}). On rotating MNIST, we note that Gradual Adaptation outperforms MDA methods when the shift is large ($60$- and $120$-degree rotation) while relaxing the requirement of simultaneous access to all source domains. Only when the shift is relatively small ($30$-degree rotation) do MDA methods (the dynamic variant of MDAN and Fish) achieve better results than ours. When the MDA method is presented with only the last two training domains, Gradual Adaptation offers noticeable advantages regardless of the shift in domain (Table \ref{table:rotate2}). This demonstrates the potential of gradual domain adaptation in real applications: even when the data are not simultaneously available, it is possible to achieve competitive or even better performance. One possible reason for this is illustrated by Figure \ref{fig:pca} and Figure \ref{fig:eu}, in which we plot the PCA projections and the Euclidean distance to the target domain of the learned representations. From Figure \ref{fig:pca}, we can see that the gradual domain adaptation method is able to learn representations of the source domains that move increasingly closer to the target domain. This helps our method make predictions based on more relevant features, while MDA methods may be hindered by less relevant features from multiple domains. \section{Conclusion} We studied the problem of supervised gradual domain adaptation, which arises naturally in applications with a temporal nature. In this setting, we provide the first learning bound for the problem; our results hold for a range of loss functions and are algorithm agnostic. Based on the theoretical insight offered by our theorem, we designed a primal-dual learning objective to learn an effective representation across domains while learning a classifier. We analyze the implications of our results through experiments on a wide range of datasets.
\subsection{Semi-supervised Learning with Self-Training} \label{ssec:background} Self-training is a semi-supervised framework in which a pre-trained teacher model assigns pseudo-labels to unlabeled data and a student model is then trained with the self-labeled dataset. It has been applied in several applications such as image recognition \cite{selftrn_rec_noisy_2019} and automatic speech recognition \cite{selftrn_asr_noisy_2020}. Our approach follows the noisy self-training method, where we investigate data augmentation methods on musical signals and evaluate how they affect the separation performance. \section{Conclusion} \label{sec:conc} We present a semi-supervised method for singing voice separation to deal with the scarcity of data with ground-truth. Using the noisy self-training framework, we can effectively make use of a large unlabeled dataset to train a deep separation network. Experimental results show that random mixing as data augmentation improves model training, and the data filtering method with pre-trained voice activity detectors improves the quality of the self-labeled training samples. Our study serves as a foundation for more complicated systems such as using stereo input, working with unlabeled datasets containing mixtures only (as opposed to noisy source tracks), and extending the teacher-student loop with additional iterations. \section{Experimental Setup} \label{sec:expr} \subsection{Dataset} \label{ssec:expr_dataset} We use MIR-1K \cite{mir-1k}, ccMixter \cite{ccmixter}, and the training partition of MUSDB \cite{musdb18} as the labeled dataset for supervised training. The training set contains approximately 11 hours of recordings. We use DAMP \cite{damp} as the unlabeled dataset for training the student model. The DAMP dataset contains more than 300 hours of vocal and background recordings from karaoke app users. Since these recordings are not professionally produced, there is bleeding of music into the vocal tracks and bleeding of singing voice into the accompaniment tracks; hence, the dataset is not suitable for supervised source separation. \subsection{Preprocessing} \label{ssec:expr_preprocess} To reduce dimensionality and speed up processing, we downsample each track to 16 kHz and convert it to mono. We further segment the recordings into non-overlapping 30-second segments; segments shorter than 30 seconds are zero-padded at the end. The spectrograms are computed with a 1024-point STFT with a hop size of 256. \subsection{Noisy Self-Training Procedure} \label{ssec:expr_trn} For both teacher and student training, we minimize Equation~\ref{eq:loss_total} with an Adam optimizer with an initial learning rate of 1e-4, and we halve the learning rate every 100k iterations until it is no greater than 1e-6. We set $\lambda_{\text{audio}} = \lambda_{\text{spec}} = 1$ in Equation~\ref{eq:loss_source} and $\lambda_{\text{voc}} = \lambda_{\text{acc}} = 1$ in Equation~\ref{eq:loss_total}. To augment the training set, we randomly select a window size of $T = 2.5$, $5$, or $10$ seconds as the input to the model to experiment with the effect of input length, with batch sizes of 4, 2, and 1, respectively. The maximal batch size is chosen under the memory limit. We experiment with different probabilities of applying random mixing, $p = 0, 0.25, 0.5, 0.75, 1$. The teacher model is trained on the labeled datasets. Then, it assigns pseudo-labels to the unlabeled dataset.
We infer vocal labels using DAMP vocal tracks as input to the teacher model and infer accompaniment labels from DAMP accompaniment tracks. Due to the leakage in these vocal and background tracks, they can be viewed as mixtures where one source is more likely to dominate the other, compared to normal mixtures. \section{Introduction} \label{sec:intro} The task of singing voice separation is to separate the input mixture into different components: singing voice and accompaniment. It is a crucial problem in music information retrieval and has commercial uses such as music remixing and karaoke applications. It also has the potential to provide useful information for downstream tasks such as song identification, lyric transcription, singing voice synthesis, and voice cloning without access to clean sources. Deep learning models have recently shown promising results in singing voice separation. Popular methods are mostly supervised, where a deep neural network is trained on a multi-track corpus with paired vocal and accompaniment ground-truths. \cite{mdn, mmdenselstm} apply dense connections between convolutional or long short-term memory (LSTM) blocks to estimate separate masks, and \cite{openunmix} uses a bidirectional LSTM (BLSTM) network in the separator. Models with multi-scale processing further improve the performance of separation. With the concatenation of features at different scales along with skip connections, U-Net \cite{WaveUNetAM} can maintain long-term temporal correlation while processing local information at higher resolution. Such an architecture has been effective in both time-frequency domain \cite{spleeter2020, spleeter_study_data, M-UNet, attn_unet} and end-to-end, time-domain methods \cite{demucs1, demucs2}. Models that simultaneously process features at different resolutions with multiple paths have also shown effectiveness in singing voice separation systems \cite{mulcat, meta-tasnet}. The primary challenge for supervised methods with deep learning is the lack of training data with ground-truth. This is especially significant for larger networks, which are more prone to overfitting. There are several multi-track datasets publicly available for singing voice separation, including MIR-1K \cite{mir-1k}, ccMixter \cite{ccmixter}, and MUSDB \cite{musdb18}. However, these datasets are relatively small (all of them combined are around 15 hours) and not diverse. To artificially increase the size of the dataset, \cite{spleeter_study_data, aug_uhlich, aug_cohen} apply data augmentation to the signal, including random channel swapping, amplitude scaling, remixing sources from different songs, time-stretching, pitch shifting, and filtering. These methods, individually or combined, have been empirically shown to enhance separation performance only by a limited margin \cite{spleeter_study_data}. On the other hand, semi-supervised and unsupervised methods do not require a large corpus with a one-to-one correspondence between the mixtures and ground-truth sources. \cite{demucs1} leverages mixture data by first training a silent-source detector on a small labeled dataset, then mixing recordings with only one source and mixture recordings with that source being silent, and finally optimizing with a weakly supervised loss. \cite{adv_stoller, adv_mich} propose generative adversarial frameworks that require isolated sources only. The distance between the distributions of the separator's output and the isolated sources is minimized with adversarial training.
\cite{dae_1, dae_sinkhorn} use unpaired vocal and accompaniment data to learn non-negative, smooth representations with a denoising auto-encoder using an unsupervised objective. \cite{BootstrappingDM} proposes a stage-wise algorithm where a clustering-based labeler assigns time-frequency bin labels with a confidence measure, and a student separator network is trained on these labels afterward. Self-training is a semi-supervised framework in which a pre-trained teacher model assigns pseudo-labels to unlabeled data, and a student model is then trained with the self-labeled dataset. It has been applied in several applications such as image recognition \cite{selftrn_rec_noisy_2019} and automatic speech recognition \cite{selftrn_asr_noisy_2020}. Our approach follows the noisy self-training method, in which we investigate data augmentation methods on musical signals and evaluate how they affect the separation performance. With the framework of noisy self-training, we aim to improve the performance of a deep separator network when only a limited amount of data with ground-truth is available. The contributions of this work are as follows: \begin{itemize} \item We use a large unlabeled corpus to improve separation results under the noisy self-training framework. \item We show how data augmentation can improve the model's ability to generalize, with a focus on random remixing of sources. \item We propose using a voice activity detector to evaluate the quality of self-labeled data for data filtering during student training. \end{itemize} \section{System Description} \label{sec:sys} \subsection{Noisy Self-Training for Singing Voice Separation} \label{ssec:sys_nst} Our proposed self-training framework for singing voice separation consists of the following steps: \begin{enumerate} \item Train a teacher separator network $\mathcal{M}_0$ on a small labeled dataset $\mathcal{D}_l$. \item Assign pseudo-labels for the large unlabeled dataset $\mathcal{D}_u$ with $\mathcal{M}_0$ to obtain the self-labeled dataset $\mathcal{D}_{0}$. \item Filter data samples from $\mathcal{D}_{0}$ to obtain $\mathcal{D}_{f0}$. \item Train a student network $\mathcal{M}_1$ with $\mathcal{D}_l \cup \mathcal{D}_{f0}$. \end{enumerate} This framework can be made iterative by repeating steps 2 to 4, using the student network $\mathcal{M}_i$ as the new teacher to obtain a self-labeled dataset $\mathcal{D}_{i+1}$, and training a new student model $\mathcal{M}_{i+1}$. The process stops when there is no further performance gain. We illustrate the framework pipeline in Figure~\ref{fig:selftrain}. \begin{figure}[htb] \begin{minipage}[b]{1.0\linewidth} \centering \centerline{\includegraphics[width=\textwidth]{figs/selftrain.pdf}} \end{minipage} \caption{The pipeline of noisy self-training for singing voice separation.} \label{fig:selftrain} \end{figure} \subsubsection{Data Filtering with Voice Activity Detector} \label{sssec:vad_filter} Poor quality self-labeled samples may contain leakage of the singing voice in the accompaniment tracks or leakage of musical background in the vocal tracks. To filter out these samples, we evaluate the quality of the data with a voice activity detector (VAD). The VAD takes the STFT magnitude spectrogram of the mixture as input and predicts the frame-level energy ratio between the source and the mixture. We use the 2D-CRNN architecture with the same configuration as in \cite{vad_univ_sep}.
We train two separate VADs to estimate the energy ratio of vocal over mixture, and accompaniment over mixture, respectively. The ground-truth is defined as 0 when both the vocal and the accompaniment are silent. The VADs are trained on the same labeled dataset as the teacher separator model, using binary cross-entropy loss. To measure the leakage of accompaniment into vocal, we pass the self-labeled vocal track into the accompaniment activity detector. Similarly, we feed the self-labeled background track into the vocal activity detector to detect leakage of the singing voice. A frame is defined as a ``poor quality frame'' if either its accompaniment energy in the vocal track or its vocal energy in the background track is higher than some threshold. We count the total number of ``poor quality frames'' for each song, and songs with a smaller percentage of such frames are considered to have higher quality. \subsubsection{Data Augmentation} \label{sssec:data_aug} Data noise is a key component of the noisy self-training framework. We apply data augmentation methods during the training of both teacher and student models. Each training sample contains both vocal and accompaniment tracks of duration 30 seconds. To augment the training set, we randomly select a window of duration $T$ seconds (with $T < 30$) from the sample. We also perform ``random mixing'' by mixing vocal and background sources from two randomly selected songs with a probability of $p$. In addition, we apply a dynamic mixing ratio, pitch shifting, lowpass filtering, and EQ filtering to the data. \subsection{Separator Network} \label{ssec:sep_network} We use the PoCoNet \cite{poconet} for both teacher and student models. The neural network takes the concatenation of the real and imaginary parts of the mixture's STFT spectrogram as input. The separator estimates complex ratio masks for each source. The wave-form signal is obtained by applying the inverse STFT to the estimated spectrograms. The separator is a fully-convolutional 2D U-Net architecture with DenseNet and attention blocks. Each DenseNet block contains three convolutional layers, each followed by batch normalization and a Rectified Linear Unit (ReLU). Convolutional operations are causal in the time direction but not in the frequency direction. We choose a kernel size of $3\times 3$ and a stride of 1, and the number of channels increases from 32, 64, 128 to 256. We control the size of the network by varying the number of levels in the U-Net and the maximum number of channels. In the attention module, the number of channels is set to 5 and the encoding dimension for key and query is 20. The connections of layers in the DenseNet and attention blocks follow \cite{poconet}. Frequency-positional embeddings are applied to each time-frequency bin of the input spectrogram. For time frame $t$ and frequency bin $f$, the embedding vector is defined as: \begin{equation} \rho(t, f) = (\cos(\pi \frac{f}{F}), \cos(2\pi \frac{f}{F}), \ldots, \cos(2^{k-1}\pi \frac{f}{F})), \label{eq:pos_emb} \end{equation} where $F$ is the frequency bandwidth and $k = 10$ is the dimension of the embedding.
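As an illustration, Eq.~(\ref{eq:pos_emb}) can be computed for all frequency bins at once. The following is a minimal NumPy sketch; the bin count $F = 513$ (corresponding to a 1024-point STFT) is our assumption here, and note that the embedding is independent of the time frame $t$.
\begin{verbatim}
import numpy as np

def freq_positional_embedding(F=513, k=10):
    # Row f holds rho(t, f) = (cos(pi f/F), cos(2 pi f/F), ...,
    # cos(2^{k-1} pi f/F)) from the equation above.
    f = np.arange(F)[:, None]              # frequency bins, shape (F, 1)
    scales = 2.0 ** np.arange(k)[None, :]  # 1, 2, 4, ..., 2^{k-1}, shape (1, k)
    return np.cos(np.pi * scales * f / F)  # shape (F, k)

emb = freq_positional_embedding()  # appended to each time-frequency input bin
\end{verbatim}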
\subsubsection{Loss Functions} \label{sssec:sys_loss} For each output source, the loss function is the weighted sum of a wave-form and a spectral loss: \begin{equation} \mathcal{L}_s(y, \hat{y}) = \lambda_{\text{audio}}\mathcal{L}_{\text{audio}}(y, \hat{y}) + \lambda_{\text{spec}}\mathcal{L}_{\text{spec}}(Y, \hat{Y}), \label{eq:loss_source} \end{equation} where $s$ is the output source, $y, \hat{y}$ are the time-domain output and reference signals, and $Y = \lvert \text{STFT}(y)\rvert, \hat{Y} = \lvert \text{STFT}(\hat{y})\rvert$ are the corresponding STFT magnitude spectrograms. We choose both $\mathcal{L}_{\text{audio}}(\cdot)$ and $\mathcal{L}_{\text{spec}}(\cdot)$ to be the $\ell_1$ loss. The total loss is the weighted sum over the sources: \begin{equation} \mathcal{L} = \lambda_{\text{voc}}\,\mathcal{L}_{\text{voc}}(y_{\text{voc}}, \hat{y}_{\text{voc}}) + \lambda_{\text{acc}}\,\mathcal{L}_{\text{acc}}(y_{\text{acc}}, \hat{y}_{\text{acc}}). \label{eq:loss_total} \end{equation} \section{Evaluation Results and Discussions} \label{sec:res} \begin{table}[ht] \centering \begin{tabular}{c|c|c|c|c|c|c} \hline Len & Size & Prob & Use & SDR(V) & SDR(A) & Mean \\ (s) & (1e6) & RM & DAMP & & & \\ \hline \multirow{2}{*}{2.5} & \multirow{10}{*}{8.3} & 0 & \multirow{4}{*}{No} & 1.84 & 10.31 & 6.08 \\ & & 0.5 & & 1.72 & 9.51 & 5.62 \\ \cmidrule{1-1}\morecmidrules\cmidrule{3-3}\morecmidrules\cmidrule{5-7} \multirow{2}{*}{5} & & 0 & & 3.55 & 10.91 & 7.23 \\ & & 0.5 & & 4.08 & 11.34 & 7.71 \\ \cmidrule{1-1}\morecmidrules\cmidrule{3-7} \multirow{11}{*}{10} & & 0 & Yes & 3.93 & 11.46 & 7.70 \\ \cmidrule{3-7} & & 0 & \multirow{5}{*}{No} & 5.88 & 12.52 & 9.2 \\ & & 0.25 & & 6.35 & 12.56 & 9.46 \\ & & 0.5 & & 7.06 & 13.35 & 10.21 \\ & & 0.75 & & 6.98 & 13.36 & 10.17 \\ & & 1.0 & & 6.91 & 13.66 & \textbf{10.29} \\ \cmidrule{2-7} & \multirow{3}{*}{1.6} & 0 & Yes & 0.03 & 6.62 & 3.33 \\ \cmidrule{3-7} & & 0 & \multirow{2}{*}{No} & 4.17 & 10.86 & 7.52 \\ & & 0.5 & & 4.34 & 11.13 & 7.74 \\ \cmidrule{2-7} & \multirow{2}{*}{15.4} & 0 & \multirow{2}{*}{No} & 5.81 & 11.94 & 8.88 \\ & & 0.5 & & 6.9 & 13.07 & 9.99 \\ \hline \end{tabular} \caption{Test performance metrics (SDR in dB) for teacher model candidates. We experiment with various input sizes, numbers of model parameters, and probabilities of random mixing to pick the best configuration for the teacher model. The best performance is highlighted in bold.} \label{tab:teacher} \end{table} \begin{table}[!htb] \centering \begin{tabular}{c|c|c|c|c} \hline Size & top \% & SDR(V) & SDR(A) & Mean \\ (1e6) & & & & \\ \hline 8.3 & 1 & 6.57 & 12.92 & 9.75 \\ \hline \multirow{3}{*}{15.4} & 1 & 7.27 & 13.73 & 10.5 \\ & 0.5 & 7.52 & 13.91 & 10.72 \\ & 0.25 & 7.8 & 13.92 & \textbf{10.86} \\ \hline \end{tabular} \caption{Test performance metrics (SDR in dB) for student models. We experiment with different model sizes and the proportion of quality-controlled self-labeled samples.
The best performance is shown in bold.} \label{tab:student} \end{table} \begin{table*}[htb] \centering \begin{tabular}{c|c|c|c|c|c|c} \toprule Name & \#Src & Input & Extra & SDR(V) & SDR(A) & Mean \\ & & type & Data & & & \\ \hline \hline Demucs\cite{demucs2} & \multirow{2}{*}{4} & \multirow{2}{*}{Stereo} & Labeled & 7.05 & N/A & N/A \\ \cite{mulcat} & & & \ding{55} & 6.92 & N/A & N/A \\ \hline MMDenseLSTM\cite{mmdenselstm} & \multirow{4}{*}{2} & \multirow{2}{*}{Stereo} & \ding{55} & 4.94 & \textbf{16.4} & 10.67 \\ MDN\cite{mdn} & & & \ding{55} & 3.87 & 15.41 & 9.64 \\ \cmidrule{1-1}\morecmidrules\cmidrule{3-7} MT U-Net\cite{M-UNet} & & \multirow{2}{*}{Mono} & \ding{55} & 5.28 & 13.04 & 9.16 \\ \cite{adv_mich} & & & \ding{55} & 3.5 & N/A & N/A \\ \midrule Ours (teacher) & \multirow{3}{*}{2} & \multirow{3}{*}{Mono} & \ding{55} & 6.91 & 13.66 & 10.29 \\ Ours (student, no VAD) & & & DAMP & 7.27 & 13.73 & 10.5 \\ Ours (student, VAD) & & & DAMP & \textbf{7.8} & 13.92 & \textbf{10.86} \\ \bottomrule \end{tabular} \caption{Comparison of the proposed method and other baseline models. The best performance is shown in bold.} \label{tab:comp} \end{table*} \subsection{Evaluation Framework} \label{ssec:res_eval} As in previous studies on singing voice separation \cite{mdn, mmdenselstm, WaveUNetAM, M-UNet, adv_mich}, we measure the signal-to-distortion ratio (SDR) to evaluate the separation performance. Following the SiSec separation campaign \cite{sisec}, we use the 50 songs from the test partition of MUSDB \cite{musdb18} as the test set. We partition each audio track into non-overlapping one-second segments, take the median of segment-wise SDR for each song, and report the median over all 50 songs. We use the python package \texttt{museval}\footnote{\url{https://sigsep.github.io/sigsep-mus-eval/}} to compute SDR. \subsection{Teacher Training} \label{ssec:res_teacher} We select the configuration for the teacher model by experimenting with different input window sizes of training samples and numbers of model parameters. Table~\ref{tab:teacher} shows the test SDR for the combinations of input and model size. We first observe that using a longer input size improves both vocal and accompaniment SDR. The improvement can be attributed to the attention blocks, where a longer input context provides more information for separation. Another observation is that larger models do not guarantee performance gains. The largest model (15.4M parameters) performs significantly better than the smallest one (1.6M) but is slightly worse than the 8.3M version for random mixing probabilities $p=0$ and $0.5$. Using the best combination of input length (10 seconds) and model size (8.3M), we experiment with different probabilities of applying random mixing. \cite{spleeter_study_data} shows that random mixing does not have a positive effect on test SDR, and one possible explanation is that it creates mixtures with somewhat independent sources. Our experiments, however, indicate that random mixing alone significantly improves the results. The best performance is obtained when random mixing is always applied. Our observations are consistent with the argument in \cite{ismir2020_tutorial} that ``one-versus-all'' separation benefits from mixing independent tracks. Intuitively, mixtures with dependent sources are more difficult to separate. Random mixing makes it easier for the model to learn and to converge faster on the training set.
Meanwhile, by mixing up sources from different songs, the training set becomes more diverse and the model generalizes better at inference time. In addition, we verify that the DAMP dataset should not be used directly in supervised source separation tasks by including this dataset along with the other labeled datasets. We experiment with two model sizes (1.6M and 8.3M) using 10-second input without random mixing, and the SDR values degrade sharply in both cases. \subsection{Student Training} \label{ssec:res_student} Table~\ref{tab:student} summarizes the test SDR for student models. As opposed to the teacher model, the 15.4M student model has a 0.75 dB SDR gain compared to the 8.3M model. The observation that the larger-capacity student model improves the performance is consistent with the findings in \cite{selftrn_rec_noisy_2019}. To verify the quality control approach with VADs, we first count for each song the number of ``poor-quality frames'' as defined in Section~\ref{sssec:vad_filter} for three different datasets: DAMP, self-labeled DAMP, and MUSDB. From the visualization in Figure~\ref{fig:vad_cnt}, the unprocessed DAMP contains the highest percentage of data with a large number of poor-quality frames, the distribution of MUSDB is concentrated in the low-count region, while the self-labeled dataset lies in between. This implies that the count of ``poor-quality frames'' based on the output of the VADs is a reasonable indicator of the quality of data samples. The experimental results demonstrate that the proposed data filtering method with VADs further improves the performance. The highest SDR is obtained when only the top quarter of the self-labeled data is included in the training. Incorporating a higher percentage of self-labeled data may provide more diversity but is more likely to include samples with poor quality, thus negatively affecting the model's performance. \begin{figure}[htb] \begin{minipage}[b]{0.9\linewidth} \centering \centerline{\includegraphics[width=\textwidth]{figs/violin_vad_cnt.pdf}} \end{minipage} \caption{Count of ``poor-quality frames'' for different datasets.} \label{fig:vad_cnt} \end{figure} \subsection{Comparison with Other Methods} \label{ssec:res_comp} To compare the separation of singing voice with the state of the art, we also include models that separate the mixture into four sources. It has been shown in \cite{spleeter_study_data} that these four-source models have similar vocal separation performance compared to two-source models, even though the four-source separation task is more challenging than the two-source counterpart; possibly because of the additional supervision provided by different instrumental sources in the multi-task learning setup. Hence, we include the vocal SDR values of state-of-the-art four-source models \cite{demucs2,mulcat} in our comparison. Our proposed approach, the student model using quality control with VADs, obtains the highest vocal and average SDR among all models, and the vocal separation outperforms others by a significant margin. The accompaniment SDR is higher than the baseline model with mono input \cite{M-UNet} but worse than the stereo ones \cite{mdn, mmdenselstm}. Stereo input contains more spatial information for accompaniment than for vocal, since the left-right channel differences for background tracks are at a much larger scale than for vocal tracks. Such information may improve the separation of accompaniment.
\section{Introduction} \label{sec:introduction} Avalanches and landslides, as well as many industrial processes, can be classified as granular flows. Substantially improved rheological formulations have given rise to numerous attempts to simulate these phenomena with Navier-Stokes type models. The vast majority of studies relies on the $\mu(I)$-rheology and its derivatives. The core of the $\mu(I)$-rheology is the Drucker-Prager yield criterion \citep{drucker1952soil, rauter2020granular} and the recognition that the friction coefficient $\mu$ is solely a function of the inertial number $I$ \citep{midi2004dense, jop2006constitutive}. Further studies found a similar correlation between the inertial number and the packing density $\phi$ \citep{forterre2008flows}. A similar scaling was found in granular flows with low Stokes numbers $St$ (see Eq.~\eqref{eq:stokes}). The Stokes number is related to the ratio between the inertia and the drag force on a particle and thus describes the influence of the ambient fluid on the granular flow dynamics \citep[e.g.][]{finlay2001mechanics}. Small Stokes numbers indicate a strong influence of the pore fluid on the particles, and hence also on the landslide dynamics. In this regime, the viscous number $J$ replaces the inertial number $I$ as a control parameter for the friction coefficient $\mu$ and the packing density $\phi$, forming the so-called $\mu(J)$-$\phi(J)$-rheology \citep{boyer2011unifying}. Furthermore, excess pore pressure can be remarkably high under these conditions and it is imperative to explicitly consider it in numerical simulations. High drag forces, and correspondingly small Stokes numbers, are usually related to small particles. These are virtually omnipresent in geophysical flows: submarine landslides \citep{kim2019landslide}, turbidity currents \citep{heerema2020determines}, powder snow avalanches \citep{sovilla2015structure}, and pyroclastic flows \citep{druitt1998pyroclastic} can be dominated by fine-grained components. It follows that a large portion of gravitational mass flows occurs at low Stokes numbers, and a deeper understanding of the respective processes is relevant for many researchers. Incompressible granular flow models have been applied in different forms to various problems in the last decade. \cite{lagree2011granular} were the first to conduct numerical simulations of subaerial granular collapses with the $\mu(I)$-rheology and the finite volume method. \cite{staron2012granular} used the same method to simulate silo outflows, and \cite{domnik2013coupling} used a constant friction coefficient to simulate granular flows on inclined plates. \cite{vonboetticher2016debrisintermixing, vonboetticher2017debrisintermixing} applied a similar model, based on OpenFOAM, to debris flows, and many more examples can be found in the literature. More recently, compressible flow models have been introduced to simulate subaquatic granular flows at low Stokes numbers. The applied methods include, e.g., smoothed particle hydrodynamics \citep{wang2017two}, the coupled lattice Boltzmann and discrete element method \citep{yang2017role}, the material point method \citep{baumgarten2019general}, and the finite volume multiphase framework of OpenFOAM \citep{si2018development}. Results have often been compared to the experiments of \cite{balmforth2005granular} (subaerial) and \cite{rondon2011granular} (subaquatic), two works that have gained benchmark character in the granular flow community.
Most of the mentioned applications rely on standard methods from computational fluid dynamics (CFD). This is reasonable, considering the similarity between the hydrodynamic (Navier-Stokes) equations and the granular flow equations. However, the pressure-dependent and shear-thinning viscosity associated with granular flows introduces considerable conceptual and numerical problems. The unconditional ill-posedness of an incompressible granular flow model with constant friction coefficient was described by \cite{schaeffer1987instability} and the partial ill-posedness of the $\mu(I)$-rheology by \cite{barker2015well}. By carefully tuning the respective relations, \cite{barker2017partial} were able to regularize the $\mu(I)$-rheology for all but very high inertial numbers. \cite{barker2017well} described a well-posed compressible rheology, incorporating the $\mu(I)$-rheology as a special case. Another pitfall of granular rheologies is the concept of effective pressure. When pore pressure is considerably high (i.e.~at low Stokes numbers), it is imperative to distinguish between effective pressure and total pressure \citep[first described by][]{terzaghi1925erdbaumechanik}. Effective pressure represents normal forces in the grain skeleton that have a stabilizing effect, in contrast to pore pressure, which has no stabilizing effect. This has proven to be a major issue, as pore pressure, and consequently the effective pressure, reacts very sensitively to the packing density and dilatancy \citep{rondon2011granular}. Besides the rheology, tracking of the slide geometry poses a major challenge. Surface tracking is usually implemented in terms of the algebraic volume-of-fluid (VOF) method \citep[e.g.][]{lagree2011granular, si2018development}, the level-set method \citep[e.g.][]{savage2014modeling}, geometric surface tracking methods \citep[e.g.][]{roenby2016computational, maric2018enhanced}, or particle-based methods \citep[e.g.][]{baumgarten2019general, wang2017two}. The volume-of-fluid method, which is also used in this work, allows tracking the slide either as a single component or as a mixture of multiple phases (grains and pore fluid). Components are defined here as objects (e.g.~the landslide) that completely cover a bounded region in space without mixing with other components (e.g.~the ambient fluid), see Fig.~\ref{fig:alpha_def1}. The tracking becomes a purely geometric problem \citep[see e.g.][for a geometric interpretation]{roenby2016computational}. In contrast, phases (e.g.~grains) are dispersed and mixed with other phases (e.g.~pore fluid) to represent the dynamic bulk of the landslide, see Fig.~\ref{fig:alpha_def2}. The component-wise tracking is used in various landslide models \citep[e.g.][]{lagree2011granular, domnik2013coupling, barker2017partial}. Components, i.e.~the slide and the surrounding fluid, are immiscible and separated by a sharp interface. Usually, this also implies that the model is incompressible. The phase-wise tracking is commonly applied in chemical engineering \citep{gidaspow1994multiphase, vanwachem2000derivation, passalacqua2011implementation} and has lately been introduced to environmental engineering \citep[e.g.][]{cheng2017sedfoam, chauchat2017sedfoam, si2018development}. This approach allows describing a variable mixture of grains and pore fluid that merges smoothly into the ambient fluid.
The description of the pore fluid as an individual phase enables the model to decouple effective pressure from pore pressure, which is imperative in many flow configurations, e.g.~at low Stokes numbers. In this work, a two-component and a two-phase Navier-Stokes type model are applied to granular flows. Both models are implemented in the open-source toolkit OpenFOAM \citep{weller1998tensorial,rusche2002computational,opencfd2009user}, using the volume-of-fluid method for component- and phase-wise tracking (see section~\ref{sec:method}). Subaerial \citep{balmforth2005granular} and subaquatic granular collapses \citep{rondon2011granular} are simulated with both models, and results are compared to the respective experiments and with each other. We apply the $\mu(I)$-$\phi(I)$-rheology to subaerial cases ($St \gtrapprox 1$) and the $\mu(J)$-$\phi(J)$-rheology to subaquatic cases ($St \lessapprox 1$). The two-component model applies simplified rheologies in the form of the incompressible $\mu(I)$- and $\mu(J)$-rheologies. The $\phi(I)$- and $\phi(J)$-curves are merged into the particle pressure relation of \cite{johnson1987frictional} to achieve the correct quasi-static limits \citep{vescovi2013from}. This yields reasonable values for the packing density at rest, which is imperative for granular collapses with static regions. In contrast to many previous works \citep[e.g.][]{savage2014modeling, vonboetticher2017debrisintermixing, si2018development}, we forgo additional contributions to shear strength (e.g.~cohesion) because we do not see any physical justification (e.g.~electrostatic forces, capillary forces, cementing) in the investigated cases. We apply a very transparent and simple model, focusing on the relevant physical processes, and achieve remarkable accuracy, especially in comparison to more complex models \citep[e.g.][]{si2018development, baumgarten2019general}. Further, it is shown that various experimental setups with different initial packing densities can be simulated with the same constitutive parameters, whereas many previous attempts required individual parameters for different cases \citep[e.g.][]{savage2014modeling, wang2017two, si2018development}. The paper is organised as follows: the multi-phase (section~\ref{ssec:twophase}) and multi-component (section~\ref{ssec:onephase}) models are introduced in section~\ref{sec:method}, including models for granular viscosity (section~\ref{ssec:rheo}), granular particle pressure (sections~\ref{ssec:ps1} and \ref{ssec:ps2}) and drag (section~\ref{ssec:drag}). Results are shown and discussed in section~\ref{sec:balmforth} for a subaerial case and in section~\ref{sec:rondon} for two subaquatic cases. A conclusion is drawn in section~\ref{sec:conclusion} and a summary is given in section~\ref{sec:summary}. Furthermore, a thorough sensitivity analysis is provided in the appendix. \section{Methods} \label{sec:method} \subsection{Two-phase landslide model} \label{ssec:twophase} The two-phase model is based on the phase momentum and mass conservation equations \cite[see e.g.][]{rusche2002computational}.
The governing equations for the continuous fluid phase are given as \begin{eqnarray} &\dfrac{\partial \phi_{\r{c}}}{\partial t} + \bnabla\bcdot\left(\phi_{\r{c}}\,\b{u}_{\r{c}}\right) = 0,\label{eq:disp_alpha}\\ &\dfrac{\partial \phi_{\r{c}}\,\rho_{\r{c}}\,\b{u}_{\r{c}}}{\partial t} + \bnabla\bcdot\left(\phi_{\r{c}}\,\rho_{\r{c}}\,\b{u}_{\r{c}}\otimes\b{u}_{\r{c}}\right) = \bnabla\bcdot\left(\phi_{\r{c}}\,\bt{T}_{\r{c}}\right)-\phi_{\r{c}}\,\bnabla\,p+\phi_{\r{c}}\,\rho_{\r{c}}\,\b{g}+ k_{\r{gc}}\left(\b{u}_{\r{g}}-\b{u}_{\r{c}}\right),\label{eq:disp_momentum} \end{eqnarray} and for the grains as \begin{eqnarray} &\dfrac{\partial \phi_{\r{g}}}{\partial t} + \bnabla\bcdot\left(\phi_{\r{g}}\,\b{u}_{\r{g}}\right) = 0,\label{eq:disp_alpha_s}\\ &\dfrac{\partial \phi_{\r{g}}\,\rho_{\r{g}}\,\b{u}_{\r{g}}}{\partial t} + \bnabla\bcdot\left(\phi_{\r{g}}\,\rho_{\r{g}}\,\b{u}_{\r{g}}\otimes\b{u}_{\r{g}}\right) = \bnabla\bcdot\left(\phi_{\r{g}}\,\bt{T}_{\r{g}}\right)-\bnabla\,p_{\r{s}}-\phi_{\r{g}}\,\bnabla\,p+\phi_{\r{g}}\,\rho_{\r{g}}\,\b{g}+k_{\r{gc}}\left(\b{u}_{\r{c}}-\b{u}_{\r{g}}\right).\label{eq:disp_momentum_s} \end{eqnarray} Phase-fraction fields $\phi_{\r{g}}$ and $\phi_{\r{c}}$, i.e.~the phase volume over the total volume, \begin{equation} \phi_{i} = \dfrac{V_{i}}{V}, \end{equation} describe the composition of the grain-fluid mixture, see Fig.~\ref{fig:alpha_def2} (the index $i$ indicates either $\r{c}$ or $\r{g}$). The granular phase-fraction is identical to the packing density, $\phi = \phi_{\r{g}}$. Phase-fractions take values between zero and one, and the sum of all phase-fractions yields one. The pore fluid is assumed to match the surrounding fluid, and the respective phase-fraction $\phi_{\r{c}}$ is therefore one outside the slide. This way, phase-fraction fields provide not only a mechanism to track the packing density of the slide, but also its geometry. Every phase moves with its own velocity field $\b{u}_{i}$, which is not divergence-free. This allows the mixture to change, yielding a variable packing density and thus bulk compressibility, although the phase densities $\rho_{\r{g}}$ and $\rho_{\r{c}}$ are constant. The volume-weighted average velocity is divergence-free, \begin{equation} \bnabla\bcdot \ol{\b{u}} = \bnabla\bcdot \left(\phi_{\r{g}}\,\b{u}_{\r{g}} + \phi_{\r{c}}\,\b{u}_{\r{c}} \right) = 0,\label{eq:divergencefreevel} \end{equation} which allows the use of numerical methods for incompressible flow. The pore pressure (or shared pressure) $p$ acts on all phases equally, while the grain phase experiences additional pressure due to force chains between particles, the so-called effective pressure (or particle pressure) $p_{\r{s}}$, see Fig.~\ref{fig:particle_pressure}. The effective pressure is a function of the packing density in this model, and the balance between effective pressure and external pressure (e.g.~overburden pressure) ensures realistic packing densities.
The total pressure can be assembled as \begin{equation} p_{\r{tot}} = p + p_{\r{s}}.\label{eq:p_tot} \end{equation} The deviatoric phase stress tensors are expressed as \begin{equation} \bt{T}_i = 2\,\rho_i\,\nu_i\,\bt{S}_i,\label{eq:def_viscosity_compressiblity} \end{equation} with phase viscosity $\nu_i$, phase density $\rho_i$ and deviatoric phase strain rate tensor \begin{equation} \bt{S}_i = \dfrac{1}{2}\left(\bnabla\b{u}_i + \left(\bnabla\b{u}_i\right)^T\right) - \dfrac{1}{3}\bnabla \bcdot \b{u}_i\,\bt{I}. \end{equation} The viscosity of the pore fluid $\nu_{\r{c}}$ is usually constant, while the granular viscosity $\nu_{\r{g}}$ follows from constitutive models such as the $\mu(I)$-rheology (see section~\ref{ssec:rheo}). The total deviatoric stress tensor can be calculated as \begin{equation} \bt{T} = \phi_{\r{c}}\,\bt{T}_{\r{c}} + \phi_{\r{g}}\,\bt{T}_{\r{g}}.\label{eq:total_dev} \end{equation} The last terms in Eqs.~\eqref{eq:disp_momentum} and \eqref{eq:disp_momentum_s} represent drag forces between phases, and $k_{\r{gc}}$ is the drag coefficient of the grains in the pore fluid. Lift and virtual mass forces are neglected in this work because they play a minor role \citep{si2018development}. The granular viscosity $\nu_{\r{g}}$, the effective pressure $p_{\r{s}}$, and the drag coefficient $k_{\r{gc}}$ represent interfaces to exchangeable sub-models, presented in sections~\ref{ssec:rheo}-\ref{ssec:drag}. \begin{figure} \begin{center} \includegraphics[scale=1]{fig1.eps} \end{center} \caption{Definition of phase-fractions $\phi_i$ and phase velocities $\b{u}_i$ in and outside a dense granular avalanche for the two-phase model. Phase velocities can differ, allowing phase-fractions to change, giving the avalanche compressible properties.} \label{fig:alpha_def2} \end{figure} \begin{figure} \begin{center} \includegraphics[scale=1]{fig2.eps} \end{center} \caption{Representative volume element of a grain-fluid mixture. The effective pressure $p_{\r{s}}$ (red arrows) represents normal forces in the grain skeleton (black arrows). The pore pressure (blue arrows) represents pressure that is equally shared by pore fluid and grains.} \label{fig:particle_pressure} \end{figure} \subsection{Two-component landslide-model} \label{ssec:onephase} Many two-phase systems can be substantially simplified by assuming that the phases move together, i.e.~that the phase velocities are equal, \begin{equation} \b{u}_{i} \approx \ol{\b{u}} = \phi_{\r{g}}\,\b{u}_{\r{g}} + \phi_{\r{c}}\,\b{u}_{\r{c}}.\label{eq:samevelocities} \end{equation} This works very well for completely separated phases that are divided by a sharp interface \citep[e.g.~surface waves in water,][]{rauter2021numerical}, but systems of mixed phases (e.g.~grains and fluid) can also be handled to some extent \citep[e.g.][]{lagree2011granular}. The phase momentum conservation equations \eqref{eq:disp_momentum} and \eqref{eq:disp_momentum_s} can be combined into a single momentum conservation equation, and the system takes the form of the ordinary Navier-Stokes equations with variable fluid properties \citep[see e.g.][]{rusche2002computational}, \begin{eqnarray} &\dfrac{\partial \rho\,\ol{\b{u}}}{\partial t} + \bnabla\bcdot\left(\rho\,\ol{\b{u}}\otimes\ol{\b{u}}\right) = \bnabla\bcdot\bt{T}-\bnabla\,p_{\r{tot}}+\rho\,\b{g},\label{eq:NS_momentum}\\ &\bnabla\bcdot\ol{\b{u}} = 0.\label{eq:NS_cont} \end{eqnarray} A detailed derivation can be found in appendix~\ref{sec:derivation}.
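Both models evaluate stresses from a deviatoric strain rate tensor of this form. As a concrete illustration of Eq.~\eqref{eq:def_viscosity_compressiblity}, the following minimal Python sketch computes $\bt{S}_i$ and $\bt{T}_i$ for an assumed velocity gradient; all numbers are purely illustrative.
\begin{verbatim}
import numpy as np

# Illustrative values: a velocity gradient (1/s) with nonzero divergence,
# grain phase density (kg/m^3) and an assumed kinematic viscosity (m^2/s).
grad_u = np.array([[0.0, 10.0, 0.0],
                   [0.0,  2.0, 0.0],
                   [0.0,  0.0, -1.0]])
rho_g, nu_g = 2600.0, 1e-2

# deviatoric phase strain rate tensor S_i and stress T_i = 2 rho_i nu_i S_i
S = 0.5 * (grad_u + grad_u.T) - np.trace(grad_u) / 3.0 * np.eye(3)
T = 2.0 * rho_g * nu_g * S

print("tr(S) =", np.trace(S))   # deviatoric: the trace vanishes
print("T =\n", T)
\end{verbatim}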
The pressure is denoted as $p_{\r{tot}}$, indicating that it contains contributions from both hydrodynamic and effective pressure. The phase-fraction fields $\phi_i$ cannot be recovered after this simplification, and the method switches to the tracking of components instead of phases, see Fig.~\ref{fig:alpha_def1}. Components are tracked with so-called component indicator functions $\alpha_i$ (sometimes called phase indicator functions, but here we consistently distinguish phases from components), being either one if component $i$ is present at the respective location or zero otherwise, \begin{equation} \alpha_i = \begin{cases} 1 &\text{if component $i$ is present,}\\ 0 &\text{otherwise.} \end{cases} \end{equation} Values between zero and one are not intended by this method and only appear for numerical reasons, i.e.~due to the discretisation of the discontinuous field (see section~\ref{ssec:numerics}). Here, two component indicator functions are used, one for the ambient fluid component, $\alpha_{\r{c}}$, and one for the slide component, $\alpha_{\r{s}}$ (see Fig.~\ref{fig:alpha_def1}). Evolution equations for component indicator functions can be derived from mass conservation equations as \begin{equation} \dfrac{\partial \alpha_i}{\partial t} + \bnabla\bcdot\left(\alpha_i\,\ol{\b{u}}\right) = 0.\label{eq:NS_alpha} \end{equation} The definition of components is straightforward for completely separated phases, where components can be matched with phases, e.g.~water and air. The definition of the slide component, on the other hand, is ambiguous, as it consists of a variable mixture of grains and pore fluid. A boundary of the slide component can, for example, be found by defining a limit for the packing density (e.g.~50\% of the average packing density). Further, a constant reference packing density $\ol{\phi}$ has to be determined, which is assigned to the whole slide component. The density of the slide component follows as \begin{equation} \rho_{\r{s}} = \ol{\phi}\,\rho_{\r{g}} + (1-\ol{\phi})\,\rho_{\r{c}},\label{eq:mean_density} \end{equation} and a similar relation can be established for the deviatoric stress tensor (see section \ref{ssec:unifying}). The local density $\rho$ and the local deviatoric stress tensor $\bt{T}$ can be calculated as \begin{eqnarray} &\rho = \sum\limits_i \alpha_i\,\rho_i = \alpha_{\r{s}}\,\rho_{\r{s}} + \alpha_{\r{c}}\,\rho_{\r{c}},\\ &\bt{T} = \sum\limits_i \alpha_i\,\bt{T}_i =\alpha_{\r{s}}\,\bt{T}_{\r{s}} + \alpha_{\r{c}}\,\bt{T}_{\r{c}}, \end{eqnarray} using component densities $\rho_{i}$, as well as component deviatoric stress tensors $\bt{T}_{i}$. Component deviatoric stress tensors are calculated as \begin{equation} \bt{T}_{i} = 2\,\nu_{i}\,\rho_{i}\,\bt{S}, \end{equation} with the component viscosity $\nu_i$ and the deviatoric shear rate tensor $\bt{S}$. Note that the deviatoric shear rate tensor $\bt{S}$ matches the shear rate tensor $\bt{D}$, because the volume-weighted average velocity field is divergence-free, \begin{equation} \bt{S} = \bt{D} = \dfrac{1}{2}\left(\bnabla\,\b{\ol{u}}+\bnabla\,\b{\ol{u}}^T\right). \end{equation} The viscosity of the ambient fluid $\nu_{\r{c}}$ is usually constant, and the viscosity of the slide component $\nu_{\r{s}}$ follows from granular rheology, see section~\ref{ssec:rheo}.
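The following minimal Python sketch assembles these mixture quantities for assumed values; it is meant only to make the component bookkeeping explicit.
\begin{verbatim}
import numpy as np

# Assumed values: reference packing density and phase densities (kg/m^3).
phi_bar = 0.6
rho_g, rho_c = 2500.0, 1000.0

rho_s = phi_bar * rho_g + (1.0 - phi_bar) * rho_c  # slide component density
alpha_s = np.array([1.0, 0.7, 0.0])  # slide indicator along a sample line
alpha_c = 1.0 - alpha_s              # ambient fluid indicator

rho = alpha_s * rho_s + alpha_c * rho_c   # local mixture density
print("rho_s =", rho_s, "kg/m^3; rho =", rho)
\end{verbatim}
Intermediate indicator values, as in the second sample cell, occur only in the numerically smeared interface region.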
\begin{figure} \begin{center} \includegraphics[scale=1]{fig3.eps} \end{center} \caption{Definition of component indicator functions $\alpha_i$ and the velocity $\b{\ol{u}}$ in and outside a dense granular avalanche for the two-component model.} \label{fig:alpha_def1} \end{figure} \subsection{Rheology} \label{ssec:rheo} \subsubsection{Unifying rheologies} \label{ssec:unifying} Most granular rheologies (e.g.~the $\mu(I)$-rheology) are defined in terms of the total deviatoric stress tensor in the slide component, $\bt{T}_{\r{s}}$. This has to be accounted for and corrected in the two-phase model if the same viscosity model is to be used in both models. Similar to Eq.~\eqref{eq:mean_density}, component stresses can be related to phase stresses as \begin{eqnarray} \bt{T}_{\r{s}} = \ol{\phi}\,\bt{T}_{\r{g}} + (1-\ol{\phi})\,\bt{T}_{\r{c}},\\ 2\,\rho_{\r{s}}\,\nu_{\r{s}}\,\bt{S}_{\r{s}} = 2\,\ol{\phi}\,\rho_{\r{g}}\,\nu_{\r{g}}\,\bt{S}_{\r{g}} + 2\,(1-\ol{\phi})\,\rho_{\r{c}}\,\nu_{\r{c}}\,\bt{S}_{\r{c}}. \label{eq:which_nu} \end{eqnarray} The contribution of the granular phase to stresses is assumed to be much higher than the contribution of the pore fluid, $\ol{\phi}\,\rho_{\r{g}}\,\nu_{\r{g}}\,\bt{S}_{\r{g}} \gg (1-\ol{\phi})\,\rho_{\r{c}}\,\nu_{\r{c}}\,\bt{S}_{\r{c}}$. Further, by neglecting the mass of the pore fluid, $\rho_{\r{s}} \approx \ol{\phi}\,\rho_{\r{g}}$, it follows that the kinematic viscosities have to be similar in both models, \begin{equation} \nu_{\r{s}} \approx \nu_{\r{g}}. \end{equation} Alternatively, one can match the dynamic viscosities $\nu_{\r{s}}\,\rho_{\r{s}}$ and $\nu_{\r{g}}\,\rho_{\r{g}}$ if the factor $\phi_{\r{g}}$ is removed from the viscous term in Eq.~\eqref{eq:disp_momentum_s}. Note that these assumptions are fairly accurate for subaerial granular flows but questionable for subaquatic granular flows. However, multi-phase and multi-component models differ substantially under subaquatic conditions anyway, and a unification is not possible there. \subsubsection{Drucker-Prager plasticity model} \label{sssec:coulomb} An important characteristic of granular materials is the pressure-dependent shear stress, described by the Drucker-Prager yield criterion \citep{drucker1952soil}. \cite{schaeffer1987instability} was the first to include granular friction in the Navier-Stokes equations by expressing the Drucker-Prager yield criterion in terms of the shear rate tensor and the pressure, \begin{equation} \bt{T}_{\r{s}} = \mu\,p_{\r{s}}\,\dfrac{\bt{S}}{\|\bt{S}\|},\label{eq:granular} \end{equation} where the norm of a tensor $\|\bt{A}\|$ is defined as \begin{equation} \|\bt{A}\| = \sqrt{\dfrac{1}{2}\,\r{tr}\left(\bt{A}^2\right)}.\label{eq:thisnorm} \end{equation} In the original model of \cite{schaeffer1987instability}, the friction coefficient $\mu$ is a constant material parameter. The slide component viscosity follows as \begin{equation} \nu_{\r{s}} = \dfrac{\|\bt{T}_{\r{s}}\|}{2\,\rho_{\r{s}}\,\|\bt{S}\|} = \mu\,\dfrac{p_{\r{s}}}{2\,\rho_{\r{s}}\,\|\bt{S}\|}.\label{eq:granularviscosity} \end{equation} This relation has been applied with slight modifications by e.g.~\cite{domnik2013coupling}, \cite{savage2014modeling} or \cite{rauter2020granular}.
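A minimal Python sketch of Eqs.~\eqref{eq:granular}-\eqref{eq:granularviscosity} is given below; the flow state is assumed, and the viscosity truncation anticipates the regularisation discussed in the following paragraphs.
\begin{verbatim}
import numpy as np

def tensor_norm(A):
    # ||A|| = sqrt(1/2 tr(A^2)), cf. Eq. (thisnorm)
    return np.sqrt(0.5 * np.trace(A @ A))

def nu_drucker_prager(mu, p_s, rho_s, S, nu_min=1e-4, nu_max=1.0):
    # granular viscosity mu p_s / (2 rho_s ||S||), truncated for stability
    nu = mu * p_s / (2.0 * rho_s * max(tensor_norm(S), 1e-12))
    return min(max(nu, nu_min), nu_max)

S = np.array([[0.0, 5.0, 0.0],   # assumed deviatoric shear rate (1/s)
              [5.0, 0.0, 0.0],
              [0.0, 0.0, 0.0]])
print(nu_drucker_prager(mu=0.6, p_s=500.0, rho_s=1430.0, S=S), "m^2/s")
\end{verbatim}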
Following the findings in section~\ref{ssec:unifying}, the kinematic viscosities of the slide component and the grain phase have to be similar, and the granular phase viscosity follows as \begin{equation} \nu_{\r{g}} = \dfrac{\|\bt{T}_{\r{s}}\|}{2\,\rho_{\r{s}}\,\|\bt{S}_{\r{g}}\|} = \mu\,\dfrac{p_{\r{s}}}{2\,\rho_{\r{g}}\,\ol{\phi}\,\|\bt{S}_{\r{g}}\|}.\label{eq:granularviscosity2} \end{equation} The viscosity reaches very high values for $\|\bt{S}\| \rightarrow 0$ and very small values for $p_{\r{s}} \rightarrow 0$, and both limits can lead to numerical problems. To avoid numerically unstable behaviour, the viscosity is truncated to an interval $[\nu_{\min}, \nu_{\max}]$. A thoughtful choice of $\nu_{\max}$ is crucial for the presented method. Small values lead to unphysical results, because solid-like behaviour can only be simulated by very high viscosities. Large values, on the other hand, promote numerical instabilities (see section~\ref{sssec:timestepping}). The ideal value for the maximum viscosity depends on the respective case and can be estimated with a scaling and sensitivity analysis (see appendix~\ref{ssec:convergence_viscosity}). The relation \begin{equation} \nu_{\r{max}} = \dfrac{1}{10}\,\sqrt{|\b{g}|\,H^3},\label{eq:nu_thresh} \end{equation} where $H$ is the characteristic height of the investigated case, was found to give a good estimate for a reasonable viscosity cut-off. Notably, the Drucker-Prager yield surface leads to an ill-posed model \citep{schaeffer1987instability} and the truncation of the viscosity is not sufficient as a regularisation. \cite{schaeffer1987instability} did not distinguish between effective and total pressure in Eq.~\eqref{eq:granularviscosity}, limiting the applications of his model substantially. We explicitly consider the effective pressure in Eqs.~\eqref{eq:granularviscosity} and \eqref{eq:granularviscosity2}, using Eq.~\eqref{eq:ps1} or \eqref{eq:ps2} in the two-component model and Eq.~\eqref{eq:ps3}, \eqref{eq:ps4}, or \eqref{eq:ps5} in the two-phase model, to avoid such limitations. \subsubsection{$\mu(I)$-rheology} \label{sssec:muI} The $\mu(I)$-rheology \citep{midi2004dense, jop2006constitutive, forterre2008flows} states that the friction coefficient $\mu$ is not constant in dense, dry, granular flows but rather a function of the inertial number $I$. The inertial number $I$ is defined as the ratio between the typical time scale for microscopic rearrangements of grains with diameter $d$, $t_{\r{micro}} = d\,\sqrt{\rho_{\r{g}}/p_{\r{s}}}$, and the macroscopic time scale of the deformation, $t_{\r{macro}} =\frac{1}{2}\,\|\bt{S}\|^{-1}$, \begin{equation} I = 2\,d\,\|\bt{S}\| \sqrt{\dfrac{\rho_{\r{g}}}{p_{\r{s}}}}. \end{equation} In the two-phase model, the shear rate tensor $\bt{S}$ is replaced with the deviatoric shear rate tensor of the grains, $\bt{S}_{\r{g}}$. Various approaches have been proposed for the $\mu(I)$-curve; here we apply the classic relation, given as \begin{equation} \mu(I) = \mu_{\r{1}} + \left(\mu_{\r{2}}-\mu_{\r{1}}\right)\dfrac{I}{I_0+I},\label{eq:muI} \end{equation} where $\mu_{\r{1}}$, $\mu_{\r{2}}$ and $I_0$ are material parameters \citep{jop2006constitutive}. The dynamic friction coefficient $\mu(I)$ is introduced into the Drucker-Prager yield criterion, Eq.~\eqref{eq:granularviscosity} or \eqref{eq:granularviscosity2}, to get the respective granular viscosity.
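As an illustration, the following Python sketch evaluates the inertial number and Eq.~\eqref{eq:muI} for the subaerial parameters of Tab.~\ref{tab:balmforth}; the shear rate and effective pressure are assumed values.
\begin{verbatim}
import numpy as np

d, rho_g = 1e-3, 2600.0            # grain diameter (m), density (kg/m^3)
mu1, mu2, I0 = 0.595, 0.895, 0.25  # material parameters (subaerial case)
phi_bar = 0.6                      # reference packing density

def inertial_number(S_norm, p_s):
    # I = 2 d ||S|| sqrt(rho_g / p_s)
    return 2.0 * d * S_norm * np.sqrt(rho_g / p_s)

def mu_of_I(I):
    # mu(I) = mu_1 + (mu_2 - mu_1) I / (I_0 + I), Eq. (muI)
    return mu1 + (mu2 - mu1) * I / (I0 + I)

S_norm, p_s = 10.0, 500.0          # assumed flow state (1/s, Pa)
I = inertial_number(S_norm, p_s)
# granular phase viscosity, Eq. (granularviscosity2)
nu_g = mu_of_I(I) * p_s / (2.0 * rho_g * phi_bar * S_norm)
print(f"I = {I:.3f}, mu(I) = {mu_of_I(I):.3f}, nu_g = {nu_g:.4f} m^2/s")
\end{verbatim}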
\subsubsection{$\mu(J)$-rheology} \label{sssec:muJ} At small Stokes numbers, defined as \begin{equation} St = 2\,d^2\,\|\bt{S}\|\,\dfrac{\rho_{\r{g}}}{\nu_{\r{c}}\,\rho_{\r{c}}},\label{eq:stokes} \end{equation} the pore fluid has a substantial influence on the rheology and the microscopic time scale is defined by the viscous scaling $t_{\r{micro}} = \nu_{\r{c}}\,\rho_{\r{c}}/p_{\r{s}}$ \citep{boyer2011unifying}. The friction coefficient is thus no longer a function of the inertial number $I$ but rather of the viscous number $J$, defined as \begin{equation} J = 2\,\|\bt{S}\|\dfrac{\nu_{\r{c}}\,\rho_{\r{c}}}{p_{\r{s}}}. \end{equation} The functional dependence of the friction coefficient on the viscous number was described by \cite{boyer2011unifying} as \begin{equation} \mu(J) = \mu_{\r{1}} + \left(\mu_{\r{2}}-\mu_{\r{1}}\right)\dfrac{J}{J_0+J} + J + \dfrac{5}{2}\,\phi_{\r{m}}\,\sqrt{J},\label{eq:muJ} \end{equation} where $\mu_{\r{1}}$, $\mu_{\r{2}}$, $J_0$ and $\phi_{\r{m}}$ are material parameters \citep{boyer2011unifying}. The $\mu(J)$-rheology builds on the Drucker-Prager yield criterion in the same way as the $\mu(I)$-rheology. Notably, the $\mu(I)$- and $\mu(J)$-rheologies can be combined by forming a new dimensionless number $K = J + \alpha\,I^2$ with a constitutive parameter $\alpha$ \citep{trulsson2012transition, baumgarten2019general}. However, this was not required for the cases presented in this work. \subsection{Effective pressure in the two-component model} \label{ssec:ps1} \subsubsection{Total pressure assumption} \label{sssec:totalpressure} The two-component model is limited in its ability to account for pore pressure and dilatancy effects because it does not describe the packing density. The effective pressure can only be reconstructed from the total pressure $p_{\r{tot}}$ under additional assumptions. The simplest model assumes that the pore pressure is negligibly small, leading to \begin{equation} p_{\r{s}} \approx p_{\r{tot}}.\label{eq:ps1} \end{equation} This assumption is reasonable for subaerial granular flows and has been applied to such flows by e.g.~\cite{lagree2011granular} or \cite{savage2014modeling}. \subsubsection{Hydrostatic pressure assumption} \label{sssec:hydrostatic} In subaquatic granular flows, the surrounding high-density fluid increases the total pressure substantially and the pore pressure can no longer be neglected. Following \cite{savage2014modeling}, an improvement can be achieved by calculating the hydrostatic pore pressure as \begin{equation} p_{\r{hs}} = \begin{cases} \rho_{\r{c}}\,\b{g}\bcdot\left(\b{x} - \b{x}_0\right) & \text{for} \quad \b{g}\bcdot\left(\b{x} - \b{x}_0\right) > 0,\\ 0 & \text{else}, \end{cases} \end{equation} and subtracting it from the total pressure, \begin{equation} p_{\r{s}} \approx p_{\r{tot}} - p_{\r{hs}}.\label{eq:ps2} \end{equation} Here, $\b{x}_0$ is the position of the free water surface, where the total pressure is supposed to be zero. For a variable and non-horizontal free water surface, common in e.g.~landslide-tsunamis, this concept becomes substantially more complicated and has, to the authors' knowledge, not been applied. Furthermore, excess pore pressure, which is common in low Stokes number flows, is out of the scope of this model.
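A minimal Python sketch of Eqs.~\eqref{eq:ps1} and \eqref{eq:ps2} is given below; the geometry, the free-surface position $\b{x}_0$ and the total pressure are assumed values.
\begin{verbatim}
import numpy as np

rho_c = 1000.0                     # ambient fluid density (kg/m^3)
g = np.array([0.0, 0.0, -9.81])    # gravity (m/s^2)
x0 = np.array([0.0, 0.0, 0.1])     # free water surface position (assumed)

def p_hydrostatic(x):
    # hydrostatic pore pressure below the free surface, zero above
    depth_term = g @ (x - x0)      # g . (x - x0)
    return rho_c * depth_term if depth_term > 0.0 else 0.0

x = np.array([0.0, 0.0, 0.02])     # a point inside the slide (assumed)
p_tot = 1200.0                     # total pressure (Pa), assumed
p_s_subaerial = p_tot                        # Eq. (ps1)
p_s_subaquatic = p_tot - p_hydrostatic(x)    # Eq. (ps2)
print(p_hydrostatic(x), p_s_subaerial, p_s_subaquatic)
\end{verbatim}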
\subsection{Effective pressure in the two-phase model} \label{ssec:ps2} \subsubsection{Critical state theory} \label{sssec:cst} The structure of the two-phase model allows us to include the packing density in the effective pressure equation. Critical state theory \citep{roscoe1958yielding, roscoe1970influence, schofield1968critical} was the first model to describe the relationship between the effective pressure and the packing density. The critical state is defined as a state of constant packing density and constant shear stress, which is reached after a certain amount of shearing of an initially dense or loose sample. The packing density in this state, called the critical packing density $\phi_{\r{crit}}$, is a function of the effective pressure $p_{\r{s}}$. This function can be inverted to get the effective pressure as a function of the critical packing density. It is further assumed that the flow is in its critical state, $\phi_{\r{g}} = \phi_{\r{crit}}$, to get a model that is compatible with the governing equations. This assumption is reasonable for avalanches, slides, and other granular flows but questionable for the initial release and deposition. At small deformations, the packing density might be lower (underconsolidated) or higher (overconsolidated) than the critical packing density, and the effective pressure model will under- or overestimate the effective pressure, respectively. A popular relation for the effective pressure (the so-called critical state line) has been described by \cite{johnson1987frictional, johnson1990frictional} as \begin{equation} p_{\r{s}} = a\,\dfrac{\phi_{\r{g}}-\phi_{\r{rlp}}}{\phi_{\r{rcp}}-\phi_{\r{g}}},\label{eq:ps3} \end{equation} where $\phi_{\r{rlp}}$ is the random loose packing density in critical state, $\phi_{\r{rcp}}$ the random close packing density in critical state and $a$ a scaling parameter. The scaling parameter $a$ can be interpreted as the effective pressure at the packing density $\frac{1}{2}\left(\phi_{\r{rcp}}+\phi_{\r{rlp}}\right)$. Note that we apply a simplified version of the original relation, similar to \cite{vescovi2013from}. Packing densities above $\phi_{\r{rcp}}$ are not valid and are avoided by the asymptote of the effective pressure at $\phi_{\r{rcp}}$. If packing densities at or above $\phi_{\r{rcp}}$ appear in a simulation, it should be terminated and restarted with refined numerical parameters (e.g.~time step duration). \subsubsection{$\phi(I)$-relation} \label{sssec:phiI} Equation~\eqref{eq:ps3} is known to hold for slow deformations in the critical state \citep[see e.g.][]{vescovi2013from}. However, this relation is not consistent with granular flow experiments. Granular flows show dilatancy with increasing shear rate, expressed by e.g.~\cite{forterre2008flows} as a function of the inertial number $I$, \begin{equation} \phi_{\r{g}}(I) = \phi_{\max} -\Delta\phi\,I, \end{equation} where $\phi_{\max}$ and $\Delta\phi$ are material parameters. This relation can be transformed into a model for the effective pressure by introducing the inertial number $I$, \begin{equation} p_{\r{s}} = \rho_{\r{g}}\,\left(2\,\|\bt{S}_{\r{g}}\|\,d\,\dfrac{\Delta\phi}{\phi_{\max}-\phi_{\r{g}}}\right)^{2}.\label{eq:ps4.0} \end{equation} This relation has two substantial problems: for $\|\bt{S}_{\r{g}}\| = 0$ it yields $p_{\r{s}} = 0$ and for $\phi_{\r{g}} = 0$ it yields $p_{\r{s}} \neq 0$, which causes numerical problems and unrealistic results. The first problem is addressed by superposing Eq.~\eqref{eq:ps4.0} with the quasi-static relation \eqref{eq:ps3}, similar to \cite{vescovi2013from}. The second problem is solved by multiplying Eq.~\eqref{eq:ps4.0} with the normalized packing density $\phi_{\r{g}}/\ol{\phi}$, which ensures that the pressure vanishes for $\phi_{\r{g}} = 0$.
The normalization with the reference packing density $\ol{\phi}$ ensures that the parameters ($\phi_{\max}$, $\Delta\phi$) remain similar to those of the original equation. Further, to reduce the number of material parameters, we set the maximum packing density in the $\phi(I)$-relation equal to the random close packing density $\phi_{\r{rcp}}$. The final relation reads \begin{equation} p_{\r{s}} = a\,\dfrac{\phi_{\r{g}}-\phi_{\r{rlp}}}{\phi_{\r{rcp}}-\phi_{\r{g}}} + \rho_{\r{g}}\,\dfrac{\phi_{\r{g}}}{\ol{\phi}}\left(2\,\|\bt{S}_{\r{g}}\|\,d\,\dfrac{\Delta\phi}{\phi_{\r{rcp}}-\phi_{\r{g}}}\right)^{2},\label{eq:ps4} \end{equation} and is shown in Fig.~\ref{fig:alphai} alongside the original relations of \cite{johnson1987frictional} and \cite{forterre2008flows}. Interestingly, this relation contains many features of the extended kinetic theory of \cite{vescovi2013from} (compare Fig.~\ref{fig:alphai}b with Fig.~6b in \cite{vescovi2013from}). Notably, the inertial number is a function of only the packing density and the shear rate, $I = f\left(\phi_{\r{g}}, \|\bt{S}_{\r{g}}\|\right)$, because the effective pressure is calculated as a function of the packing density. The same follows for the friction coefficient, $\mu = f\left(\phi_{\r{g}}, \|\bt{S}_{\r{g}}\|\right)$, and the deviatoric stress tensor, $\|\bt{T}_{\r{g}}\| = f\left(\phi_{\r{g}}, \|\bt{S}_{\r{g}}\|\right)$. This highlights that the two-phase model implements a density-dependent rheology, rather than a pressure-dependent rheology. \begin{figure*} \begin{center} \includegraphics[scale=0.75]{fig4.eps} \end{center} \caption{Left: Effective pressure $p_{\r{s}}$ following the $\phi(I)$-relation as a function of packing density $\phi_{\r{g}}$ and deviatoric shear rate $\|\bt{S}_{\r{g}}\|$. The dashed lines show the original relation of \cite{forterre2008flows}, the continuous coloured lines show the modified relation and the black line the quasi-static limit following \cite{johnson1987frictional}. Right: The critical packing density as a function of particle pressure $p_{\r{s}}$ and deviatoric shear rate $\|\bt{S}_{\r{g}}\|$. Dashed lines follow the original $\phi(I)$-relation, continuous lines the modified version. Critical state theory would result in horizontal lines in this plot.} \label{fig:alphai} \end{figure*} It should be noted that there are various possibilities to combine critical state theory and the $\mu(I)$-$\phi(I)$-rheology. An alternative approach including bulk viscosity is provided by e.g.~\cite{schaeffer2019constitutive}.
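The combined closure can be evaluated directly; the following Python sketch implements Eq.~\eqref{eq:ps4} with the parameters later used for the subaerial case (Tab.~\ref{tab:balmforth}), for assumed shear rates. It is valid for $\phi_{\r{rlp}} < \phi_{\r{g}} < \phi_{\r{rcp}}$.
\begin{verbatim}
# Effective pressure, Eq. (ps4): quasi-static critical state line
# (Johnson & Jackson) plus the dynamic phi(I) contribution.
a, phi_rlp, phi_rcp = 130.0, 0.53, 0.63   # critical state line parameters
d, rho_g = 1e-3, 2600.0                   # grain diameter (m), density
dphi, phi_bar = 0.1, 0.6                  # loosening factor, reference density

def p_s(phi_g, S_norm):
    p_qs = a * (phi_g - phi_rlp) / (phi_rcp - phi_g)
    p_dyn = rho_g * phi_g / phi_bar * (
        2.0 * S_norm * d * dphi / (phi_rcp - phi_g))**2
    return p_qs + p_dyn

for S_norm in (0.0, 10.0, 100.0):         # assumed shear rates (1/s)
    print(S_norm, p_s(phi_g=0.6, S_norm=S_norm))
\end{verbatim}
At vanishing shear rate only the quasi-static term remains, reproducing the critical state limit.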
\subsubsection{$\phi(J)$-relation} \label{sssec:phiJ} The low Stokes number regime requires the replacement of the inertial number $I$ with the viscous number $J$. The dependence of the packing density on the viscous number was described by \cite{boyer2011unifying} as \begin{equation} \phi_{\r{g}} = \dfrac{\phi_{\r{m}}}{1+\sqrt{J}}, \end{equation} and we can derive the effective pressure by inserting the viscous number as \begin{equation} p_{\r{s}} = \dfrac{2\,\nu_{\r{c}}\,\rho_{\r{c}}\,\|\bt{S}_{\r{g}}\|}{\left(\frac{\phi_{\r{m}}}{\phi_{\r{g}}}-1\right)^2}. \end{equation} Notably, \cite{boyer2011unifying} emphasised that $\phi_{\r{m}}$ does not match the random close packing density $\phi_{\r{rcp}} \approx 0.63$ but rather takes a value close to $0.585$. This leads to substantial problems for large values of $\phi_{\r{g}}$, as the relation is only valid for $\phi_{\r{g}} < \phi_{\r{m}} = 0.585$ or $\|\bt{S}_{\r{g}}\| = 0$. In other words, shearing is only possible for $\phi_{\r{g}} < \phi_{\r{m}}$. We solve this issue by allowing a creeping shear rate of up to $S_0$ at packing densities above $\phi_{\r{m}}$. Further, and as before, we superpose the relation with the quasi-static relation of \cite{johnson1987frictional} to yield the correct asymptotic values for $\|\bt{S}_{\r{g}}\| \rightarrow 0$ known from critical state theory. The final relation reads \begin{equation} p_{\r{s}} = a\,\dfrac{\phi_{\r{g}}-\phi_{\r{rlp}}}{\phi_{\r{rcp}}-\phi_{\r{g}}} + \dfrac{2\,\nu_{\r{c}}\,\rho_{\r{c}}\,\|\bt{S}_{\r{g}}\|}{\left(\frac{\hat{\phi}_{\r{m}}}{\phi_{\r{g}}}-1\right)^2},\label{eq:ps5} \end{equation} with \begin{equation} \hat{\phi}_{\r{m}} = \begin{cases} \phi_{\r{m}}+\left(\phi_{\r{rcp}}-\phi_{\r{m}}\right)\,\dfrac{S_{0}-\|\bt{S}\|}{S_{0}} & \text{for} \quad S_{0} > \|\bt{S}\|, \\ \phi_{\r{m}} & \text{else}. \end{cases} \end{equation} The respective relation is shown in Fig.~\ref{fig:alphaj} alongside the original relations of \cite{johnson1987frictional} and \cite{boyer2011unifying}. States with $\|\bt{S}\| \geq S_0$ and $\phi_{\r{g}} \geq \phi_{\r{m}}$, or with $\phi_{\r{g}} \geq \phi_{\r{rcp}}$, are not intended by this model and simulations should be terminated if such states appear. \begin{figure*} \begin{centering} \includegraphics[scale=0.75]{fig5.eps} \end{centering} \caption{Left: Particle pressure $p_{\r{s}}$ following the $\phi(J)$-relation as a function of packing density $\phi_{\r{g}}$ and deviatoric shear rate $\|\bt{S}_{\r{g}}\|$. The dashed lines show the original relation of \cite{boyer2011unifying}, the continuous coloured lines show the modified relation and the black line the quasi-static limit following \cite{johnson1987frictional}. Right: The critical packing density as a function of particle pressure $p_{\r{s}}$ and deviatoric shear rate $\|\bt{S}_{\r{g}}\|$. Dashed lines follow the original $\phi(J)$-relation, continuous lines the modified version. The grey area shows the region where only creeping shear rates below $S_0$ are allowed.} \label{fig:alphaj} \end{figure*} \subsection{Drag and permeability model} \label{ssec:drag} The drag model describes the momentum exchange between grains and pore fluid in the two-phase model and largely controls permeability, excess pore pressure relaxation, and the settling velocity of grains. A wide range of drag models for various situations can be found in the literature. Here we use the Kozeny-Carman relation as applied by \cite{pailha2009two}, \begin{equation} k_{\r{g}\r{c}} = 150\,\dfrac{\phi_{\r{g}}^2\,\nu_{\r{c}}\,\rho_{\r{c}}}{\phi_{\r{c}}\,d^2},\label{eq:ksc} \end{equation} with the grain diameter $d$ as the only parameter. This relation is supposed to be valid for small relative velocities and densely packed granular material. It has been modified to account for higher relative velocities \citep{ergun1952fluid} and lower packing densities \citep{gidaspow1994multiphase}; however, these extensions are not relevant for the investigated configurations (see \cite{si2018development} for an application of the extended relation). This relation is visualized in Fig.~\ref{fig:drag}a for various diameters and packing densities. The drag coefficient can be reformulated into a permeability coefficient as known from soil mechanics and porous media theory.
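A minimal Python sketch of Eq.~\eqref{eq:ksc} is given below, evaluated for the subaquatic parameters used later in section~\ref{sec:rondon}; the permeability conversion derived next is included for convenience.
\begin{verbatim}
# Kozeny-Carman drag coefficient, Eq. (ksc), and the resulting
# intrinsic permeability (conversion derived in the text below).
d = 2.25e-4                    # grain diameter (m)
nu_c, rho_c = 1.2e-5, 1000.0   # pore fluid viscosity (m^2/s), density
phi_g = 0.6
phi_c = 1.0 - phi_g

k_gc = 150.0 * phi_g**2 * nu_c * rho_c / (phi_c * d**2)
kappa = nu_c * rho_c / k_gc    # = phi_c d^2 / (150 phi_g^2)
print(f"k_gc = {k_gc:.3e} kg/(m^3 s), kappa = {kappa:.3e} m^2")
\end{verbatim}
This reproduces the order of magnitude $\kappa\approx10^{-10}\,\r{m^2}$ quoted in section~\ref{sec:rondon}.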
Comparing Darcy's law \citep[e.g.][]{bear1972dynamics} with the equations of motion for the fluid phase, we can calculate the hydraulic conductivity as \begin{equation} K = \dfrac{\rho_{\r{c}}\,|\b{g}|}{k_{\r{gc}}}, \end{equation} and furthermore the intrinsic permeability \citep[e.g.][]{bear1972dynamics} as \begin{equation} \kappa = K\,\dfrac{\nu_{\r{c}}}{|\b{g}|} = \dfrac{\nu_{\r{c}}\,\rho_{\r{c}}}{k_{\r{gc}}} = \dfrac{\phi_{\r{c}}\,d^2}{150\,\phi_{\r{g}}^2}.\label{eq:permeability} \end{equation} The permeability is visualized in Fig.~\ref{fig:drag}b. In a similar manner, the drag coefficient can be calculated as \begin{equation} k_{\r{g}\r{c}} = \dfrac{\rho_{\r{c}}\,\nu_{\r{c}}}{\kappa}, \end{equation} if the intrinsic permeability of the granular material is known. \begin{figure*} \begin{centering} \includegraphics[scale=0.75]{fig6.eps} \end{centering} \caption{Drag coefficient $k_{\r{gc}}$ (left) and permeability $\kappa$ (right) following the Kozeny-Carman relation \citep{pailha2009two} for various grain diameters (colour) and packing densities (x-axis).} \label{fig:drag} \end{figure*} \subsection{Numerical solution and exception handling} \label{ssec:numerics} All models are implemented in OpenFOAM-v1812 \citep{weller1998tensorial, opencfd2009user} and solved with the finite volume method \citep{jasak1996error, rusche2002computational, moukalled2016finite}. \subsubsection{Two-component landslide-model} The two-component model is based on the solver multiphaseInterFoam, using the PISO algorithm \citep{issa1986solution} and interpolations following \cite{rhie1983numerical} to solve the coupled system of pressure and velocity. First, an updated velocity field is calculated without the contribution of pressure. The predicted velocity field is later corrected to be divergence-free, and the pressure follows from the required correction. Finally, all other fields, e.g.~the component indicator functions, are updated. This procedure is repeated in each time step. Components (slide and ambient air or water) are divided by an interface which is supposed to be sharp. However, the interface is often smeared by numerical diffusion. To keep the interface between components sharp, the relative velocity between components $\b{u}_{\r{ij}}$, which was previously eliminated from the system, is reintroduced in Eq.~\eqref{eq:NS_alpha}, \begin{equation} \dfrac{\partial \alpha_i}{\partial t} + \bnabla\bcdot\left(\alpha_i\,\b{\ol{u}}\right) + \bnabla\bcdot\left(\alpha_i\,\alpha_j\,\b{u}_{ij}\right) = 0.\label{eq:sp_alpha} \end{equation} Eq.~\eqref{eq:sp_alpha} is finally solved using the MULES algorithm (Multidimensional Universal Limiter with Explicit Solution) \citep{weller2008new}. This scheme limits the interface compression term (i.e.~the term containing $\b{u}_{ij}$) to avoid overshoots ($\alpha_i > 1$) and undershoots ($\alpha_i < 0$) of the component indicator fields. There is no conservation equation for the relative velocity in the two-component model and it has to be reconstructed from assumptions. Two methods are known to construct the relative velocity for granular flows. \cite{barker2020coupling} suggest deriving it from physical effects such as segregation and settling. The relative velocity then follows as the terminal velocity of spheres in the surrounding fluid under the influence of gravity.
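As an illustration of this first method, the following Python sketch computes a settling-based relative velocity from Stokes' terminal velocity of a single sphere. This is a minimal reading of the approach, not a reproduction of the exact closure of \cite{barker2020coupling}, and all numbers are illustrative.
\begin{verbatim}
import numpy as np

d, rho_g = 2.25e-4, 2500.0     # grain diameter (m), grain density (kg/m^3)
nu_c, rho_c = 1.2e-5, 1000.0   # fluid viscosity (m^2/s), density (kg/m^3)
g = np.array([0.0, 0.0, -9.81])

# Stokes terminal velocity (dilute limit): (rho_g - rho_c) |g| d^2 / (18 mu_c)
mu_c = nu_c * rho_c            # dynamic viscosity (Pa s)
u_t = (rho_g - rho_c) * 9.81 * d**2 / (18.0 * mu_c)
u_rel = u_t * g / np.linalg.norm(g)   # grains settle along gravity
print(f"u_t = {u_t:.3e} m/s, u_rel = {u_rel} m/s")
\end{verbatim}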
Alternatively, one can construct a velocity field that is normal to the interface and of the same magnitude as the average velocity $\b{\ol{u}}$, \begin{equation} \b{u}_{ij} = |\b{\ol{u}}|\,\dfrac{\alpha_{j}\,\bnabla\,\alpha_{i} - \alpha_{i}\,\bnabla\,\alpha_{j}}{|\alpha_{j}\,\bnabla\,\alpha_{i} - \alpha_{i}\,\bnabla\,\alpha_{j}|}. \end{equation} This method provides the maximum sharpening effect \citep{weller2008new} and is thus applied in this work. \subsubsection{Two-phase landslide-model} The two-phase model is based on the solver multiphaseEulerFoam. The system of pressure and average velocity is solved with the same concept as in the two-component solver. The velocity fields for all phases are first predicted without contributions from the pore pressure $p$, but including the effective pressure $p_{\r{s}}$. The average velocity is then corrected to be divergence-free and the pore pressure follows from the required correction. In a further step, the velocity correction is applied to the phase velocities. The solution procedure is described in depth by \cite{rusche2002computational}. The interface compression term is not required in this model because settling and segregation are directly simulated, counteracting numerical diffusion. The implementation of the effective pressure term is taken from SedFoam~2.0 \citep{chauchat2017sedfoam}. \subsubsection{Time stepping} \label{sssec:timestepping} The numerical solution of transport equations is subject to limitations that pose restrictions on the solution method. One of these limitations is known as the Courant-Friedrichs-Lewy (CFL) condition and is enforced by limiting the CFL number. In convection-dominated problems, the CFL number is defined as the ratio of the time step duration $\Delta t$ and the cell convection time $\Delta x/u_x$, i.e.~the time required for a particle to pass a cell of size $\Delta x$, \begin{equation} \r{CFL}^{\r{conv}} = \dfrac{u_x\,\Delta t}{\Delta x}.\label{eq:cfl_conv} \end{equation} For the stability of e.g.~the forward Euler method, it is required that the time step duration is smaller than the cell convection time, \begin{equation} \r{CFL}^{\r{conv}} \leq 1, \end{equation} and similar limits exist for other explicit methods. This limitation has to be enforced by choosing the time step duration $\Delta t$ according to mesh size and flow velocity. However, Eq.~\eqref{eq:cfl_conv} is only valid for convection-dominated problems. In the case of granular flows, the viscous term dominates all other terms. Therefore, the viscosity has to be considered in the calculation of the CFL number and the time step duration. The respective definition, ignoring the contribution of convection, follows as \begin{equation} \r{CFL}^{\r{diff}} = \dfrac{\nu\,\Delta t}{\Delta x^2}.\label{eq:cfl_nu} \end{equation} This relation is imperative for the stability of explicit and semi-implicit Navier-Stokes solvers when viscous forces dominate. The squared cell size in the denominator and the high viscosity introduce very strict limitations on the time step, making computations very expensive. Note that simplified relations for the one-dimensional case are given here; the full multi-dimensional conditions for arbitrary finite volume cells can be found in \cite{rauter2021numerical}.
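The practical consequence of the two limits is illustrated by the following Python sketch, which evaluates Eqs.~\eqref{eq:cfl_conv} and \eqref{eq:cfl_nu} for a cell size representative of the simulations below; the velocity scale is an assumed value.
\begin{verbatim}
dx = 0.0017       # cell size (m), as used for the subaerial case
u_x = 1.0         # velocity scale (m/s), assumed
nu_max = 1.0      # upper viscosity threshold (m^2/s)

dt_conv = dx / u_x          # time step for CFL_conv = 1
dt_diff = dx**2 / nu_max    # time step for CFL_diff = 1
print(f"dt_conv = {dt_conv:.2e} s, dt_diff = {dt_diff:.2e} s")
\end{verbatim}
The diffusive limit ($\approx 3\cdot10^{-6}\,\r{s}$) is almost three orders of magnitude stricter than the convective limit, consistent with the time steps reported below.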
\section{Subaerial granular collapse \citep{balmforth2005granular}} \label{sec:balmforth} As a first test of the numerical models, we simulate the granular collapse experiments of \cite{balmforth2005granular} under subaerial conditions. A sketch of the experiment is shown in Fig.~\ref{fig:balmforth}. The experiment was conducted between two parallel, smooth walls and the setup is approximated as a 2D granular collapse. \cite{balmforth2005granular} conducted multiple experiments with different geometries; here we focus on the experiments with an aspect ratio of $H/L = 1/2$, but similar results have been obtained for other aspect ratios. In theory, both the two-component and the two-phase model should be equally capable of simulating this case, because pore pressure plays a minor role. Most parameters, such as density, quasi-static friction coefficient, and particle diameter, are reported by \cite{balmforth2005granular}. The missing parameters are supplemented with data from the literature. Notably, the experiments were conducted on a smooth surface, which is incorporated in the simulations by switching to a constant friction coefficient $\mu_{\r{wall}}$ at smooth surfaces. This modification is simple in the finite volume method because stresses are calculated on cell faces before their divergence is calculated as a sum over faces. The Stokes number is estimated to be of order $10^3$ (with $\|\bt{S}\| = 10\,\r{s^{-1}}$) for these experiments, and the $\mu(I)$-$\phi(I)$-rheology is chosen to describe friction and effective pressure. Parameters for the $\mu(I)$- and $\phi(I)$-curves are chosen in the physically reasonable range ($\mu_2-\mu_1 \approx 0.3$, $I_0 \approx 0.25$, $\Delta\phi=0.1$) following various references \citep[e.g.][]{forterre2008flows} in combination with values reported by \cite{balmforth2005granular}. A wide range of limiting packing densities can be found in the literature, with $\phi_{\r{rlp}}$ varying between $0.5$ \citep{si2018development} and $0.598$ \citep{vescovi2013from}, and $\phi_{\r{rcp}}$ varying between $0.619$ \citep{vescovi2013from} and $0.64$ \citep{savage2014modeling}. These parameters are therefore optimized for the subaquatic case (section~\ref{sec:rondon}), where extended measurements are available, and applied to this case without further modification. The average packing density is assumed to be $\ol{\phi} = 0.6$, following the critical state line at this pressure level. The applied pressure equation is visualized in Fig.~\ref{fig:alphai}. From the height $H=0.1\,\r{m}$, the required viscosity threshold $\nu_{\max}$ can be estimated following Eq.~\eqref{eq:nu_thresh} to be of order $1\,\r{m^{2}\,s^{-1}}$. This estimate was validated with a sensitivity analysis (see appendix~\ref{ssec:convergence_viscosity}). The final set of parameters is given in Tab.~\ref{tab:balmforth}. \begin{figure} \begin{center} \includegraphics[scale=1]{fig7.eps} \end{center} \caption{Experimental column collapse setup of \cite{balmforth2005granular}. The aspect ratio $H/L$ has been varied throughout the experiments. We will focus on the experiment with $L=0.2\,\r{m}$, $H=0.1\,\r{m}$, similar to \cite{savage2014modeling}.} \label{fig:balmforth} \end{figure} \begin{table} \caption{Material parameters for the subaerial granular collapse simulations. Note that not all material parameters are required by all models.} \label{tab:balmforth} \begin{center} \begin{tabular}{llll} \hline phase / component & par.
& value & description\\ \hline air & $\rho_{\r{c}}$ & $1\,\r{kg\,m^{-3}}$ & air density\\ & $\nu_{\r{c}}$ & $1.48\bcdot10^{-5}\,\r{m^{2}\,s^{-1}}$ & air viscosity\\ \hline slide / grains & $d$ & $10^{-3}\,\r{m}$ & particle diameter\\ & $\mu_{\r{wall}}$ & $0.317$ & wall friction coefficient\\ & $\mu_{\r{1}}$ & $0.595$ & quasi-static friction coefficient\\ & $\mu_{\r{2}}$ & $0.895$ & dynamic friction coefficient\\ & $I_0$ & $0.25$ & reference inertial number\\ & $\nu_{\r{min}}$ & $10^{-4}\,\r{m^{2}\,s^{-1}}$ & lower viscosity threshold\\ & $\nu_{\r{max}}$ & $1\,\r{m^{2}\,s^{-1}}$ & upper viscosity threshold\\ & $\ol{\phi}$ & $0.60$ & assumed mean packing density$^3$\\ & $\rho_{\r{s}}$ & $1\,430\,\r{kg\,m^{-3}}$ & slide density$^1$\\ & $\rho_{\r{g}}$ & $2\,600\,\r{kg\,m^{-3}}$ & particle density$^2$\\ & $\phi_{\r{rlp}}$ & $0.53$ & random loose packing density$^2$\\ & $\phi_{\r{rcp}}$ & $0.63$ & random close packing density$^2$\\ & $a$ & $130\,\r{Pa}$ & critical state line parameter$^2$\\ & $\Delta\phi$ & $0.1$ & dynamic loosening factor$^2$\\ \hline \end{tabular} \end{center} $^1$only two-component model.\\ $^2$only two-phase model.\\ $^3$used to match the kinematic viscosity in the two-phase model following Eq.~\eqref{eq:which_nu}.\\ \end{table} Regular meshes of square cells are used to cover a simulation domain of $0.5\times0.2\,\r{m}$, which is large enough to avoid artificial influences from the boundaries. Standard boundary conditions are applied at walls (zero velocity, zero pressure gradient) and at the permeable top (zero velocity gradient, zero pressure). Multiple mesh resolutions were applied to investigate the influence of the grid resolution on the results (see appendix~\ref{ssec:convergence_grid}). The time stepping was investigated with a similar approach, modifying the limit for $\r{CFL}_{\r{max}}^{\r{diff}}$ between $1$ and $1000$ (depending on model and solver mode, see appendix~\ref{ssec:convergence_timestep}). In the following, the CFL number is limited to $1$ and the cell size is set to $0.0017\,\r{m}$, which proved sufficient to achieve converged and mesh-independent results. \subsection{Two-component model} \label{ssec:balmforth_tc} The component indicator for the slide component $\alpha_{\r{s}}$ is initialized to $1$ within the square that forms the initial granular column. We assume that the hydrostatic pore pressure is negligible ($ < 2\,\r{Pa}$) and therefore apply Eq.~\eqref{eq:ps1} to calculate the effective pressure. The simulation, covering a duration of $0.8\,\r{s}$, took $6.9\,\r{h}$ on eight cores of LEO4 (a high-performance cluster of the University of Innsbruck, consisting of Intel Xeon (Broadwell) compute cores). The total pressure, which is assumed to match the effective pressure, is shown for three time steps in Fig.~\ref{fig:2005_sp_p}, alongside the final pile in the experiment. The continuous black line shows the sharp free surface, assumed to be located at $\alpha_{\r{s}} = 0.5$. Furthermore, the velocity field $\b{\ol{u}}$ is shown as arrows. The collapse takes about $0.4\,\r{s}$ and the pile remains in its final shape for the rest of the simulation. The two-component model matches the experiment well; however, the volume of the final pile is slightly underestimated. Results are very robust with respect to mesh refinement or coarsening (see appendix~\ref{ssec:convergence_grid}) and mesh-dependent instabilities \citep[as reported by e.g.][]{martin2017continuum, gesenhues2019finite} have not been observed.
\begin{figure} \begin{center} \includegraphics[scale=1]{fig8.eps} \end{center} \caption{ Total pressure, assumed to match the effective pressure in the two-component model (subaerial case). The black arrows represent the velocity. The continuous black line shows the free surface of the slide ($\alpha_{\r{s}} = 0.5$), the dashed black line shows the final experimental pile shape of \cite{balmforth2005granular}. } \label{fig:2005_sp_p} \end{figure} \subsection{Two-phase model} \label{ssec:balmforth_tp} The two-phase model uses the same parameters as the two-component model, including numerical parameters such as the viscosity threshold and the CFL limit. The phase-fraction $\phi_{\r{g}}$ was initialized such that the effective pressure is in balance with the lithostatic vertical stresses, yielding an initial mean phase-fraction of $\overline{\phi_{\r{g}}} = 0.608$. This procedure ensures that there will be no dilatancy or compaction in stable regions of the pile. The simulation took $9.1\,\r{h}$ under the same conditions as the two-component simulation. A stronger mesh dependency is observed for this model; however, the runout converges for fine meshes (see appendix~\ref{ssec:convergence_grid}). The pore pressure and the effective pressure following the extended $\phi(I)$-theory are shown for three time steps in Fig.~\ref{fig:2005_tp_p}, alongside the final pile shape in the experiment. The continuous black line indicates the position of the free surface, assumed to be located at $\phi_{\r{g}} = 0.25$. The average velocity is shown as arrows in Figs.~\ref{fig:2005_tp_p}a-c, the relative velocity of air with respect to grains in Figs.~\ref{fig:2005_tp_p}d-f. The relative velocity in the initial phase is considerable, indicating an inflow of air into the bulk and thus dilatancy. The two-phase model matches the experiment exceptionally well and the dilatancy in the experiment is matched by the simulation to a high degree. Note that the effective pressure at rest is directly linked to the packing density, which can be qualitatively estimated from Fig.~\ref{fig:2005_tp_p}f. \begin{figure} \begin{center} \includegraphics[scale=1]{fig9.eps} \end{center} \caption{ Pore pressure (a-c) and effective pressure (d-f) in the two-phase model (subaerial case). The arrows show the average velocity (a-c) and the relative velocity (d-f). The continuous black line shows the free surface of the slide ($\phi_{\r{g}} = 0.25$), the dashed black line shows the final experimental pile shape of \cite{balmforth2005granular}. } \label{fig:2005_tp_p} \end{figure} \subsection{Discussion and comparison} \label{ssec:balmforth_dc} Both models performed well in simulating the subaerial granular collapse. This is in line with previous results of e.g.~\cite{lagree2011granular} or \cite{savage2014modeling}. The effective pressure and the total pressure are fairly similar, because excess pore pressure dissipates quickly through dilatancy or compaction. The magnitude of the pore pressure in the two-phase model is smaller than $8\,\r{Pa}$ and thus less than $1\,\%$ of the effective pressure, validating the assumption of negligible pore pressure. The runout is similar in both models; the front is slightly elongated in the two-phase model. Further, the two-phase model shows a better match with the experiment at the upper end of the final slope. Both of these minor differences can be attributed to dilatancy effects. The two-component model is intrinsically not able to capture this process.
Two mechanisms for dilatancy can be observed in the two-phase model. Firstly, the average effective pressure in the slide is reduced as it spreads out, and the packing density decreases proportionally to the effective pressure, as prescribed by the critical state line. Secondly, shearing can reduce the packing density well below the critical packing density due to the dynamic contribution of the $\phi(I)$-theory to the effective pressure. The loosely packed slide will not return to the critical packing density after shearing but remains in a loose state, forming a hysteresis. The granular material is able to remain in a loose state because the deviatoric stress tensor counteracts one-dimensional settling deformations (known as oedometric compression in soil mechanics). Furthermore, the granular column may have been overconsolidated in the experiment; however, this was not incorporated in the model due to the initialisation in the critical state. Dilatancy is rather unimportant under subaerial conditions, as it does not imply changes in rheology or flow dynamics. Therefore, the two-component model is well suited for subaerial granular collapses, where the pore pressure is negligibly small and the Stokes number is well above one. The reduced friction at the smooth basal surface has a small but noticeable effect on the final pile shape. The runout is longer when incorporating the smooth surface and matches the experiment better. Previous works \citep[e.g.][]{savage2014modeling} ignored the smooth bottom of the experiment and still obtained accurate final pile shapes by using a constant friction coefficient. The increased friction of the $\mu(I)$-rheology (in comparison to a constant quasi-static friction coefficient) compensates for the reduced basal friction almost exactly (see appendix~\ref{ssec:dynamics}). The two-component model is less sensitive to the grid resolution than the two-phase model (see appendix \ref{ssec:convergence_grid}) but more sensitive to the time step duration (see appendix \ref{ssec:convergence_timestep}). At the same resolution, both models require roughly the same computational resources and neither model shows a substantial advantage in this regard. It is important to choose the time step duration carefully, as it can have a drastic influence on simulation results. Generally, $\r{CFL}^{\r{diff}}$ has to be limited to one to guarantee satisfactory results, while some cases and solver settings allow higher $\r{CFL}^{\r{diff}}$ numbers. This limitation is much stricter than the traditional CFL criterion, and $\r{CFL}^{\r{conv}}$ is roughly $0.001$. Notably, the time step duration is constant in the simulations, $\Delta t \approx 3\bcdot10^{-6}\,\r{s}$, because the constant maximum viscosity $\nu_{\r{max}}$ in stable regions and the constant cell size $\Delta x$ control the time stepping. \section{Subaqueous granular collapse \citep{rondon2011granular}} \label{sec:rondon} The granular collapse experiments of \cite{rondon2011granular} were conducted under subaquatic conditions and the Stokes number is estimated to be of order $10^{-1}$ (at $\|\bt{S}\| = 10\,\r{s^{-1}}$). Pore pressure, packing density, and permeability play an important role under these conditions and the complexity increases substantially. The experiments accounted for this increased complexity by varying the average initial packing density between $0.55$ and $0.61$.
The pore pressure was recorded by a sensor in the bottom plate, approximately below the centre of the column at $x=0.02\,\r{m}$ (see Fig.~\ref{fig:rondon}). This sensor showed strong variations of the pore pressure in dense and loose experiments, indicating its important role for subaquatic slides. \begin{figure} \begin{center} \includegraphics[scale=1]{fig10.eps} \end{center} \caption{Experimental column collapse setup of \cite{rondon2011granular}. The packing density and the aspect ratio have been varied in the experiment. We will focus on a densely and a loosely packed case, similar to \cite{savage2014modeling}.} \label{fig:rondon} \end{figure} A loose or underconsolidated ($\ol{\phi_{\r{g}}} = 0.55$, $L=0.06\,\r{m}$, $H=0.048\,\r{m}$) and a dense or overconsolidated ($\ol{\phi_{\r{g}}} =0.6$, $L=0.06\,\r{m}$, $H=0.042\,\r{m}$) simulation are conducted in this work to investigate the sensitivity of the model. As before, the experiments were conducted between two parallel, smooth walls and the setup is approximated with 2D simulations. Most material parameters are reported by \cite{rondon2011granular}; parameters for the $\mu(J)$- and $\phi(J)$-curves are supplemented with data from \cite{boyer2011unifying}. The quasi-static friction coefficient $\mu_{\r{1}}$ is taken from \cite{si2018development}. The particles have a diameter of $d=0.225\,\r{mm}$ and are immersed in a Ucon solution \citep[for details, see][]{rondon2011granular} with a viscosity of $\nu_{\r{c}}=1.2\bcdot10^{-5}\,\r{m^{2}\,s^{-1}}$ (about $10$ times higher than that of water), leading to a very low permeability of $\kappa\approx10^{-10}\,\r{m^2}$ following Eq.~\eqref{eq:permeability}. Early tests revealed that the two-phase model reacts very sensitively to the critical state line parameters $\phi_{\r{rlp}}$, $\phi_{\r{rcp}}$, and $a$. Parameters from the literature \citep[e.g.~the critical state line applied by][]{si2018development} lead to unrealistic granular pressures at $\phi_{\r{g}}=0.60$ and could thus not be applied. We set the limiting packing densities to $\phi_{\r{rlp}}=0.53$ and $\phi_{\r{rcp}}=0.63$ to allow initial average packing densities between $0.55$ and $0.61$. The scaling parameter $a$ was found by matching the peak pore pressure in the dense simulation with the respective measurement (see Fig.~\ref{fig:2011time_p}). The total set of parameters used for both cases is shown in Tab.~\ref{tab:rondon}. Regular meshes of square cells with size $0.0005\,\r{m}$ are applied, covering a simulation domain of $0.15\,\r{m}\times0.105\,\r{m}$ (dense case) and $0.25\,\r{m}\times0.105\,\r{m}$ (loose case). The CFL number $\r{CFL}^{\r{diff}}$ is limited to $10$ in order to keep computation times at a reasonable level. A sensitivity study was conducted to verify convergence at this grid size (see appendix~\ref{ssec:convergence_grid}) and $\r{CFL}^{\r{diff}}$ number (see appendix~\ref{ssec:convergence_timestep}). \begin{table} \caption{Material parameters for the subaquatic granular collapse simulations. Note that not all material parameters are required by all models.} \label{tab:rondon} \begin{center} \begin{tabular}{llll} \hline phase & par.
& value & description\\ \hline Ucon mix & $\rho_{\r{c}}$ & $1\,000\,\r{kg\,m^{-3}}$ & Ucon mix density\\ & $\nu_{\r{c}}$ & $1.2\bcdot10^{-5}\,\r{m^{2}\,s^{-1}}$ & Ucon mix viscosity\\ \hline slide & $d$ & $2.25\bcdot10^{-4}\,\r{m}$ & particle diameter\\ & $\mu_{\r{1}}$ & $0.340$ & quasi-static friction coefficient\\ & $\mu_{\r{2}}$ & $0.740$ & dynamic friction coefficient\\ & $J_0$ & $0.005$ & reference viscous number\\ & $\nu_{\r{min}}$ & $10^{-4}\,\r{m^{2}\,s^{-1}}$ & lower viscosity threshold\\ & $\nu_{\r{max}}$ & $1\,\r{m^{2}\,s^{-1}}$ & upper viscosity threshold\\ & $\ol{\phi}$ & $0.60$ & assumed mean packing density$^3$\\ & $\rho_{\r{s}}$ & $1\,900\,\r{kg\,m^{-3}}$ & slide density$^1$\\ & $\rho_{\r{g}}$ & $2\,500\,\r{kg\,m^{-3}}$ & particle density$^2$\\ & $\phi_{\r{rlp}}$ & $0.53$ & random loose packing density$^2$\\ & $\phi_{\r{rcp}}$ & $0.63$ & random close packing density$^2$\\ & $a$ & $130\,\r{Pa}$ & critical state line parameter$^2$\\ & $\phi_{\r{m}}$ & $0.585$ & dynamic reference packing density$^2$\\ & $S_0$ & $5\,\r{s^{-1}}$ & maximum creep shearing$^2$\\ \hline \end{tabular} \end{center} $^1$only two-component model.\\ $^2$only two-phase model.\\ $^3$used to match the kinematic viscosity in the two-phase model following Eq.~\eqref{eq:which_nu}. \end{table} \subsection{Two-component model - dense case} \label{ssec:rondon_tc1} The hydrostatic pore pressure is high under subaquatic conditions and the two-component model applies Eq.~\eqref{eq:ps2} to consider its influence on the effective pressure. All parameters are taken from Tab.~\ref{tab:rondon}. The evolution of the slide geometry, the effective pressure and the velocity $\b{\ol{u}}$ are shown in Fig.~\ref{fig:2011_sp_p1}, alongside the final experimental pile shape. The final pile shape of the model corresponds roughly to the experiment. The velocity, on the other hand, corresponds roughly to the loose case, and the collapse is completed after $1\,\r{s}$, whereas the dense experiment took more than $30\,\r{s}$. The simulation and its failure mechanism are similar to the subaerial case, where the free, unsupported side of the pile collapses until a stable slope inclination is reached. Notably, neither the dense nor the loose experiment showed such a failure mechanism (see Fig.~\ref{fig:sketch}). No excess pore pressure is included in this model, and a hypothetical pressure sensor at the bottom of the column would constantly measure $0\,\r{Pa}$, as indicated in Fig.~\ref{fig:2011time_p}. \begin{figure} \begin{center} \includegraphics[scale=1]{fig11.eps} \end{center} \caption{ Effective pressure at $t=0.2\,\r{s}$ (a), $t=0.4\,\r{s}$ (b) and $t=1.0\,\r{s}$ (c) in the two-component model (subaquatic dense case). The black arrows represent the velocity. The continuous black line shows the free surface of the slide ($\alpha_{\r{s}} = 0.5$), the dashed black line shows the final experimental pile shape of \cite{rondon2011granular}. } \label{fig:2011_sp_p1} \end{figure} \subsection{Two-component model - loose case} \label{ssec:rondon_tc2} The two-component model provides only few, and largely ineffective, means to account for variations of the packing density. To simulate the loose granular collapse with this model, the average packing density is changed to $\ol{\phi} = 0.55$ and the bulk density correspondingly to $\rho_{\r{s}} = 1825\,\r{kg\,m^{-3}}$. Further, the initial column geometry is changed as reported by \cite{rondon2011granular}. All other parameters match the dense case.
Changing rheology parameters, e.g.~$\mu_1$ or $\mu_2$ \citep[as e.g.][]{wang2017two}, is technically possible but does not help in understanding the physical process or the influence of the packing density. The difference to the dense simulation is very small and thus not shown here \citep[see, e.g.][for similar results]{bouchut2017two}. As before, the final pile shape is close to the dense experiment while the simulated velocity is close to the loose experiment. The runout is slightly longer than in the dense simulation because the loose column is slightly taller. \subsection{Two-phase model - dense case} \label{ssec:rondon_tp1} The two-phase model allows us to explicitly consider variations in the initial packing density. The dense case is initialized with a homogeneous packing density of $0.60$. The evolution of the dense granular column as simulated with the two-phase model is shown in Fig.~\ref{fig:2011d_p}, alongside three states of the experiment at $t=3\,\r{s}$, $6\,\r{s}$ and $30\,\r{s}$. The simulation, covering a duration of $10\,\r{s}$, took $240\,\r{h}$ on 8 cores of LEO4. The dense case is dominated by negative excess pore pressure (Fig.~\ref{fig:2011d_p}a-e), meaning that the pore pressure within the slide is lower than outside. The effective pressure (Fig.~\ref{fig:2011d_p}f-j) is correspondingly higher, which increases the shear strength of the column. Initially, the shear strength is high enough to delay the collapse and to keep the column mostly stable. The pore pressure gradient leads to a suction of fluid into the column (Fig.~\ref{fig:2011d_p}g-h) and the granular material dilates. Dilation reduces the effective pressure and allows the column to collapse. This happens first near the free surface on the unsupported side of the column, leading to a breaching-like flow of grains (Fig.~\ref{fig:2011d_p}g-h). Grains mix with fluid at the breaching edge, reducing packing density, effective pressure, and thus friction to very low values. The resulting mixture behaves like a small turbidity current and reaches long run-outs on shallow slopes, as visible in Fig.~\ref{fig:2011d_p}i-j. The zone of low particle pressure extends towards the centre of the column with time and further mobilisation occurs. At $t = 0.5\,\r{s}$, we can see the formation of a shear band. The grains above the shear band slide off, first as a triangular cohesive block (note the uniform velocity field in Fig.~\ref{fig:2011d_p}b), which disintegrates between $t=1\,\r{s}$ and $t=3\,\r{s}$ (Fig.~\ref{fig:2011d_p}i). The overall process is finished (i.e.~$t_{\r{end}}$) in the simulation after roughly $10\,\r{s}$, while the experiment took about $30\,\r{s}$. The final pile form and the failure mechanism match the experiment very well, which can best be seen in a comparison with the videos provided by \cite{rondon2011granular}, see Fig.~\ref{fig:sketch}. Further, a good match with the measured excess pore pressure is achieved, as shown in Fig.~\ref{fig:2011time_p}. The time scale and velocity of the collapse, on the other hand, differ substantially between simulation and experiment. Notably, pore pressure $p$ and effective pressure $p_{\r{s}}$ do not sum up to the total vertical load, as a considerable fraction of the vertical load is transferred to the ground by viscous stresses.
\begin{figure} \includegraphics[scale=1]{fig12.eps} \caption{ Pore pressure (a-f) and effective pressure (g-l) at $t=0.05\,\r{s}$ (a, g), $t=0.5\,\r{s}$ (b, h), $t=1\,\r{s}$ (c, i), $t=3\,\r{s}$ (d, j), $t=6\,\r{s}$ (e, k) and the final state (f, l) using the two-phase model (subaquatic dense case). The black arrows represent the average velocity (a-f) and the relative velocity (g-l). The final state ($t_{\r{end}}$) is reached at $t=10\,\r{s}$ in the simulation (small velocities remain) but $t=30\,\r{s}$ in the experiment. The black line shows the free surface of the slide, assumed at $\phi_{\r{s}}=0.25$. The free surface of the experiment is shown for comparison as a black dashed line. } \label{fig:2011d_p} \end{figure}

\subsection{Two-phase model - loose case} \label{ssec:rondon_tp2}

The simulation of the loose granular column uses the same parameters as the dense simulation. The packing density in the column is initialized homogeneously to $\phi = 0.55$ and its height is increased as reported by \cite{rondon2011granular}. The simulation, covering a duration of $6\,\r{s}$, took $213\,\r{h}$ on 8 cores of LEO4. As a result of the very loose packing, the effective shear strength is low and the column collapses rapidly and entirely, without any static regions. The pore pressure has to support the majority of the weight and is correspondingly high (Fig.~\ref{fig:2011l_p}a). The effective pressure increases at the rapidly flowing front at $t=0.25\,\r{s}$ (Fig.~\ref{fig:2011l_p}g) due to the dynamic contribution following the $\phi(J)$-theory. The increase in effective pressure leads to a proportional increase in friction and the front is slowed down, Fig.~\ref{fig:2011l_p}h-i. Although the effective pressure is low in comparison to the dense case (four times lower), the friction is sufficient to bring the slide to a stop. The final slope inclination is shallow and the low quasi-static particle pressure is sufficient to support the slope, Fig.~\ref{fig:2011l_p}j. The packing density increases slightly during the collapse but the stability is mostly gained by reducing the slope inclination. The final pile shape matches the experiment very well; only a small amount of granular material forms a turbidity current that exceeds the runout of the experiment. The simulated velocity is higher than in the experiment but the difference is less severe than in the dense case. The simulated excess pore pressure differs remarkably from the measured excess pore pressure, as shown in Fig.~\ref{fig:2011time_p}. Two stages can be observed in the simulated excess pore pressure history. First, the simulation shows a high peak of excess pore pressure, exceeding the highest experimental pore pressure by a factor of two. The peak dissipates quickly, as the slide and thus overburden pressure leave the region where the pore pressure sensor is installed. This first peak does not appear in the experiment, where the highest pore pressure is reached in a flatter peak at a later point in time. In a second phase, starting at $t=1\,\r{s}$, the pore pressure dissipates much more slowly. In this phase, the pore pressure dissipation is driven by compaction of the granular material and is slightly underestimated by the model.
\begin{figure} \includegraphics[scale=1]{fig13.eps} \caption{Pore pressure (a-e) and effective pressure (f-j) at $t=0.05\,\r{s}$ (a, f), $t=0.25\,\r{s}$ (b, g), $t=0.65\,\r{s}$ (c, h), $t=1.30\,\r{s}$ (d, i) and the final state ($t_{\r{end}} = 6.0\,\r{s}$) (e, j) using the two-phase model (subaquatic loose case). The black arrows represent the average velocity (a-e) and the relative velocity (f-j). The black line shows the free surface of the slide, assumed at $\phi_{\r{s}}=0.25$. The free surface of the experiment is shown for comparison as a black dashed line. } \label{fig:2011l_p} \end{figure}

\begin{figure} \begin{center} \includegraphics[scale=1]{fig14.eps} \end{center} \caption{The excess pore pressure as a function of time for the subaquatic granular collapses. The loose simulation (red) shows a strong peak of excess pore pressure that exceeds the experimental measurement (upper black dashed line). The dense simulation (blue) fits the experimental measurement (lower black dashed line) well. The two-component simulation forms a horizontal line at $p=0\,\r{Pa}$ as it neglects excess pore pressure.} \label{fig:2011time_p} \end{figure}

\subsection{Discussion and comparison} \label{sec:rondon_dc}

The subaqueous granular collapse clearly exceeds the capabilities of the two-component model. The high sensitivity to the initial packing density cannot be explained with this model, and the loose and dense simulations are virtually identical. Results of the two-component model lie between the two extreme cases of the loose and dense experiment, matching the velocity of the loose and the run-out of the dense experiment. This is reasonable, considering that the neglected excess pore pressure would stabilize the dense column and destabilize the loose column. This model is not sufficient for a practical application, as the runout is substantially underestimated in the loose case. Extremely long run-outs on slopes with $2^\circ$ inclination have been observed in nature \citep[e.g.][]{bryn2005explaining} and they cannot be explained with a granular two-component model. The two-phase model can take advantage of its ability to capture excess pore pressure. It outperforms the two-component model by showing the correct final pile shapes (Figs~\ref{fig:2011d_p}f and \ref{fig:2011l_p}e) and a consistent sensitivity to pore pressure and initial packing density (Fig.~\ref{fig:2011time_p}). The failure mechanisms of both the dense and the loose experiment are successfully simulated (see Fig.~\ref{fig:sketch}), indicating that the two-phase model captures the most important physical phenomena. The model falls short in two aspects: the pore pressure peak in the loose case and the time scale in the dense case differ from the experiment by factors of $2$ and $3$, respectively. It should be noted that no exhaustive parameter fitting was required for these results. Only the critical state line was optimized to yield the correct pore pressure; all other parameters were selected a priori following \cite{rondon2011granular}, \cite{savage2014modeling}, and \cite{boyer2011unifying}. Notably, some of the issues, e.g.~the overestimated velocity of the loose collapse, might be resolvable by fitting parameters. Furthermore, the model allows us to simulate both cases with the same set of parameters with good accuracy. This distinguishes this work from earlier attempts \citep[e.g.][]{savage2014modeling, wang2017two, si2018development}, where some parameters were fitted individually to the dense and loose cases.
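To make the constitutive relations discussed here concrete, the following Python sketch evaluates the $\mu(J)$- and $\phi(J)$-curves of \cite{boyer2011unifying} with the parameters from Tab.~\ref{tab:rondon}. The algebraic form of the quasi-static critical state line is an assumption for illustration, chosen such that it reproduces the initial particle pressure of $303.3\,\r{Pa}$ at $\phi=0.60$ quoted below; it is a sketch of the constitutive relations, not the solver implementation.
\begin{verbatim}
import numpy as np

# parameters from Tab. (rondon) above
mu_1, mu_2, J_0 = 0.340, 0.740, 0.005  # friction law parameters
phi_m = 0.585                          # dynamic reference packing density
a = 130.0                              # critical state line parameter (Pa)
phi_rlp, phi_rcp = 0.53, 0.63          # random loose/close packing

def mu_J(J):
    """Effective friction coefficient mu(J) (Boyer et al. 2011)."""
    return (mu_1 + (mu_2 - mu_1) / (1.0 + J_0 / J)
            + J + 2.5 * phi_m * np.sqrt(J))

def phi_J(J):
    """Steady-state packing density phi(J) (Boyer et al. 2011)."""
    return phi_m / (1.0 + np.sqrt(J))

def p_s_quasistatic(phi):
    """Assumed critical state line: zero below phi_rlp,
    diverging towards phi_rcp."""
    phi = np.asarray(phi, dtype=float)
    return np.where(phi > phi_rlp,
                    a * (phi - phi_rlp) / (phi_rcp - phi), 0.0)

print(p_s_quasistatic(0.60))  # approx. 303.3 Pa, cf. the value quoted below
\end{verbatim}
For $J\to0$, the friction coefficient approaches $\mu_{\r{1}}$ and the packing density approaches $\phi_{\r{m}}$; below $\phi_{\r{rlp}}$, the quasi-static contribution to the effective pressure vanishes, consistent with the behaviour of the breaching layers described below.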
Excess pore pressure plays an important role in subaquatic experiments because it controls shear strength and friction. Dilatancy, compaction, and the dynamic particle pressure further influence friction and thus the kinematics of the slide. The dense column is only able to collapse after decreasing its packing density and thus its effective shear strength. The column dilates until reaching the limiting packing density $\phi_{\r{m}}$. Before this packing density is reached, the shear rate is limited to the creeping shear rate $S_0$. A relatively high value was used for this parameter and a lower creeping limit would be desirable, especially considering the error of the time scale in the dense simulation (see Appendix~\ref{ssec:creep_shear}). However, strong oscillations were observed when choosing lower values for $S_0$ because the shear rate often exceeded $S_0$ before the material had dilated sufficiently.

\begin{figure} \begin{center} \includegraphics[scale=1]{fig15.eps} \end{center} \caption{Selected snapshots of the experiments from \cite{rondon2011granular} (a,d,g), the simulations (b,e,h) and corresponding sketches (c,f,i). The distance between marks on the axes is $0.02\,\r{m}$. The snapshots highlight the gliding of a cohesive block and breaching (a,b,c), the remoulding of the block due to shearing (d,e,f) and the formation of hydroplaning and turbidity currents (g,h,i) at the loose front.} \label{fig:sketch} \end{figure}

The bottom of the dense column compacts further in the simulation, up to a packing density of $0.604$. This is reasonable, as the initial particle pressure of $303.3\,\r{Pa}$ at $\phi=0.60$ is below the overburden pressure of $370.8\,\r{Pa}$ of the pile. At the same time, negative excess pore pressure can be observed at the bottom of the column. Compaction and negative excess pore pressure seem to contradict each other at first glance. However, the negative excess pore pressure in the upper parts of the column is so strong that fluid flows upwards from the bottom of the column. This can be seen in the relative velocity field (Fig.~\ref{fig:2011d_p}h), but also the gradient of pore pressure (Fig.~\ref{fig:2011d_p}b) indicates that pore liquid will flow upwards. The front speed of the loose collapse is entirely controlled by the dynamic contribution of the $\mu(J)$-$\phi(J)$-rheology to effective pressure and friction. Simulations with critical state theory (constant friction coefficient $\mu$ and the quasi-static effective pressure model of \cite{johnson1987frictional}) exceed the experimental runout by far (see Appendix~\ref{ssec:dynamics}). This is in strong contrast to the subaerial case, where acceptable results could be achieved with critical state theory. The dynamic contribution to particle pressure and friction also plays an important role in the dense case, although this pile collapses very slowly. The thin layers of grains that are breaching from the unsupported column flank reach packing densities far below $\phi_{\r{rlp}} = 0.53$ due to mixing with the ambient fluid. At this packing density, the quasi-static contribution to effective pressure vanishes, and the runout of these particles is entirely controlled by dynamic particle pressure and friction. The runout of the breaching flank could not be controlled in simulations with critical state theory (see Appendix~\ref{ssec:dynamics}). The pore pressure in the loose case differs qualitatively and quantitatively from the measurement.
Within the applied model, it seems reasonable that a high initial peak decreases quickly, as substantial amounts of grains and thus overburden pressure leave the region of the pressure sensor. Similar results with an early, short, and strong peak and a slow further dissipation, close to the measurement, have been obtained with other frameworks, e.g.~by \cite{bouchut2017two} or \cite{baumgarten2019general}. The dilatancy of the dense column is substantially faster in the numerical model than in the experiment, although the permeability is underestimated, as the comparison of the pore pressure suggests. Therefore, it is unlikely that permeability is the cause of this discrepancy, and we assume that inaccuracies in the rheology are responsible. The $\mu(J)$-$\phi(J)$-rheology describes the steady shearing of a fluid-grain mixture very well \citep{boyer2011unifying}. However, the transient transition towards the steady packing density at a certain effective pressure is not described. This transition depends on the permeability of the granular material but also on its viscosity (shear and bulk viscosity). As mentioned before, the high value for the creeping shear rate $S_0$ could be responsible for this issue but it might also be related to the missing bulk viscosity or a mismatch of constitutive parameters. Bulk viscosity could delay the dilatancy in the early stage of the dense collapse, bringing the time scale of the collapse in the simulation closer to the experiment. Bulk viscosity could further help to decrease the pore pressure peak in the loose case, as some of the pore pressure could be transformed into viscous pressure. \cite{schaeffer2019constitutive} suggests a form for the bulk viscosity which has the potential to improve these aspects. \cite{savage2014modeling} and \cite{si2018development} include a cohesive shear strength in their models to correct some of these problems and to fit results to the experiment. However, there is no evidence for cohesive forces in a fully submerged granular flow. Neither electrostatic forces nor cementing have been reported by \cite{rondon2011granular}. Apparent cohesion can be traced back to negative excess pore pressure, which is directly simulated by the numerical model. Notably, \cite{si2018development} are able to control the slide velocity very well. However, this is achieved by fitting the cohesion to the respective case and by a strong overestimation of the negative excess pore pressure, reaching values around $500\,\r{Pa}$ at the pressure sensor at $t=3\,\r{s}$ \cite[see Fig.~5 by][]{si2018development}. \cite{baumgarten2019general} applied a similar model (an elasto-plastic multiphase model with $\mu(K)$-$\phi(K)$-scaling) to the same cases. The results show similar problems, i.e.~an overestimation of the pore pressure in the loose case and an overestimation of the collapse velocity in the dense case. Notably, we achieve similar results in these test cases with a substantially simpler model.

\section{Conclusions} \label{sec:conclusion}

The Navier-Stokes equations can be an adequate tool for accurate simulations of granular flows when they are complemented with the correct rheologies. Substantial progress has been made in recent years with the $\mu(I)$-rheology and its extensions to compressible flows and low Stokes number flows.
The incompressible $\mu(I)$-rheology fits well into the multi-component framework of OpenFOAM and the compressible $\mu(I)$-$\phi(I)$-rheology fits well into the multi-phase framework, as previously shown by e.g.~\cite{chauchat2017sedfoam}. We apply, for the first time, the compressible $\mu(I)$-$\phi(I)$-rheology to granular collapses and avalanching flows. The superposition with the critical state theory is imperative to obtain realistic packing densities at rest and a stable solver. For subaerial flows, i.e.~at high Stokes numbers, dilatancy plays a minor role and the results of the compressible model are similar to those of the incompressible model. However, the dilatancy predicted by the compressible model is able to close the gap between the experiments and the incompressible model. Further, the compressible model should be well-posed \citep{barker2017well, heyman2017compressibility, schaeffer2019constitutive}, in contrast to many incompressible granular flow models \citep{barker2015well}. Note that bulk viscosity, which is imperative for a well-posed rheology \citep[e.g.][]{schaeffer2019constitutive}, was not considered in this study. However, the coupling of the granular phase to the pore fluid has a similar effect to bulk viscosity and might be able to restore a well-posed system. For a guaranteed well-posed compressible rheology that collapses to the $\mu(I)$-$\phi(I)$-rheology in steady state, the reader is referred to \cite{schaeffer2019constitutive}. The upsides of the compressible two-phase model come at the cost of more parameters and a stronger mesh dependence. Furthermore, code and case setup are more complicated with the two-phase model and simulations are more prone to failure if initial conditions or parameters are not well suited for the case. Therefore, the incompressible model might be better suited for some flows at high Stokes numbers, especially considering regularized rheologies that are well-posed for a wide range of flow regimes \citep[e.g.][]{barker2017partial}. Notably, we did not encounter any problems with the partial ill-posedness of the $\mu(I)$-rheology, which could be related to relatively coarse grids, high numerical diffusion, the short simulation duration or the truncation of the viscosity. The extension to low Stokes number flows is made possible by the $\mu(J)$-$\phi(J)$-rheology. At low Stokes numbers, it is imperative to consider excess pore pressure and a two-phase model is required. Therefore, the incompressible $\mu(J)$-rheology is rather impractical and becomes applicable only after being supplemented with the $\phi(J)$-curve to form the compressible $\mu(J)$-$\phi(J)$-rheology. The dynamic growth of pressure and friction is essential for accurate results, highlighting the value of the $\mu(J)$-$\phi(J)$-rheology. The fitting of parameters was reduced to a minimum and only the critical state line had to be optimized to the experiments. It should be noted that these parameters could be determined by measuring the critical packing density at a few pressure levels, making the simulations free of any fitted parameter. The compressible two-phase model reacts sensitively to the packing density, recreating the final runout, pile shape, and failure mechanism of the experiments very well. The model still falls short in some aspects, e.g.~the time scale and the velocity of the dense collapse and the pore pressure peak in the loose collapse.
It was shown that the incompressible two-component model can be derived from the compressible two-phase model by neglecting the relative velocity between phases. This simplification yields reasonable results for subaerial granular flows at high Stokes numbers but fails to describe the subaquatic granular flows at low Stokes numbers. This seems to be contradictory, as the relative velocity (which was neglected in the incompressible model) is very small in the subaquatic case (see Figs.~\ref{fig:2011d_p} and \ref{fig:2011l_p}) but considerably high in the subaerial case (see Fig.~\ref{fig:2005_tp_p}). This apparent paradox can be resolved by the fact that unhindered density changes have no notable influence on the flow dynamics. However, if changes in packing density are constrained, pore pressure will build up and the rheology of the material will change drastically. Thus, pore pressure, rather than compressibility, is the key factor that allows the two-phase model to accurately capture the flow mechanics. The two-phase model provides many other upsides aside from the inclusion of pore pressure. The continuous transition from dense granular material to pure ambient fluid should be useful for the simulation of granular free streams \citep{viroulet2017multiple}, turbidity currents \citep{heerema2020determines} and powder snow avalanches \citep[e.g.][]{sovilla2015structure}. Other studies showed that the two-phase model is useful for sediment transport \citep{chauchat2017sedfoam} and other dilute particle-fluid mixtures \citep[e.g.][]{passalacqua2011implementation}. OpenFOAM provides a good platform to evaluate concepts (e.g.~the multi-component and multi-phase methodology) and models (e.g.~the $\mu(I)$-$\phi(I)$- and $\mu(J)$-$\phi(J)$-rheologies). The implemented rheologies can be further coupled with segregation \citep{barker2020coupling} or tsunami simulations \citep[e.g.][]{si2018general}. However, the segregated semi-implicit solver strategy of OpenFOAM sets limits on models and execution speed, as (parts of the) viscous terms and the particle pressure are included explicitly. This proved to be problematic, and a fully implicit solver, which solves all equations simultaneously, might be superior in this regard. The model can help to understand the extreme runouts of submarine landslides, such as the Storegga landslide \citep[e.g.][]{bryn2005explaining}, and the large variation in tsunamigenic potentials \citep[e.g.][]{lovholt2017some}. Theories such as hydroplaning and remoulding \citep[e.g.][]{de2004hydroplaning} can be quantitatively described by critical state theory and its dynamic extension in the form of the $\mu(J)$-$\phi(J)$-rheology. Hydroplaning, formerly described as the flowing of sediment on a thin layer of liquid, can be interpreted as a region of low or even zero packing density and vanishing effective pressure. This can be observed in Fig.~\ref{fig:sketch}g-i, where the front of the loose slide is lifted by pressure in the surrounding fluid. Remoulding can similarly be explained with critical state theory as an overconsolidated sample that dilates during shearing (see Fig.~\ref{fig:sketch}a-f). The two-phase model and its capability to describe various and realistic failure mechanisms with different time scales are particularly valuable for understanding the tsunamigenic potential of submarine landslides and the respective slopes. The dense column collapses very slowly, reaching velocities of up to $0.1\,\r{m\,s^{-1}}$ in small layers near the surface.
The loose column collapses entirely with velocities up to $0.4\,\r{m\,s^{-1}}$. The tsunamigenic potential of a landslide scales with initial acceleration and the mobilized volume \citep[e.g.][]{lovholt2017some}, and a substantial difference in tsunamigenic potential thus follows for the dense and the loose slide. This shows that packing density, excess pore pressure and permeability are key parameters in controlling stability, failure mechanism, slide acceleration, and tsunamigenic potential. Many full-scale subaquatic landslide simulations are based on Bingham fluids, a visco-plastic rheology independent of the pressure \citep[e.g.][]{kim2019landslide}. This seems to stand in strong contradiction to the model applied here. However, the simulation of the loose case shows that packing density changes are small. For a nearly constant packing density, the effective pressure decouples from overburden pressure because the weight is absorbed entirely by pore pressure. As a consequence, overburden pressure and friction will decouple and the microscopic granular friction will appear as cohesion on a macroscopic scale. The macroscopic description as a Bingham fluid is therefore surprisingly consistent with the findings in this work, especially for fine-grained marine sediments with low permeabilities.

\section{Summary} \label{sec:summary}

This work highlights a path to extend the incompressible $\mu(I)$-rheology for subaerial granular flows to the compressible $\mu(J)$-$\phi(J)$-rheology for subaquatic granular flows. The implementation of the $\mu(I)$-$\phi(I)$-rheology in a multi-phase framework and the $\mu(I)$-rheology in a multi-component framework allows us to simulate subaerial granular collapses with two different models. The application shows consistency between the incompressible $\mu(I)$-rheology \citep[e.g.][]{lagree2011granular} and the compressible $\mu(I)$-$\phi(I)$-rheology. Notably, substantial modifications to the $\phi(I)$-curve are required for a practical application of the rheology. The simulations show that compressibility and dilatancy have a small influence on high Stokes number flows because excess pore pressure is negligibly small. The implementation of the $\mu(J)$-$\phi(J)$-rheology extends possible applications to low Stokes number flows, e.g.~subaquatic granular collapses. The incompressible model reaches its limitations under these conditions and the compressible model is required for an accurate simulation. In contrast to previous attempts, we applied the exact same set of parameters to an initially dense and an initially loose granular collapse, with satisfactory results. Notably, the application of the $\mu(J)$-$\phi(J)$-rheology does not require an extensive fitting of constitutive parameters. The comparison between the compressible model and experiments uncovered discrepancies in the time scale and the pore pressure. These could be indicators for issues in the rheology, e.g.~a missing bulk viscosity or issues with the creeping regime that had to be introduced for numerical stability. The well-posedness of the proposed model is not guaranteed and should be investigated in the future. The compressible two-phase model has a wide range of applications and the results have implications for many problems in geoscience. Applications to sediment transport and scouring \citep{cheng2017sedfoam} have been shown with a similar model. We further expect applicability to turbidity currents and all other gravitational mass flows at low and high Stokes numbers.
Furthermore, \cite{si2018general} showed the applicability of a similar model to landslide tsunami simulations by incorporating the free water surface. \backsection[Acknowledgements]{The author gratefully acknowledges the support from Wolfgang Fellin and the research area scientific computing of the University of Innsbruck. The author thanks Gertraud Medicus, Thomas Barker, Finn L{\o}vholt and Geir Pedersen for helpful comments and Pascale Aussillous for authorizing the use of pictures of the experiment. Further, the author thanks the editor and two referees for their helpful comments and support in publishing this work.} \backsection[Funding]{This project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sk{\l}odowska-Curie grant agreement No.~721403 (SLATE). The computational results presented have been achieved (in part) using the HPC infrastructure LEO of the University of Innsbruck.} \backsection[Declaration of interests]{The author reports no conflict of interest.} \backsection[Author ORCID]{M. Rauter, https://orcid.org/0000-0001-7829-6751}
\section{Introduction} \label{sec:introduction}

Large computationally expensive models based on 2D/3D convolutional neural networks (CNNs) are widely used in video understanding \citep{tran2015learning, carreira2017quo, tran2018closer}. Thus, increasing computational efficiency is highly sought after \citep{feichtenhofer2020x3d, zhou2018mict, zolfaghari2018eco}. However, most of these efficient approaches focus on architectural changes in order to maximize network capacity while maintaining a compact model \citep{zolfaghari2018eco, feichtenhofer2020x3d} or on improving the way that the network consumes temporal information \citep{feichtenhofer2018slowfast, korbar2019scsampler}. Despite promising results, it is well known that CNNs perform unnecessary computations at some levels of the network \citep{han2015deep, howard2017mobilenets, sandler2018mobilenetv2, feichtenhofer2020x3d, Pan_2018_CVPR}, especially for video models, since the high appearance similarity between consecutive frames results in a large amount of redundancy. In this paper, we aim at dynamically reducing the internal computations of popular video CNN architectures. Our motivation comes from the existence of highly similar feature maps across both time and channel dimensions in video models. Furthermore, this internal redundancy varies depending on the input: for instance, static videos will have more temporal redundancy, whereas videos depicting a single large moving object tend to produce a higher number of redundant feature maps. To reduce the varied redundancy across channel and temporal dimensions, we introduce an input-dependent redundancy reduction framework called VA-RED$^2$ (Video Adaptive REDundancy REDuction) for efficient video recognition (see Figure~\ref{fig:teaser} for an illustrative example). Our method is model-agnostic and hence can be applied to any state-of-the-art video recognition network. The key mechanism that VA-RED$^2$ uses to increase efficiency is to replace full computations of some redundant feature maps with cheap reconstruction operations. Specifically, our framework avoids computing all the feature maps. Instead, we calculate only the non-redundant feature maps and reconstruct the rest from them using cheap linear operations. In addition, VA-RED$^2$ makes decisions on a per-input basis: our framework learns an input-dependent policy that defines a ``full computation ratio'' for each layer of a 2D/3D network. This ratio determines the amount of features that will be fully computed at that layer, versus the features that will be reconstructed from the non-redundant feature maps. Importantly, we apply this strategy to both time and channel dimensions. We show that for both traditional video models such as I3D \citep{carreira2017quo} and R(2+1)D~\citep{tran2018closer}, and more advanced models such as X3D \citep{feichtenhofer2020x3d}, this method significantly reduces the total floating point operations (FLOPs) on common video datasets without accuracy degradation. The main \textbf{contributions} of our work include: {(1)} A novel \textbf{input-dependent adaptive framework} for efficient video recognition, VA-RED$^2$, that automatically decides which feature maps to compute per input instance. Our approach is in contrast to most current video processing networks, where feature redundancy across both time and channel dimensions is not directly mitigated.
{(2)} An \textbf{adaptive policy} jointly learned with the network weights in a fully differentiable way with a shared-weight mechanism, which allows us to make decisions on how many feature maps to compute. Our approach is model-agnostic and can be applied to any backbone to reduce feature redundancy in both time and channel domains. {(3)} \textbf{Striking results of VA-RED$^2$ over baselines}, with a $30\%$ reduction in computation in comparison to R(2+1)D~\citep{tran2018closer}, a $40\%$ reduction over I3D-InceptionV2~\citep{carreira2017quo}, and about $20\%$ over the recently proposed X3D-M~\citep{feichtenhofer2020x3d}, without any performance loss, for the video action recognition task. The superiority of our approach is extensively tested on three video recognition datasets (Mini-Kinetics-200, Kinetics-400~\citep{carreira2017quo}, and Moments-In-Time~\citep{monfort2019moments}) and one spatio-temporal action localization dataset (J-HMDB-21~\citep{jhuang2013towards}). {(4)} A \textbf{generalization of our framework} to video action recognition, spatio-temporal localization, and semantic segmentation tasks, achieving promising results while offering significant reduction in computation over competing methods.

\begin{figure*} \centering \includegraphics[width=\linewidth]{figures/teaser_vared.pdf} \vspace{-4mm} \caption{Our VA-RED$^2$ framework dynamically reduces the redundancy in two dimensions. Example 1 (left) shows a case where the input video has little movement. The features in the temporal dimension are highly redundant, so our framework fully computes a subset of features, and reconstructs the rest with cheap linear operations. In the second example, we show that our framework can reduce computational complexity by performing a similar operation over channels: only part of the features along the channel dimension are computed, and cheap operations are used to generate the rest.} \vspace{-4mm} \label{fig:teaser} \end{figure*}

\section{Related Work} \label{sec:relatedwork} \vspace{-2mm}

\paragraph{Efficiency in Video Understanding Models.} Video understanding has made significant progress in recent years, mainly due to the adoption of convolutional neural networks, in the form of 2D CNNs~\citep{karpathy2014large,simonyan2014two,cheron2015p,feichtenhofer2017spatiotemporal,gkioxari2015finding,wang2016temporal,zhou2018temporal,lin2019tsm,fan2019more} or 3D CNNs~\citep{tran2015learning,carreira2017quo,hara2018can,tran2018closer}. Despite promising results on common benchmarks, there is a significant interest in developing more efficient techniques and smaller models with reasonable performance. Previous works have shown reductions in computational complexity by using hybrid 2D-3D architectures \citep{xie2018rethinking, zhou2018mict, zolfaghari2018eco}, group convolution \citep{tran2019video}, or by selecting salient clips \citep{korbar2019scsampler}. Feichtenhofer et al.~\citep{feichtenhofer2018slowfast} propose a dedicated low-framerate pathway. Expansion of 2D architectures through a stepwise expansion approach over key variables such as temporal duration, frame rate, spatial resolution, and network width was recently proposed in \citep{feichtenhofer2020x3d}. Diba et al.~\citep{diba2019dynamonet} learn the motion dynamics of videos with a self-supervised task for video understanding. Fan et al.~\citep{fan2020rubiksnet} incorporate an efficient learnable 3D-shift module into a 3D video network. Wang et al.~\citep{wang2020video} devise a correlation module to learn correlation along the temporal dimension.
Li et al.~\citep{li2020directional} encode the clip-level ordered temporal information with a CIDC network. While these approaches bring considerable efficiency improvements, none of them dynamically calibrates the required feature map computations on a per-input basis. Our framework achieves substantial improvements in average efficiency by avoiding redundant feature map computation depending on the input. \vspace{-1mm} \textbf{Adaptive Inference.} Many adaptive computation methods have been recently proposed with the goal of improving efficiency~\citep{bengio2015conditional,bengio2013estimating,veit2018convolutional,wang2018skipnet,graves2016adaptive,meng2021adafuse}. Several works add decision branches to different layers of CNNs to learn whether to exit the network for faster inference~\citep{yu2018slimmable,figurnov2017spatially,mcgill2017deciding,teerapittayanon2016branchynet}. Wang et al.~\citep{wang2018skipnet} propose to skip convolutional blocks on a per-input basis using reinforcement learning and supervised pre-training. Veit et al.~\citep{veit2018convolutional} propose a block skipping method controlled by samples from a Gumbel softmax, while Wu et al.~\citep{wu2018blockdrop} develop a reinforcement learning approach to achieve this goal. Adaptive computation time for recurrent neural networks is also presented in~\citep{graves2016adaptive}. SpotTune~\citep{guo2019spottune} learns to adaptively route information through finetuned or pre-trained layers. A few works have been recently proposed for selecting salient frames conditioned on the input~\citep{yeung2016end,wu2019adaframe,korbar2019scsampler,gao2019listen} while recognizing actions in long untrimmed videos. Different from adaptive data sampling~\citep{yeung2016end,wu2019adaframe,korbar2019scsampler,gao2019listen}, our goal in this paper is to remove feature map redundancy by deciding how many features need to be computed along the temporal and channel dimensions on a per-input basis, for efficient video recognition. AR-Net~\citep{meng2020ar} recently learns to adaptively choose the resolution of input frames with several individual backbone networks for video inference. In contrast, our method focuses on reducing the redundancy in both the temporal and channel dimensions, and is applicable to both 3D and 2D models, while AR-Net applies only to 2D models and focuses on spatial resolution. Moreover, our method integrates all the inference routes into a single model that is almost the same size as the original base model. Thus, our model is significantly smaller than AR-Net in terms of the number of model parameters. \vspace{-1mm} \textbf{Neural Architecture Search.} Our network learns the best internal redundancy reduction scheme, which is similar to previous work on automatically searching architectures~\citep{elsken2018efficient}. Liu et al.~\citep{liu2018darts} formulate the architecture search task in a differentiable manner; Cai et al.~\citep{cai2018proxylessnas} directly learn architectures for a target task and hardware; Tan et al.~\citep{tan2019efficientnet} design a compound scaling strategy that searches through several key dimensions for CNNs (depth, width, resolution). Finally, Tan et al.~\citep{tan2019mnasnet} incorporate latency to find efficient networks adapted for mobile use.
In contrast, our approach learns a policy that chooses over full or reduced convolutions at inference time, effectively switching between various discovered subnetworks to minimize redundant computations and deliver high accuracy.

\section{Video Adaptive Redundancy Reduction} \label{sec:proposedmethod} \vspace{-2mm}

Our main goal is to automatically decide which feature maps to compute for each input video in order to classify it correctly with the minimum computation. The intuition behind our proposed method is that there are many similar feature maps along the temporal and channel dimensions. For each video instance, we estimate the ratio of feature maps that need to be fully computed along the temporal dimension and channel dimension. Then, we reconstruct the remaining feature maps from those pre-computed feature maps using cheap linear operations. \textbf{Approach Overview.}\label{sec:overview} Without loss of generality, we start from a 3D convolutional network $\mathcal{G}$, and denote its $l^{th}$ 3D convolution layer as $f_l$, and the corresponding input and output feature maps as $X_l$ and $Y_l$ respectively. For each 3D convolution layer, we use a very lightweight policy layer $p_l$, denoted as \emph{soft modulation gate}, to decide the ratio of feature maps along the temporal and channel dimensions which need to be computed. As shown in Figure~\ref{fig:temporal_channel}, for temporal-wise dynamic inference, we reduce the computation of the 3D convolution layer by dynamically scaling the temporal stride of the 3D filter with a factor $R = 2^{p_l(X_l)[0]}$. Thus the shape of the output $Y_l'$ becomes $C_{out}\times T_o/R \times H_o\times W_o$. To keep the same output shape, we reconstruct the remaining features based on $Y_l'$ as
\begin{equation}
Y_{l}[j + iR] = \left\{ \begin{array}{ll}
\Phi^t_{i,j}(Y_{l}'[i]) & \text{if } j\in\{1, ..., R-1\} \\
Y_{l}'[i] & \text{if } j=0
\end{array} \right. , \qquad i\in\{0, 1, ..., T_o/R-1\},
\end{equation}
where $Y_{l}[j + iR]$ represents the $(j+iR)^{th}$ feature map of $Y_l$ along the temporal dimension, $Y_{l}'[i]$ denotes the $i^{th}$ feature map of $Y_l'$, and $\Phi^t_{i,j}$ is the cheap linear operation along the temporal dimension. The total computational cost of this process can be written as:
\begin{equation}\label{eq:comp_temporal} \mathcal{C}(f_l^t) = \frac{1}{R}\cdot \mathcal{C}(f_l) + \sum_{i,j}\mathcal{C}(\Phi^t_{i, j}) \approx \frac{1}{R}\cdot \mathcal{C}(f_l), \end{equation}
where the function $\mathcal{C}(\cdot)$ returns the computation cost for a specific operation, and $f_l^t$ represents our dynamic convolution process along the temporal dimension. Different from temporal-wise dynamic inference, we reduce the channel-wise computation by dynamically controlling the number of output channels. We scale the output channel number with a factor $r = (\frac{1}{2})^{p_l(X_l)[1]}$. In this case, the shape of the output $Y_l'$ is $rC_{out}\times T_o\times H_o\times W_o$. As before, we reconstruct the remaining features via cheap linear operations, which can be formulated as $Y_{l} = [Y_l', \Phi^c(Y_l')]$, where $\Phi^c(Y_l') \in R^{(1-r)C_{out}\times T_o\times H_o\times W_o}$ represents the cheaply generated feature maps along the channel dimension, and $Y_l\in R^{C_{out}\times T_o\times H_o\times W_o}$ is the output of the channel-wise dynamic inference.
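For concreteness, the following PyTorch sketch implements the temporal- and channel-wise reconstruction described above for fixed policy decisions $(R, r)$. Realizing the cheap operations $\Phi^t$ and $\Phi^c$ as $1\times1\times1$ convolutions, and sharing $\Phi^t_{i,j}$ across the index $i$, are simplifying assumptions made for this illustration; the sketch is not the exact implementation of our framework.
\begin{verbatim}
import torch
import torch.nn as nn

class DynamicConv3dSketch(nn.Module):
    """Temporal/channel-wise reconstruction for a fixed policy (R, r)."""
    def __init__(self, c_in, c_out, k=3, stride=1, R=2, r=0.5):
        super().__init__()
        self.R = R
        self.c_full = int(c_out * r)  # channels computed by the full conv
        # full convolution, run at an enlarged temporal stride R*stride
        self.conv = nn.Conv3d(c_in, self.c_full, k,
                              stride=(R * stride, stride, stride),
                              padding=k // 2)
        # cheap temporal reconstruction Phi^t: one map per offset j >= 1
        self.phi_t = nn.ModuleList(
            [nn.Conv3d(self.c_full, self.c_full, 1) for _ in range(R - 1)])
        # cheap channel reconstruction Phi^c for the remaining channels
        self.phi_c = nn.Conv3d(self.c_full, c_out - self.c_full, 1)

    def forward(self, x):
        y = self.conv(x)  # (N, r*C_out, T/R, H, W)
        # interleave computed frames with cheap reconstructions:
        # position j + i*R holds Phi^t_j(Y'[i]) for j >= 1 and Y'[i] for j = 0
        frames = [y] + [phi(y) for phi in self.phi_t]
        y = torch.stack(frames, dim=3).flatten(2, 3)  # (N, r*C_out, T, H, W)
        # append cheaply generated channels along the channel dimension
        return torch.cat([y, self.phi_c(y)], dim=1)   # (N, C_out, T, H, W)

# example: layer = DynamicConv3dSketch(64, 128)
# layer(torch.randn(2, 64, 8, 32, 32)).shape -> (2, 128, 8, 32, 32)
\end{verbatim}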
The total computation cost of joint temporal-wise and channel-wise dynamic inference is:
\begin{equation} \mathcal{C}(f_l^{t,c}) \approx \frac{r}{R}\cdot \mathcal{C}(f_l), \end{equation}
where $f_l^{t,c}$ is the joint process of temporal-wise and channel-wise dynamic inference.

\begin{figure*} \centering \includegraphics[width=\linewidth]{figures/temporal_module_2.pdf} \vspace{-6mm} \caption{An illustration of dynamic convolution along the temporal dimension (a) and channel dimension (b) respectively. $\Phi_t$ and $\Phi_s$ represent the temporal cheap operation and spatial cheap operation respectively. In (a), we multiply the temporal stride $S$ with the factor $R = 2^{p_t}$ to reduce computation, where $p_t$ is the temporal policy output by the soft modulation gate. In (b), we compute part of the output features with the ratio of $r=(\frac{1}{2})^{p_c}$, where $p_c$ is the channel policy. Best viewed in color.} \label{fig:temporal_channel} \vspace{-1mm} \end{figure*}

\textbf{Soft Modulation Gate for Differentiable Optimization.}\label{sec:policy} We adopt an extremely lightweight policy layer $p_l$ called the soft modulation gate for each convolution layer $f_l$ to modulate the ratio of features which need to be computed. Specifically, the soft modulation gate takes the input feature maps $X_l$ as input and learns two probability vectors $V^l_t \in R^{S_t}$ and $V^l_c \in R^{S_c}$, where $S_t$ and $S_c$ are the temporal search space size and the channel search space size respectively. The $V^l_t$ and $V^l_c$ are learned by:
\begin{equation} [V^l_t, V^l_c] = p_l(X_l) = \phi(\mathcal{F}(\omega_{p,2}, \delta(\mathcal{N}(\mathcal{F}(\omega_{p,1}, G(X_l))))) + \beta_p^l), \end{equation}
where $\mathcal{F}(\cdot, \cdot)$ denotes the fully-connected layer, $\mathcal{N}$ is the batch normalization, $\delta(\cdot)$ represents the $\tanh(\cdot)$ function, $G$ is the global pooling operation whose output shape is $C_{in}\cdot T\times 1\times 1$, $\phi(\cdot)$ is the output activation function, here chosen as $\max(\tanh(\cdot), 0)$ whose output range is $[0,1)$, and $\omega_{p,1}\in R^{D_h\times C_{in}\cdot T}$, $\omega_{p,2}\in R^{(S_t+S_c)\times D_h}$ are the weights of their corresponding layers, with $D_h$ the hidden dimension number. $V_t^l$ and $V_c^l$ will then be used to modulate the ratio of the feature maps to be computed in the temporal-wise dynamic convolution and channel-wise dynamic convolution. During training, we obtain the final output of the dynamic convolution by a weighted sum of all the feature maps, which contain different ratios of fully-computed features, as follows:
\begin{equation}\label{eq:whole_process} Y^l_c = \sum^{S_c}_{i=1} V_c^l[i]\cdot f_l^c(X_l, r=(\frac{1}{2})^{(i-1)}), \quad Y_l = \sum^{S_t}_{j=1} V_t^l[j]\cdot f_l^t(Y^l_c, R=2^{(j-1)}), \end{equation}
where $f_l^c(\cdot, r)$ is the channel-wise dynamic convolution with the channel scaling factor $r$, and $f_l^t(\cdot, R)$ is the temporal-wise dynamic convolution with the temporal stride scaling factor $R$. During the inference phase, only the dynamic convolutions whose weights are non-zero will be computed. \textbf{Shared-weight Training and Inference.}\label{sec:shareweight} Many works in adaptive computation and neural architecture search suffer from very heavy computational cost and memory usage during the training stage due to the large search space. In our case, under a naive implementation, the training computational cost and parameter size would grow linearly as the search space size increases.
To train our model efficiently, we utilize a weight-sharing mechanism to reduce the computational cost and training memory. To be specific, we first compute all potentially necessary features using a big kernel. Then, for each dynamic convolution with a different scaling factor, we sample its corresponding ratio of necessary features and reconstruct the remaining features with cheap operations to get the final output. Through this, we are able to keep the computational cost at a constant value, invariant to the search space. More details on this are included in Section~\ref{appen:impl} of the Appendix. \textbf{Efficiency Loss.}\label{sec:efficientloss} To encourage our network to output a computationally efficient subgraph, we introduce the efficiency loss $\mathcal{L}_e$ during the training process, which can be formulated as
\begin{equation} \mathcal{L}_e = \bigg(\mu_0 \sum_{l=1}^{L}\frac{\mathcal{C}(f_l)}{\sum_{k=1}^{L}\mathcal{C}(f_k)} \cdot \frac{r_l^s}{R_l^s}\bigg)^2, \qquad \mu_0 = \left\{ \begin{array}{ll} 1 & \text{if correct} \\ 0 & \text{otherwise} \end{array} \right. ,
\end{equation}
where $r_l^s$ is the channel scaling factor of the largest filter in the series of channel-wise dynamic convolutions, and $R_l^s$ is the stride scaling factor of the largest filter of the temporal-wise dynamic convolutions. Overall, the loss function of our whole framework can be written as $\mathcal{L} = \mathcal{L}_a + \lambda_e\mathcal{L}_e$, where $\mathcal{L}_a$ is the accuracy loss of the whole network and $\lambda_e$ is the weight of the efficiency loss, which can be used to balance the importance of the optimization of prediction accuracy and computational cost.

\section{Experiments} \vspace{-1mm}

\textbf{Datasets.} We conduct our \textbf{video action recognition} experiments on three standard benchmarks: Mini-Kinetics-200, Kinetics-400, and Moments-In-Time. Mini-Kinetics-200 (assembled by~\citep{meng2020ar}) is a subset of the full Kinetics dataset~\citep{carreira2017quo} containing 121k videos for training and 10k videos for testing across 200 action classes. The Moments-In-Time dataset has 802,244 videos in training and 33,900 videos in validation across 339 categories. To show the generalization ability to a different task, we also conduct \textbf{video spatio-temporal action localization} experiments on J-HMDB-21~\citep{jhuang2013towards}. J-HMDB-21 is a subset of the HMDB dataset~\citep{kuehne2011hmdb} which has 928 short videos with 21 action categories. We report results on the first split. For \textbf{semantic segmentation} experiments, we use the ADE20K dataset~\citep{zhou2017scene, zhou2018semantic}, containing 20k images for training and 2k images for validation. ADE20K is a densely labeled image dataset where objects and object parts are segmented down to the pixel level. We report results on the validation set. \textbf{Model Architectures.}\label{sec:arch} We evaluate our method on three of the most widely-used model architectures: I3D~\citep{carreira2017quo}, R(2+1)D~\citep{tran2018closer}, and the recent efficient model X3D~\citep{feichtenhofer2020x3d}. We consider I3D-InceptionV2 (denoted as I3D below) and R(2+1)D-18 (denoted as R(2+1)D below) as our base models. In our implementation of X3D, we remove all the swish non-linearities~\citep{ramachandran2017searching} except those in the SE layer~\citep{hu2018squeeze} to save training memory and increase the inference speed on GPU. We choose X3D-M (denoted as X3D below) as our base model and demonstrate that our method is generally effective across datasets.
\textbf{Implementation Details.} We train and evaluate our baseline models by mainly following the settings in their original papers~\citep{tran2018closer, xie2018rethinking, feichtenhofer2020x3d}. We train all our base and dynamic models for 120 epochs on Mini-Kinetics-200 and Kinetics-400, and for 60 epochs on the Moments-In-Time dataset. We use a mini-batch size of 12 clips per GPU and adopt synchronized SGD with a cosine learning rate decay strategy~\citep{loshchilov2016sgdr} to train all our models. Dynamic models are finetuned with the efficiency loss for 40/20 epochs to reduce the density of the inference graph while maintaining the accuracy. During finetuning, we set $\lambda_e$ to $0.8$ and the learning rate to $0.01$ for R(2+1)D and $0.1$ for I3D and X3D. For testing, we adopt the \emph{K-LeftCenterRight} strategy: $K$ temporal clips are uniformly sampled from the whole video, on which we sample the left, center and right crops along the longer spatial axis; the final prediction is obtained by averaging these $3\times K$ clip predictions. We set $K = 10$ on Mini-Kinetics-200 and Kinetics-400 and $K=3$ on Moments-In-Time. More implementation details are included in Section~\ref{appen:impl} of the Appendix. For video spatio-temporal action localization, we adopt the YOWO architecture of \citep{kopuklu2019yowo} and replace the 2D branch with the 3D backbone to directly compare them. We freeze the parameters of the 3D backbone as suggested in \citep{kopuklu2019yowo} due to the small number of training videos in J-HMDB-21~\citep{jhuang2013towards}. The rest of the network is optimized by SGD with an initial learning rate of $10^{-4}$. The learning rate is reduced with a decay factor of $0.5$ at 10k, 20k, 30k and 40k iterations. For semantic segmentation, we conduct experiments using PSPNet~\citep{zhao2017pyramid}, with dilated ResNet-18~\citep{yu2017dilated, he2016deep} as our backbone architecture. As PSPNet is devised for image semantic segmentation, we only apply the channel-wise redundancy reduction to the model and adopt synchronized SGD training for 100k iterations across 4 GPUs with 2 images on each GPU. The learning rate decay follows the cosine learning rate schedule~\citep{loshchilov2016sgdr}.

\begin{table*}[t] \small \centering \caption{ \textbf{Action recognition results using different number of input frames and different search space.} We choose R(2+1)D-18 on Mini-Kinetics-200 and study the performance with different numbers of input frames and different search spaces (denoted as Sea. Sp.). A search space of 2 means that both the temporal-wise and channel-wise policy networks have 2 alternatives: computing all feature maps, or computing only $\frac{1}{2}$ of the feature maps. Similarly, search space 3 has 3 alternatives: computing \emph{1)} all feature maps, \emph{2)} $\frac{1}{2}$ of the feature maps, \emph{3)} $\frac{1}{4}$ of the feature maps. \xmark~denotes the base model and \cmark~denotes the dynamic model trained using our proposed approach VA-RED$^2$. We also report the average speed of different models in terms of the number of clips processed in one second ($clip/second$).} \vspace{-1mm} \begin{tabular*}{\textwidth}{ccccccccc} \toprule length & Sea. Sp.
& GFLOPs$_{Avg}$ & GFLOPs$_{Max}$ & GFLOPs$_{Min}$ & avg speed & clip-1 & video-1 & video-5 \\ \midrule \multirow{3}{*}{8} & \xmark & $27.7$ & $27.7$ & $27.7$ & $192.1$ & $56.4$ & $66.8$ & $86.8$ \\ & 2 & $20.0 (-28\%)$ & $22.1 (-20\%)$ & $18.0 (-35\%)$ & \textbf{205.5} & $57.7$ & \textbf{68.0} & \textbf{87.4} \\ & 3 & $21.6 (-22\%)$ & $23.2 (-16\%)$ & $19.8 (-29\%)$ & $201.4$ & \textbf{58.2} & $67.7$ & \textbf{87.4} \\ \midrule \multirow{2}{*}{16} & \xmark & $55.2$ & $55.2$ & $55.2$ & $97.1$ & $57.5$ & $67.5$ & $87.1$ \\ & 2 & $40.4 (-27\%)$ & $43.2 (-22\%)$ & $36.6 (-34\%)$ & \textbf{108.7} & \textbf{60.6} & \textbf{70.0} & \textbf{88.7} \\ \midrule \multirow{2}{*}{32} & \xmark & $110.5$ & $110.5$ & $110.5$ & $49.6$ & $60.5$ & $69.4$ & $88.2$ \\ & 2 & $79.3 (-28\%)$ & $89.5 (-19\%)$ & $72.4 (-34\%)$ & \textbf{53.4} & \textbf{63.3} & \textbf{72.3} & \textbf{89.7} \\ \bottomrule \end{tabular*} \label{tab:frame-search-space} \vspace{-1mm} \end{table*}

\begin{table*}[t] \small \centering \begin{minipage}{.48\textwidth} \centering \caption{\textbf{Action recognition results on Mini-Kinetics-200.} We set the search space as 2 and train all the models with 16 frames. The metric speed uses $clip/second$ as the unit. } \vspace{-3mm} \begin{adjustbox}{width=\textwidth,center} \begin{tabular*}{1.12\textwidth}{cccccc} \toprule Model & Dy. & GFLOPs & Speed & clip-1 & video-1 \\ \midrule \multirow{2}{*}{R(2+1)D} & \xmark & $55.2$ & $97.1$ & $57.5$ & $67.5$ \\ & \cmark & $40.4$ & \textbf{108.7} & \textbf{60.6} & \textbf{70.0} \\ \midrule \multirow{2}{*}{I3D} & \xmark & $56.0$ & $116.4$ & $59.7$ & $68.3$ \\ & \cmark & $26.5$ & \textbf{141.7} & \textbf{62.2} & \textbf{71.1} \\ \midrule \multirow{2}{*}{X3D} & \xmark & $6.20$ & $169.4$ & \textbf{66.5} & \textbf{72.2} \\ & \cmark & $5.03$ & \textbf{178.2} & $65.5$ & $72.1$ \\ \bottomrule \end{tabular*} \end{adjustbox} \label{tab:mini-k} \end{minipage} \hspace{1mm} \begin{minipage}{.47\textwidth} \caption{\textbf{Action recognition results with Temporal Pyramid Network (TPN) on Mini-Kinetics-200.} TPN-8f and TPN-16f indicate that we use 8 frames and 16 frames as input to the model respectively.} \vspace{-2mm} \begin{tabular*}{\textwidth}{ccccc} \toprule \small{Model} & \small{Dy.} & \small{GFLOPs} & \small{clip-1} & \small{video-1} \\ \midrule \multirow{2}{*}{TPN-8f} & \xmark & $28.5$ & $58.9$ & $67.2$ \\ & \cmark & $21.5$ & \textbf{59.2} & \textbf{68.8} \\ \midrule \multirow{2}{*}{TPN-16f} & \xmark & $56.8$ & $59.8$ & $68.5$ \\ & \cmark & $41.5$ & \textbf{60.8} & \textbf{70.6} \\ \bottomrule \end{tabular*} \label{tab:tpn} \end{minipage} \vspace{-4mm} \end{table*}

\begin{table*}[t] \small \centering \caption{\textbf{Comparison with CorrNet~\citep{wang2020video} and AR-Net~\citep{meng2020ar} on Mini-Kinetics-200.} We set the search space as 2 and train all the models with 16 frames.} \vspace{-2mm} \begin{tabular*}{0.9\textwidth}{ccccc|cccc} \toprule Model & Dy.
& GFLOPs & clip-1 & video-1 & Method & Params & GFLOPs & clip-1\\ \cmidrule(lr){1-5} \cmidrule(lr){6-9} \multirow{2}{*}{CorrNet} & \xmark & $60.8$ & $59.9$ & $68.2$ & AR-Net & 63.0M & 44.8 & 67.2 \\ \cmidrule(lr){2-5} \cmidrule(lr){6-9} & \cmark & \textbf{45.5} & \textbf{60.4} & \textbf{70.0} & VA-RED$^2$ & \textbf{23.9M} & \textbf{43.4} & \textbf{68.3} \\ \bottomrule \end{tabular*} \label{tab:corr} \vspace{-1mm} \end{table*}

\begin{table*}[t] \small \centering \caption{\textbf{Action recognition results on Kinetics-400.} We set the search space as 2, meaning models can choose to compute all feature maps or $\frac{1}{2}$ of them for both temporal- and channel-wise convolutions. } \vspace{-2mm} \begin{adjustbox}{width=\textwidth,center} \begin{tabular*}{1.11\textwidth}{cccccccccccc} \toprule \multirow{2}{*}{Model} & \multirow{2}{*}{Dy.} & \multicolumn{5}{c}{16-frame} & \multicolumn{5}{c}{32-frame} \\ \cmidrule(lr){3-7} \cmidrule(lr){8-12} & & \small{GFLOPs} & \small{speed} & \small{clip-1} & \small{video-1} & \small{video-5} & \small{GFLOPs} & \small{speed} & \small{clip-1} & \small{video-1} & \small{video-5} \\ \midrule \multirow{2}{*}{R(2+1)D} & \xmark & $55.2$ & $97.1$ & $57.3$ & $65.6$ & $86.3$ & $110.5$ & $49.6$ & $61.5$ & $69.0$ & $88.6$ \\ & \cmark & $40.3$ & \textbf{105.9} & \textbf{58.4} & \textbf{67.6} & \textbf{87.6} & $80.7$ & \textbf{53.0} & $61.5$ & \textbf{70.0} & \textbf{88.9} \\ \midrule \multirow{2}{*}{I3D} & \xmark & $56.0$ & $116.4$ & $55.1$ & $66.5$ & $86.7$ & $112.0$ & $57.6$ & $57.2$ & $64.9$ & $86.5$ \\ & \cmark & $32.1$ & \textbf{140.7} & \textbf{58.6} & \textbf{67.1} & \textbf{87.2} & $64.3$ & \textbf{71.7} & \textbf{61.0} & \textbf{68.6} & \textbf{88.4} \\ \midrule \multirow{2}{*}{X3D} & \xmark & $6.42$ & $169.4$ & $63.2$ & $70.6$ & $90.0$ & \multicolumn{5}{c}{\multirow{2}{*}{[X3D-M is designed for 16 frames]}} \\ & \cmark & $5.38$ & \textbf{177.6} & \textbf{65.3} & \textbf{72.4} & \textbf{90.7} & \\ \bottomrule \end{tabular*} \end{adjustbox} \label{tab:full-k} \vspace{-4mm} \end{table*}

\textbf{Results on Video Action Recognition.} We first evaluate our method by applying it to R(2+1)D-18~\citep{tran2018closer} with different numbers of input frames and different sizes of the search space. Here we use GFLOPs (giga floating point operations) to measure the computational cost of a model and report clip-1, video-1 and video-5 metrics to measure the accuracy of our models, where clip-1 is the top-1 accuracy of model evaluation with only one clip sampled from the video, and video-1 and video-5 are the top-1 and top-5 accuracy of the model evaluated with the \emph{K-LeftCenterRight} strategy. Note that we report the FLOPs of a single video clip at the spatial resolution $256\times256$ (for I3D and X3D) or $128\times128$ (for R(2+1)D). In addition, we report the speed of each model with the metric of $clip/second$, which denotes the number of video clips that are processed in one second. We create the environment with PyTorch 1.6, CUDA 11.0, and a single NVIDIA TITAN RTX (24GB) GPU as our testbed to measure the speed of different models. Table~\ref{tab:frame-search-space} shows the results (in all of the tables, \xmark~represents the original fixed model architecture while \cmark~denotes the dynamic model trained using our proposed approach). Our proposed approach VA-RED$^2$ significantly reduces the computational cost while improving the accuracy. We observe that the dynamic model with a search space size of $2$ has the best performance in terms of accuracy, GFLOPs and speed.
We further test our VA-RED$^2$ with all three model architectures, R(2+1)D-18, I3D-InceptionV2, and X3D-M (Table~\ref{tab:mini-k}), including the very recent temporal pyramid module~\citep{yang2020tpn} and correlation module~\citep{wang2020video}, on the Mini-Kinetics-200 dataset. We choose R(2+1)D-18 with TPN and CorrNet as the backbone architectures and test the performance of our method using a search space of 2 in Table~\ref{tab:tpn} and Table~\ref{tab:corr} (Left) respectively. Table~\ref{tab:mini-k} shows that our method boosts the speed of the base I3D-InceptionV2 and R(2+1)D models by 21.7\% and 10.6\% respectively, showing its advantages not only in terms of GFLOPs but also in actual speed. Table~\ref{tab:corr} (Left) shows that our dynamic approach also outperforms the baseline CorrNet by 1.8\% in top-1 video accuracy, while reducing the computational cost by 25.2\% on Mini-Kinetics-200. Furthermore, we compare our method with AR-Net~\citep{meng2020ar}, which is a recent adaptive method that selects optimal input resolutions for video inference. We conduct our experiments on 16-frame TSN~\citep{wang2016temporal} with a ResNet50 backbone and provide the comparison on FLOPs, parameter size, and accuracy (Table~\ref{tab:corr} (Right)). To make a fair comparison, we train AR-Net using the official implementation on the same Mini-Kinetics-200 dataset with Kaiming initialization~\citep{he2015delving}. Table~\ref{tab:corr} (Right) shows that our method, VA-RED$^2$, outperforms AR-Net in both accuracy and GFLOPs, while using about 62\% fewer parameters. Table~\ref{tab:full-k} and Table~\ref{tab:mit} show the results of different methods on Kinetics-400 and Moments-In-Time, respectively. To summarize, we observe that VA-RED$^2$ consistently improves the performance of all the base models, including the recent architectures X3D, TPN, and CorrNet, while offering significant reduction in computation. Moreover, our approach is model-agnostic, which allows it to serve as a plug-in operation for a wide range of action recognition architectures. From the comparison among different models, we find that our proposed VA-RED$^2$ achieves the largest computation reduction on I3D-InceptionV2, between $40\%$ and $50\%$, while reducing less than $20\%$ on X3D-M. This is because X3D-M is already very efficient in terms of both the channel dimension and the temporal dimension. Notice that the frames input to X3D-M are sampled at a temporal stride of $5$, which makes them less similar. Furthermore, we observe that dynamic I3D-InceptionV2 shows very little variation in computation across different input instances. This could be due to the topology of InceptionV2, which has many parallel structures inside the network architecture. We also compare VA-RED$^2$ with a weight-level pruning method~\citep{han2015learning} and an automatic channel pruning method (CGNet)~\citep{hua2019channel} on Mini-Kinetics-200. Table~\ref{tab:pruning} shows that our approach significantly outperforms the weight-level pruning method by a margin of about 3\%-4\% in clip-1 accuracy with similar computation relative to the original fixed model, and consistently outperforms CGNet while requiring fewer GFLOPs (by a maximum margin of 2.8\% with 16 frames). These results clearly demonstrate the effectiveness of our dynamic video redundancy framework over network pruning methods.
\textbf{Results on Spatio-Temporal Action Localization.} We further extend our method to the spatio-temporal action localization task to demonstrate its generalization ability to different tasks. We evaluate our method on J-HMDB-21 with two different 3D backbone networks: I3D-InceptionV2 and X3D-M. We report the frame-mAP at IoU threshold $0.5$, the recall value at IoU threshold $0.5$, and the classification accuracy of correctly localized detections to measure the performance of the detector. Table~\ref{tab:hmdb} shows that our dynamic approach outperforms the baselines on all three metrics while offering significant savings in FLOPs (e.g., more than 50\% savings on I3D). In summary, VA-RED$^2$ is clearly better than the baseline architectures in terms of both accuracy and computational cost on both recognition and localization tasks, making it suitable for efficient video understanding.
\begin{table*}[t]
\small
\centering
\begin{minipage}{.47\textwidth}
\centering
\caption{\small{\textbf{Action recognition results on Moments-In-Time.} We set the search space size to 2, i.e., models can choose to compute either all feature maps or $\frac{1}{2}$ of them for both temporal and channel-wise convolutions. Speed is measured in $clip/second$.}}
\vspace{-1mm}
\begin{adjustbox}{width=\textwidth,center}
\begin{tabular*}{1.1\textwidth}{cccccc}
\toprule
\small{Model} & \small{Dy.} & \small{GFLOPs} & \small{speed} & \small{clip-1} & \small{video-1} \\
\midrule
\multirow{2}{*}{R(2+1)D} & \xmark & $55.2$ & $97.1$ & $27.0$ & $28.8$ \\
 & \cmark & $42.5$ & \textbf{105.5} & \textbf{27.3} & \textbf{30.1} \\
\midrule
\multirow{2}{*}{I3D} & \xmark & $56.0$ & $116.4$ & $25.7$ & $26.8$ \\
 & \cmark & $32.1$ & \textbf{140.7} & \textbf{26.3} & \textbf{28.5} \\
\midrule
\multirow{2}{*}{X3D} & \xmark & $6.20$ & $169.4$ & $24.8$ & $24.8$ \\
 & \cmark & $5.21$ & \textbf{177.4} & \textbf{26.7} & \textbf{27.7} \\
\bottomrule
\end{tabular*}
\end{adjustbox}
\label{tab:mit}
\end{minipage}
\hspace{0.1cm}
\begin{minipage}{.51\textwidth}
\caption{\small{\textbf{Comparison with network pruning methods.} We choose R(2+1)D on the Mini-Kinetics-200 dataset with different numbers of input frames. Numbers in \textcolor{cadmiumgreen}{green}/\textcolor{mediumpersianblue}{blue} quantitatively show how much our proposed method is \textcolor{cadmiumgreen}{better}/\textcolor{mediumpersianblue}{worse} than these pruning methods.}}
\vspace{-2mm}
\begin{tabular*}{\textwidth}{cccc}
\toprule
\small{Method} & Frames & \small{GFLOPs} & \small{clip-1} \\
\midrule
\multirow{3}{*}{Weight-level} & 8 & $19.9$ \perminus{0.1} & 54.5 \perplus{3.2} \\
 & 16 & $40.3$ \perminus{0.1} & 57.7 \perplus{2.9} \\
 & 32 & $79.6$ \perminus{0.3} & 59.6 \perplus{3.7} \\
\midrule
\multirow{3}{*}{CGNet} & 8 & $23.8$ \perplus{3.8} & 56.2 \perplus{1.5} \\
 & 16 & $47.6$ \perplus{7.2} & 57.8 \perplus{2.8} \\
 & 32 & $95.3$ \perplus{16.0} & 61.8 \perplus{1.5} \\
\bottomrule
\end{tabular*}
\label{tab:pruning}
\end{minipage}
\vspace{-2mm}
\end{table*}
\begin{table*}[t]
\small
\centering
\begin{minipage}{.51\textwidth}
\caption{\small{\textbf{Action localization results on J-HMDB.} We set the search space size to 2 for the dynamic models. Speed is measured in $clip/second$.
}}
\vspace{-2mm}
\begin{adjustbox}{width=\textwidth,center}
\begin{tabular*}{1.15\textwidth}{ccccccc}
\toprule
\small{Model} & \small{Dy.} & \small{GFLOPs} & \small{speed} & \small{mAP} & \small{Recall} & \small{Classif.} \\
\midrule
\multirow{2}{*}{I3D} & \xmark & $43.9$ & $141.1$ & $44.8$ & \textbf{67.3} & $87.2$ \\
 & \cmark & $21.3$ & \textbf{167.4} & \textbf{47.2} & $65.6$ & \textbf{91.1} \\
\midrule
\multirow{2}{*}{X3D} & \xmark & $5.75$ & $176.3$ & $47.9$ & $65.2$ & \textbf{93.2} \\
 & \cmark & $4.85$ & \textbf{184.6} & \textbf{50.0} & \textbf{65.8} & $93.0$ \\
\bottomrule
\end{tabular*}
\end{adjustbox}
\label{tab:hmdb}
\end{minipage}
\hspace{0.1cm}
\begin{minipage}{.47\textwidth}
\centering
\caption{\textbf{Effect of the efficiency loss on Kinetics-400.} \emph{Eff.} denotes the efficiency loss. }
\vspace{-2mm}
\begin{tabular*}{\textwidth}{ccccc}
\toprule
Model & \emph{Eff.} & GFLOPs & clip-1 & video-1 \\
\midrule
\multirow{2}{*}{R(2+1)D} & No & $49.8$ & $57.9$ & $66.7$ \\
 & Yes & $40.3$ & \textbf{58.4} & \textbf{67.6} \\
\midrule
\multirow{2}{*}{I3D} & No & $56.0$ & $58.0$ & $66.5$ \\
 & Yes & $32.1$ & \textbf{58.6} & \textbf{67.1} \\
\bottomrule
\end{tabular*}
\label{tab:abl}
\end{minipage}
\vspace{-2mm}
\end{table*}
\begin{table*}[!t]
\small
\centering
\caption{\textbf{Ablation experiments on dynamic modeling along the temporal and channel dimensions.} We choose R(2+1)D-18 on Mini-Kinetics-200 and set the search space to 2 in all the dynamic models. }
\vspace{-2mm}
\begin{tabular*}{\textwidth}{cccccccccc}
\toprule
\multirow{2}{*}{Dy. Temp.} & \multirow{2}{*}{Dy. Chan.} & \multicolumn{4}{c}{8-frame} & \multicolumn{4}{c}{16-frame} \\
\cmidrule(lr){3-6} \cmidrule(lr){7-10}
 & & GFLOPs & speed & clip-1 & video-1 & GFLOPs & speed & clip-1 & video-1 \\
\midrule
\xmark & \xmark & $27.7$ & $192.1$ & $56.4$ & $66.8$ & $55.2$ & $97.1$ & $57.5$ & $67.5$\\
\cmark & \xmark & $23.5$ & $198.6$ & $57.1$ & $66.8$ & $46.1$ & $105.0$ & $58.6$ & $67.6$ \\
\xmark & \cmark & $22.7$ & $196.5$ & $57.0$ & $66.7$ & $46.3$ & $102.0$ & $59.2$ & $68.3$ \\
\cmark & \cmark & $20.0$ & \textbf{205.5} & \textbf{57.7} & \textbf{68.0} & $40.4$ & \textbf{108.7} & \textbf{60.6} & $\textbf{70.0}$\\
\bottomrule
\end{tabular*}
\vspace{-0.3cm}
\label{tab:fixed}
\end{table*}
\begin{figure*}
\centering
\includegraphics[width=\linewidth]{figures/viz_policy_half.pdf}
\caption{\small{\textbf{Ratio of computed features per layer and class on the Mini-Kinetics-200 dataset.} We pick the first 25 classes of Mini-Kinetics-200 and visualize the per-block policy of X3D-M on each class. Lighter color means fewer feature maps are computed, while darker color means more feature maps are computed. }}
\label{fig:policy-viz}
\end{figure*}
\begin{figure*} [!ht]
\centering
\includegraphics[width=\linewidth]{figures/qualitative_fig4.pdf}
\caption{\small{\textbf{Validation video clips from Mini-Kinetics-200.}} For each category, we plot the two input video clips that consume the most and the least computational cost, respectively. We infer these video clips with the 8-frame dynamic R(2+1)D-18 model trained on Mini-Kinetics-200, and the percentage indicates the ratio of the actual computational cost of the 2D convolutions to that of the original fixed model. Best viewed in color. }
\label{fig:example-viz}
\vspace{-1mm}
\end{figure*}

\textbf{Effect of Efficiency Loss.} We conduct an experiment comparing the model performance before and after finetuning with our proposed efficiency loss.
Table~\ref{tab:abl} shows that finetuning our dynamic model with the efficiency loss significantly reduces the computation without any accuracy loss.

\textbf{Ablation Experiments on Dynamic Modeling.} We test the performance of our approach by turning off dynamic modeling along the temporal and channel dimensions on Mini-Kinetics-200. Table~\ref{tab:fixed} shows that dynamic modeling along both dimensions obtains the best performance while requiring the least computation. This shows the importance of an input-dependent policy for deciding how many features need to be computed along both the temporal and channel dimensions.

\textbf{Visualization and Analysis.} To better understand the policy decision process, we dissect the network layers and count the ratio of feature maps that are computed in each convolution layer for each category. From Figure~\ref{fig:policy-viz}, we observe that, in X3D, the point-wise convolutions that come right after the depth-wise convolutions show more variation among classes, and the network tends to consume more temporal-wise features at the early stages and more channel-wise features at the late stages of the architecture. The channel-wise policy also shows more variation than the temporal-wise policy among different categories. Furthermore, we show a few contrasting examples that belong to the same category while requiring very different amounts of computation in Figure~\ref{fig:example-viz}. Video clips with a more complicated scene configuration (e.g., cooking eggs and playing volleyball) or more violent camera motion (e.g., flipping pancake) tend to need more feature maps to make correct predictions. More qualitative results can be found in Section~\ref{appen:feat_viz}, Section~\ref{appen:policy_viz} and Section~\ref{appen:qual} of the Appendix.
\vspace{1mm}

\textbf{VA-RED$^2$ on Dense Visual Tasks.} Our VA-RED$^2$ framework is also applicable to dense visual tasks, such as semantic segmentation, which require pixel-level predictions for the input content. To demonstrate this, we apply our method to a semantic segmentation model on the ADE-20K dataset~\citep{zhou2017scene, zhou2018semantic}. We report the computational cost of the model encoder and the mean IoU (Intersection-Over-Union) in Table~\ref{tab:seg}. As can be seen from Table~\ref{tab:seg}, our proposed VA-RED$^2$ has a clear advantage in efficiency while maintaining the segmentation precision. This experiment shows that our method is not only effective on recognition and detection tasks, but also applicable to dense visual tasks like semantic segmentation.
\begin{table*}[!t]
\small
\centering
\caption{\textbf{VA-RED$^2$ on semantic segmentation.} We choose dilated ResNet-18 as our backbone architecture and set the search space as 2.
Models are trained for 100K iterations with a batch size of 8.}
\vspace{-2mm}
\begin{tabular*}{\textwidth}{ccccccc}
\toprule
\multirow{2}{*}{Model}& \multicolumn{2}{c}{Original model} & \multicolumn{4}{c}{Channel-wise reduction using VA-RED$^2$} \\
\cmidrule(lr){2-3} \cmidrule(lr){4-7}
 & GFLOPs & mean IoU & GFLOPs$_{avg}$ & GFLOPs$_{max}$ & GFLOPs$_{min}$ & mean IoU \\
\midrule
Dilated ResNet-18 & $10.6$ & $31.2\%$ & \textbf{7.8} & $9.1$ & $7.3$ & \textbf{31.3\%} \\
\bottomrule
\end{tabular*}
\label{tab:seg}
\vspace{-2mm}
\end{table*}
\section{Conclusion}
\vspace{-1mm}
In this paper, we propose an input-dependent adaptive framework called VA-RED$^2$ for efficient inference, which can easily be plugged into most existing video understanding models to significantly reduce the model computation while maintaining accuracy. Extensive experimental results on video action recognition, spatio-temporal action localization, and semantic segmentation validate the effectiveness of our framework on multiple standard benchmark datasets.
\newpage
\textbf{Acknowledgements.} This work is supported by IARPA via DOI/IBC contract number D17PC00341. This work is also supported by the MIT-IBM Watson AI Lab and its member companies, Nexplore and Woodside.
\textbf{Disclaimer.} The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation thereon. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of IARPA, DOI/IBC, or the U.S. Government.
\section{Dataset Details}\label{appen:dataset}
We evaluate the performance of our approach using three video action recognition datasets, namely Mini-Kinetics-200~\citep{meng2020ar}, Kinetics-400~\citep{carreira2017quo}, and Moments-In-Time~\citep{monfort2019moments}, and one spatio-temporal action localization dataset, namely J-HMDB-21~\citep{jhuang2013towards}. Kinetics-400 is a large dataset containing 400 action classes and 240K training videos collected from YouTube. The Mini-Kinetics dataset contains 121K videos for training and 10K videos for testing, with each video lasting 6-10 seconds. The original Kinetics dataset is publicly available to download at \url{https://deepmind.com/research/open-source/kinetics}. We use the official training/validation/testing splits of Kinetics-400 and the splits released by the authors of~\citep{meng2020ar} for Mini-Kinetics-200 in our experiments.
Moments-In-Time~\citep{monfort2019moments} is a recent collection of one million labeled videos, involving actions from people, animals, objects, or natural phenomena. It has 339 classes, and each video clip is trimmed to 3 seconds. This dataset is designed to have very large inter-class and intra-class variation, capturing dynamic events at different levels of abstraction (i.e.,~``opening'' doors, curtains, mouths, even a flower opening its petals). We use the official splits in our experiments. The dataset is publicly available to download at \url{http://moments.csail.mit.edu/}.
Joints for the HMDB dataset (J-HMDB-21~\citep{jhuang2013towards}) is based on 928 clips from HMDB51, comprising 21 action categories. Each frame has a 2D pose annotation based on a 2D articulated human puppet model that provides scale, pose, segmentation, coarse viewpoint, and dense optical flow for the humans in action.
The 21 categories are brush hair, catch, clap, climb stairs, golf, jump, kick ball, pick, pour, pull-up, push, run, shoot ball, shoot bow, shoot gun, sit, stand, swing baseball, throw, walk, and wave. The dataset is available to download at \url{http://jhmdb.is.tue.mpg.de/}.
\section{Implementation Details} \label{appen:impl}
\textbf{Details of Shared-weight Training and Inference.} In this section, we provide more details of the shared-weight mechanism presented in Section~3 of the main paper. We first compute all potentially necessary features using a big kernel, and then, for each dynamic convolution with a different scaling factor, we sample the corresponding fraction of necessary features and reconstruct the remaining features with cheap operations to obtain the final output. For example, the original channel-wise dynamic convolution at ratio $r = (\frac{1}{2})^{(i-1)}$ can be expressed as
\begin{equation}
\big{[}(f_l^c(X_l, r=(\frac{1}{2})^{i_s^c-1})[0:(\frac{1}{2})^{(i-1)} C_{out}]) , (\Phi^c(f_l^c(X_l, r=(\frac{1}{2})^{i_s^c-1})[0:(\frac{1}{2})^{(i-1)}\cdot C_{out}]))\big{]},
\end{equation}
where $[\cdot:\cdot]$ is the index operation along the channel dimension and $i_s^c$ is the index of the largest channel-wise filter. During the training phase, we have $i_s^c = 1$, while during the inference phase, $i_s^c$ is the smallest index for $V_c^l$ such that $V_c^l[i_s^c] = 0$. By utilizing such a shared-weight mechanism, the computation of the total channel-wise dynamic convolution is reduced to $(\frac{1}{2})^{i_s^c-1}\cdot \mathcal{C}(f_l)$. Further, the total computational cost of the adjunct process is
\begin{equation}
\mathcal{C}(f_l^{t,c}) = (\frac{1}{2})^{i_s^c+i_s^t-2}\cdot \mathcal{C}(f_l),
\end{equation}
where $i_s^t$ is the index of the largest temporal-wise filter.
\begin{table}[t]
\small
\centering
\caption{\textbf{Quantitative results of the redundancy experiments.} We compute the correlation coefficient (CC), RMSE, and redundancy proportion (RP) for feature maps in well-known pretrained video models on the Moments-In-Time and Kinetics-400 datasets. RP is calculated as the fraction of tensors with both CC and RMSE above the redundancy thresholds of 0.85 and 0.001, respectively. We show results corresponding to averaging the per-layer values over all videos in the validation sets. We observe that networks trained on Moments-In-Time (and evaluated on the Moments-In-Time validation set) tend to present slightly less redundancy than their Kinetics counterparts, and the time dimension tends to be more redundant than the channel dimension in all cases. We observe severe redundancy across the board (with some dataset-model pairs achieving upwards of a 0.8 correlation coefficient between their feature maps), which further motivates our redundancy reduction approach. }
\vspace{1mm}
\begin{tabular}{cccccc}
\toprule
Dataset & Model & Dimension & CC & RMSE & RP \\
\midrule
\multirow{4}{*}{Moments-In-Time} & I3D & Temporal & 0.77 & 0.083 & 0.62 \\
 & I3D & Channel & 0.71 & 0.112 & 0.48 \\
 & R(2+1)D & Temporal & 0.73 & 0.108 & 0.49 \\
 & R(2+1)D & Channel & 0.68 & 0.122 & 0.43 \\
\midrule
\multirow{4}{*}{Kinetics-400} & I3D & Temporal & 0.81 & 0.074 & 0.68 \\
 & I3D & Channel & 0.76 & 0.091 & 0.61 \\
 & R(2+1)D & Temporal & 0.78 & 0.081 & 0.64 \\
 & R(2+1)D & Channel & 0.73 & 0.088 & 0.58 \\
\bottomrule
\end{tabular}
\label{tab:red_quant}
\end{table}
\textbf{Training and Inference.} We apply our method mainly to the 2D convolutions in R(2+1)D, since the 2D convolutions account for most of the computational cost compared with the 1D convolutions.
We train most of our models on 96 NVIDIA Tesla V100-32GB GPUs and perform synchronized BN~\citep{ioffe2015batch} across all the GPUs. For R(2+1)D~\citep{tran2018closer}, the learning rate is initialized to $0.18$ and the weight decay is set to $5\times 10^{-4}$. For I3D~\citep{carreira2017quo, xie2018rethinking} and X3D~\citep{feichtenhofer2020x3d}, the learning rates both start from $1.8$, and the weight decay factors are $1\times 10^{-4}$ and $5\times 10^{-5}$, respectively. A cosine learning-rate decay schedule is applied. All of the models are trained from scratch and warmed up for 15 epochs on Mini-Kinetics/Kinetics and for 8 epochs on Moments-In-Time. We adopt the Nesterov momentum optimizer with an initial weight of 0.01 and a momentum of 0.9. During training, we follow the data augmentation (location jittering, horizontal flipping, corner cropping, and scale jittering) used in TSN~\citep{wang2016temporal} to augment the video with different spatial sizes and to flip the video horizontally with 50\% probability. We use single-clip, center-crop FLOPs as the basic unit of computational cost. Inference-time computational cost is roughly proportional to this if a fixed number of clips and crops is used, as it is for all our models. Note that the Kinetics-400 dataset has been shrinking in size ($\sim$15\% of the videos have been removed from the original Kinetics), and the original version used in~\citep{carreira2017quo} is no longer available from the official site, resulting in some differences in results.
\section{Redundancy Analysis}\label{appen:red}
To motivate our redundancy reduction approach, we measure and visualize the internal redundancy of well-known pretrained networks. We analyze the internal feature maps of existing pretrained I3D-InceptionV2 and R(2+1)D networks on Moments-In-Time and Kinetics. For each model-dataset pair, we extract feature maps for all examples in the validation sets, in both the time and channel dimensions, and measure their similarity. In detail, our method consists of the following steps: (1) For a given input, we first extract the output feature maps from all convolutional layers in the network at hand. (2) In each layer, we measure the similarity of each feature map to the others with Pearson's correlation coefficient (CC) and the root mean squared error (RMSE). We additionally flag feature maps that exhibit high similarity as redundant. (3) After computing this for the validation sets, we average the values over all examples to obtain mean metrics of redundancy per model and per dataset. We additionally compute the ranges of these values to visualize how much redundancy can vary in a model-dataset pair. We present quantitative results in Table \ref{tab:red_quant} and show examples of our findings in Figure \ref{fig:red_qual}.
\begin{figure}
\centering
\includegraphics[width=0.8\linewidth]{figures/viz_feature_maps_first_layer_i3d_moments.png}
\caption{Visualization of the first 9 filters of the first layer of I3D, on the examples with the most (top) and the least (bottom) redundancy in the temporal dimension. We exemplify the results on frames 1, 2, and 3. As can be seen, the video with the most redundancy is relatively static, with little movement, and the sets of feature maps from frame to frame harbor heavy similarity. The video with the least redundancy shows a gift unwrapping with rapid movement (even in the first few frames), and the corresponding feature maps present visible structural differences from frame to frame.
Although redundancy is present in both cases, it is clear that some examples present much more redundancy than others, thus motivating our input-dependent redundancy reduction approach.}
\label{fig:red_qual}
\vspace{-0.3cm}
\end{figure}
\section{VA-RED$^2$ on Longer-training Models}\label{appen:epoch}
In our experiments, all of our models are trained under a common evaluation protocol for a fair comparison. To balance training cost and model performance, we use a smaller number of training epochs than the original papers. For example, the authors of \citep{tran2018closer} and \citep{feichtenhofer2020x3d} train the R(2+1)D models and X3D models for 188 epochs and 256 epochs, respectively, to pursue the state of the art, whereas we train our models for only 120 epochs to largely save computational resources and training time. However, to rule out the possibility that our base models (i.e., without using Dynamic Convolution) benefit from longer training while our VA-RED$^2$ may not, we conduct an ablation study on the number of training epochs in Table~\ref{tab:epoch}. We can see that our method still shows superiority over the base model in terms of computational cost and accuracy for the 256-epoch models. Thus we conclude that the effectiveness of our method in achieving higher performance with low computation also holds for longer-trained models.
\begin{table*}[thb]
\small
\centering
\caption{\textbf{Comparison of the performance of VA-RED$^2$ on the 120-epoch and 256-epoch X3D models.} We choose X3D-M as our backbone architecture and set the search space as 2. We train one group of models for 120 epochs and the other for 256 epochs.}
\begin{tabular*}{\textwidth}{cccccccccc}
\toprule
\multirow{2}{*}{Model}& \multirow{2}{*}{Dynamic} & \multicolumn{4}{c}{120 epochs} & \multicolumn{4}{c}{256 epochs} \\
\cmidrule(lr){3-6} \cmidrule(lr){7-10}
 & & GFLOPs & clip-1 & video-1 & video-5 & GFLOPs & clip-1 & video-1 & video-5 \\
\midrule
\multirow{2}{*}{X3D-M} & \xmark & $6.42$ & $63.2$ & $70.6$ & $90.0$ & $6.42$ & $64.4$ & $72.3$ & $90.8$ \\
 & \cmark & $5.38$ & \textbf{65.3} & \textbf{72.4} & \textbf{90.7} & $5.87$ & \textbf{66.4} & \textbf{73.6} & \textbf{91.2} \\
\bottomrule
\end{tabular*}
\label{tab:epoch}
\end{table*}
\section{Feature Map Visualizations}\label{appen:feat_viz}
To further validate our initial motivation, we visualize the feature maps that are fully computed by the original convolution operation and those that are generated by the cheap operations. We show these along the temporal dimension (c.f. Figure~\ref{fig:temp_feat}) and the channel dimension (c.f. Figure~\ref{fig:channel_feat}). In both cases, we can see that the proposed cheap operation generates meaningful feature maps, some of which look practically indistinguishable from the original feature maps.
\begin{figure*}[!hb]
\centering
\includegraphics[width=0.85\linewidth]{figures/viz_temporal_feat.pdf}
\caption{\textbf{Visualization of temporal-wise feature maps.} We plot the temporal feature maps that are fully computed by the original convolution and those mixed with cheaply generated feature maps. The feature maps marked with red bounding boxes are cheaply generated. We perform this analysis on the 8-frame dynamic R(2+1)D-18 pretrained on Mini-Kinetics-200. These feature maps are the output of the first spatial convolution combined with the ReLU non-linearity inside \url{ResBlock_1}.
We can see that most of the cheaply generated feature maps look practically indistinguishable from the original feature maps, which further supports our approach. Best viewed in color.}
\label{fig:temp_feat}
\vspace{-0.3cm}
\end{figure*}
\begin{figure*}[!hb]
\centering
\includegraphics[width=0.85\linewidth]{figures/viz_channel_feat.pdf}
\caption{\textbf{Visualization of channel-wise feature maps.} We plot the feature maps across the channel dimension. We contrast two kinds of feature maps: those fully computed by the original convolution and those mixed with cheaply generated feature maps. The feature maps inside the red bounding boxes are cheaply generated. The analysis is performed on the 8-frame dynamic R(2+1)D-18 model pretrained on the Mini-Kinetics-200 dataset, and we extract the feature maps output by the first spatial convolution layer inside \url{ResBlock_1}. Best viewed in color.}
\label{fig:channel_feat}
\end{figure*}
\section{Policy Visualizations}\label{appen:policy_viz}
To compare with the policy on Mini-Kinetics-200 (Figure 3 of the main paper), we also visualize the ratio of features consumed in each layer on Kinetics-400 (c.f. Figure~\ref{fig:policy-viz-fullk}) and Moments-In-Time (c.f. Figure~\ref{fig:policy-viz-moments}). We can see from these two figures that the conclusions we drew from Mini-Kinetics-200 still hold. Specifically, in X3D, the point-wise convolutions that come right after the depth-wise convolutions show more variation among classes, and the network tends to consume more temporal-wise features at the early stages and more channel-wise features at the late stages of the architecture. In contrast, R(2+1)D selects fewer features at the early stages under both the temporal-wise and channel-wise policies. Furthermore, we count the FLOPs of each instance on Mini-Kinetics-200, Kinetics-400, and Moments-In-Time and plot pie charts to visualize the distribution of this instance-level computational cost. We analyze this distribution with two models, R(2+1)D-18 and X3D-M; all of the results are shown in Figure~\ref{fig:pie-comp}.
\begin{figure*}[ht]
\centering
\includegraphics[width=0.9\linewidth]{figures/fullk_policy_viz.pdf}
\caption{\small{\textbf{Ratio of computed features per layer and class on the Kinetics-400 dataset.} We visualize the per-block policy of X3D-M and R(2+1)D-18 on all 400 classes. Lighter color means fewer feature maps are computed, while darker color means more feature maps are computed. While X3D-M tends to consume more temporal-wise features at the early stages and more channel-wise features at the late stages, R(2+1)D selects fewer features at the early stages under both the temporal-wise and channel-wise policies. For both architectures, the channel-wise policy has more variation than the temporal-wise policy among different categories.}}
\label{fig:policy-viz-fullk}
\end{figure*}
\begin{figure*}[ht]
\centering
\includegraphics[width=0.9\linewidth]{figures/policy_viz_moments.pdf}
\caption{\small{\textbf{Ratio of computed features per layer and class on the Moments-In-Time dataset.} We visualize the per-block policy of X3D-M and R(2+1)D-18 on all 339 classes.
Lighter color means fewer feature maps are computed, while darker color means more feature maps are computed.}}
\label{fig:policy-viz-moments}
\end{figure*}
\begin{figure*} [ht]
\centering
\includegraphics[width=0.8\linewidth]{figures/pie-chart.pdf}
\caption{\small{\textbf{Computational cost distribution across different models and datasets.} We count the computational cost of each instance for different models on different datasets. For instance, for the upper-left chart, we use the R(2+1)D-18 backbone on Mini-Kinetics-200; this sub-figure indicates that $87.7\%$ of the videos in Mini-Kinetics-200 (Dataset) consume $38.6 - 41.4$ GFLOPs with R(2+1)D-18 (Backbone), $8.8\%$ of the videos consume $35.9-38.6$ GFLOPs, and $3.5\%$ of the videos consume $41.4-44.2$ GFLOPs.}}
\label{fig:pie-comp}
\end{figure*}
\begin{figure*} [!ht]
\centering
\includegraphics[width=0.9\linewidth]{figures/qualitative_fig7.pdf}
\caption{\small{\textbf{Validation video clips from Kinetics-400.}} For each category, we plot the two input video clips that consume the most and the least computational cost, respectively. We infer these video clips with the 16-frame dynamic R(2+1)D-18 pre-trained on Kinetics-400. The percentage in the figure indicates the ratio of the actual computational cost of the 2D convolutions to that of the original fixed model. Best viewed in color.}
\label{fig:example-viz-fullk}
\end{figure*}
\begin{figure*} [!ht]
\centering
\includegraphics[width=0.9\linewidth]{figures/qualitative_fig8.pdf}
\caption{\small{\textbf{Validation video clips from Moments-In-Time.}} For each category, we plot the two input video clips that consume the most and the least computational cost, respectively. We infer these video clips with the 16-frame dynamic R(2+1)D-18 pre-trained on Moments-In-Time. The percentage in the figure indicates the ratio of the actual computational cost of the 2D convolutions to that of the original fixed model. Best viewed in color.}
\label{fig:example-viz-moments}
\end{figure*}
\section{Qualitative Results}\label{appen:qual}
We show additional input examples that consume different levels of computational cost on the Kinetics-400 dataset (c.f. Figure~\ref{fig:example-viz-fullk}) and the Moments-In-Time dataset (c.f. Figure~\ref{fig:example-viz-moments}). To be consistent, we use the 16-frame dynamic R(2+1)D-18 as our pre-trained model. We can see that the examples consuming less computation tend to have less temporal motion, like the second example in Figure~\ref{fig:example-viz-fullk}, or a relatively simple scene configuration, like the first and second examples in Figure~\ref{fig:example-viz-moments}.
\end{appendices}
\section{Details of sparse regression algorithms}\label{sec:sparse_regr}
In this Section of the Supplementary Material, we discuss algorithms for the optimization of the non-convex objective function for the sparse regression problem:
\begin{equation}
\mathcal{L} = ||\mathbf{U}_t-\Theta(\mathbf{U}, \mathbf{U}_x, \ldots)\cdot\xi||_2+\lambda_0 ||\xi||_0.
\label{eq:object_func_l0}
\end{equation}
In Sections~\ref{sec:brute_force} and \ref{sec:cem}, we provide additional details, including the pseudocode, of the brute-force algorithm and the cross-entropy algorithm, respectively. In Sections~\ref{sec:stridge} and~\ref{sec:lasso}, we briefly discuss two other popular sparse-regression methods: Sequential Thresholding and Ridge regression (STRidge) and least absolute shrinkage and selection operator (LASSO) regression.
\subsection{BruteForce algorithm}\label{sec:brute_force}
The brute-force algorithm (BruteForce) for PDE-learning consists of two stages (see Algorithm~\ref{alg:brute_force} below): (i) loop over all possible combinations of terms from the dictionary; (ii) for each selected combination of terms, reconstruct the coefficients via linear regression and evaluate the objective function (\ref{eq:object_func_l0}). Finally, the algorithm returns the coefficients that minimize the objective function $\mathcal{L}$. Although the cost of this algorithm grows exponentially with the number of candidate terms, it can still be used in practice in many cases. The largest problem instance we were able to solve with BruteForce contained $M=20$ candidate terms (see Table~\ref{table:summary}, Problems \#9, \#10).
\begin{algorithm}[H]\label{alg:brute_force}
\caption{BruteForce algorithm for sparse selection of PDE terms}
\begin{algorithmic}[1]
\Function{BruteForceL0}{$\mathbf{U}_t$, $D_x^n\mathbf{U}$, $\lambda_0$}
\State $\mathcal{L}_{best} \gets +\infty$\Comment{Initialize the best objective value}
\For{nonzero indexes $\in$ all $2^M$ combinations of $M$ terms in the dictionary $\Theta$}\Comment{Iterate over term combinations}
\State $\tilde{\Theta} = \Theta[:, \textrm{nonzero indexes}]$\Comment{Select columns in the dictionary matrix}
\State $\xi = (\tilde\Theta^\dag \tilde\Theta)^{-1} \tilde\Theta^\dag \mathbf{U}_t$\Comment{Perform linear regression to extract nonzero coefficients}
\State $\mathcal{L} \gets ||\mathbf{U}_t-\tilde\Theta(U, D_x^n U)\cdot \xi||_2 + \lambda_0 ||\xi||_0$ \Comment{Evaluate objective function}
\If {$\mathcal{L}<\mathcal{L}_{best}$} \Comment{Objective function improved}
\State $\mathcal{L}_{best} \gets \mathcal{L}$
\State $\xi_{best} \gets \xi$
\EndIf
\EndFor
\State \textbf{return} $\xi_{best}$
\EndFunction
\end{algorithmic}
\end{algorithm}
\subsection{CrossEntropy algorithm}\label{sec:cem}
As a scalable alternative to the BruteForce method, we propose a sampling-based algorithm which we call CrossEntropy, see Algorithm~\ref{alg:cem}. CrossEntropy is conceptually similar to BruteForce but, instead of performing an exhaustive search over the $2^M$ combinations of terms, it relies on the Cross-Entropy method (CEM)~\cite{rubinstein1999cross, de2005tutorial} as a subroutine for combinatorial optimization (term selection) of a ``black-box'' function $\mathcal{L}$. CEM is a heuristic method that shows reliable practical performance on hard optimization problems (e.g.~the travelling salesman problem), is computationally efficient, and is relatively simple to implement. The CEM algorithm is analogous to a derivative-free evolutionary algorithm with a Monte-Carlo-like update rule.
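As a concrete companion to Algorithm~\ref{alg:brute_force} above, a minimal NumPy sketch of the brute-force selection is given below; the variable names are ours, and this is an illustration rather than the reference implementation. For $M$ candidate terms, the loop visits all $2^M-1$ nonempty column subsets, which is why the method is limited to $M\lesssim 20$ in practice.
\begin{verbatim}
import itertools
import numpy as np

def brute_force_l0(Theta, Ut, lam0):
    # Exhaustive search over subsets S of dictionary columns,
    # minimizing ||Ut - Theta[:, S] xi||_2 + lam0 * |S|.
    M = Theta.shape[1]
    best_loss, best_xi = np.inf, np.zeros(M)
    subsets = itertools.chain.from_iterable(
        itertools.combinations(range(M), k) for k in range(1, M + 1))
    for S in map(list, subsets):
        # least-squares fit restricted to the selected columns
        xi_S, *_ = np.linalg.lstsq(Theta[:, S], Ut, rcond=None)
        loss = np.linalg.norm(Ut - Theta[:, S] @ xi_S) + lam0 * len(S)
        if loss < best_loss:
            best_loss = loss
            best_xi = np.zeros(M)
            best_xi[S] = xi_S
    return best_xi
\end{verbatim}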
The key steps of the CrossEntropy algorithm are as follows:
\begin{itemize}
\item Initialize a weights vector $\vec{W}=(W_1,\ldots, W_M)$ with zero values. The weights define the probability of a term being present via the Boltzmann distribution (SoftMax policy).
\item Create a population of weights vectors by independently updating the vector elements of each member of the population with additive i.i.d.~Gaussian fluctuations.
\item To estimate the value of the objective function $\mathcal{L}$ for each member of the population, perform a series of rollouts with the corresponding vector of SoftMax weights. In each rollout, the indexes of the nonzero terms are sampled using the SoftMax policy. The coefficients $\xi$ of the nonzero terms are recovered via linear regression and then used for the evaluation of the objective function $\mathcal{L}$.
\item Select the top-performing (``elite'') candidates in the population (e.g.~the top 1\%) according to the objective function $\mathcal{L}$.
\item Update the current weights vector by taking the element-wise mean of the elite weights array.
\end{itemize}
The largest problem we were able to solve with the CrossEntropy algorithm contained $M=45$ terms (see Table~\ref{table:summary}, Problems \#11, \#12). Typical values of the hyperparameters used in our PDE-learning experiments are: $\texttt{number of rollouts} = 100$, $\texttt{batch size} = 100$, $\texttt{elite fraction} = 1\%$.
\begin{algorithm}[H]
\caption{Sparse selection algorithm based on the Cross Entropy method for combinatorial optimization}
\label{alg:cem}
\begin{algorithmic}[1]
\Function{SoftMaxPolicy}{$\vec W$}
\For{i \textbf{in} $1,\ldots, M$}
\State $p_i\gets \exp{(W_{i})}/\sum_{j=1}^M \exp{(W_j)}$ \Comment{Get term probabilities from weights vector $\vec{W}$}
\State $indx[i]\sim Bernoulli(p_i)$ \Comment{Sample indices of nonzero terms from Bernoulli distribution}
\EndFor
\State \textbf{return} $indx$ \Comment{Return vector of indexes of nonzero terms}
\EndFunction
\Function{EstimateBestLossAndCoefs}{$\vec{W},\mathbf{U}_t, D_x^n\mathbf{U}, \, num\; rollouts, \lambda_0$}
\State $\mathcal{L}_{best} \gets +\infty$ \Comment{Initialize the best loss}
\For{rollout \textbf{in} $1,\ldots, num\; rollouts$}\hspace{10pt}\Comment{Sample indexes of nonzero terms and estimate the minimal loss for a fixed vector of SoftMax weights}
\State $indx \gets \textrm{SoftMaxPolicy}(\vec{W})$
\State $\tilde{\Theta} = \Theta[:, indx]$\Comment{Select columns in the dictionary matrix}
\State $\xi[indx] = (\tilde\Theta^\dag \tilde\Theta)^{-1} \tilde\Theta^\dag \mathbf{U}_t$\Comment{Perform linear regression to extract nonzero coefficients}
\State $\xi[\sim indx] \gets 0$\Comment{Assign zero values to the remaining coefficients}
\State $\mathcal{L} \gets ||\mathbf{U}_t-\tilde\Theta(U)\cdot\xi||_2 + \lambda_0 ||\xi||_0$ \Comment{Compute current loss function}
\If {$\mathcal{L}<\mathcal{L}_{best}$} \Comment{Check if the objective function has improved}
\State $\mathcal{L}_{best} \gets \mathcal{L}$
\State $\xi_{best} \gets \xi$
\EndIf
\EndFor
\State \textbf{return} $\mathcal{L}_{best}$, $\xi_{best}$
\EndFunction
\Function{TrainCEM}{$elite\, frac$, $batch\, size$, $num\, rollouts$}
\State $\vec{W} \gets \vec{0}$ \Comment{Initialize the current weights vector with zeros}
\State $\vec{W}_{popul}\gets [batch\, size \times M]$ \Comment{Initialize array of weights for the CEM population}
\For{iter \textbf{in} $1,\ldots, niter$}
\For{b \textbf{in} $1, \ldots, batch\, size$}
\State $\vec{dW} \sim \mathcal{N}(0, \sigma_{W})$ \Comment{Update weights in each batch member by adding a vector of i.i.d.~Gaussian variables}
\State $\vec{W}_{popul}[b] = \vec{W}+\vec{dW}$
\State $\mathcal{L}_{popul}[b],\, \xi \gets \textrm{EstimateBestLossAndCoefs}(\vec{W}_{popul}[b],\, num\, rollouts)$
\EndFor
\State $indx_{elite} \gets argsort(\mathcal{L}_{popul})[:elite\; frac \times batch\; size]$ \Comment{Select elite weights, e.g.~the top $1\%$ of weights samples from the population}
\State $\vec{W}_{elite}\gets \vec{W}_{popul}[indx_{elite}]$
\State $\vec{W} \gets mean(\vec{W}_{elite})$
\State $\sigma_{W} \gets std(\vec{W}_{elite})$
\EndFor
\State \textbf{return} $\vec{W}$
\EndFunction
\Function{CrossEntropyL0}{$\mathbf{U}_t$, $D_x^n\mathbf{U}$, $\lambda_0$, $elite\, frac=0.01$, $num\, rollouts$, $batch\, size$, $niter$}
\State $\vec{W}\gets \textrm{TrainCEM}(elite\; frac, batch\; size, num\; rollouts)$
\State $\mathcal{L}_{best},\, \xi_{best} \gets \textrm{EstimateBestLossAndCoefs}(\vec{W},\mathbf{U}_t, D_x^n\mathbf{U},\, num\; rollouts, \lambda_0)$
\State \textbf{return} $\xi_{best}$
\EndFunction
\end{algorithmic}
\end{algorithm}
\subsection{Sequential Thresholding and Ridge regression (STRidge)}\label{sec:stridge}
STRidge is a heuristic algorithm for the least-squares sparse regression problem in the presence of $L_0$ and $L_2$ penalty terms; it is based on an annealing-like schedule for thresholding the non-zero regression coefficients. See the description and pseudocode in Ref.~\onlinecite{rudy2017data}.\\
\subsection{LASSO regression}\label{sec:lasso}
A commonly used approach to promote sparsity is to consider a convex relaxation of the original problem (\ref{eq:object_func_l0}) by using $L_1$ regularization instead of $L_0$. This method is known as LASSO regression: $\mathcal{L} = ||\mathbf{U}_t-\Theta(\mathbf{U}, \mathbf{U}_x, \ldots)\cdot\xi||_2^2+\lambda_1 ||\xi||_1$. However, LASSO tends to have difficulty finding a sparse basis when the data matrix $\Theta$ has high correlations between columns (which can be the case for nonlinear terms in $\Theta$), resulting in poor PDE reconstruction quality~\cite{rudy2017data}.
\subsection{Summary of PDE-reconstruction results for various sparse selection algorithms}
In this subsection, we present a short summary (see Table~\ref{table:summary}) of the PDE-learning problems considered in the main text and the performance of the three algorithms for term selection: BruteForce, STRidge, and CrossEntropy.
\bgroup
\def\arraystretch{1.5}
\begin{center}
\begin{table}[H]
\caption{Performance of the sparse selection algorithms on the problems considered in the main text. Successful reconstruction of an entire sequence of PDEs is marked as $(\checkmark)$, failure is marked as $(\times)$, and partial success---when only some PDEs, depending on the value of $\lambda_0$, were correctly identified---is marked as $(\pm)$.
In the problem list column, ``fermion hydro.'' stands for ``fermion hydrodynamics'', while ``extended lib.'' refers to an extended library of candidate terms.}\label{table:summary}
\begin{tabularx}{\textwidth}{|c|c|c|c|c|X|}
\hline
&\textbf{Problem} & BruteForce & STRidge & CrossEntropy & Candidate Terms \\
\hline
1&Single magnon, $B(x)=0$ & $\checkmark$ & $\checkmark$ & $\checkmark$ & \noindent\parbox[c]{\hsize}{ $1$, $\partial_x^n u$, $u \partial_x^n u$,\\ $n\in[1,..,4]$ }\\
\hline
2&Single magnon, $B(x)=B_0(x-x_0)^2$ & $\checkmark$ & $\checkmark$ & $\checkmark$& \noindent\parbox[c]{\hsize}{ $1$, $\partial_x^n u$, $(x-x_0)^n u$, $n\in[1,..,4]$ }\\
\hline
3& \begin{tabular}[c]{@{}c@{}} Domain wall, XXZ $(\Delta=0)$,\\ zero temperature \end{tabular} & $\checkmark$ & $\checkmark$ & $\checkmark$ & \noindent\parbox[c]{\hsize}{ $\partial_x^n u$, $u^m \partial_x u$, $n\in[1,..,4]$, $m\in [1,..,5]$ }\\
\hline
4&\begin{tabular}[c]{@{}c@{}} Domain wall, XXZ $(\Delta=0)$, \\ zero temperature (extended lib.) \end{tabular} & $\checkmark$ & $\times$ & $\checkmark$ & \noindent\parbox[c]{\hsize}{ $\partial_x^n u$, $u^m \partial_x u$, $\sin{(2\pi u/P)} u_x$, \\ $n\in[1,..,4]$, $m\in [1,..,5]$, $P\in [1,..,10]$ }\\
\hline
5&\begin{tabular}[c]{@{}c@{}} Domain wall, XXZ $(\Delta/J=0.5)$,\\ zero temperature \end{tabular}& $\checkmark$ & $\pm$ & $\checkmark$ & \noindent\parbox[c]{\hsize}{ $\partial_x^n u$, $u^m \partial_x u$, $n\in[1,..,4]$, $m\in [1,..,5]$ }\\
\hline
6& \begin{tabular}[c]{@{}c@{}}Domain wall, XXZ $(\Delta/J=0.5)$,\\ zero temperature (extended lib.) \end{tabular} & $\checkmark$ & $\pm$ & $\checkmark$ & \noindent\parbox[c]{\hsize}{ $\partial_x^n u$, $u^m \partial_x u$, $\sin{(2\pi u/P)} u_x$, \\ $n\in[1,..,4]$, $m\in [1,..,5]$, $P\in [1,..,10]$ }\\
\hline
7& \begin{tabular}[c]{@{}c@{}} Domain wall, XXZ $(\Delta/J=1)$,\\ high-temperature state \end{tabular} & $\checkmark$ & $\checkmark$ & $\checkmark$ & \noindent\parbox[c]{\hsize}{ $u_t=-\partial_x \mathcal{J}(u)$,\,\\ $\mathcal{J}(u)$: $u^n$, $u^n \partial_x u$, $u^n \partial_x^2 u$, $n\in[1,..,5]$ }\\
\hline
8& \begin{tabular}[c]{@{}c@{}} Domain wall, XXZ $(\Delta/J=2)$, \\ high-temperature state \end{tabular} & $\checkmark$ & $\checkmark$ & $\checkmark$ & \noindent\parbox[c]{\hsize}{ $u_t=-\partial_x \mathcal{J}(u)$,\,\\ $\mathcal{J}(u)$: $u^n$, $u^n \partial_x u$, $u^n \partial_x^2 u$, $n\in[1,..,5]$}\\
\hline
9&\begin{tabular}[c]{@{}c@{}} Fermion hydro., \\ $U=0$ ($J_1=0.5$, $J_2=0$) \end{tabular}& $\checkmark$ & $\times$ & $\checkmark$ & \noindent\parbox[c]{\hsize}{ Table~\ref{table:t2} $(P, T)=(-,+)$ }\\
\hline
10&\begin{tabular}[c]{@{}c@{}} Fermion hydro., \\ $U=0$ ($J_1=0.5$, $J_2=-0.125$) \end{tabular}& $\checkmark$ & $\times$ & $\checkmark$ & \noindent\parbox[c]{\hsize}{ Table~\ref{table:t2} $(P, T)=(-,+)$}\\
\hline
11&\begin{tabular}[c]{@{}c@{}}Fermion hydro. (extended lib.), \\ $U=0$ ($J_1=0.5$, $J_2=0$) \end{tabular}& not tractable & $\times$ & $\checkmark$ & \noindent\parbox[c]{\hsize}{ all terms from Table~\ref{table:t2}}\\
\hline
12&\begin{tabular}[c]{@{}c@{}} Fermion hydro.
(extended lib.),\\ $U=0$ ($J_1=0.5$, $J_2=-0.125$) \end{tabular}& not tractable & $\times$ & $\checkmark$ & \noindent\parbox[c]{\hsize}{ all terms from Table~\ref{table:t2} }\\
\hline
13&\begin{tabular}[c]{@{}c@{}} Fermion hydro., \\ $U/J=4$ ($J_1=0.5$, $J_2=0$) \end{tabular}& not tractable & $\times$ & $\checkmark$ & \noindent\parbox[c]{\hsize}{ Table~\ref{table:t2} $(P, T)=(-,\cdot\,)$ }\\
\hline
\end{tabularx}
\end{table}
\end{center}
\egroup
\section{PDE-learning of quench dynamics in the XXZ model: analytical derivations and additional examples}
In this Section, we derive the closed-form PDEs presented in the main text describing the long-wavelength dynamics of excitations in the low-energy sector of the XXZ model. We provide additional details of the PDE-learning methodology and discuss numerical schemes for calculating spatiotemporal derivatives from the data. We consider the following benchmarking cases: (i) single-magnon dynamics in the nearest-neighbor XXZ model with/without an external magnetic field [Section~\ref{sec:single_magnon_xxz}]; (ii) non-local PDEs for single-magnon dynamics in the long-range XXZ model [Section~\ref{sec:long_range}]; (iii) evolution of a domain-wall initial state corresponding to a zero-temperature product state and to a high-temperature Gibbs state [Section~\ref{sec:domain_wall}]. Cases (i) and (iii) were considered in the main text, whereas, for case (ii), we introduce a new model---the long-range interacting XXZ spin chain---and show how our PDE-learning method can be extended to systems with power-law-decaying interactions.
\subsection{Magnon dynamics in the nearest-neighbor XXZ model}\label{sec:single_magnon_xxz}
In this subsection, we consider quench dynamics of the XXZ spin chain in the single-magnon excitation sector and provide an analytical derivation of Eq.~(6) from the main text. The Hamiltonian of the XXZ model reads
\begin{equation}
\label{eq:xxz_ham}
H = \sum_{i} \left[J\left(S^x_i S^x_{i+1} + S^y_i S^y_{i+1}\right) + \Delta S^z_i S^z_{i+1} + B_i S_i^z\right],
\end{equation}
where $S^\mu_i=\sigma^\mu_i/2$ are spin operators, with $\sigma^\mu_i$ the standard Pauli operators for polarization $\mu$ of the $i$th spin, and the coefficients $J$, $\Delta$, and $B_i$ are real parameters. In our simulations, we impose periodic boundary conditions on the Hamiltonian (\ref{eq:xxz_ham}). The initial state $|\psi_0\rangle$ is prepared as a wave packet in the single-magnon excitation sector over the ferromagnetic product state:
\begin{equation}\label{eq:1p_psi_0}
|\psi_0\rangle = \sum_{n} f(n)\, \mathcal U(\theta_n , \phi_n) \ket{\downarrow}_{n} \prod_{j \neq n} \ket{\downarrow}_{j} = \frac 1{\sqrt{\pi\sigma^2}} \sum_{n} e^{-(n - x_0)^2/\sigma^2 + ik_0 n} | \theta_n, \phi_n \rangle_n \prod_{j \neq n} \ket{\downarrow}_{j} ,
\end{equation}
where $f(n)$ is a Gaussian wave-packet envelope function corresponding to momentum $k_0$ and centered around coordinate $x_0$. Here $\mathcal U(\theta_n, \phi_n)$ is an $SU(2)$ unitary rotation operator acting as
\begin{equation}
| \theta_n, \phi_n \rangle = \mathcal U(\theta_n , \phi_n)\ket{\downarrow}_n = \cos{(\theta_n/2)}\ket{\uparrow}_n + \sin{(\theta_n/2)} e^{i \phi_n}\ket{\downarrow}_n.
\label{eq:wave_packet_psi}
\end{equation}
We introduce the following complex-valued function $u(t, x)$.
At the sites of the spin chain, $x_i \equiv i a$, where $a$ is the lattice spacing, we set the value of the function to
\begin{equation}
u(t, x_i) = \langle S^+_i(t) \rangle = \frac{1}{2}[\langle \sigma^x_i(t)\rangle + i\langle \sigma^y_i(t)\rangle],
\end{equation}
where $S^+_i=S^x_i+iS^y_i = \frac{1}{2} (\sigma^x_i + i \sigma^y_i)$ is the spin raising operator, $O(t) = \exp{(iHt)}O\exp{(-iHt)}$ is the time-dependent operator in the Heisenberg picture, and $\langle O \rangle \equiv \langle\psi_0|O|\psi_0\rangle$ is the expectation value taken in the initial state. To derive the equations of motion, we use the canonical commutation relations for the spin operators,
\begin{equation}
[S^+_i(t), S^-_j(t)] = 2\delta_{ij}S^z_i(t), \quad [S^+_i(t), S^z_j(t)] = -\delta_{ij}S^+_i(t).
\end{equation}
Calculating the time derivative of the observable of interest in the Heisenberg representation, we obtain
\begin{equation}
\label{eq:s_plus_t}
i\partial_t \langle S^+_i(t)\rangle = \langle [ S^+_i(t), H] \rangle = J \Bigl(\langle S^+_{i-1}(t)S^z_i(t)\rangle+\langle S^+_{i+1}(t)S^z_i(t) \rangle\Bigr)- \Delta \Bigl(\langle S^+_i(t)S^z_{i+1}(t)\rangle+\langle S^+_i(t)S^z_{i-1}(t) \rangle\Bigr) - B_i \langle S_i^+(t) \rangle.
\end{equation}
The right-hand side of Eq.~(\ref{eq:s_plus_t}) depends on two-point same-time correlation functions of the type $\langle S^+_i(t) S^z_j(t) \rangle$. Therefore, for a generic initial state, the time derivative cannot be expressed via $u(t,x_i)$ alone. However, for initial states of the form of Eq.~(\ref{eq:1p_psi_0}), i.e.~a superposition of a zero-magnon state and a one-magnon state---$|\psi_0\rangle = |\psi_{0,m=0}\rangle + |\psi_{0,m=1}\rangle$, where $|\psi_{0,m=0}\rangle = \ket{\downarrow}^{\otimes L}$---the equation can be simplified. Projecting the r.h.s.~terms in Eq.~(\ref{eq:s_plus_t}) onto the span of $|\psi_{0,m=0}\rangle$ and $|\psi_{0,m=1}\rangle$, we obtain
\begin{eqnarray}
&& \langle \psi_{0,m=1}| S^+_i(t)| \psi_{0,m=0}\rangle = u(t,x_i),\\
&& \langle S^+_i(t) S^z_j(t) \rangle = \langle \psi_{0,m=1} |S^+_i(t) S^z_j(t)|\psi_{0,m=0} \rangle = \frac 12 u(t, x_i).
\end{eqnarray}
As a result, we arrive at the following closed equation:
\begin{equation}
\label{eq:fin_diff_xxz_hz(x)}
i \partial_t u(t, x_i) = \frac{J}{2} \Bigl(u(t, x_{i+1})+u(t, x_{i-1})\Bigr) -\Delta u(t, x_i) - B_i u(t, x_i).
\end{equation}
Due to the linearity of the dynamical equations, there is no dependence on the choice of the envelope function $f(n)$ for the initial state [see Eq.~(\ref{eq:1p_psi_0})]. We would like to note that the simple closed form of Eq.~(\ref{eq:fin_diff_xxz_hz(x)}) is due to the specific choice of observable $u = \langle S^+_i(t)\rangle$. Another natural choice of initial condition and observable is
\begin{equation}\label{eq:psi0_altern}
|\psi_0 \rangle = \sum_{n} f(n) | \uparrow \rangle_{n} \prod_{j \neq n}|\downarrow \rangle_{j}, \qquad \tilde u(t, x_i) = \langle S^z_i(t) \rangle.
\end{equation}
The initial state in Eq.~(\ref{eq:psi0_altern}) has the conventional form of a single-magnon excitation, whereas Eq.~(\ref{eq:1p_psi_0}) corresponds to a superposition of a single magnon and the ferromagnetic ground state. However, in the former case, the onsite $z$-magnetization alone does not contain enough information to predict its evolution at later times; hence, for this choice of observable $\tilde u(t,x)$, a simple self-contained PDE does not exist. Now we consider the long-wavelength limit of Eq.~(\ref{eq:fin_diff_xxz_hz(x)}).
We assume that $u(t,x)$ is a smooth function interpolating the values of $u$ at the integer lattice points. We also consider $B(x)$ as a smooth interpolation of the local magnetic field such that $B(x_i) = B_i$. The continuous form of Eq.~(\ref{eq:fin_diff_xxz_hz(x)}) reads
\begin{equation}\label{eq:cos_q_exact}
i \partial_t u = J\cos{(i\partial_x)}u-\Delta u - B(x) u.
\end{equation}
Next, we assume that the magnetic field and the observables of the spin system change slowly in space, with the smallest-scale variations characterized by a length scale $\lambda\gg 1$, implying that $|\partial_x^n u|,\, |\partial^n_x B| \leq \mathcal O(\lambda^{-n})$. Then the dynamics of the complex function $u(t,x)$ can be approximated as
\begin{eqnarray}
\label{eq:pde_xxz_hz(x)}
i\partial_t u =\frac{J}{2} \partial^2_x u + (J-\Delta) u - B(x) u + \mathcal{O}(\lambda^{-4}).
\end{eqnarray}
Notably, Eq.~(\ref{eq:pde_xxz_hz(x)}) has the form of a single-particle Schr\"odinger equation in an external potential generated by the longitudinal magnetic field $B(x)$. Although the derivation of Eq.~(\ref{eq:pde_xxz_hz(x)}) does not formally require the magnetic field profile $B(x)$ to have small spatial gradients, such a condition could be important to guarantee the smoothness of the solution $u(t,x)$ during the evolution.
\begin{figure}[h!]
\includegraphics[scale=0.3]{figs_suppl/free_1magnon_combo_suppl.pdf}
\caption{Propagation of a wave packet in the XXZ spin chain, with $\Delta/J=0.5$ (exact diagonalization). The initial state $|\psi_0\rangle$ corresponds to a superposition of a single-magnon excitation and a ferromagnetic state: $|\psi_0\rangle = A \sum_{n} e^{-(n - x_0)^2/\sigma^2 }| + \rangle_n \prod_{j \neq n} \ket{\downarrow}_{j}$, where $|+\rangle_n = \frac{1}{\sqrt{2}}(\ket{\uparrow}_n + \ket{\downarrow}_n)$. Periodic boundary conditions are imposed. The parameters are: total number of lattice sites $L=100$, $J=-1$, $\sigma = 5$, and number of time steps $N_t=2000$. Panels (a, b) correspond to $\Re[u]=\langle S^x(t,x)\rangle$, while panel (c) corresponds to $\Im[u]=\langle S^y(t,x) \rangle$. Solid lines display the exact evolution, while dashed lines show the solution of the PDE (\ref{eq:pde_xx_num}). The evolution times in (b, c) are labeled in panel (a). (d) Difference between the exact solution and the solution of the inferred PDE.}
\label{fig:1p_wave_packet}
\end{figure}
\begin{figure}[h!]
\includegraphics[scale=0.33]{figs_suppl/parabolic_1magnon_combo_suppl.pdf}
\caption{Propagation of a wave packet in the XXZ spin chain, with $\Delta/J=0.5$, in the presence of a parabolic longitudinal magnetic field $B_i = B_0\left(i - x_0\right)^2$ (exact diagonalization). The initial state is the same as in Fig.~\ref{fig:1p_wave_packet}, periodic boundary conditions are imposed, $B_0 = 5\times 10^{-4}$, and the number of time steps is $N_t=4000$. Panels (a, b) correspond to the input data for our algorithm: $\Re[u]=\langle S^x(t,x)\rangle$ and $\Im[u]=\langle S^y(t,x)\rangle$. (c) Difference between the exact solution and the solution of the recovered PDE (\ref{eq:extract_pde_xxz_hz(x)}).}
\label{fig:1p_wave_packet_hz}
\end{figure}
First, we consider the dynamics of a single magnon in the XXZ model in the case of zero magnetic field, $B_i=0$.
We choose the following library of candidate terms,
\begin{equation}
u_t = F(1, u, u_x, u_{xx}, u_{xxx}, u_{xxxx}, u^2, u u_x, u u_{xx}, u u_{xxx}, u u_{xxxx}),
\label{eq:1p_F_dict}
\end{equation}
and, using the data shown in Fig.~\ref{fig:1p_wave_packet}, we obtain the following PDE with the BruteForce algorithm for the case $\Delta/J=0.5$ and the penalty constant $\lambda_0=10^{-3}$:
\begin{equation}
i u_t + 0.4999 u_{xx} + 0.4997 u = 0.
\label{eq:pde_xx_num}
\end{equation}
The temporal and spatial derivatives in Eq.~(\ref{eq:pde_xx_num}) were computed from the data using the second-order finite-difference scheme; see the details in Sec.~\ref{sec:num_schemes}. We included nonlinear terms up to second order in $u$ in the candidate-term dictionary (\ref{eq:1p_F_dict}) in order to perform a consistency check of the sparse selection algorithm. Now we consider single-magnon dynamics in the presence of an external longitudinal magnetic field. We impose a parabolic magnetic field $B_i= B_0 (x_i-x_0)^2$, where $x_0=L/2$. The post-quench dynamics is confined by the trapping potential, and the evolution of the observable $u(t,x)$ is shown in Fig.~\ref{fig:1p_wave_packet_hz}. Recovering the PDE from the following ansatz,
\begin{equation}
\label{eq:1p_F_dict_hz}
u_t = F(u, u_x, u_{xx}, u_{xxx}, u_{xxxx}, \bar x, \bar x^2, \bar x^3, \bar x^4, \bar x u, \bar x^2 u, \bar x^3 u, \bar x^4 u), \qquad \bar x = x-x_0,
\end{equation}
using the data corresponding to Fig.~\ref{fig:1p_wave_packet_hz} ($\Delta/J=0.5$, $B_0 = 5\cdot 10^{-4}$) with the BruteForce, CrossEntropy, and STRidge algorithms, we obtain
\begin{equation}
\label{eq:extract_pde_xxz_hz(x)}
i u_t = - 0.4998 u_{xx} - 0.4999 u + 4.998\cdot 10^{-4}\left(x-x_0\right)^2 u.
\end{equation}
The extracted PDE in Eq.~(\ref{eq:extract_pde_xxz_hz(x)}) matches the expected Eq.~(\ref{eq:pde_xxz_hz(x)}) with high precision. In Eq.~(\ref{eq:extract_pde_xxz_hz(x)}), we again used the finite-difference approximation of the derivatives; see Sec.~\ref{sec:num_schemes}. The frontiers of the reconstructed PDEs as a function of the penalty parameter $\lambda_0$, corresponding to the cases of single-magnon dynamics with/without the confining magnetic field, are shown in Fig.~\ref{fig:num_terms_1magnon}.
\begin{figure}
\centering
\includegraphics[scale=0.5]{figs_suppl/num_terms_suppl.pdf}
\caption{Number of terms on the rhs of the reconstructed PDE $u_t=F(\cdot)$, Eq.~(\ref{eq:1p_F_dict}), found with the BruteForce algorithm vs the $L_0$ penalty constant $\lambda_0$. Quench dynamics in the XXZ model in the single-magnon sector, with $\Delta/J=0.5$, for (a) $B(x)=0$, see Fig.~\ref{fig:1p_wave_packet}, and (b) an inhomogeneous magnetic field $B(x)=B_0 e^{-(x-x_0)^2/\sigma^2}$. The ``underfit'' region corresponds to the range of $\lambda_0$ where the number of terms in the inferred PDE is underestimated, whereas in the ``overfit'' region our algorithm finds spurious terms that are not present in the true PDE. Spatial derivatives were calculated using the spectral method [see Section \ref{sec:num_schemes}].}
\label{fig:num_terms_1magnon}
\end{figure}
\subsection{Numerical schemes for the approximation of derivatives}\label{sec:num_schemes}
In this subsection, we discuss approximation schemes for computing derivatives from data and comment on how these numerical schemes affect the recovered PDEs.
For the purposes of reconstructing Eq.~(\ref{eq:pde_xx_num}), we employed the standard second-order finite-difference schemes when calculating the temporal $\partial_t u$ and spatial $\partial_x^n u$ derivatives from the data:
\begin{eqnarray}
\label{eq:ut_finite_diff_scheme}
&& u_t(t, x) = \frac{u(t+dt, x)-u(t-dt, x)}{2dt} + \mathcal O(dt^2),\\
\label{eq:ux_finite_diff_scheme}
&& u_{x}(t, x) = \frac{u(t, x+dx)-u(t, x-dx)}{2dx} + \mathcal O(dx^2),\\
\label{eq:uxx_finite_diff_scheme}
&& u_{xx}(t, x) = \frac{u(t, x+dx)+u(t, x-dx)-2 u(t,x)}{dx^2} + \mathcal O(dx^2).
\end{eqnarray}
One can notice that the coefficients in the inferred PDE in Eq.~(\ref{eq:pde_xx_num}) are very close to the exact theoretical values. Such high precision of the recovered coefficients could be surprising at first glance, given that the PDE in Eq.~(\ref{eq:pde_xxz_hz(x)}) contains corrections with higher-order spatial derivatives. In fact, when using the second-order finite-difference scheme (\ref{eq:uxx_finite_diff_scheme}), the finite-difference discretization of the PDE (\ref{eq:pde_xxz_hz(x)}) coincides with the exact differential-difference equation (\ref{eq:fin_diff_xxz_hz(x)}). The reconstruction error of the coefficients, when using the second-order finite-difference scheme, can be estimated as $\delta\xi \sim \mathcal O (dt^2 + dx^2)$. The spectral (Fourier) method for the calculation of the spatial derivatives $u_x$ and $u_{xx}$ can be used as an alternative to the finite-difference schemes (\ref{eq:ux_finite_diff_scheme},~\ref{eq:uxx_finite_diff_scheme}) when periodic boundary conditions are imposed:
\begin{equation}
\label{eq:fourier_method}
\partial_x^n u(t,x) = \mathrm{iFFT}\left[(iq)^n \hat{u}(t, q)\right], \qquad \hat{u}\left(t,q_m=\frac{2\pi m}{L}\right) = \mathrm{FFT}(u) = \sum_{j=0}^{L-1} e^{i q_m j} u(t, x_j),
\end{equation}
where FFT (iFFT) denotes the Fast Fourier Transform (inverse Fast Fourier Transform). Taylor expansion of the ``kinetic term'' $\cos{q} = 1-\frac{q^2}{2!}+\frac{q^4}{4!} + \ldots$ in Eq.~(\ref{eq:cos_q_exact}) results in the following correction to the evolution PDE:
\begin{equation}
i u_t \approx \frac{J}{2}u_{xx}+\frac{J}{24}u_{xxxx} + (J-\Delta) u - B(x)u.
\end{equation}
Applying the spectral method for the calculation of the spatial derivatives from the data [shown in Fig.~\ref{fig:1p_wave_packet}] resulted in the following reconstructed PDE (the parameters of the XXZ model are $J=-1$, $\Delta/J=0.5$, $B(x)=0$):
\begin{equation}
i u_t + 0.495 u_{xx} + 0.5\, u = 0.
\end{equation}
The STRidge algorithm turned out to be insensitive to the fourth-order derivative term $u_{xxxx}$ and missed it during the reconstruction. Performing a full search over all possible combinations of the $M=10$ terms in $F(\cdot)$ and scanning across a range of values of the $L_0$ penalty factor $\lambda_0$, we were able to recover, at $\lambda_0=10^{-4}$, the expected form of the PDE, including the fourth-order derivative term:
\begin{equation}
i u_t + 0.4996 u_{xx} + 0.041 u_{xxxx} + 0.499\, u = 0,
\end{equation}
where we again used the spectral method to compute the spatial derivatives from the data. As displayed in Fig.~\ref{fig:num_terms_1magnon}, as we decrease the strength of the $L_0$ penalty term, we obtain a ``staircase'' of PDEs, which reproduces the gradient expansion of the exact PDE (\ref{eq:cos_q_exact}). Note that each additional term in the inferred PDE persists over a finite range of $\lambda_0$ values.
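For concreteness, a minimal NumPy sketch of both derivative-approximation schemes on a periodic grid is given below (our illustration; \texttt{np.fft} adopts the opposite sign convention for the transform relative to Eq.~(\ref{eq:fourier_method}), which leaves the computed derivatives unchanged as long as the FFT/iFFT pair is applied consistently):
\begin{verbatim}
import numpy as np

def fd_ux(u, dx=1.0):
    # Second-order central difference for u_x on a periodic grid,
    # cf. the finite-difference scheme for u_x above.
    return (np.roll(u, -1) - np.roll(u, 1)) / (2 * dx)

def fd_uxx(u, dx=1.0):
    # Second-order central difference for u_xx on a periodic grid,
    # cf. the finite-difference scheme for u_xx above.
    return (np.roll(u, -1) + np.roll(u, 1) - 2 * u) / dx**2

def spectral_derivative(u, order, dx=1.0):
    # Spectral (Fourier) approximation of the n-th spatial derivative,
    # valid for periodic boundary conditions.
    q = 2 * np.pi * np.fft.fftfreq(u.size, d=dx)  # angular wave numbers
    return np.fft.ifft((1j * q) ** order * np.fft.fft(u))
\end{verbatim}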
By increasing the precision of the input dataset (refining the spatiotemporal grid), it is possible in principle to recover higher-order derivative terms originating from the tight-binding dispersion $\propto\cos{(i\partial_x)}$. Generally, the reconstructed PDE can be slightly sensitive to the choice of the numerical scheme used for the calculation of temporal and spatial derivatives, as shown in the examples above. However, such a dependence mostly appears in the high-order gradient terms. It is worth noting that finite difference schemes can be used to recover differential-difference equations instead of PDEs [e.g.~Eq.~(\ref{eq:fin_diff_xxz_hz(x)})] even when the envelope function is not smooth and the continuum approximation is not valid.

\subsection{Magnon dynamics in the long-range XXZ model}\label{sec:long_range}
In this subsection, we consider the one-dimensional XXZ model with power-law-decaying spin-spin interactions,
\begin{equation}
\label{eq:H_lr}
H = -\sum_{i > j} \frac{1}{|i-j|^\alpha} \Bigl(J\left(S^x_i S^x_j+ S^y_i S^y_j\right) + \Delta S^z_i S^z_j\Bigr),
\end{equation}
where $\alpha$ is a power-law exponent, and spin operators are defined as in Eq.~\eqref{eq:xxz_ham}. We will assume that $\alpha>1$, so that the Hamiltonian (\ref{eq:H_lr}) has a well-defined thermodynamic limit. The phase diagram for the model in Eq.~(\ref{eq:H_lr}) for $J=1$ was obtained in Ref.~\onlinecite{maghrebi2017continuous}. Depending on the value of $\Delta$, the ground state of the model (\ref{eq:H_lr}) can be in (i) the ferromagnetic phase for $\Delta>1$, (ii) the antiferromagnetic phase for sufficiently large negative values of $\Delta$ (with an $\alpha$-dependent phase boundary), (iii) the XY phase (Tomonaga-Luttinger liquid) with algebraically decaying correlations (characterized by the central charge $c=1$), or (iv) the continuous symmetry breaking phase for intermediate values of $\Delta$ and small power-law exponents $\alpha$. The continuous symmetry breaking phase, which is generally forbidden by the Mermin-Wagner theorem in the case of low-dimensional systems with local interactions, arises as a consequence of the long-range interactions. The phase boundary between the ferromagnetic phase and either the XY or the continuous symmetry breaking phase corresponds to a first-order phase transition. Here, we will be considering only excitations in the ferromagnetic phase.

The exact evolution equation for the observable $u(t,x_i)=\langle S^+_i(t)\rangle$ reads
\begin{equation}
i \partial_t u(t,x_i) = -\frac{J}{2}\sum_{j\neq i} \frac{1}{|i-j|^\alpha} u(t,x_j) + \frac \Delta 2u(t,x_i)\sum_{j \neq i} \frac{1}{|i-j|^\alpha}.
\end{equation}
In the continuum limit, the evolution PDE reads~\cite{gong2016kaleidoscope}
\begin{equation}
i u_t = J \mathcal D(i \partial_x) u + c u, \quad \mathcal D(\hat q) := \sum_{n=1}^{\infty} \frac{1-\cos{(\hat q n)}}{n^\alpha},
\end{equation}
where the constant is $c=(\Delta-J)\sum_{n=1}^\infty n^{-\alpha} = (\Delta-J) \zeta(\alpha)$, with $\zeta(\alpha)$ the Riemann zeta function. It is convenient to formulate the PDE-learning problem in the time-momentum $(t, q)$ representation instead of the $(t, x)$ representation by considering the Fourier components
\begin{equation}
\hat u(t,q) := \sum_{j=0}^{L-1} u(t, x_j) e^{-i q x_j}.
\end{equation}
Then the equation for the Fourier component takes the form
\begin{equation}
i \hat u_t = J \mathcal D(q) \hat u(t,q) + c \hat u(t,q),
\end{equation}
where the operator $\mathcal D(q)$ for non-integer $\alpha$ has the long-wavelength expansion
\begin{equation}
\label{eq:Dq_expansion}
\mathcal D(q) = M_\alpha -\Gamma (1-\alpha) \cos{\left[\frac{\pi}{2}(\alpha-1)\right]}|q|^{\alpha-1} + \frac{1}{2!}\zeta(\alpha-2) q^2 - \frac{1}{4!}\zeta(\alpha-4)q^4 + \mathcal{O}(q^6),
\end{equation}
where
\begin{equation}
M_\alpha =
\begin{cases}
\frac{(-1)^{n+1}}{(2n)!}q^{2n}\log{|q|}, \quad &\alpha = 2n+1,\quad n\in\mathbb Z, \quad n\geq 1,\\
0, &\text{other } \alpha>1.
\end{cases}
\end{equation}
In the case of odd integer $\alpha=3,5,\ldots$, additional logarithmic terms $\sim |q|^{\alpha-1} \log{|q|}$ appear in the expansion in Eq.~(\ref{eq:Dq_expansion}). Indeed, the logarithmic terms at odd integer values of $\alpha$ appear when accounting for the singularity of the zeta function $\zeta(1+\epsilon)=\frac{1}{\epsilon}+\gamma_E +\mathcal{O}(\epsilon)$ and of the Gamma function $\Gamma(-n+\epsilon)=\frac{(-1)^n}{n!}\left(\frac{1}{\epsilon}+\psi_1(n+1) + \mathcal{O}(\epsilon)\right)$, where $\psi_1(z)=\Gamma'(z)/\Gamma(z)$ is the digamma function, $\psi_1(n)=\sum_{k=1}^{n-1}\frac{1}{k}-\gamma_E$. Considering $\alpha=2n+1+\epsilon$ and taking the limit $\epsilon\to 0$ gives the contribution $\frac{(-1)^{n+1}}{(2n)!}q^{2n}\log{|q|}$ that comes from the singular $\frac{1}{\epsilon}$ terms.

We define the candidate terms library as
\begin{equation}
\label{eq:uq_ansatz}
\partial_t \hat u = F(\hat u, q^2\hat u, q^4\hat u, |q|^\mu \hat u, \log{|q|}\hat u, q^2 \log{|q|}\hat u, q^4\log{|q|}\hat u),
\end{equation}
where $\mu$ is a free parameter subject to tuning. Next, we sequentially perform optimization of the $L_2+L_0$ loss function using a three-step procedure: (1) perform sparse selection of the most relevant candidate terms (e.g.~brute force search with an $L_0$ penalty term or the STRidge algorithm), (2) get coefficients for each term in the library via least-squares regression, (3) run several steps of optimization (using the Powell line search algorithm) to find the best value of the exponent $\mu$. Steps (1), (2), and (3) are repeated in a loop.
\begin{figure}
\centering
\includegraphics[scale=0.35]{./figs_suppl/lr_combo_suppl.pdf}
\caption{Magnon wave packet propagation in the long-range XXZ model ($L=400$, $\Delta/J=0.9$), exact vs reconstructed evolution: (a, b) $\alpha=3$, (c, d) $\alpha=2.5$. Panels (a,c) display the real part $\Re{[u(t,x)]}$ of the input dataset, while panels (b,d) show the difference between the exact solution and the recovered non-local PDEs (\ref{eq:pde_lr_alpha=3}, \ref{eq:pde_lr_alpha=2.5}).
} \label{fig:long_range_xxz}
\end{figure}
The above-described algorithm, for the case $\alpha=3$, $\Delta/J=0.9$, $J=-1$ and initial conditions corresponding to Fig.~\ref{fig:long_range_xxz}, results in the following reconstructed equation:
\begin{equation}
\label{eq:pde_lr_alpha=3}
i u_t(t,x) + \int_{-\infty}^\infty dq\int_{-\infty}^\infty dx' \Bigl(0.752\, |q|^{2.02}+ 0.505 \, q^2\log{|q|}\Bigr) e^{iq(x-x')}u(t,x') - 0.12 u(t,x) = 0,
\end{equation}
which is in good agreement with the theoretically predicted equation up to a $\mathcal{O}(q^4 \hat u)$ correction term:
\begin{equation}
\label{eq:pde_lr_alpha=3_theory}
i u_t(t,x) + \int_{-\infty}^\infty dq\int_{-\infty}^\infty dx' \Bigl(\frac{3}{4} q^2 + \frac 12 q^2\log{|q|}\Bigr) e^{iq(x-x')}u(t,x') - 0.1 \zeta(3)u(t,x)= 0,
\end{equation}
where $\zeta(3)\approx 1.202$. Equations (\ref{eq:pde_lr_alpha=3},~\ref{eq:pde_lr_alpha=3_theory}) are written in the integro-differential form since we performed a conversion from the momentum representation to the coordinate representation in Eq.~(\ref{eq:uq_ansatz}). In the case of non-integer $\alpha=2.5$, $\Delta/J=0.9$, $J=-1$, the reconstruction results in the equation
\begin{equation}
\label{eq:pde_lr_alpha=2.5}
i u_t + 1.68\int_{-\infty}^\infty dq\int_{-\infty}^\infty dx' |q|^{1.505} e^{iq(x-x')}u(t,x') + 0.72 u_{xx}(t,x) - 0.134 u(t,x) = 0,
\end{equation}
which should be compared to the theoretically expected one from Eq.~(\ref{eq:Dq_expansion}):
\begin{equation}
i u_t +\frac{1}{\sqrt{2}}\Gamma\left(-\frac{3}{2}\right)\int_{-\infty}^\infty dq\int_{-\infty}^\infty dx'|q|^{3/2}e^{iq(x-x')}u(t,x') -\frac{1}{2}\zeta{\left(\frac{1}{2}\right)} u_{xx}(t,x) + 0.1\zeta(5/2) u(t,x) = 0,
\end{equation}
where $\frac{1}{\sqrt{2}}\Gamma\left(-\frac{3}{2}\right)\approx 1.671$, $\frac{1}{2}\zeta{\left(\frac{1}{2}\right)}\approx -0.7301$, and $\zeta(5/2) \approx 1.341$. As an additional application of the reconstruction algorithm, from the inferred PDEs~(\ref{eq:pde_lr_alpha=2.5}, \ref{eq:pde_lr_alpha=3}) for the observable $u(t,x)$, one can extract physical parameters of the long-range XXZ model, including the power-law exponent $\alpha$, by comparing coefficients of the reconstructed PDE with the theoretical values. We note that hydrodynamic behavior was recently measured experimentally in a trapped-ion quantum simulator~\cite{joshi2021observing}, a platform that naturally realizes such long-range spin models.

\subsection{Dynamics of a domain-wall initial state in the XXZ spin chain}\label{sec:domain_wall}
In the main text, we showed how the PDE-learning approach allows one to recover evolution equations for a domain-wall initial state in the XXZ spin chain with nearest-neighbor couplings. In the present subsection, we provide additional details regarding our results for both the zero-temperature and the high-temperature initial states. We also elaborate on previously known theoretical results.

The conservation of the total magnetization along the $z$ axis implies a continuity equation of the form
\begin{equation}
\label{eq:spin_current}
\partial_t u + \partial_x \mathcal{J}^{z}(u)=0, \quad u(t,x) \equiv \langle S^z(t,x) \rangle.
\end{equation}
The exact solution for the ``domain-wall'' initial state $|\psi_0\rangle = \ket{\downarrow}^{\otimes L/2} \ket{\uparrow}^{\otimes L/2}$ in the thermodynamic limit $L\to\infty$ is given by~\cite{collura2018analytic}
\begin{align}
\label{eq:sz_dw}
\left\langle S^{z}\right\rangle&=\frac{1}{2 \pi / P} \arcsin \left(\frac{\zeta}{\zeta_{0}}\right), \\
\mathcal{J}^{z} &=\frac{1}{2 \pi / P} \zeta_{0}\left[\sqrt{1-\frac{\zeta^{2}}{\zeta_{0}^{2}}}-\cos \left(\frac{\pi}{P}\right)\right],
\label{eq:jz_dw}
\end{align}
where $\zeta=x/t$ is the lightcone coordinate. Here $\gamma = \arccos{(\Delta/J)}=\pi Q/P$, where $Q$ and $P$ are coprime integers, and $\zeta_0 = \sin{(\gamma)}/\sin{\left(\frac{\pi}{P}\right)}$. Formally, the derivation of Eqs.~(\ref{eq:sz_dw}, \ref{eq:jz_dw}) is restricted to the specific values of the anisotropy parameter such that $Q/P=\frac{1}{\pi}\arccos{(\Delta/J)}\in \mathbb{Q}$ is a rational number. However, if $\frac{1}{\pi}\arccos{(\Delta/J)}$ is an irrational number, the ratio $Q/P$ can be chosen to approximate it with any desired precision. The overall sign in the expression for the current (\ref{eq:jz_dw}) assumes the specific choice of boundary conditions at infinity: $\langle S^z \rangle \to \pm 1/2$ for $x\to \pm \infty$. The evolution PDE can be simplified to the form
\begin{equation}
\label{eq:ut_zeta}
u_t + \zeta_0 \sin{\left(\frac{2 \pi}{P} u\right)} u_x = 0.
\end{equation}
Using data obtained from numerical simulations of the dynamics of the XXZ spin chain, we perform PDE reconstruction. Specifically, using the library of terms
\begin{equation}
\label{eq:xx_dictionary}
u_t = F(u_x, u_{xx}, u_{xxx}, u_{xxxx}, u u_x, u^2 u_{x}, u^3 u_{x}, u^4 u_x, u^5 u_x)
\end{equation}
and the data presented in Fig.~\ref{fig:xx_domain_wall}---obtained from numerical computation of the exact evolution for the case of the XX spin chain ($\Delta=0$)---we obtain the following PDEs:
\begin{eqnarray}
\label{eq:xx_dw_2terms}
& u_t& + 3.12 u u_{x} - 4.49 u^3 u_x = 0, \quad (\lambda_0 = 10^{-4})\\
& u_t& + 3.135 u u_{x} - 5.056 u^3 u_x + 1.92 u^5 u_x = 0, \quad (\lambda_0 = 10^{-6}).
\label{eq:pde_extract_dw_delta=0}
\end{eqnarray}
The functional form (\ref{eq:xx_dictionary}) is consistent with spin-current conservation (\ref{eq:spin_current}). The coefficients in Eqs.~(\ref{eq:xx_dw_2terms},~\ref{eq:pde_extract_dw_delta=0}) are quite close to the theoretically expected ones, obtained via Taylor expansion of the $\sin(\cdot)$ term in Eq.~(\ref{eq:ut_zeta}) up to the 5th order: $u_t+\pi u u_x - \frac{\pi^3}{3!}u^3 u_x + \frac{\pi^5}{5!}u^5 u_x\approx0$.
\begin{figure}
\includegraphics[scale=0.4]{./figs_suppl/domain_wall_delta=0_suppl.pdf}
\caption{(a) Evolution of a domain-wall initial state in the 1D XX model, $\Delta=0$ (exact). (b) Difference between the exact evolution of $u(t,x)=\langle S^z(t,x)\rangle$ and the solution of the inferred PDE, Eq.~(\ref{eq:xx_dw_2terms}). The total number of sites is $L=2000$. The dashed line shows the starting time used for PDE reconstruction.
}
\label{fig:xx_domain_wall}
\end{figure}
\begin{figure}
\includegraphics[scale=0.4]{figs_suppl/domain_wall_delta=0.5_suppl.pdf}
\caption{(a) Evolution of a domain-wall initial state in the XXZ model, $\Delta/J=0.5$ (TEBD). (b) Difference between the exact evolution of the magnetization and the solution of the inferred PDE, Eq.~(\ref{eq:dw_pde_ab}). The total number of sites is $L=200$. The original TEBD data $u(t,x)$ was smoothed along the $x$ dimension by applying the Savitzky-Golay filter.
The MPS bond dimension was $\chi=200$.}
\label{fig:xxz_domain_wall}
\end{figure}
For the XXZ model with $\Delta/J = 0.5$, we get $(P, Q) = (3, 1)$, $\zeta_0=1$, and the corresponding PDE in the limit $t\to \infty$ reads
\begin{equation}
u_t + \sin{\left(\frac{2\pi}{3} u \right)} u_x = 0.
\end{equation}
PDE reconstruction from the TEBD data shown in Fig.~\ref{fig:xxz_domain_wall} gives the following equation:
\begin{equation}
\label{eq:dw_pde_ab}
u_t + a u u_x - b u^3 u_x = 0,
\end{equation}
where $a\approx 2.1$, $b\approx 1.67$. Knowing the parameters $(a,b)$ of the PDE allows one to extract the Hamiltonian parameter $\Delta/J=\cos{\gamma}$ directly from data, by matching Eq.~(\ref{eq:dw_pde_ab}) to the Taylor expansion of Eq.~(\ref{eq:ut_zeta}),
\begin{equation}
\label{eq:dw_pde_zeta_p}
u_t \approx -\zeta_0 \left(\frac{2\pi}{P}\right) u u_x + \frac{\zeta_0}{3!}\left(\frac{2\pi}{P}\right)^3 u^3 u_x .
\end{equation}
Comparing Eqs.~(\ref{eq:dw_pde_ab}) and (\ref{eq:dw_pde_zeta_p}), we obtain
\begin{equation}
P = 2\pi \sqrt{\frac{a}{6b}} \approx 2.9, \qquad \zeta_0 = \frac{a P}{2 \pi} \approx 0.96, \qquad \Delta/J = \sqrt{1-\zeta_0^2 \sin^2{\left(\frac{\pi}{P}\right)}} \approx 0.52.
\end{equation}
Motivated by the theoretically expected form of the evolution equations (\ref{eq:sz_dw}) and (\ref{eq:jz_dw}), we could also try to search for a PDE of the form
\begin{equation}
u_t = F\left(u_x, u_{xx}, u u_x, u^2 u_x, u^3 u_x, u^4 u_x, u^5 u_x, \sin{\left(\frac{2\pi}{P_1} u\right)} u_x,\, \sin{\left(\frac{2\pi}{P_2} u\right)} u_x, \ldots \right),
\end{equation}
where $P_i$ are integers. The goal of such a test is to see if the PDE-learning algorithm would be able to identify a concise form of the equation and find the correct value of $\Delta$. We set the integer parameters to be in the range $P_i \in \{1, 2, \ldots, 10\}$. The BruteForce and CrossEntropy algorithms were able to recover the theoretically expected equation from $M=19$ terms:
\begin{eqnarray}
u_t + 0.994 \sin{\left(\frac{2\pi}{3}u\right)}u_x = 0, \quad (\lambda_0 = 10^{-3}),
\end{eqnarray}
which immediately gives $\Delta/J\approx 0.5$. The algorithm finds a sparse solution and favors a compact form with the $\sin{(\cdot)}$ term on the rhs, as opposed to the truncated Taylor expansion of the same expression. Interestingly, the STRidge algorithm was not able to find the correct solution for any value of the penalty parameter $\lambda_0$. This shows that STRidge, although reliable in most test cases, sometimes fails.
\begin{figure}
\includegraphics[scale=0.36]{figs_suppl/domain_wall_delta=2_suppl.pdf}
\caption{Evolution of a high-temperature domain-wall initial state in the XXZ spin chain in the gapped phase, $\Delta/J=2$. (a) Data (high-precision tDMRG) is reproduced from Ref.~\onlinecite{ljubotina2017spin}. The density matrix of the initial state is given by Eq.~(\ref{eq:rho_thermal}). The dashed horizontal line, $t_0$, separates the portion of the data $t\geq t_0$ used for PDE-learning. (b) Difference between the tDMRG data and the solution of the recovered PDE (\ref{eq:pde_delta=2}). (c) Comparison between spatial derivatives $\partial_x u(t, x)$, where $u(t,x)$ corresponds to the data or to the solution of the PDE at the maximum evolution time, $t_f=200$.
}
\label{fig:prosen_data}
\end{figure}
The example considered above corresponds to the spreading of the domain-wall initial state in the gapless phase of the XXZ model at zero temperature. Interestingly, in the gapped phase, $\Delta/J>1$, equations~(\ref{eq:sz_dw}, \ref{eq:jz_dw}) are not valid: domain-wall evolution in the XXZ model freezes and the domain-wall spreading stops.
As a result, the PDE reconstruction is problematic in this case. On the other hand, for high-temperature mixed initial states, the spin dynamics is qualitatively different. The initial high-temperature state is prepared by combining two reservoirs with opposite directions of the longitudinal magnetic field,
\begin{equation}\label{eq:rho_thermal}
\rho(t=0) = \frac{\exp{(\mu \sum_{i\in L} \sigma^z_{i})}}{Z_L} \otimes \frac{\exp{(-\mu \sum_{j\in R}\sigma^z_j)}}{Z_R},
\end{equation}
where $0<\mu \ll 1$. Using tDMRG data from Ref.~\onlinecite{ljubotina2017spin}, we perform PDE reconstruction for $\Delta/J=2$ and $\Delta/J=1$. In the gapped phase ($\Delta/J=2$) presented in Fig.~\ref{fig:prosen_data}, using the ansatz
\begin{equation}
u_t = F(u_x, u_{xx}, u_{xxx}, u_{xxxx}, u u_x, u^2 u_{x}, u^3 u_{x}, u^4 u_{x}, u^5 u_{x}),
\end{equation}
we obtain the following equation for the rescaled magnetization $u=\mu^{-1}\langle S^z \rangle$:
\begin{eqnarray}
&u_t& = D u_{xx}, \quad D \approx 0.64,
\label{eq:pde_delta=2}
\end{eqnarray}
which agrees with the self-similar scaling law in the gapped phase, $u(t,x)=f(x/\sqrt{t})$, observed numerically in Ref.~\onlinecite{ljubotina2017spin}. The value of the diffusion coefficient is close to the theoretically predicted value $D=0.76$ at infinite temperature for $\Delta/J=2$~\cite{gopalakrishnan2019}. In Fig.~\ref{fig:prosen_data}, we compare the tDMRG data and the solution of the reconstructed PDE, Eq.~(\ref{eq:pde_delta=2}); the agreement is excellent.

In spite of a number of recent papers on the topic~\cite{gopalakrishnan2019, de2018hydrodynamic, de2020superdiffusion}, a full theoretical explanation of the properties of spin dynamics at the isotropic point $\Delta/J=1$ is still lacking. Superdiffusive behaviour at large times $t\gg 1$ was empirically observed in Ref.~\onlinecite{ljubotina2017spin}, $u(x,t) \propto f(x/t^{\eta})$, with an anomalous scaling exponent $\eta\approx 2/3$. Moreover, it was shown in Ref.~\onlinecite{ljubotina2019kardar} that the shape of the profile of the magnetization $f(y=x/t^{\eta})$ asymptotically approaches the KPZ scaling function, thus revealing a connection between the KPZ equation and the effective dynamics of magnetization in the Heisenberg model. Following our PDE reconstruction methodology, we are interested in finding a closed-form evolution equation for $u(t,x)$, where the rhs $u_t=F(\cdot)$ does not have an explicit time dependence. Using the BruteForce, STRidge, and CrossEntropy algorithms (the list of candidate terms is shown in Table~\ref{table:t2}), we found the following equation that describes the data with high precision:
\begin{eqnarray}
\label{eq:pde_delta=1_2terms}
&u_t& + a u u_x = D u_{xx}, \quad a\approx 0.24,\; D\approx 1.90 \quad (\lambda_0=10^{-2}),
\end{eqnarray}
which is known as Burgers' equation. A similar diffusion-type term was recently predicted in Ref.~\onlinecite{de2018hydrodynamic} for integrable 1D models based on a generalized hydrodynamics approach. It is natural to interpret the discovered equation (\ref{eq:pde_delta=1_2terms}) as a noise-averaged stochastic Burgers' equation:
\begin{equation}
u_t + a u u_x = D u_{xx} + \partial_x \eta (x, t),
\end{equation}
where $\eta(x, t)$ represents uncorrelated Gaussian noise, $\langle\eta (x,t)\rangle=0$. The stochastic Burgers' equation is closely related to the 1D KPZ equation
\begin{equation}
h_t +\frac{a}{2}(h_x)^2 = D h_{xx} + \eta(x,t)
\end{equation}
via the substitution $u(t,x)=h_x(t,x)$.
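To propagate the inferred deterministic equation (\ref{eq:pde_delta=1_2terms}) forward in time, a simple explicit central-difference integrator suffices; the sketch below is illustrative (the grid, the time step, and the smoothed domain-wall initial profile are assumptions, not the actual tDMRG data).
\begin{verbatim}
import numpy as np

def burgers_step(u, dx, dt, a=0.24, D=1.90):
    # one explicit Euler step of u_t + a u u_x = D u_xx
    # (central differences, periodic boundaries)
    ux = (np.roll(u, -1) - np.roll(u, 1)) / (2 * dx)
    uxx = (np.roll(u, -1) + np.roll(u, 1) - 2 * u) / dx**2
    return u + dt * (D * uxx - a * u * ux)

x = np.linspace(-200, 200, 4001)
dx = x[1] - x[0]
u = -0.5 * np.tanh(x / 10.0)     # smoothed domain-wall profile
dt = 0.2 * dx**2 / (2 * 1.90)    # explicit-scheme stability bound
for _ in range(50000):
    u = burgers_step(u, dx, dt)
\end{verbatim}
A scaling collapse can then be tested by plotting $u$ against $x/t^{2/3}$ at several late times.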
Our discovered equation (\ref{eq:pde_delta=1_2terms}) therefore demonstrates a connection between magnetization dynamics in the Heisenberg model and KPZ physics. Interestingly, we found that the solution of the Burgers' equation (\ref{eq:pde_delta=1_2terms}) obeys a KPZ-type scaling law $u(x/t^{2/3})$ for late evolution times, see Fig.~\ref{fig:kpz_scaling}. Although the KPZ scaling for the inferred deterministic equation (\ref{eq:pde_delta=1_2terms}) is exhibited numerically with high accuracy, we were not able to prove analytically whether or not the solution of Burgers' equation (\ref{eq:pde_delta=1_2terms}) with the initial condition given by the data admits the asymptotic scaling $u(x/t^{2/3})$ at $t\to\infty$.
\begin{figure}
\centering
\includegraphics[scale=0.5]{figs_suppl/kpz_scaling.pdf}
\caption{KPZ-scaling relation $u=u(x/t^{2/3})$ of the solution of the inferred deterministic PDE (\ref{eq:pde_delta=1_2terms}) for the magnetization dynamics in the high-temperature Heisenberg model ($\Delta/J=1$) at late evolution times. The KPZ scaling law has been observed numerically~\cite{ljubotina2019kardar} and is anticipated theoretically~\cite{gopalakrishnan2019}. The initial condition for the evolution of the PDE corresponds to the tDMRG data at time $t_0=50$ (dashed horizontal line in Fig.~\ref{fig:prosen_data}). $T=200$ is the total evolution time [$\max(t)$] in the tDMRG data. We observe that the scaling relation for the solution of the PDE still holds for times $t\in [T, 3T]$, which is beyond the time range presented in the dataset $t\in[0, T]$.
}
\label{fig:kpz_scaling}
\end{figure}

\newpage
\section{PDE-learning of hydrodynamic equations in fermionic systems: additional details}\label{sec:fermions}

In this Section of the Supplementary Material, we provide details of PDE-learning in fermionic systems: the 1D non-interacting fermion gas and the strongly interacting Fermi-Hubbard model. In Section~\ref{sec:fermion_hydro}, we give a quick overview of the analytical derivation of hydrodynamic equations describing dynamics in the free fermion gas in the semiclassical approximation. In Section~\ref{sec:cos_corrections}, we derive correction terms to the hydrodynamic equations for the free fermion gas---terms that stem from the non-parabolic (tight-binding) dispersion---both analytically and using our PDE-learning algorithm. In Section~\ref{sec:fermion_hydro_quartic}, we consider hydrodynamics of the non-interacting fermion gas in the vicinity of a Lifshitz critical point. The salient feature of the Lifshitz critical point is the quartic fermion dispersion at small momenta, which results in an unusual hydrodynamic equation. In Section~\ref{sec:particle_current}, we derive an expression for the particle current and velocity in a tight-binding model with additional next-nearest-neighbour hopping terms. In Section~\ref{sec:global_symmetries}, we discuss the global symmetry properties of the hydrodynamic equations and show how leveraging these symmetries significantly reduces the size of the search space of candidate PDEs. In Section~\ref{sec:partial_observ}, we propose a method to perform PDE-reconstruction from partial observations, when only data for the fermion density evolution (but not the velocity) is available. In Section~\ref{sec:single_pde_rho}, we derive a single second-order-in-time PDE that describes the evolution of the density in a gas of free fermions.
In Section~\ref{sec:inter_ferm_hydro}, we provide supplementary details on PDE-learning of hydrodynamics in the spinless Fermi-Hubbard model and summarize our findings from the main text. In particular, we discuss in more detail the connection between the discovered effective Euler equation and the Tomonaga-Luttinger theory. Finally, in Section~\ref{sec:viscosity}, we discuss the emergent Navier-Stokes equation and the role of the discovered viscosity term. \subsection{PDE-learning of bosonization equations: Semiclassical regime of hydrodynamics of non-interacting fermions}\label{sec:fermion_hydro_parabol} \label{sec:fermion_hydro} In this subsection, we provide details of the derivations of semiclassical hydrodynamic equations for a free-fermion gas. We consider a 1D non-interacting gas of spinless fermions on a lattice described by the tight-binding Hamiltonian \begin{equation} \label{eq:H_tight_binding} H = -J \sum_i (c^\dag_i c_{i+1} + c^\dag_{i+1} c_i) + \sum_i V_i c^\dag_i c_i, \end{equation} where $c_i$ ($c^\dag_i$) are fermion annihilation (creation) operators at lattice site $i$, $J$ is the hopping parameter, and $V_i$ is the external potential. The energy dispersion of free fermions in the low-density limit could be well-approximated as parabolic: $\varepsilon_k/2J = -\cos{(k)} \approx -1 + \frac{k^2}{2} + \mathcal{O}(k^4)$. The dynamics of the fermion gas with parabolic dispersion $H=\sum_k \frac{k^2}{2 m} c^\dag_k c_k$ in the Wentzel–Kramers–Brillouin (WKB) approximation could be described by hydrodynamic equations~\cite{bettelheim2008quantum}: \begin{eqnarray}\label{eq:euler_rho} &&\rho_t + (v \rho)_x =0,\\ &&\label{eq:euler_v} v_t + v v_x = - \frac{1}{m\rho} \partial_x P(\rho) - \partial_x V(x), \qquad P(\rho) = \frac{\pi^2}{3 m}\rho^3. \end{eqnarray} Eq.~(\ref{eq:euler_rho}) is the continuity equation, which describes the conservation of the total number of fermions. Eq.~(\ref{eq:euler_v}) is the Euler equation describing barotropic compressible fluid flow. Here $P(\rho)$ is the Pauli pressure that could be derived from the textbook thermodynamic relation $P(\rho) = \rho \partial_\rho \epsilon(\rho) - \epsilon(\rho)$, where $\epsilon(\rho)$ is the specific energy of the fermion gas (energy per unit volume): \begin{equation} \rho(x) = \int_{-k_F}^{k_F}\frac{dk}{2\pi} = \frac{k_F(x)}{\pi}, \qquad \epsilon(\rho) = \int_{-k_F}^{k_F}\frac{dk}{2\pi} \frac{k^2}{2m} = \frac{k^3_F(x)}{6\pi m} = \frac{\pi^2\rho^3}{6m}, \end{equation} where $k_F(x)$ is the local Fermi momentum. The system of hydrodynamic equations ~(\ref{eq:euler_rho}, \ref{eq:euler_v}) can be diagonalized by introducing two Riemann invariants $k_{R,L} = mv \pm \pi \rho$ corresponding to the local momenta of right- and left-movers: \begin{equation}\label{eq:J_PDE} \partial_t k_R + \frac{k_R}{m} \partial_x k_R=0, \qquad \partial_t k_L + \frac{k_L}{m} \partial_x k_L=0, \end{equation} where the effective velocity of the right (left) movers is given by the group velocity of right (left) moving fermions $v_{gr}(x)=\partial_k \varepsilon_k=k_{R,L}(x)/m$. Eqs.~(\ref{eq:J_PDE}) are known as the Riemann-Hopf equations (or the inviscid Burgers' equations). The Riemann-Hopf equations (\ref{eq:J_PDE}) form a shock-wave singularity at finite time $t_{c}$, which is also known as the ``gradient catastrophe''. The semiclassical hydrodynamic equations remain valid only for evolution times $t\leq t_{c}$. 
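As a quick symbolic consistency check, the diagonalization in terms of the Riemann invariants $k_{R,L}=mv\pm\pi\rho$ can be verified in a few lines of \texttt{sympy}: the sum and difference of the two Riemann-Hopf equations (\ref{eq:J_PDE}) reproduce the Euler and continuity equations (\ref{eq:euler_rho}, \ref{eq:euler_v}). The sketch below is purely illustrative and is not part of the reconstruction pipeline.
\begin{verbatim}
import sympy as sp

t, x, m = sp.symbols('t x m', positive=True)
rho = sp.Function('rho')(t, x)
v = sp.Function('v')(t, x)

kR = m * v + sp.pi * rho    # Riemann invariants k_{R,L} = m v +/- pi rho
kL = m * v - sp.pi * rho

def riemann_hopf(k):
    # residual of the transport equation  k_t + (k/m) k_x = 0
    return sp.diff(k, t) + (k / m) * sp.diff(k, x)

# sum of the two equations -> Euler; difference -> continuity
euler = 2 * m * (sp.diff(v, t) + v * sp.diff(v, x)
                 + sp.pi**2 / m**2 * rho * sp.diff(rho, x))
cont = 2 * sp.pi * (sp.diff(rho, t) + sp.diff(rho * v, x))

assert sp.simplify(riemann_hopf(kR) + riemann_hopf(kL) - euler) == 0
assert sp.simplify(riemann_hopf(kR) - riemann_hopf(kL) - cont) == 0
\end{verbatim}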
The collapse time $t_c$ depends on the density profile of the initial state: a larger amplitude of the density hump corresponds to a shorter $t_c$ due to the stronger nonlinearity. The value of $t_c$ can be computed by solving the Riemann-Hopf Eqs.~(\ref{eq:J_PDE}) separately for left- and right-moving modes using the method of characteristics, see e.g.~Ref.~\onlinecite{whitham2011linear}. For example, for a given initial condition $k_R(t=0, x)=f(x)$, the collapse time corresponds to the minimal (over all choices of $\xi$) positive value of the expression $-\frac{m}{f'(\xi)}$, which is $t_c=-\frac{m}{\min{f'(\xi)}}$ (assuming that $f'(\xi)$ takes negative values). For the equilibrium initial state (zero initial velocity), the local Fermi momenta of right- and left-movers are proportional to the local fermion density, $k_{R,L}(t=0,x)=\pm \pi\rho_0(x)$. Thus, the singularity formation time is inversely proportional to the amplitude of the density hump in the initial state, $t_c=-\frac{m}{\pi} [\min(\partial_x\rho_0(x))]^{-1}$, where $\rho_0(x)$ is the initial density profile. After the formation of the shock wave, the fermion density profile $\rho(t,x)$ develops quantum ripples, which are not captured by semiclassical equations~\cite{mirlin2013}. However, at $t>t_c$ the envelope of the density profile, after averaging over quantum oscillations, can still be computed from semiclassical equations by using specialized PDE solvers (e.g.~Riemann solvers~\cite{godunov1959difference}), which allow one to propagate solutions beyond the shock-wave formation time.

It is instructive to provide an alternative derivation of the hydrodynamic system~(\ref{eq:euler_rho}, \ref{eq:euler_v}), which will be straightforward to generalize to other dispersion relations. Taking the transport equation (\ref{eq:J_PDE}) as a starting point, we can cast it in the form of hydrodynamic equations for the fermion density $\rho$ and the velocity $v$ by expressing $\rho$ and $v$ in terms of the local Fermi momenta of the left- and right-moving modes: $\rho=\rho(k_R, k_L)$, $v=v(k_R,k_L)$. The fermion density and the current read
\begin{equation}
\label{eq:rho_v_defin_from_k}
\rho = \int_{k_L}^{k_R}\frac{dk}{2\pi} = \frac{k_R-k_L}{2\pi}, \quad j \equiv\rho v = \int_{k_L}^{k_R} \frac{dk}{2\pi} \partial_k \varepsilon(k) = \frac{1}{2\pi}\left(\varepsilon(k_R)-\varepsilon(k_L)\right).
\end{equation}
Substituting the parabolic dispersion $\varepsilon(k)=k^2/2m$ into Eq.~(\ref{eq:rho_v_defin_from_k}), solving for the momenta of the left- and right-moving modes in terms of the fermion density and velocity, $k_{R,L}(\rho, v)=mv\pm \pi\rho$, and substituting into the transport equation (\ref{eq:J_PDE}), we obtain
\begin{equation}
\label{eq:pm_kR_kL}
\partial_t \left(mv\pm \pi\rho\right) + \frac{1}{m}\left(mv\pm \pi\rho\right)\partial_x\left(mv\pm \pi\rho\right)=0.
\end{equation}
By adding and subtracting the $(\pm)$ equations in (\ref{eq:pm_kR_kL}), we obtain the continuity and Euler equations, Eqs.~(\ref{eq:euler_rho}, \ref{eq:euler_v}).

Now let us consider the quench dynamics where the initial state is prepared by applying a smooth localized potential, e.g.
\begin{equation}
\label{eq:V0_gauss}
V(x) = V_0 e^{-(x-x_0)^2/\sigma^2},
\end{equation}
and then setting the potential to zero at $t>0$.
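As a numerical illustration of the collapse-time estimate $t_c=-\frac{m}{\pi}[\min(\partial_x\rho_0)]^{-1}$, the sketch below evaluates $t_c$ for a Gaussian density hump; the profile is an illustrative assumption (the Thomas-Fermi profiles actually used in our simulations are discussed next), and we set $m=1$.
\begin{verbatim}
import numpy as np

x = np.linspace(-500, 500, 20001)
rho0 = 0.1 + 0.02 * np.exp(-(x / 40.0) ** 2)   # illustrative density hump

# t_c = -(m/pi) / min_x rho0'(x); the minimal slope is negative for a hump
drho0 = np.gradient(rho0, x)
t_c = -(1.0 / np.pi) / drho0.min()
print(f"estimated shock-formation time t_c ~ {t_c:.0f}")
\end{verbatim}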
The initial density profile in the Thomas-Fermi approximation reads \begin{equation} \rho_{TF}(t=0, x) = \frac{1}{\pi}\sqrt{2(E-V(x))}, \qquad E = \frac{k_F^2(\infty)}{2m}, \end{equation} which can be explicitly obtained from Eq.~(\ref{eq:euler_v}) when setting $v\to 0$ and integrating the rhs over $x$. Our goal is to reconstruct hydrodynamic equations describing the evolution of $\rho(t, x)$ and $v(t, x)$ directly from data obtained via numerical simulations. We search for hydrodynamic equations of the form \begin{eqnarray} \label{eq:rho_dict_hydro} \rho_t = F(1, \rho, \rho_x, \rho_{xx}, v, v_x, v_{xx}, \rho v_x, v \rho_x, v v_x, \rho \rho_x, \ldots),\\ \label{eq:v_dict_hydro} v_t = G(1, \rho, \rho_x, \rho_{xx}, v, v_x, v_{xx}, \rho v_x, v \rho_x, v v_x, \rho \rho_x, \ldots). \end{eqnarray} From data for the fermion density and velocity presented in Fig.~\ref{fig:hydro_vs_exact}, we reconstruct the system of hydrodynamic PDEs. Both the BruteForce algorithm and the STRidge algorithm result in \begin{eqnarray} \label{eq:euler_recovered_rho} &\rho_t& + 1.006 \rho v_x + 1.0007 v \rho_x = 0, \\ \label{eq:euler_recovered_v} &v_t& + 0.97 v v_x + 9.45 \rho \rho_x = 0, \end{eqnarray} which is very close to the expected Eqs.~(\ref{eq:euler_rho}, \ref{eq:euler_v}). In Fig.~\ref{fig:hydro_vs_exact}, we also compare the data and the solution of the inferred system of PDEs~(\ref{eq:euler_recovered_rho}, \ref{eq:euler_recovered_v}). Interestingly, in the case of a small amplitude of the initial density hump, $\delta\rho/\rho_0\ll 1$, our PDE reconstruction algorithm recovers the correct form of linearized Euler equations: \begin{equation} \label{eq:linearized_euler} \begin{cases} \rho_t + \rho_0 v_x \approx 0,\\ v_t + \pi^2\rho_0 \rho_x \approx 0. \end{cases} \end{equation} These equations, in turn, imply the wave equation $\rho_{tt}=v_F^2 \rho_{xx}$, with the wave speed equal to the Fermi velocity, $v_F=\pi\rho_0$. For example, for the data presented in Fig.~(\ref{fig:fermion_wave_eq}), the BruteForce algorithm yields \begin{equation} \rho_t \approx -0.109 v_x, \quad v_t \approx -1.036 \rho_x, \label{eq:lin_hydro_reconstr} \end{equation} which is in perfect agreement with the linearized system (\ref{eq:linearized_euler}) for $\rho_0\approx 0.1$. \begin{figure}[h!] \centering \includegraphics[scale=0.3]{./figs_suppl/free_fermions_combo_suppl.pdf} \caption{Hydrodynamics in the 1D free-fermion system. (a,c) Exact evolution of fermion density $\rho(t,x)$ and velocity $v(t,x)$ in the tight-binding model. (b, d) The difference between the data and the solutions of recovered hydrodynamic PDEs ~(\ref{eq:euler_recovered_rho}, \ref{eq:euler_recovered_v}). Parameters of simulations: number of lattice sites $L=1000$, filling factor $\nu=0.1$, and initial potential $V(x) = V_0 e^{-(x-x_0)^2/\sigma^2}$ with $V_0/J=-0.2$, $\sigma = 0.2 L$. Periodic boundary conditions were imposed. The data for the density and velocity is the same as in Fig.~1 of the main text. } \label{fig:hydro_vs_exact} \end{figure} \begin{figure}[H] \centering \includegraphics[scale=0.3]{figs_suppl/fermions_wave_eq_combo_suppl.pdf} \caption{Regime of linearized hydrodynamics ($\delta\rho/\rho_0 \ll 1$) in non-interacting fermion gas: data for the evolution of (a) fermion density and (b) fermion velocity. PDE reconstruction results in the system of linearized hydrodynamic equations~(\ref{eq:lin_hydro_reconstr}). 
The linearized system is equivalent to the wave equation $\rho_{tt}-v_F^2 \rho_{xx}=0$, with the wave speed given by the Fermi velocity $v_F=\pi\rho_0$. The amplitude of the external potential is $V_0/J=-0.02$. } \label{fig:fermion_wave_eq} \end{figure} \subsection{Corrections to hydrodynamic equations due to the tight-binding dispersion}\label{sec:cos_corrections} In this subsection, we consider how hydrodynamic equations (\ref{eq:euler_rho}) and (\ref{eq:euler_v}) are modified due to corrections generated by subleading terms in the expansion of the dispersion relation $\varepsilon_k=-2J \cos{(k)}=2J\left(-1+\frac{k^2}{2}-\frac{k^4}{4!}+\ldots\right)$ and analytically derive these correction terms, which we discovered with our PDE-learning algorithm [see main text]. Following the steps from Section~\ref{sec:fermion_hydro_parabol}, we first express the momenta of left- and right-movers via fermion density and velocity: \begin{eqnarray}\label{eq:k_RL} \frac{k_R-k_L}{2} = \pi \rho, \qquad \frac{k_R+k_L}{2} = \sin^{-1}{\left[ \frac{ \pi \,\rho\, m\,v}{ \sin{\pi\rho}}\right]}, \end{eqnarray} where $m=1/(2J)$. Since we are interested in finding corrections stemming from the deviation of the tight-binding dispersion from the parabolic dispersion in the limit $k_{R,L}\ll 1$, we assume that the fermions occupy the bottom of the band, $\rho\ll 1$. Furthermore, given that, to the leading order (i.e.~in parabolic approximation), the momenta of right- and left- movers are $k_{R,L}= mv\pm \pi\rho$, the condition $k_{R,L}\ll 1$ also implies that the fermion velocity should be small, $mv\ll 1$. Solving for $k_{R,L}(\rho, v)$ from Eq. (\ref{eq:k_RL}), we expand the solution in powers of $\rho$ and $v$. Keeping the terms $\propto v^{n_1} \rho^{n_2}$ with $n_1+n_2\leq 5$, we obtain: \begin{equation} k_{R,L} = \pm \pi\rho + \sin^{-1}{\left[ \frac{ \pi \rho\, m\,v}{ \sin{\pi\rho}}\right]} = \pm \pi\rho + m v + \frac{(mv)^3}{6} + \frac{3m^5}{40}v^5 + \frac{\pi^2}{6 } m v\rho^2 + \frac{\pi^2 m^3}{12}v^3\rho^2 + \frac{7 \pi^4 }{360}mv\rho^4 + \ldots. \end{equation} Dynamical equations for the momenta of the left- and right-movers read \begin{eqnarray}\label{eq:kR_t} &&\partial_t k_{R} + \frac{1}{m}\sin{(k_R)}\, \partial_x k_R = 0,\\ &&\partial_t k_{L} + \frac{1}{m}\sin{(k_L)}\, \partial_x k_L = 0. \label{eq:kL_t} \end{eqnarray} First, taking the difference of Eqs.~(\ref{eq:kR_t}, \ref{eq:kL_t}) and using the definition of particle current (\ref{eq:rho_v_defin_from_k}), \begin{equation}\label{eq:rho_v} \rho v = - \frac{1}{2 m \pi } \left(\cos{k_R}-\cos{k_L}\right), \end{equation} one can recover the exact continuity equation $\rho_t+(\rho v)_x= 0$, which remains valid to all orders in perturbation theory. Second, multiplying both sides of Eqs.~(\ref{eq:kR_t}, \ref{eq:kL_t}) by $\sin{(k_{R,L})}$, respectively, we obtain \begin{eqnarray}\label{eq:cos_kRL_t} -\partial_t \cos{(k_\alpha)} +\frac{1}{m}\partial_x \left(\frac{k_\alpha}{2}-\frac{1}{4}\sin{(2 k_\alpha)}\right) = 0, \quad \alpha = R, L. 
\end{eqnarray}
Subtracting the two Eqs.~(\ref{eq:cos_kRL_t}) for right- and left-movers and using (\ref{eq:rho_v}), we arrive at the dynamical equation for the momentum density:
\begin{eqnarray}
(\rho v)_t &=& - \frac{1}{2 \pi m^2}\partial_x \left(\frac{k_R-k_L}{2}-\frac{\sin{(2 k_R)}-\sin{(2 k_L)}}{4}\right) \nonumber \\
&=& - \frac{1}{2 \pi m^2 } \partial_x \left(2m^2\pi v^2 \rho + \frac{2\pi^3}{3}\rho^3 - \frac{2\pi^3}{3}m^2v^2\rho^3 - \frac{2\pi^5}{15}\rho^5 + \ldots\right),
\end{eqnarray}
where we performed a Taylor expansion of $\sin{(\cdot)}$ up to the fifth order. Substituting $\rho_t=-( \rho v)_x$ from the continuity equation, we obtain a modified Euler equation with additional correction terms:
\begin{eqnarray}
&&v_t + v v_x + \frac{\pi^2}{m^2}\rho\rho_x = \frac{2\pi^2}{3}\rho^2 v v_x + \pi^2 v^2 \rho \rho_x + \frac{\pi^4}{3 m^2}\rho^3 \rho_x + \ldots.
\label{eq:v_t_correct}
\end{eqnarray}
If we set $\rho_0 \approx 0.1$ and $m=1$ ($J=0.5$), which corresponds to the parameter regime used in Fig.~\ref{fig:hydro_vs_exact}, the signs and the magnitudes of the corrections are in full agreement with the PDE (\ref{eq:euler_recovered_v}) extracted from direct simulations, as we will now discuss. One can perform a dimensional analysis of equation (\ref{eq:v_t_correct}) by noticing that, for our choice of units (we set the lattice spacing to unity), we have $[v]=J$, $[\rho]=1$, $[x]=1$, $[t]=J^{-1}$.

We performed a BruteForce search of corrections to the rhs of the Euler equation by considering the symmetry-allowed terms, $(P,T)=(-,+)$, from Table~\ref{table:t2}:
\begin{equation}
v_t + v v_x + \pi^2\rho\rho_x = f(\rho^2\rho_x, \rho^3\rho_x, \ldots, \rho^5\rho_x, \rho v v_x, \rho^2 v v_x, v^2\rho_x, v^2\rho \rho_x, \ldots, (\log{\rho})_x, v^2 (\log{\rho})_x ).
\end{equation}
We found the following corrections using the data shown in Fig.~\ref{fig:hydro_vs_exact}(a,c) [the dataset has the spatiotemporal resolution $(N_t,N_x)=(10^3, 10^3)$]:
\begin{eqnarray}
v_t+v v_x +\pi^2\rho\rho_x \approx 1.003\, \frac{2\pi^2}{3} \rho^2 v v_x + 1.18\, \pi^2 v^2\rho \rho_x + 0.967\, \frac{\pi^4}{3} \rho^3 \rho_x, \qquad (\lambda_0 = 10^{-5}),
\end{eqnarray}
which are in excellent agreement with the analytical result, Eq.~(\ref{eq:v_t_correct}). Moreover, starting from the generic ansatz $v_t=G(\cdot)$, our algorithm was able to recover the entire series of terms, including the leading terms ($v v_x$, $\rho\rho_x$) and the subleading corrections ($\rho^2 v v_x$, $v^2\rho \rho_x$, $\rho^3 \rho_x$):
\begin{equation}
\label{eq:vt_corrections_reconstr}
v_t = - 1.008\, v v_x - 0.9993\, \pi^2 \rho \rho_x + 1.10\, \frac{2\pi^2}{3} \rho^2 v v_x+ 1.05\, \pi^2 v^2 \rho\rho_x + 0.96\, \frac{\pi^4}{3} \rho^3\rho_x, \qquad (\lambda_0 = 10^{-6}).
\end{equation}
The high spatiotemporal resolution of the data results in a high accuracy of the reconstructed coefficients of the hydrodynamic equation (\ref{eq:vt_corrections_reconstr}).

\subsection{Fermion hydrodynamics at the Lifshitz transition}
\label{sec:fermion_hydro_quartic}
In this subsection, we provide the analytical derivation of semiclassical hydrodynamic equations in a fermion gas with quartic dispersion and present additional details of PDE-learning. Now we extend the tight-binding model (\ref{eq:H_tight_binding}) by adding next-nearest-neighbor hopping terms:
\begin{equation}
\label{eq:H_j1_j2}
H = -J_1 \sum_i (c^\dag_i c_{i+1} + c^\dag_{i+1} c_i) - J_2 \sum_i (c^\dag_i c_{i+2} + c^\dag_{i+2} c_i).
\end{equation}
The fermion dispersion is $\varepsilon_k = - 2J_1 \cos{(k)} - 2 J_2 \cos{(2k)}$. In the long-wavelength limit, we can perform an expansion up to the fourth order in $k$:
\begin{equation}
\label{eq:p2_p4_dispersion}
\varepsilon_k = \varepsilon_0 + \frac{\alpha k^2}{2} + \frac{\beta k^4}{4} + \mathcal{O}(k^6),
\end{equation}
where $\alpha = 2(J_1+4 J_2)$ and $\beta=-\left(\frac{J_1}{3}+\frac{16}{3}J_2\right)$.
\begin{figure}[H]
\centering
\includegraphics[scale=0.3]{figs_suppl/j1_j2_fermions_combo_suppl.pdf}
\caption{Fermion hydrodynamics at the Lifshitz critical point ($J_2/J_1=-0.25$), which is characterized by the quartic dispersion, $\varepsilon_k \approx \frac{\beta k^4}{4}$. Panel (a) displays exact simulations of the evolution of the fermion density, while panel (b) shows the difference between the data and the solution of the recovered PDE (\ref{eq:vt_reconstr_j1j2_main}). The initial state corresponds to the ground state in the Gaussian-shaped potential Eq.~(\ref{eq:V0_gauss}), with amplitude $V_0/J=-4\cdot 10^{-3}$ and width $\sigma=0.1 L$. At large times $t \gtrsim 6\cdot 10^{3}$, a shock wave starts to form, and the semiclassical hydrodynamic approximation breaks down.}
\label{fig:hydro_p4}
\end{figure}
The hydrodynamic equation for the generalized dispersion with quadratic and quartic terms (\ref{eq:p2_p4_dispersion}) can be derived in analogy with Eq.~(\ref{eq:J_PDE}), considering separately right- and left-movers:
\begin{equation}
\label{eq:k_RL_p2_p4}
\partial_t k_R + (\alpha k_R + \beta k_R^3)\partial_x k_R = 0, \quad \partial_t k_L + (\alpha k_L + \beta k_L^3)\partial_x k_L = 0.
\end{equation}
To derive the hydrodynamic equations, we proceed by analogy with Section~\ref{sec:fermion_hydro_parabol}. The expressions for the fermion density and current read:
\begin{equation}
\label{eq:rho_v_defin_j1j2}
\rho(t,x) = \int_{k_L}^{k_R}\frac{dk}{2\pi} = \frac{k_R-k_L}{2\pi}, \quad \rho(t,x) v(t,x) = \int_{k_L}^{k_R} (\alpha k + \beta k^3) \frac{dk}{2\pi} = \frac{\alpha}{4\pi} (k^2_R - k^2_L) + \frac{\beta}{8\pi} (k_R^4-k^4_L).
\end{equation}
From Eq.~(\ref{eq:rho_v_defin_j1j2}) and Eq.~(\ref{eq:k_RL_p2_p4}), we obtain the exact continuity equation:
\begin{equation}
\rho_t + (\rho v)_x = 0.
\end{equation}
To express $k_{R,L}$ in terms of $\rho$ and $v$, we need to solve the following system of equations:
\begin{equation}
\label{eq:rho_v_j1_j2}
\rho = \frac{k_R-k_L}{2\pi}, \quad v = \frac{1}{2}\left[\alpha + \frac{\beta}{2}(k_R^2+k_L^2)\right](k_R+k_L).
\end{equation}
Unfortunately, in order to express $k_{R,L}$ in terms of the hydrodynamic variables $(\rho, \, v)$, one has to solve a cubic equation (\ref{eq:rho_v_j1_j2}). To simplify the problem, we consider the limit $\alpha\to 0$, corresponding to the quartic dispersion $\varepsilon_k = \beta k^4/4$. Solving Eq.~(\ref{eq:rho_v_j1_j2}) for $k_{R,L}$ and expanding the solution as a Taylor series in powers of $v/v_F$, we obtain
\begin{equation}
\label{eq:k_RL_expansion}
k_{R,L} = \pm \pi \rho + \frac{v}{\beta \pi^2 \rho^2} + \mathcal{O}(v^3/v_F^3).
\end{equation}
Here $v_F=\partial_k \varepsilon_k|_{k=k_F}=\beta \pi^3 \rho^3$ is the Fermi velocity of a Fermi gas with quartic dispersion. Substituting Eq.~(\ref{eq:k_RL_expansion}) into Eq.~(\ref{eq:k_RL_p2_p4}) and keeping terms up to second order in $v$, we obtain
\begin{eqnarray}
&v_t& + 5 v v_x - v^2\frac{\rho_x}{\rho} + \beta^2 \pi^6 \rho^5 \rho_x = 0.
\end{eqnarray}
The initial density profile can be derived in the Thomas-Fermi approximation by imposing the condition that the Fermi energy is constant:
\begin{equation}
\rho_{TF}(t=0, x) = \frac{1}{\pi}\left[\frac{4}{\beta}(E - V(x))\right]^{\frac{1}{4}}.
\end{equation}
We perform PDE reconstruction of the equation using the following dictionary of candidate terms:
\begin{equation}
\label{eq:v_dict_pde_j1j2}
v_t = G(\rho, \rho_x, \rho\rho_x, \rho^2\rho_x, \ldots, \rho^5\rho_x, \rho_{xx}, v, v_x, v_{xx}, v v_x, v^2 \rho_x, (\log{\rho})_x, v^2 (\log{\rho})_x, \ldots ),
\end{equation}
which contains terms up to order $v^2$. In order to constrain the search space, we remove terms that do not satisfy the $P$- and $T$-inversion symmetry constraint $(P,T) = (-,+)$. Performing such a preselection, we composed the dictionary $G(\cdot)$ consisting of $M=20$ candidate terms marked with a check in Table~\ref{table:t2}. The PDE recovered from the data presented in Fig.~\ref{fig:hydro_p4}(a) using the BruteForce and the CrossEntropy algorithms reads
\begin{equation}
\label{eq:vt_reconstr_j1j2}
v_t \approx -4.98 v v_x - 225.7 \rho^5 \rho_x, \quad (J_1=0.5,\, J_2=-0.125,\, \beta=0.5).
\end{equation}
The term $\sim v^2 \rho_x/\rho$ is missing in the recovered Eq.~(\ref{eq:vt_reconstr_j1j2}); however, this term turns out to be negligible in the regime of parameters considered here. In Fig.~\ref{fig:hydro_p4}(b), we compare the solution of the inferred PDE (\ref{eq:vt_reconstr_j1j2}) with the original data. We find that adding the $L_2$ regularization term $\lambda_2$ to the loss function $\mathcal L = ||U_t - \Theta \cdot \xi||_2 + \lambda_0 ||\xi||_0 + \lambda_2 ||\xi||_2^2$ stabilizes the regression problem in the presence of highly nonlinear terms. Without $L_2$ regularization, we obtain extremely large values of the regression coefficients $\xi$. On the other hand, even a very small value of the penalty constant, such as $\lambda_2=10^{-12}$, suffices. Interestingly, the STRidge algorithm was not able to identify the correct PDE.

\subsection{Fermion current and velocity in lattice simulations}\label{sec:particle_current}
In this subsection, we derive expressions for the particle current and velocity for the 1D tight-binding Hamiltonian with and without interactions. Let us consider a subsystem cut between the lattice sites $i=i_0-1$ and $i=i_0$. The current between these two parts is defined via particle number conservation, i.e.
\begin{equation}\label{eq:j_ham}
j:=-\left\langle \frac{d N_{L}}{d t}\right\rangle=-i\left\langle\left[H, N_{L}\right]\right\rangle,
\end{equation}
where $N_{L}=\sum_{i<i_0} c_{i}^{\dagger} c_{i}$ is the number of particles to the left of the cut, $H$ is the system's Hamiltonian, and $\langle\ldots\rangle$ is the expectation value. We consider the tight-binding Hamiltonian (\ref{eq:H_j1_j2}) with the nearest-neighbour and next-nearest-neighbour hopping terms. The particle current can be expressed as
\begin{equation}\label{eq:j_commutator}
j(t, i_0)=-i \sum_{i<i_0, n} J_{1}\left\langle\left[c_{n}^{\dagger} c_{n+1}, c_{i}^{\dagger} c_{i}\right]\right\rangle+J_{1}\left\langle\left[c_{n+1}^{\dagger} c_{n}, c_{i}^{\dagger} c_{i}\right]\right\rangle+J_{2}\left\langle\left[c_{n}^{\dagger} c_{n+2}, c_{i}^{\dagger} c_{i}\right]\right\rangle+J_{2}\left\langle\left[c_{n+2}^{\dagger} c_{n}, c_{i}^{\dagger} c_{i}\right]\right\rangle.
\end{equation}
The expectation value of the commutator reads
\begin{equation}
\sum_{i<i_0}\left\langle\left[c_{a}^{\dagger} c_{b}, c_{i}^{\dagger} c_{i}\right]\right\rangle =\sum_{i<i_0}\left(\delta_{i b}-\delta_{i a}\right)\left\langle c_{a}^{\dagger} c_{b}\right\rangle=\left\{\begin{array}{ll}
-\left\langle c_{a}^{\dagger} c_{b}\right\rangle, & a<i_0,\, b \geq i_0, \\
\left\langle c_{a}^{\dagger} c_{b}\right\rangle, & b<i_0,\, a \geq i_0, \\
0, & \text { otherwise.}
\end{array}\right.
\end{equation}
Thus, the current at the cut between the sites $i-1$ and $i$ has the following form
\begin{equation}\label{eq:j_definit}
j(t,i)=i J_{1}\left[\mathcal{G}_{i,i-1}(t)-\mathcal{G}_{i-1,i}(t)\right]+i J_{2}\left[\mathcal{G}_{i,i-2}(t)-\mathcal{G}_{i-2,i}(t)\right]+i J_{2}\left[\mathcal{G}_{i+1,i-1}(t)-\mathcal{G}_{i-1,i+1}(t)\right],
\end{equation}
where $\mathcal{G}_{a b}(t):=\left\langle c_{b}^{\dagger}(t) c_{a}(t)\right\rangle$ is the single-particle density matrix. The fermion velocity is related to the particle current as $v(t,i)=j(t,i)/\rho(t,i)$, where $\rho(t,i)=\left\langle c^\dag_i c_i \right\rangle$ is the on-site fermion density. We employ the expression~(\ref{eq:j_definit}) to construct datasets for the fermion velocity field for the PDE-learning algorithm both in the main text and throughout Section~\ref{sec:fermions} of the Supplementary Material. Note that the expression for the current~(\ref{eq:j_definit}) remains unchanged in the presence of Fermi-Hubbard-type interactions, e.g.~$V_{\rm int}=U \sum_i n_i n_{i+1}$, since the interaction term $V_{\rm int}$ commutes with the fermion number operator $n_i=c^\dag_i c_i$ and therefore does not contribute to the particle current in~(\ref{eq:j_ham}). We use this fact later in Sections~\ref{sec:inter_ferm_hydro} and~\ref{sec:viscosity} when performing PDE-learning of hydrodynamics of interacting fermions.

\subsection{Global symmetries and term preselection}\label{sec:global_symmetries}
Prior knowledge of global symmetries, such as invariance with respect to time-reversal $(T)$ and spatial-inversion $(P)$ transformations, provides a powerful method to significantly reduce the number of candidate terms in the dictionary. The transformation properties of the fermionic density and velocity are:
\begin{equation}
P(\rho)=1, \quad T(\rho)=1, \quad P(v)=-1, \quad T(v)=-1.
\end{equation}
A summary of the preselection procedure for fermionic systems obeying $P$- and $T$-inversion symmetries is presented in Table~\ref{table:t2}.
\begin{center}
\begin{table}[htbp]
\caption{Example of candidate terms for the rhs of the Euler equation $v_t = G(\cdot)$, Eq.~(\ref{eq:v_dict_pde_j1j2}).
Selected terms (see last column) have the following signature with respect to $P$- and $T$-inversion: $(P,T)=(-,+)$.}
\label{table:t2}
\begin{tabular}{|c|c|c|c|}
\hline
\textrm{Terms} & $P$ & $T$ & \textrm{Select}\\
\hline
$const$, $\rho$, $\rho^2$, $\rho^3$, $\ldots$, $\rho^5$, $v^2$ & + & + & $\times$\\
$\rho_x$, $\rho \rho_x$, $\ldots$ $\rho^5\rho_x$ & - & + & \checkmark\\
$\rho_{xx}$, $\rho \rho_{xx}$, $\rho^2 \rho_{xx}$, $\ldots$, $v^2 \rho_{xx}$ & + & + & $\times$\\
$v^2\rho$, $v^2 \rho_{xx}$ & + & + & $\times$\\
$\rho_x v_x$, $v_{xx}$, $\rho v_{xx}$, $\rho^2 v_{xx}$& - & - & $\times$\\
$\rho_x^2$, $v_x^2$ & + & + & $\times$\\
$v$, $\rho v$, $\rho^2 v$, $\rho^3 v$ & - & - & $\times$\\
$v_x$, $\rho v_x$, $\rho^2 v_x$ & + & - & $\times$\\
$v v_x$, $\rho v v_x$, $\rho^2 v v_x$, $\ldots$, $\rho^5 v v_x$ & - & + & \checkmark\\
$v^2 \rho_x$, $v^2 \rho \rho_x$, $\ldots$, $v^2 \rho^5 \rho_x$, $(\log{\rho})_x$, $v^2 (\log{\rho})_x$ & - & + & \checkmark\\
\hline
\end{tabular}
\end{table}
\end{center}

\subsection{Learning of hydrodynamic PDEs from partial observations}\label{sec:partial_observ}
In an experimental setting, it is quite common that only some physical observables can be directly measured. In this subsection, we propose two approaches for PDE-learning of hydrodynamic equations from partial observations, when only data for the evolution of the density (but not the velocity) is available. Such a situation is common in ultracold-atom experiments, since it is relatively easy to measure the density of the atomic cloud via optical absorption, but it is hard to directly measure the velocity field of the atomic cloud in situ.

The evolution of the density between different time snapshots can be considered as a ``movie'' that contains information about the particle velocity at each spatial point. The velocity field can then be extracted by integrating the continuity equation, Eq.~(\ref{eq:euler_rho}):
\begin{equation}\label{eq:v_hidden}
v(t,x) = -\frac{1}{\rho(t,x)} \partial_t \left[\int_{-\infty}^x dy\, \rho(t, y)\right].
\end{equation}
The right-hand side of Eq.~(\ref{eq:v_hidden}) can be directly evaluated from the data at the spatiotemporal points of interest. After the extraction of the velocity field $v(t,x)$, we can proceed with the standard PDE-reconstruction procedure for the hydrodynamic equation for the velocity, described in Section~\ref{sec:fermion_hydro}.

Applying the method described above to the $\rho(t,x)$ data shown in Fig.~\ref{fig:hydro_vs_exact}, we reconstruct the Euler equation from the library of candidate terms $v_t=G(\cdot)$, Eq.~(\ref{eq:v_dict_hydro}), using the BruteForce, CrossEntropy, and STRidge algorithms (all three algorithms lead to the same result),
\begin{equation}
v_t + 0.97 v v_x + 9.42 \rho \rho_x = 0,
\end{equation}
which is in good agreement with the theoretically expected equation (\ref{eq:euler_v}) and nearly identical to Eq.~(\ref{eq:euler_recovered_v}), where both density and velocity data were used for PDE learning.

The method presented above efficiently solves the problem of reconstructing the Euler equation for the velocity $v_t=G(\cdot)$ from partial observations (only from the density data $\rho(t,x)$). However, this method has a few drawbacks. (i) The velocity reconstruction procedure via Eq.~(\ref{eq:v_hidden}) introduces additional numerical errors due to the finite-difference computation of the time-derivative $\partial_t [\ldots]$.
(ii) The situation worsens in regions where the density approaches zero, $\rho\to 0$: the vanishing denominator in Eq.~(\ref{eq:v_hidden}) amplifies numerical errors. (iii) The velocity reconstruction trick (\ref{eq:v_hidden}) works only if the continuity equation is valid. In problems where the total number of particles is not conserved (e.g.~in the presence of three-body loss in cold-atom experiments), the continuity equation is no longer exact and has to be modified with appropriate loss terms.

Problem (ii) can be partially alleviated by considering the hydrodynamic equation for the particle current,
\begin{equation}
(\rho v)_t + (\rho v^2)_x = - \partial_x P(\rho),
\end{equation}
so that we search for an unknown equation of the form $(\rho v)_t = G(\cdot)$. We utilized this method in the main text for extracting PDEs from experimental data (boson gas expansion on an atom chip).

In addition, we propose a modified PDE-learning method to address problems (ii) and (iii). We assume that
\begin{eqnarray}
&\rho_t& = F(\xi_1; \rho, v, \rho_x, \rho_{xx}, v_x, v_{xx}, \ldots),\\
&v_t& = G(\xi_2; \rho, v, \rho_x, \rho_{xx}, v_x, v_{xx}, \ldots),
\end{eqnarray}
where $\xi_{1,2}$ are the coefficients that parametrize the functions $F$ and $G$. We define the objective function as
\begin{equation}\label{eq:obj_func_pdesolve}
\mathcal{L}(\xi_{1,2}; \rho^*, \lambda_0) = \sum_{t_k, x_i} \left|\rho^{*}(t_k, x_i) - \textrm{PDESolve}\left( t_k, x_i, \rho_0, v_0, F(\xi_1; \cdot), G(\xi_2; \cdot), BC\right)\right| + \lambda_0 ||\xi_2||_0,
\end{equation}
where $\rho_0$ and $v_0$ are the initial conditions at $t=0$, $BC$ represents a set of boundary conditions, and $\rho^*(t_k, x_i)$ are the data points for the density evaluated at the spatiotemporal grid $\{t_k, x_i\}$. $\textrm{PDESolve}(\ldots)$ denotes a PDE solver that takes the initial and boundary conditions and the coefficients $\xi_{1,2}$ parametrizing the unknown functions $F$ and $G$, and outputs a solution $\rho_{pde}$ at the grid points $\{t_k, x_i\}$. We assume that we know the initial conditions for the velocity [in quench experiments with ultracold atoms, one usually has $v(t=0, x)=0$]. The goal is to find a set of coefficients $\xi_{1,2}$ that minimizes the objective function:
\begin{equation}\label{eq:xi_12_solver}
\xi_{1,2} = \argmin_{\xi_{1,2}} (\mathcal{L}(\xi_{1,2}; \rho^*, \lambda_0)).
\end{equation}
Optimization of Eq.~(\ref{eq:xi_12_solver}) is computationally more costly than the sparse-regression methods discussed in Sec.~\ref{sec:sparse_regr}. The former approach requires (i) discrete optimization to find optimal combinations of non-zero terms, (ii) continuous optimization to find optimal values of the PDE coefficients $\xi_{1,2}$ (e.g.~by a gradient-descent algorithm), and (iii) solution of the PDE forward in time at each optimization step in order to evaluate the current value of the objective function (\ref{eq:obj_func_pdesolve}). The computational cost of this algorithm can be significantly reduced by combining PDE solvers and reverse-mode automatic differentiation~\cite{yashchuk2020bringing}.
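A minimal sketch of the objective function (\ref{eq:obj_func_pdesolve}) is given below. Here \texttt{pde\_solve} is a hypothetical forward solver (not a library routine) that integrates the candidate system from the initial conditions; since the $L_0$ term is constant for a fixed sparsity pattern, a derivative-free method such as Powell can handle the continuous part.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

def objective(xi2, xi1, rho_data, rho0, v0, t_grid, x_grid, lam0):
    # data misfit plus L_0 sparsity penalty, cf. the objective above;
    # pde_solve is a hypothetical helper that integrates
    # rho_t = F(xi1; .), v_t = G(xi2; .) starting from (rho0, v0)
    rho_pde = pde_solve(rho0, v0, xi1, xi2, t_grid, x_grid)
    misfit = np.abs(rho_data - rho_pde).sum()
    return misfit + lam0 * np.count_nonzero(np.abs(xi2) > 1e-8)

# continuous optimization for a fixed set of active terms, e.g.
# res = minimize(objective, xi2_init,
#                args=(xi1, rho_data, rho0, v0, t_grid, x_grid, lam0),
#                method="Powell")
\end{verbatim}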
\subsection{Could the evolution of the density of non-interacting fermions be described by a single PDE?}\label{sec:single_pde_rho}
In this subsection, we address the following question: is it possible to rewrite the coupled system of hydrodynamic equations describing fermion dynamics,
\begin{equation}
\begin{cases}
\rho_t+(\rho v)_x=0,\\
v_t + v v_x + \pi^2 \rho \rho_x = 0,
\label{eq:euler_system}
\end{cases}
\end{equation}
as a single closed-form PDE for the fermion density? The answer is positive, although, surprisingly, we did not find such an equation in the literature. We introduce an auxiliary variable $w$ as
\begin{eqnarray}
&&\rho v = - w_t, \label{eq:w_t} \\
&&\rho = w_x, \label{eq:w_x}
\end{eqnarray}
so that the continuity equation is automatically satisfied. Solving Eq.~(\ref{eq:w_x}), we obtain $w(t,x) = \int_0^x dx'\, \rho(t,x') + g(t)$, where it is easy to show that $g(t)=\mathrm{const}$ due to the fixed boundary conditions at $x\to \pm \infty$. The physical meaning of $w(t,x)$ is the number of fermions to the left of coordinate $x$. Expressing the velocity from Eqs.~(\ref{eq:w_t},~\ref{eq:w_x}) as $v=-w_t/w_x$ and substituting the result into the Euler equation (\ref{eq:euler_system}), we obtain
\begin{equation}
\label{eq:w_final}
-\partial_t \left(\frac{w_t}{w_x}\right) + \frac{w_t}{w_x}\left(\frac{w_t}{w_x}\right)_x + \pi^2 w_x w_{xx} = 0.
\end{equation}
The resulting PDE (\ref{eq:w_final}) is second-order in time and depends only on the fermion density $\rho(t,x)$ via $w$. Unfortunately, Eq.~(\ref{eq:w_final}) has no transparent physical interpretation, to our knowledge. Although the PDE-learning methodology allows one to reconstruct a second-order-in-time equation $w_{tt}=F(w_t, w_x, w_{tx}, \ldots)$, such an equation would be harder to interpret than a conventional hydrodynamic system of equations for the density and the velocity.

\subsection{Hydrodynamics of interacting fermions: emergent Euler and Navier-Stokes equations}\label{sec:inter_ferm_hydro}
While in the previous subsections we considered hydrodynamics in a non-interacting fermion gas, here we focus on the case of interacting fermions described by the 1D spinless Fermi-Hubbard model:
\begin{equation}\label{eq:F-H_model}
H = - J\sum_{i} (c^\dag_i c_{i+1} + c^\dag_{i+1} c_i) + U\sum_{i} n_i n_{i+1} - \mu \sum_i c^\dag_i c_i,
\end{equation}
where $U$ is the interaction constant, and $\mu$ is the chemical potential of the fermion gas. We perform a search for hydrodynamic-type equations for the fermion density and velocity using the following form of hydrodynamic equations:
\begin{equation}
\begin{cases}
\rho_t+(\rho v)_x = 0,\\
v_t + v v_x = G(\rho_x, \rho_{xx}, v_x, v_{xx}, \ldots),
\end{cases}
\end{equation}
where the function $G(\cdot)$ is unknown. We use a truncated library of terms corresponding to Table~\ref{table:t2}, where the terms are preselected based on the $P$-parity transformation, $P=-1$; a short sketch of this preselection step is given below. Using the BruteForce search algorithm (with $\lambda_0=10^{-2}$), we found the following Euler equation,
\begin{eqnarray}
\label{eq:kappa_term}
v_t+v v_x + \kappa(U)\rho \rho_x = 0,
\end{eqnarray}
describing the dynamics of interacting fermions.
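A minimal sketch of the parity-based preselection mentioned above is shown here; the encoding of a candidate term as a list of (field, number of $x$-derivatives) factors is a hypothetical illustration of the bookkeeping.
\begin{verbatim}
# parity signatures (P, T) of the fields rho and v
BASE = {"rho": (+1, +1), "v": (-1, -1)}

def signature(term):
    # term is a list of (field, n_x) factors,
    # e.g. rho*rho_x -> [("rho", 0), ("rho", 1)]
    P = T = 1
    for field, n_x in term:
        p, s = BASE[field]
        P *= p * (-1) ** n_x   # each d/dx flips the spatial parity
        T *= s                 # d/dx does not affect T
    return P, T

candidates = {
    "rho rho_x": [("rho", 0), ("rho", 1)],
    "v v_x":     [("v", 0), ("v", 1)],
    "rho v_x":   [("rho", 0), ("v", 1)],
    "v rho_x":   [("v", 0), ("rho", 1)],
}
# keep only terms transforming like v_t, i.e. (P, T) = (-1, +1)
kept = [name for name, t in candidates.items() if signature(t) == (-1, 1)]
print(kept)   # -> ['rho rho_x', 'v v_x']
\end{verbatim}
A filter of this kind reduces the full dictionary to the checked rows of Table~\ref{table:t2}.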
We now provide a summary of our analysis, consolidate some of the statements from the main text, and expand on the connection between the discovered Euler/Navier-Stokes fluid models and the Tomonaga-Luttinger theory: \begin{itemize} \item In the limit of vanishing interaction strength, $U\to 0$, the coefficient $\kappa$ approaches the value $\kappa\approx \pi^2/m^2=4J^2\pi^2$ (with the band mass $m=1/2J$), in agreement with the free-fermion theory for quadratic dispersion. The minor deviations from the predicted theoretical value $\kappa(U=0)\approx 4J^2\pi^2$ are primarily due to lattice dispersion corrections, as discussed in Sec.~\ref{sec:cos_corrections}. \item In the limit of small density perturbations, $\delta\rho \ll \rho_0$, Eq.~(\ref{eq:kappa_term}) can be linearized, and the density dynamics takes the form $\Bigl[\partial_t^2-v_{\rm eff}^2(U)\partial_x^2\Bigr]\rho(t,x)=0$, where $v_{\rm eff}(U)=\rho_0\sqrt{\kappa(U)}$ is a renormalized quasiparticle velocity. This equation can be derived independently in the framework of the Tomonaga-Luttinger (T-L) theory~\cite{fradkin2013field}, which predicts a renormalized velocity of the form \begin{equation}\label{eq:vf_TL} v_{\rm eff}(U)=v_{F0}\sqrt{1+\frac{U}{2\pi v_{F0}}}, \end{equation} where $v_{F0}=\pi\rho_0$ is the Fermi velocity in the non-interacting limit. Hence, given the relation between the effective velocity and $\kappa(U)$, the value of $\kappa(U)$ in this regime must be close to $\kappa(U)=\kappa(0)\left[1+U/(2\pi^2\rho_0)\right]$. Indeed, we find quantitative agreement between the extracted values of $\kappa(U)$ and the T-L prediction for interaction strengths in the region $U/J \lesssim 1$. However, in the strongly interacting regime, $U/J\gg 1$, the observed values of $\kappa$ deviate significantly from the T-L theory (see Fig.~5 in the main text). Notably, the effective hydrodynamic model (\ref{eq:kappa_term}) works even beyond the linearized regime, for relatively large density perturbations, in both the weakly- and strongly-interacting limits. \item Although the linearization of the discovered Euler equation (\ref{eq:kappa_term}) is equivalent to the T-L theory, to the best of our knowledge the non-linear term $\kappa(U)\rho\rho_x$ cannot be explicitly derived from the T-L theory. \item Larger perturbations and longer evolution times exhibit notable features that cannot be captured by a renormalized $\kappa(U)$ alone. Lowering the penalty constant $\lambda_0$ and extending the library of candidate terms, we found that the deviations are well captured by the equation \begin{equation} v_t+v v_x + \kappa \rho \rho_x = \nu v_{xx}. \label{eq:kappa_nu} \end{equation} A comparison between the TEBD data and the solution of the Navier-Stokes model (\ref{eq:kappa_nu}) is shown in Fig.~\ref{fig:navier-stokes}. The viscosity term on the right-hand side significantly improves agreement with the TEBD data at long evolution times. Although the viscosity term breaks the time-reversal invariance of the effective Navier-Stokes model, this does not conflict with the underlying unitary dynamics, owing to the presence of relaxation processes: short-range interactions result in an entropy flow from small-scale fluid cells to large-scale structures, by analogy with classical hydrodynamics. A more accurate analysis shows that the universal value of the viscosity emerges only in the late-time dynamics, see Sec.~\ref{sec:viscosity}. Notably, viscosity effects lie beyond the linear T-L theory.
\item We verified that both the Euler and Navier-Stokes PDEs~(\ref{eq:kappa_term}, \ref{eq:kappa_nu}) remain accurate for various initial states, see Fig.~\ref{fig:double_peak}. \end{itemize} \begin{figure}[h] \centering \includegraphics[scale=0.4]{./figs_suppl/inter_fermions_u=0.5_suppl.pdf} \caption{(a, b) TEBD data for the evolution of the fermion density and velocity in the spinless Fermi-Hubbard model (\ref{eq:F-H_model}). (c) Difference between the solution of the reconstructed hydrodynamic PDE (``Navier-Stokes model'', Eq.~(\ref{eq:kappa_nu})) and the TEBD data ($U/J=4$). In our simulations, we fixed the value of the chemical potential at $\mu/J=-2$.} \label{fig:navier-stokes} \end{figure} \begin{figure}[H] \centering \includegraphics[scale=0.3]{./figs_suppl/two_peak_inter_fermions_combo_suppl.pdf} \caption{Hydrodynamics in the Fermi-Hubbard model (\ref{eq:F-H_model}) in the strongly interacting regime, $U/J=4$. (a, b) Evolution of the fermion density $\rho(t,x)$ and velocity $v(t,x)$ for an asymmetric double-peak initial density profile. This plot demonstrates that the discovered Euler and Navier-Stokes equations (\ref{eq:kappa_term}, \ref{eq:kappa_nu}) remain valid for various initial conditions. } \label{fig:double_peak} \end{figure} \subsection{Analysis of an emergent Navier-Stokes equation and the viscosity coefficient}\label{sec:viscosity} In this subsection, we discuss the emergent viscosity term in the hydrodynamics of the 1D Fermi-Hubbard model in more detail and point out limitations of our analysis. In order to break the integrability of the spinless Fermi-Hubbard model and consider a more generic case, we introduce next-nearest-neighbor fermion-fermion interactions: $H=H_{\rm f}+V_{\rm int}$. Here $H_{\rm f}$ is a tight-binding Hamiltonian and the fermion-fermion interaction has the form $V_{\rm int} = U\sum_i n_i n_{i+1}+U_2\sum_i n_in_{i+2}$, where $U$ and $U_2$ are the nearest-neighbor and next-nearest-neighbor couplings, respectively. Remarkably, the hydrodynamic equation (\ref{eq:kappa_term}) remains valid in the presence of next-nearest-neighbor interactions: see the comparison between the TEBD data and the solution of the PDE in Fig.~\ref{fig:fermi_hubbard_nnn}. In this case, the pressure renormalization coefficient becomes a function of both coupling parameters, $\kappa(U, U_2)$. Hence the hydrodynamic model (\ref{eq:kappa_term}) is applicable to a wide range of models of interacting fermions (e.g.~the Fermi-Hubbard model with nearest-neighbor interactions, next-nearest-neighbor interactions, etc.). The discovered ideal-fluid Eq.~(\ref{eq:kappa_term}) works well for short evolution times; however, we found that for longer evolution times the viscous Navier-Stokes model (\ref{eq:kappa_nu}) becomes more accurate. \begin{figure}[H] \centering \includegraphics[scale=0.3]{./figs_suppl/inter_ferm_nnn_combo_suppl.pdf} \caption{Fermion hydrodynamics in the Fermi-Hubbard model with nearest-neighbor interaction strength $U$ and next-nearest-neighbor interaction strength $U_2$. Panels (a, c) show data for the fermion density, and panels (b, d) show data for the velocity at the final evolution time, $t_f=60$. Parameters of the Fermi-Hubbard Hamiltonian are (a,~b) $U/J=4$, $U_2/J=0$, (c,~d) $U/J=2$, $U_2/J=-1$. The blue and red solid lines correspond to the discovered Euler-like equation (\ref{eq:kappa_term}) and Navier-Stokes-like equation (\ref{eq:kappa_nu}), respectively.
} \label{fig:fermi_hubbard_nnn} \end{figure} The viscosity term in Eq.~(\ref{eq:kappa_nu}) discovered by our algorithm plays an important qualitative role: it prevents the formation of a gradient catastrophe. We found that the viscosity coefficient $\nu$ obtained from the TEBD data can drift with time, see Fig.~\ref{fig:viscosity_evolution}. We extract the time dependence of the viscosity coefficient by performing the PDE reconstruction within a sliding temporal window $[t,\, t+0.2 T]$, where $T$ is the total evolution time. At the beginning of the evolution, the effect of the viscosity term is negligible, and the effective dynamics is well described by the inviscid Euler equation. The viscosity coefficient extracted from the TEBD data is close to zero at the start of the evolution and grows with evolution time, approaching a fixed value $\nu\sim J$ at long evolution times, $t\to \infty$. We checked that the asymptotic value of the viscosity coefficient remains stable under variations of the initial state (changing the amplitude of the external potential), see Fig.~\ref{fig:viscosity_evolution}. Generally, the value of the viscosity coefficient will depend on the values of the interaction constants $U$ and $U_2$. In the non-interacting limit $U, U_2\to 0$, the viscosity coefficient must vanish, $\nu\to 0$. In our simulations, the magnitude of the effective viscosity term $\nu v_{xx}$ is much smaller than that of the dominant pressure term $\kappa \rho\rho_x$, which affects the precision of the extracted value of $\nu$. The saturation of the viscosity coefficient, $\nu(t\to\infty, U, U_2)\to {\rm const}$, can be interpreted as the onset of local equilibration in the interacting fermion gas. \begin{figure}[H] \centering \includegraphics[scale=0.45]{./figs_suppl/nu_vs_time.pdf} \caption{Time dependence of the extracted viscosity coefficient $\nu(U, U_2, t)$ in the extended Fermi-Hubbard model with nearest- and next-nearest-neighbor interactions ($U/J=U_2/J=2$). The viscosity is extracted within the sliding time window $[t_{start}, t_{start}+0.2 T]$, where $T$ is the total evolution time. We present data for five initial conditions corresponding to different values of the amplitude of the Gaussian potential $V_0$. At large evolution times, the viscosity coefficient saturates to a constant value $\nu(t\to \infty, U, U_2) \sim J $. } \label{fig:viscosity_evolution} \end{figure} \section{Details of numerical simulations} In this Section, we provide additional details of the numerical simulations used for data generation. In Section~\ref{sec:errors}, we analyze the errors in the coefficients of recovered PDEs and the dependence of the errors on the spatiotemporal resolution of the input data. Simulations of dynamics in the 1D XX spin chain and the non-interacting fermion chain were performed by exact diagonalization of the single-particle density matrix, $\mathcal G_{ij}(t)=\langle c_i^\dag (t) c_j(t)\rangle$. In the cases where single-magnon dynamics in the XXZ spin chain was considered, we exactly solved the Schr\"odinger equation in the single-magnon sector of the full Hamiltonian and computed the observables. TEBD simulations for the dynamics of the domain-wall initial state in the XXZ model and for the dynamics in the interacting Fermi-Hubbard model were performed with the TenPy package~\cite{tenpy}.
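For the free-fermion (and XX spin chain) datasets, the exact evolution reduces to rotating the single-particle correlation matrix. A minimal sketch of this procedure is given below (our illustration with hypothetical parameters, not the exact script used for data generation): the gas is prepared in the ground state of a tight-binding chain with a Gaussian potential bump and then released.
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

# Exact free-fermion dynamics via C_ij(t) = <c_i^dag(t) c_j(t)>.
L, J, Nf = 128, 1.0, 32
site = np.arange(L)
h_trap = -J * (np.eye(L, k=1) + np.eye(L, k=-1)) \
         + np.diag(2.0 * np.exp(-(site - L / 2)**2 / 50.0))

# Ground state: fill the Nf lowest single-particle orbitals.
_, phi = np.linalg.eigh(h_trap)
C = phi[:, :Nf] @ phi[:, :Nf].T        # C_ij(0) = sum_m phi_m(i) phi_m(j)

# Quench: evolve with the homogeneous chain; C(t + dt) = U C(t) U^dag.
h0 = -J * (np.eye(L, k=1) + np.eye(L, k=-1))
U = expm(1j * h0 * 0.1)                # dt = 0.1
density = []
for _ in range(200):
    density.append(np.real(np.diag(C)))
    C = U @ C @ U.conj().T
density = np.array(density)            # rho(t_k, x_i) input for PDE learning
\end{verbatim}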
The matrix-product-state (MPS) bond dimension was set to $\chi=200$; we checked that increasing it to $\chi=300$ resulted in only a small change in the values of the recovered coefficients of the hydrodynamic PDEs ($<0.1 \%$). \subsection{Error analysis}\label{sec:errors} In this subsection, we comment on the sources of error in our PDE-reconstruction procedure. The primary sources of error encountered in PDE learning from numerical simulations are (i) errors in the numerical schemes for the evaluation of high-order derivatives from data, (ii) numerical errors in the dataset (e.g.~truncation errors in TEBD simulations), and (iii) physical-model errors originating from higher-order corrections to the approximate PDE: corrections beyond the hydrodynamic approximation, high-order terms in the gradient expansion, etc. A substantial amount of noise in the data can confuse the sparse-regression algorithm, thereby introducing spurious terms and/or shifting the values of the extracted coefficients. Of course, experimental data is usually noisier than numerical simulations, which affects the reliability of the recovered equations. Below we discuss the role of the spatiotemporal resolution in the quality of the PDE reconstruction, see Table~\ref{tab:table_terms}. We found that the leading semiclassical terms are robustly identified with our method, even for very ``pixelated'' data, see Fig.~\ref{fig:pixelized}. \begin{table} [hbtp] \caption{Dependence of the PDE reconstruction performance on the spatiotemporal resolution $(N_t, N_x)$ of the dataset for the quench problem in the non-interacting fermion gas [see Sections \ref{sec:fermion_hydro_parabol} and \ref{sec:cos_corrections}]. While changing the resolution of the dataset, we keep the spatiotemporal extent $(T,L)$ fixed. The candidate terms in the Euler equation $v_t=G(\cdot)$ are preselected based on $(P, T)$ symmetry, resulting in $M=20$ terms, see Table~\ref{table:t2}. The leading WKB terms $v_t+v v_x + \pi^2\rho\rho_x + \ldots = 0$ were correctly identified by the BruteForce algorithm for the entire range of $(N_t,\, N_x)$ presented. When decreasing the number of spatiotemporal points, some of the correction terms originating from the tight-binding dispersion ($b_1 \rho^3\rho_x$, $b_2 v^2\rho \rho_x$, $b_3 \rho^2 v v_x$) were misidentified; the number of misidentified terms is shown in the entries of the table.} \label{tab:table_terms} \includegraphics[scale=0.2]{figs_suppl/table_error_terms.pdf} \end{table} The statistical uncertainty in the values of the regression coefficients could be estimated if the covariance matrix of the error term $\epsilon = y - A \xi$ were known, $\Sigma = \mathbb{E}[\epsilon\, \epsilon^T]$. However, for a given dataset, the residual term $\epsilon$ is a fixed vector, and therefore its covariance is not known. As an alternative approach to estimating the statistical error in $\xi$, we randomly select subsets of rows of the regression vector and the regression matrix: we split the data into 10 batches of equal size, each containing $10\%$ of the data points. Next, we find the regression vector $\xi$ for each batch and estimate the uncertainty in the regression coefficients as the element-wise standard deviation of the values of $\xi$ across batches. The resulting statistical error, as well as the empirical error $|\xi-\xi_{true}|$ (the deviation of the recovered coefficients from the theoretically expected values $\xi_{true}$), is shown in Fig.~\ref{fig:empir_stat_err}.
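A minimal sketch of this batch-based uncertainty estimate is given below (our illustration; a random matrix stands in for the library of candidate terms $A$ and for the left-hand-side vector $y$):
\begin{verbatim}
import numpy as np

# Batch-based estimate of the statistical error in the regression vector xi.
rng = np.random.default_rng(0)
N, M = 10000, 5
A = rng.normal(size=(N, M))                  # stand-in for the term library
xi_true = np.array([-1.0, -np.pi**2, 0.0, 0.0, 0.0])
y = A @ xi_true + 1e-3 * rng.normal(size=N)  # stand-in for the v_t data

batches = np.array_split(rng.permutation(N), 10)  # 10 batches of 10% each
xis = np.array([np.linalg.lstsq(A[b], y[b], rcond=None)[0] for b in batches])

xi_stat_err = xis.std(axis=0)                # element-wise std across batches
print(xis.mean(axis=0), xi_stat_err)
\end{verbatim}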
Comparing the left and right panels of Fig.~\ref{fig:empir_stat_err}, we see that the statistical error remains significantly smaller than the empirical error. Therefore, the uncertainties in the values of the regression coefficients are primarily systematic, generated by noise in the numerically calculated derivatives as well as by the mutual bias from higher-order nonlinear terms. Additionally, we analyze the dependence of the reconstruction error on the choice of the upper cutoff $t_f$ of the evolution time in the dataset, see Fig.~\ref{fig:tmax_err}. \begin{figure}[hbtp] \centering \includegraphics[scale=0.3]{figs_suppl/coarse_combo_suppl.pdf} \caption{Fermion hydrodynamics (non-interacting fermions): PDE learning from low-resolution data for (a) density and (b) velocity. When significantly decreasing the data resolution to $(N_t,N_x)=(10,10)$, our algorithm was still able to identify the correct form of the hydrodynamic PDE, although the error in the values of the coefficients became significant: $\rho_t + 1.39\, \rho v_x + 1.14\,v \rho_x = 0$ ($\lambda_0=10^{-5}$), and $v_t+1.94\, v v_x + 10.3 \rho \rho_x = 0$ ($\lambda_0=10^{-4}$). } \label{fig:pixelized} \end{figure} \begin{figure} \centering \includegraphics[scale=0.3]{figs_suppl/empir_error_suppl.pdf} \caption{Dependence on the dataset temporal resolution $N_t$ of the empirical error $|\xi-\xi_{true}|$ in the recovered coefficients of the $v v_x$ and $\rho \rho_x$ terms in the PDE $v_t=\xi_{v v_x} v v_x + \xi_{\rho \rho_x}\rho\rho_x$. Here $\xi_{true}=(\xi_{v v_x},\xi_{\rho\rho_x})=-(1,\pi^2)$ corresponds to the theoretical values of the coefficients of the leading WKB terms. Scattered points show the empirical error in the coefficients (a) $\xi_{vv_x}$ and (b) $\xi_{\rho\rho_x}$ for individual random batches of subsampled data. This plot illustrates that the uncertainty in the reconstruction of coefficients in nonlinear PDEs is primarily systematic, since the spread of the error distribution across batches is much smaller than the mean error for a fixed value of $N_t \in [25, 1000]$. The number of spatial points was fixed at $N_x=1000$. The input dataset corresponds to Fig.~\ref{fig:hydro_vs_exact}. } \label{fig:empir_stat_err} \end{figure} \begin{figure}[H] \centering \includegraphics[scale=0.3]{figs_suppl/err_xi_combo_suppl.pdf} \caption{(a) Dependence of the reconstruction error of the coefficients of the leading WKB terms, $v_t=\xi_{vv_x}vv_x + \xi_{\rho\rho_x}\rho\rho_x$, on the choice of the training time window. The reconstruction error is defined as $|\xi-\xi_{true}|$. (b) Input data for the evolution of the fermion density, $\rho(t,x)$. The solid horizontal black lines in (b) show the upper cutoff $t_f$ of the training window $t\in [0, t_f]$. At large values $t_f\sim T$, the reconstruction errors in $\xi_{v v_x}$ and $\xi_{\rho \rho_x}$ grow due to high spatial gradients of the density and the velocity (the ``gradient catastrophe''). The gradient catastrophe is a well-known feature of the semiclassical description of fermionic dynamics~\cite{mirlin2013}.} \label{fig:tmax_err} \end{figure} \section{Details of PDE-learning from experimental data: interacting bosons on an atom chip} In this Section, we provide details of PDE learning from experimental data, including details of data post-processing and interpolation. We process the experimental data as follows. The data from Ref.~\onlinecite{schemmer2019generalized} contains density profiles of the atomic cloud. The spatiotemporal resolution of the original data is $[565 \times 7]$ (space $\times$ time).
The seven experimental snapshots at different evolution times correspond to the time range $t\in[0, 85]$~ms. The total length of the 1D atomic cloud is $L\sim 10^{3}\,\mu$m. The original post-processed experimental data contains high-frequency spatial noise resulting in negative values of the measured density $\rho(t,x)$. In order to reconstruct continuous equations, we remove the spatial noise and increase the time resolution. We first remove the high-frequency component of the spatial noise by applying a Gaussian filter with the variance parameter $\sigma_x/L=2.5\times 10^{-2}$ to each of the seven snapshots. Next, to suppress the remaining low-frequency noise, we apply the Savitzky--Golay~\cite{savitzky1964smoothing} filter with a sliding window of length 41 and a polynomial of order 2. Finally, we perform cubic 2D interpolation of the resulting data in order to increase the resolution along the temporal dimension, which results in a final dataset with spatiotemporal dimensions $[200\times 200]$. The data for the particle velocity was reconstructed by leveraging the continuity equation, Eq.~(\ref{eq:v_hidden})---the result is shown in Fig.~\ref{fig:atom_chip_interp}(b). The interpolation may impact the precision of the learning procedure. For example, the interpolated particle velocities at the start of the quench (i.e.~$t=0$) are non-zero, see Fig.~\ref{fig:atom_chip_interp}(b). In contrast, based on the experimental quench protocol, we expect zero particle velocity immediately after the quench of the confining double-well potential. This mismatch is a byproduct of the insufficient temporal resolution of the original data. The issue, however, has a limited adverse impact, since the inferred velocities at the initial times are not too large. Also, while solving the inferred PDE forward, Fig.~\ref{fig:atom_chip_interp}(c), we used $\rho(t=0,x)=\rho_{data}(0,x)$ and $v(t=0,x) = 0$ as the initial conditions. \begin{figure} \centering \includegraphics[scale=0.4]{figs_suppl/atom_chip_combo_suppl.pdf} \caption{(a, b) Post-processed experimental data [from Ref.~\onlinecite{schemmer2019generalized}] for (a) the density $\rho(t,x)$ and (b) the particle current $j(t,x)=\rho v$, corresponding to boson cloud expansion on an atom chip from a double-well potential. Here $t_{max}=85$~ms is the maximum evolution time in the experimental dataset. The particle-current data was reconstructed from the density data $\rho(t,x)$ by utilizing the continuity equation. (c) Atom cloud density (in arbitrary units): original unprocessed data (thick fading line), post-processed data (dashed line), and the solution of the inferred PDE (solid line, Eq.~(20) in the main text). The individual density profiles at different times were shifted vertically relative to each other for visualization purposes (black horizontal lines correspond to the origin of the vertical axis). The seven density profiles in (c) correspond to the evolution times labeled in (a).} \label{fig:atom_chip_interp} \end{figure}
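For completeness, a minimal sketch of this post-processing pipeline is given below (our illustration: the raw-data array is a random placeholder, and the parameter values follow the description above):
\begin{verbatim}
import numpy as np
from scipy.ndimage import gaussian_filter1d
from scipy.signal import savgol_filter
from scipy.interpolate import RectBivariateSpline

# Placeholder for the measured snapshots: 7 times x 565 spatial points.
t_raw = np.linspace(0.0, 85.0, 7)      # ms
x = np.linspace(0.0, 1.0e3, 565)       # micrometers
rho_raw = np.random.rand(7, 565)

# (1) Gaussian filter with sigma_x / L = 2.5e-2, converted to grid points.
L_box = x[-1] - x[0]
sigma_pts = 2.5e-2 * L_box / (x[1] - x[0])
rho = gaussian_filter1d(rho_raw, sigma=sigma_pts, axis=1)

# (2) Savitzky-Golay filter: window of 41 points, polynomial order 2.
rho = savgol_filter(rho, window_length=41, polyorder=2, axis=1)

# (3) Cubic 2D interpolation onto a 200 x 200 spatiotemporal grid.
spline = RectBivariateSpline(t_raw, x, rho, kx=3, ky=3)
t_new = np.linspace(t_raw[0], t_raw[-1], 200)
x_new = np.linspace(x[0], x[-1], 200)
rho_interp = spline(t_new, x_new)      # shape (200, 200)
\end{verbatim}
\end{document}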
\section{\label{sec:intro}Introduction} Networks have increasingly become important abstractions for representing a plethora of real-world complex systems. These span the variety of social networks of friendship connections, ecological networks of predators and prey, biochemical networks of proteins and metabolites, and technological networks like power grids or the internet \cite{strogatz2001networks, albert2002networks, newman2003structure, wasserman1994sna}. Often, network structures are not entirely known, and are inferred from partial observations of connections, or of signals generated by the system \cite{newman2018inference, goldenberg2010inference, peixoto2019dynamics, godoylorite2021socialising, hoffmann2020communityunobserved}. Inference must account for confounding by many sources of uncertainty, especially measurement errors, which has resulted in a surge of statistical inference methods that yield a probabilistic network model \cite{peixoto2018reconstructing}. Moreover, many real-world networks, such as social networks of entire societies, tend to contain a large number of nodes. Estimating properties of very large networks can be computationally prohibitive, especially if one is interested in higher-order properties such as the size of the giant component, or the average distance between any two nodes in the network. Both of these bear on overall graph connectivity, network robustness, percolation properties and system synchronizability \cite{cohen2000percolation, callaway2000percolation, newman2001random, nishikawa2003sync}, which have been studied in depth for many physical, biological and social systems. The principal motivation for this work is to analytically estimate such network properties by having access either to just an expectation of network realizations, or to a statistical network model, without simulating networks or obtaining an error-free observation of one. In particular, we focus on geodesic properties of a network, and establish the distribution of shortest path lengths on the giant component in the supercritical regime---when the giant component exists---or on small components in the subcritical regime---when the giant component does not exist. \paragraph*{Problem setting.}We consider any random graph wherein edges between any two nodes are independent of one another, perhaps after conditioning on some other node variables. Many popular random graph models are instances of this kind---like the stochastic block model (SBM), random dot-product graph (RDPG), random geometric graph (RGG) or, more generally, any inhomogeneous random graph. Models explicitly encoding higher-order interactions and dependencies, such as exponential random graph models \cite{harris2013ergm}, are excluded from this study. We assume network sparsity in the sense that the number of edges varies linearly in the number of nodes. We derive a set of recursive relations in the asymptotic limit that stipulate the full shortest path length distribution (SPLD) between any two nodes on the giant component of a network in the supercritical regime, and on small components otherwise.
\paragraph*{Prior work on the SPLD.} Previous results on the SPLD have focused almost exclusively on the simplest model of Erd\H{o}s--R\'{e}nyi (ER) graphs \cite{blondel2007distance, katzav2015analytical}, or on average path lengths in degree-configuration models \cite{vanderhofstad2005distanceconfigmodel, vanderhoorn2018distancedirectedconfig} or scalar latent-variable models \cite{fronczak2004average}, and some results display appreciable discrepancies between theory and empirics in the tail of the distribution---especially for networks with small degrees \cite{blondel2007distance, katzav2015analytical}. Related work has determined the distribution of picking a path between two nodes in a given network \cite{franccoisse2017bagofpaths}---however, this approach does not directly model the distribution of shortest path \emph{lengths} between node pairs. A description of the full SPLD for a general family of network models has, before this work, remained elusive \cite{jackson2017socialnetworkeconomics}. \paragraph*{Our contributions to the SPLD.} The proposed approach provides, to the best of our knowledge, the most accurate description of the distribution of shortest path lengths between any node pair for a very general class of (possibly directed) networks. We determine a closed-form lower bound of the SPLD's survival function, which is tight for finite lengths in the asymptotic limit, and has a natural interpretation of traversing independent shortest paths in the graph. The closed form is specified by an iterated integral operator defined over functions on the node space, whose kernel function indicates the likelihood of an edge existing between two nodes. The integral operator is analogous to the expected adjacency matrix in the finite-dimensional setting, and summarizes the dependence of a node function on all other nodes in the network. If the kernel is symmetric, it leads to an expression for the SPLD in terms of the spectral decomposition of the integral operator. Under specific scenarios, our method recovers known results on geodesics, such as the small-world property of ER graphs \cite{erdos1960evolution, albert2002networks}, or the ultrasmall property of Barab\'{a}si and Albert (BA) graphs \cite{cohen2003ultrasmall}. We provide new results for the illustrative models considered, namely the SBM, Gaussian RGG, RDPG and sparse graphon, and also apply them to real-world networks. Most prior work in the field has produced analytic results concerning \emph{specific} phenomena on \emph{specific} random graph models. In contrast, our approach to the SPLD unifies the study of shortest paths, and the related phenomena of connectedness and centralities, into one theoretical framework. In Tab. \ref{tab:summary_of_results}, we include an index of the main analytical results. \paragraph*{Our contributions to network percolation theory.} Phenomena related to percolation, i.e. the random occupation of sites or bonds, are well studied in the statistical physics literature. For networks, we can consider the \emph{bond} percolation threshold, at which a giant component emerges in the network. There are analytic results on the bond percolation threshold for specific random graph models---graphs with a given degree sequence \cite{molloy1995critical, molloy1998gcc, newman2007ccconfig, kryven2017gccconfig}, or those with a power-law degree distribution \cite{callaway2000percolation, cohen2000percolation}.
Other notable works have derived thresholds for a more general class of inhomogeneous random graph models with independent edges placed according to symmetric kernels \cite{soderberg2002general, soderberg2003properties, bollobas2007phase, allard2015percolation}, and for sparse empirical networks \cite{karrer2014percolationsparse, hamilton2014sparsepercolation}, by formulating corresponding branching or message-passing processes. Here, we establish a direct relationship between the lower bound of the survival function of the SPLD and the bond percolation threshold. Our formalism adds to the literature by supplying the percolation threshold for the class of sparse random graph models with independent edges, regardless of the symmetry of the kernel function, which leads to new results and connections on percolation behavior in the models considered. Tab. \ref{tab:results} summarizes select analytical results on the bond percolation threshold. \paragraph*{Our contributions to geodesic-based centralities.} Network centralities are structural measures of the importance of nodes in a network, and those based on shortest paths, like closeness and betweenness centralities, are of great interest when studying real-world networks \cite{freeman1978centrality, borgatti2006centrality}. Prior work on dense RGGs has derived closed-form expressions for node betweenness \cite{giles2015rgg}. Previous research has also defined spectrum-based centralities for graphons \cite{avella2018centralitygraphon}, analogous to the eigenvector \cite{bonacich1972eigenvector} and Katz \cite{katz1953centrality} centralities. Here, we express closeness in closed form, and betweenness analytically, for general random graph families that include sparse graphons and RGGs as special cases. In summary, local and global properties of networks, of both empirical and theoretical interest, can be extracted from the SPLD. The article is organized as follows. In Sec. \ref{sec:spd} we describe the probabilistic framework used to derive the SPLD for random graphs defined by an ensemble-average model. In Sec. \ref{sec:general_graphs}, we extend this formalism to general random graph families, both directed and undirected, and in Sec. \ref{sec:geodesics_specific} we highlight specific cases of popular network models and real-world networks. In Sec. \ref{sec:percolation}, we draw a connection between the SPLD and the bond percolation threshold, thus deriving a condition for percolation in any random graph model. Then, in Sec. \ref{sec:path_stats}, we show how path-based statistics like average geodesic lengths, and the centralities of node closeness and betweenness, can be analytically estimated. Finally, we conclude in Sec. \ref{sec:conclusion} with a summary of our results and the limitations of this framework.
\begin{table*} \caption{\label{tab:summary_of_results}Index of equations for main results regarding (A) the shortest path length distribution, (B) percolation behaviour, (C) mean geodesic lengths, (D) closeness centrality, and (E) node betweenness centrality.} \begin{ruledtabular} \begin{tabular}{cl|c|c|c|c|c|c|c|c} & Random graph model & \makecell{Ensemble\\average} & \makecell{Independent\\edge models} & \makecell{ER\\graph} & SBM & RDPG & \makecell{Gaussian\\RGG} & \makecell{Multiplicative\\graphon} & \makecell{Scale-free\\graphon} \\ \hline\hline \multirow{3}{0.25cm}{A}& Exact recursive form $(\Psiaf_l,\Omegaaf_l),(\psiaf_l,\omegaaf_l)$& \ref{eq:spd_main}, \ref{eq:prob_connect_exact}, \ref{eq:gcc_consistency} & \multicolumn{7}{c}{\ref{eq:spd_main_general}, \ref{eq:gcc_consistency_general}, \ref{eq:init_omega_general}} \\\cline{2-10} & Closed-form $(\Psicf_l,\Omegacf_l),(\psicf_l,\omegacf_l)$ & \ref{eq:prob_connect_exact}, \ref{eq:gcc_consistency}, \ref{eq:sf_avg} & \multicolumn{7}{c}{\ref{eq:gcc_consistency_general}, \ref{eq:init_omega_general}, \ref{eq:spd_analytic_general_eig}} \\\cline{2-10} \rule{0pt}{3ex} & Apx. closed-form $(\Psiacf_l,\Omegaacf_l),(\psiacf_l,\omegaacf_l)$ & \ref{eq:sf_avg_uncorrected} & \ref{eq:spd_analytic_general_eig_uncorrected} & \ref{eq:spd_er} & \ref{eq:spd_sbm} & \ref{eq:spd_rdpg} & \ref{eq:grgg_omega_ansatz}, \ref{eq:spd_grgg_homophily} & \multicolumn{2}{c}{\ref{eq:spd_r1g_main}} \\\hline \multirow{2}{0.25cm}{B}& Percolation probability $\rho$& \ref{eq:gcc_consistency} & \ref{eq:gcc_consistency_general} & \multicolumn{2}{c|}{\ref{eq:gcc_consistency_sbm}} & \ref{eq:gcc_consistency_rdpg} & \ref{eq:rho_grgg} & \multicolumn{2}{c}{\ref{eq:gcc_consistency_r1g}}\\\cline{2-10} & Percolation threshold & \ref{eq:percolation_thresh_ensembleavg} & \ref{eq:spectral_condition} & \ref{eq:perc_ergraphon} & \ref{eq:percolation_thresh_sbm}, \ref{eq:percolation_sbm} & \ref{eq:percolation_thresh_rdpg} & \ref{eq:percolation_grgg} & \ref{eq:percolation_thresh_r1g} & \ref{eq:perc_sfg_deg} \\\hline \multirow{2}{0.25cm}{C}& Mean geodesic length & \ref{eq:avgspl} & \multicolumn{7}{c}{\ref{eq:avgspl_general_exact}}\\\cline{2-10} & Apx. mean geodesic length & \ref{eq:avgspl_apx} & \multicolumn{5}{c|}{\ref{eq:avgspl_general}} & \ref{eq:aspl_nu_phi}, \ref{eq:aspl_rank1}\footnote{Multiplicative graphons are equivalent to canonical degree-configuration models (see Sec. \ref{sec:graphons}) and to rank-1 models (see Sec. \ref{sec:perc_rank1})} & \ref{eq:sfg_aspl_eg}\\\hline \multirow{2}{0.25cm}{D}& Node closeness & \ref{eq:closeness_ensemble} & \multicolumn{7}{c}{\ref{eq:closeness_general}} \\\cline{2-10} & Apx. node closeness & & \multicolumn{5}{c|}{\ref{eq:closeness_apx}} & \multicolumn{2}{c}{\ref{eq:closeness_rank1}}\\\hline \multirow{2}{0.25cm}{E}& Node betweenness & \multicolumn{8}{c}{\ref{eq:btw_def}, \ref{eq:prob_bridge}, \ref{eq:bridge_prob}} \\\cline{2-10} & Apx. node betweenness & \multicolumn{8}{c}{\ref{eq:btw_def}, \ref{eq:prob_bridge}, \ref{eq:bridge_prob_apx}} \end{tabular} \end{ruledtabular} \end{table*} \section{\label{sec:spd}Shortest path length distribution} We first derive the distribution of shortest path lengths between two nodes in a network using a recursive approach. In Sec. \ref{sec:spd_supercritical} we consider the supercritical regime, describe a lemma that permits the construction of a geodesic via intervening nodes, and discuss a technical condition of sparsity required to generate a set of recursive equations. To supply the initial condition, in Sec. 
\ref{sec:perc_prob} we derive the percolation probability of a node. In Sec. \ref{sec:spd_subcritical}, the formalism is extended to the subcritical regime. Then, in Sec. \ref{sec:closedform_bound}, we extract a closed-form bound of the SPLD. \paragraph*{Definitions.} We consider a network of $n$ nodes without self-loops represented by the $n\times n$ adjacency matrix $A$. That is, for two nodes indexed by $i$ (``source'') and $j$ (``target''), $A_{ij}=1$ if there is an edge from $i$ to $j$ and $A_{ij}=0$ otherwise, and $A_{ii}=0$. We assume knowledge of the expectation in the ensemble-average sense: access to the expected adjacency matrix $\avg{A}$ such that \begin{equation} \label{eq:bernoulli_model} A_{ij}\sim\bernoulli(\avg{A}_{ij}). \end{equation} We use the notation $\avg{\cdot}$ when averaging over the ensemble. For directed networks, all edges are added independently of each other. For undirected networks, we enforce $\avg{A}=\avg{A}^T$ and, without loss of generality, use node indices as an arbitrary ordering: for $i<j$, $A_{ij}$ is generated independently from Eq. \ref{eq:bernoulli_model}, and $A_{ji}=A_{ij}$. Let $\lambda_{ij}\in\integernonneg$ be the random variable denoting the length of the shortest path from node $i$ to node $j$. \paragraph*{Node degrees and sparsity.}The out- and in-degrees of node $i$, which respectively encode the number of edges emanating from and incident on $i$, are given by \begin{subequations} \label{eq:def_degree} \begin{align} \degreeout_i &= \sum_{j\ne i}A_{ij}, \\ \degreein_i &= \sum_{j\ne i}A_{ji}. \end{align} \end{subequations} Since edges are added independently, from Eq. \ref{eq:bernoulli_model} the out- and in-degrees of node $i$ follow a Poisson binomial distribution in this ensemble, with the expectations of the quantities in Eq. \ref{eq:def_degree} given by: \begin{subequations} \label{eq:degree_ensemble_node} \begin{align} \label{eq:degree_ensemble_node_out} \avg{\degreeout_i}&=\avg{\sum_{j\ne i}A_{ij}}=\sum_{j\ne i}\avg{A}_{ij},\\ \label{eq:degree_ensemble_node_in} \avg{\degreein_i}&=\avg{\sum_{j\ne i}A_{ji}}=\sum_{j\ne i}\avg{A}_{ji}, \end{align} \end{subequations} where the second equality in each of Eqs. \ref{eq:degree_ensemble_node_out}, \ref{eq:degree_ensemble_node_in} arises by linearity of expectation. We further define the mean network degree as the mean network \emph{out}-degree: \begin{equation} \label{eq:degree_ensemble_network} \avg{\degree}\triangleq\expect{\degreeout}=\frac{\sum_i\sum_{j\ne i}\avg{A}_{ij}}{n}, \end{equation} which can equivalently be defined as the mean network \emph{in}-degree, since the two are equal. We use the notation $\expect{\cdot}$ when averaging over nodes. For undirected networks, we define the degree of node $i$ as: \begin{equation} \label{eq:def_degree_undirected} \degree_i \triangleq \degreeout_i = \degreein_i, \end{equation} whose expectation is provided by Eq. \ref{eq:degree_ensemble_node}. In this work, we assume that the network is sparse in the sense that nodes have a bounded expected degree asymptotically. From Eq. \ref{eq:degree_ensemble_node}, it is then sufficient that, $\forall (i,j)$: \begin{equation} \label{eq:sparsity_constraint} \avg{A}_{ij}=\order{n^{-1}}. \end{equation} We remark that, due to sparsity, the out- and in-degrees asymptotically follow a Poisson distribution (see Appendix \ref{sec:apdx_finite_size}). \subsection{\label{sec:spd_supercritical}Supercritical regime} In this section, we focus on the supercritical regime.
We say that a node pair $(i,j)$ is supercritical if asymptotically there can exist a giant component in the network such that there exists a path from $i$ to $j$ going via nodes on that giant component. For an undirected network, a giant component is a connected component whose size is of the order of the number of nodes $n$ in the network, i.e. $\order{n}$. Since edges are undirected, if a node $i$ can reach a node $j$ on a giant component, then it can reach \emph{and} be reached from every other node on that giant component. For directed networks the concept of a giant component is more subtle, as it is not necessary for a (directed) path to exist from $i$ to $j$, even if one exists from $j$ to $i$. Given this non-trivial difference, we mostly consider networks in an undirected setting in the main text; we refer the reader to Appendix \ref{sec:apdx_asymmetric} for a discussion of directed networks. Without loss of generality, we assume that $\avg{A}$ is not permutation-similar to a block diagonal matrix, i.e. it is irreducible. (If $\avg{A}$ were reducible, there would exist node subsets that can never have edges between them, and we can simply consider the SPLD separately for the subgraphs induced by those node subsets. This assumption is not necessary for our formalism and can be relaxed, but it simplifies the exposition: if $\avg{A}$ is irreducible, then asymptotically there can only exist a unique giant component.) Let $\phi_i$ be the event that node $i$ is on the giant component, and $\neg\phi_i$ be the event that it is not. We consider the distribution of $\lambda_{ij}$ conditioned on the source node $i$ being on the giant component. It is useful to define matrices $\Psiaf_l, \Omegaaf_l$ encoding the survival function and the conditional probability mass function of the SPLD respectively: \begin{subequations} \label{eq:def_omega_psi} \begin{align} \label{eq:def_psi} [\Psiaf_l]_{ij}&\triangleq P(\lambda_{ij}>l|\phi_i),\\ \label{eq:def_omega} [\Omegaaf_l]_{ij}&\triangleq P(\lambda_{ij}=l|\lambda_{ij}>l-1,\phi_i), \end{align} \end{subequations} which we refer to as the ``survival function matrix'' and the ``conditional probability mass function'' (conditional PMF) matrix, respectively. We use the notation $[X]_{ij}$ to refer to the $(i,j)^{th}$ element of a matrix $X$, to avoid any confusion where it might arise (when $X$ has an associated subscript, or is a product of matrices). \paragraph*{Recursive setup.} Since the geodesic being longer than $l$ necessarily implies that it is longer than $l-1$, it is convenient to model recursively the non-existence of a geodesic from $i$ to $j$ up to some length $l$: \begin{equation} \label{eq:spd_recursion_0} P(\lambda_{ij}>l|\phi_i) = P(\lambda_{ij}>l|\lambda_{ij}>l-1,\phi_i)P(\lambda_{ij}>l-1|\phi_i). \end{equation} The first factor of the recursive equation is the conditional likelihood that no path of length $l$ exists between $i,j$, which can be expressed by accounting for the non-existence of paths of length $l$ from $i$ to $j$ via any node $u\ne (i,j)$ such that there is a direct edge from $u$ to $j$. Due to sparsity in the asymptotic limit, the probability of existence of a geodesic of any finite length $l$ is asymptotically of $\order{n^{-1}}$ (see Appendix \ref{sec:apdx_finite_size} for details).
It then follows that there is vanishing correlation between pairs of shortest paths of length $l$ from $i$ to $j$ via pairs of nodes $u, v\ne (i,j)$, which simplifies the overall likelihood in Eq. \ref{eq:spd_recursion_0} into a product of individual likelihoods: \begin{equation}\label{eq:spd_recursion} \begin{split} &P(\lambda_{ij}> l|\lambda_{ij}>l-1,\phi_i) \\ =& \prod_{u\ne(i,j)}\left[1-P(\lambda_{iu}=l-1, \lambda_{uj}=1|\lambda_{ij}>l-1,\phi_i)\right]. \end{split} \end{equation} We emphasize that this assumption holds on the giant component for finite geodesic lengths as $n\to\infty$: as we consider longer geodesics of $\order{\log n}$, the likelihood of a node being at that distance approaches $\order{1}$ instead of $\order{n^{-1}}$, even for sparse networks. This induces finite-size effects on the SPLD, which become increasingly concentrated around the mode of the SPLD as the network size increases. See Appendix \ref{sec:apdx_finite_size} for an extended treatment of finite-size effects. To simplify the term on the RHS of Eq. \ref{eq:spd_recursion}, we use Lemma \ref{lemma:1} in Appendix \ref{sec:apdx_lemma1}, which exploits the assumption of conditionally independent edges in the asymptotic limit to show that: \begin{equation}\label{eq:lemma1} \begin{split} P(\lambda_{iu}=l-1, \lambda_{uj}=1|\lambda_{ij}\ge l,\phi_i) =& P(\lambda_{iu}=l-1|\phi_i)\\&\times P(A_{uj}=1). \end{split} \end{equation} From Eqs. \ref{eq:spd_recursion}, \ref{eq:lemma1} we have $P(\lambda_{iu}=l-1, \lambda_{uj}=1|\lambda_{ij}>l-1,\phi_i) = P(\lambda_{iu}=l-1|\phi_i)P(A_{uj}=1)$. Finally, noting that $P(\lambda_{iu}=l-1|\phi_i) = P(\lambda_{iu}=l-1|\lambda_{iu}>l-2,\phi_i)P(\lambda_{iu}>l-2|\phi_i)$, we can write Eq. \ref{eq:spd_recursion} as \begin{equation} \label{eq:spd_recursion_2} \begin{split} P(\lambda_{ij}>l|&\lambda_{ij}>l-1,\phi_i) = 1-P(\lambda_{ij}= l|\lambda_{ij}>l-1,\phi_i) \\=&\prod_{u\ne(i,j)}[1-P(\lambda_{iu}=l-1|\lambda_{iu}>l-2,\phi_i)\\&\times P(\lambda_{iu}>l-2|\phi_i)P(A_{uj}=1)]. \end{split} \end{equation} Using the definitions in Eq. \ref{eq:def_omega_psi}, we can write Eqs. \ref{eq:spd_recursion_0}, \ref{eq:spd_recursion_2} succinctly in terms of the survival function matrix and the conditional PMF matrix of the SPLD: \begin{subequations} \label{eq:spd_main} \begin{align} \label{eq:spd_main_psi} [\Psiaf_l]_{ij} &= (1 - [\Omegaaf_l]_{ij})[\Psiaf_{l-1}]_{ij}, \\ \label{eq:spd_main_omega} [\Omegaaf_l]_{ij} &= 1 - \exp\left(\sum_u\log\left(1-[\Omegaaf_{l-1}]_{iu}[\Psiaf_{l-2}]_{iu}\avg{A}_{uj}\right)\right), \end{align} \end{subequations} where $\exp$ and $\log$ refer to element-wise exponentiation and natural logarithm respectively. The above pair of recursive equations, together with the initial conditions $[\Omegaaf_1]_{ij}\triangleq P(\lambda_{ij}=1|\lambda_{ij}>0,\phi_i)=P(A_{ij}=1|\phi_i)$ and $[\Psiaf_0]_{ij}\triangleq P(\lambda_{ij}>0|\phi_i)=1$, completely defines the distribution of shortest paths, from which other network properties of interest can be extracted. Note that $[\Omegaaf_0]_{ij}$ does not exist, while $[\Psiaf_\infty]_{ij}$ naturally encodes the probability of the shortest path between $i,j$ being of infinite length, i.e. of $j$ not being on the giant component, given that $i$ already is. Since self-loops are ignored, $[\Psiaf_l]_{ii}\triangleq 0$ and $[\Omegaaf_l]_{ii}\triangleq 0$.
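As a concrete illustration, the recursion in Eq. \ref{eq:spd_main} is straightforward to iterate numerically for a given $\avg{A}$. The following minimal Python sketch (our illustration, not part of the formal development) does so for an ER ensemble, using the na\"{i}ve initial condition $[\Omegaaf_1]_{ij}=\avg{A}_{ij}$ discussed in Sec. \ref{sec:perc_prob} below, and the first-order logarithm approximation valid for sparse networks (cf. Eq. \ref{eq:spd_analytic_omega_1}):
\begin{verbatim}
import numpy as np

# Iterate the SPLD recursion for a given expected adjacency matrix EA.
n, mean_deg, l_max = 1024, 2.0, 30
EA = np.full((n, n), mean_deg / n)     # ER ensemble; any sparse <A> works
np.fill_diagonal(EA, 0.0)

off = ~np.eye(n, dtype=bool)
Omega = EA.copy()                      # naive [Omega_1]_ij = P(A_ij = 1)
Psi_prev = np.where(off, 1.0, 0.0)     # [Psi_0]_ij = 1, zero on the diagonal
cdf = []
for l in range(1, l_max + 1):
    Psi = (1.0 - Omega) * Psi_prev     # survival function, Eq. (spd_main_psi)
    cdf.append(1.0 - Psi[off].mean())  # mean CDF of the SPLD at length l
    # conditional PMF at length l+1, Eq. (spd_main_omega) with log(1-x) ~ -x
    Omega = 1.0 - np.exp(-(Omega * Psi_prev) @ EA)
    Psi_prev = Psi
print(np.round(cdf, 3))  # saturates near the relative giant-component size
\end{verbatim}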
To define the initial condition $[\Omegaaf_1]_{ij}=P(A_{ij}=1|\phi_i)$, we use Lemma \ref{lemma:deg_gcc} in Appendix \ref{sec:apdx_connect_gcc}, which shows that: \begin{equation} \label{eq:prob_connect_exact} P(A_{ij}=1|\phi_i) = P(A_{ij}=1)\left\{1+\left[\frac{1}{P(\phi_i)}-1\right]P(\phi_j)\right\}. \end{equation} Estimating the RHS of Eq. \ref{eq:prob_connect_exact} requires the probability of a node being on the giant component, i.e. of it ``percolating'', which we derive in Sec. \ref{sec:perc_prob}. \subsection{\label{sec:perc_prob}Percolation probability} To solve Eq. \ref{eq:spd_main}, we need to solve Eq. \ref{eq:prob_connect_exact} and find the percolation probabilities. In this section, we provide two solutions, which respectively yield an ``analytic form'' and an ``approximate analytic form'' of the SPLD. \paragraph*{Analytic form.}In principle, percolation probabilities can be extracted from the survival function of the geodesic length distribution. By continuity of the distribution, \begin{equation}\label{eq:perc_prob_limit} P(\lambda_{ij}=\infty|\phi_i) = \lim_{l\to\infty} P(\lambda_{ij}>l|\phi_i). \end{equation} That is, the steady state of the recursive Eq. \ref{eq:spd_main_psi} is indicative of the amount of probability mass at $\lambda_{ij}=\infty$, when $i$ is on the giant component. Correspondingly, $\forall (i, j)$, we have \begin{equation}\label{eq:perc_prob_pinf} P(\phi_j)=1-P(\lambda_{ij}=\infty|\phi_i). \end{equation} Eqs. \ref{eq:spd_main}, \ref{eq:prob_connect_exact}, \ref{eq:perc_prob_limit} and \ref{eq:perc_prob_pinf} provide us with the full ``analytic form'' of the SPLD. \paragraph*{Approximate analytic form.}At first, computing the RHS of Eq. \ref{eq:perc_prob_pinf} appears to be a circular problem, since we require the limiting value of the survival function of the SPLD to obtain the initial condition for it. However, we empirically observe that precision in the initial condition matters for the agreement of the survival function only at smaller geodesic lengths, whereas we obtain the expected limiting value even when using a na\"{i}ve approximation of the initial condition by setting \begin{equation} \label{eq:prob_connect_apx} P(A_{ij}=1|\phi_i) = P(A_{ij}=1). \end{equation} Therefore, running the recursive setup once gives access to $P(\lambda_{ij}=\infty|\phi_i)$, which can then be used to derive the (exact) analytic form of the SPLD from Eqs. \ref{eq:spd_main}, \ref{eq:prob_connect_exact} and \ref{eq:perc_prob_pinf} in a second recursion. Henceforth, we refer to the SPLD obtained by using the na\"{i}ve initial condition in Eq. \ref{eq:prob_connect_apx}, alongside Eq. \ref{eq:spd_main}, as the ``approximate analytic form'' of the SPLD. We note from Eq. \ref{eq:prob_connect_exact} that the approximation in Eq. \ref{eq:prob_connect_apx} holds when the network is almost surely connected: $P(\phi_i)\to 1$. More generally, using the approximation will underestimate the probability mass on short geodesics---refer to Appendix \ref{sec:apdx_connect_gcc} for details. For an ER graph of mean degree $\avg{\degree}$, where asymptotically $\avg{A}_{ij}=\frac{\avg{\degree}}{n}$, Fig. \ref{fig:spd_er_d2} indicates good agreement between the empirical and analytic cumulative distribution functions (CDF) of the SPLD. We note that the approximate analytic CDF remains a good approximation even for shorter geodesics.
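Numerically, this two-pass scheme amounts to reading off the percolation probabilities from the limiting survival function and correcting the initial condition. A minimal continuation of the sketch above illustrates this (our illustration; \texttt{EA}, \texttt{Psi} and \texttt{n} are as defined there, so this snippet is not standalone):
\begin{verbatim}
import numpy as np

# Pass 1: the limiting survival function gives P(lambda_ij = inf | phi_i),
# hence P(phi_j) via Eq. (perc_prob_pinf); average over sources i != j.
off = ~np.eye(n, dtype=bool)
P_inf = np.where(off, Psi, np.nan)        # Psi after l_max iterations
P_phi = 1.0 - np.nanmean(P_inf, axis=0)   # percolation probabilities

# Pass 2: exact initial condition from Eq. (prob_connect_exact),
# [Omega_1]_ij = <A>_ij * {1 + [1/P(phi_i) - 1] P(phi_j)}; then rerun.
correction = 1.0 + np.outer(1.0 / np.clip(P_phi, 1e-12, None) - 1.0, P_phi)
Omega_1_exact = EA * correction
\end{verbatim}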
\begin{figure}[h] \centering \includegraphics[width=\columnwidth]{fig/spd_er_d2_n1024_loglinear.pdf} \caption{The analytic cumulative distribution function (CDF) of shortest path lengths for an ER graph agrees with the empirical CDF, where the source node is on the giant component. The network size is fixed at $n=1024$, and the mean degree at $\avg{\degree}=2$. Solid and dotted lines indicate analytic solutions derived from the analytic (Eqs. \ref{eq:spd_main}, \ref{eq:prob_connect_exact}, \ref{eq:gcc_consistency}) and approximate analytic forms (Eqs. \ref{eq:spd_main}, \ref{eq:prob_connect_apx}) respectively, while the dash-dotted line indicates the closed-form bound obtained from Eq. \ref{eq:sf_avg}. Symbols and bars indicate empirical estimates: mean and standard error over 10 network samples. The dashed asymptote indicates the size of the giant component as estimated from the self-consistency Eq. \ref{eq:gcc_consistency}. The approximate analytic form marginally underestimates the probability mass for shorter lengths, as is evident on the log-scale (main plot). The closed-form bound shows good agreement for shorter lengths, but deviates strongly for longer ones (inset plot on the linear scale)---saturating to unity for any percolating graph. There is good agreement between the analytic and empirical estimates, with some deviation around the mode of the distribution due to finite-size effects---see Appendix \ref{sec:apdx_finite_size}.} \label{fig:spd_er_d2} \end{figure} \paragraph*{Self-consistency equation.}Alternatively, we can derive the percolation probabilities separately, without invoking the SPLD. Consider generating the network edges in some order, such that the edges of node $i$ are generated as the final step, without loss of generality. Due to conditionally independent edges, in the asymptotic limit, considering the network without the edges of $i$ makes a vanishing difference to the likelihood of other nodes $j\ne i$ belonging to the giant component. We note that $i$ will be on the giant component only if it connects directly to at least one $j$---with probability $P(A_{ij}=1)$---such that $j$ itself is on the giant component---with probability $P(\phi_j)$. That is, \begin{equation*} \begin{split} P(\phi_i)&=1-\prod_{j\ne i}\left[1-P(A_{ij}=1)P(\phi_j)\right]\\ &=1-\exp\left(\sum_{j\ne i}\log\left(1-P(A_{ij}=1)P(\phi_j)\right)\right)\\ &\approx 1-\exp\left(-\sum_{j\ne i}P(A_{ij}=1)P(\phi_j)\right), \end{split} \end{equation*} where the first-order logarithmic approximation holds asymptotically for sparse networks, since $\avg{A}_{ij}=\order{n^{-1}}$. Let $\boldsymbol{\rho}$ be the vector encoding $\rho_i\triangleq P(\phi_{i})$. Then the percolation probability of every node is given by a non-trivial solution to the following transcendental vector equation: \begin{equation}\label{eq:gcc_consistency} \boldsymbol\rho = \boldsymbol{u}-\exp\left(-\avg{A}\boldsymbol\rho\right), \end{equation} where $\boldsymbol{u}$ is an all-ones vector of length $n$. For the simplest case of ER graphs, this becomes a scalar equation. In Fig. \ref{fig:gcc_consistency_spld} in Appendix \ref{sec:apdx_general_sparsity}, we demonstrate for ER graphs that there is indeed agreement between the percolation probabilities observed empirically, obtained from the approximate analytic form (using Eqs. \ref{eq:spd_main}, \ref{eq:perc_prob_limit}, \ref{eq:perc_prob_pinf} and \ref{eq:prob_connect_apx}), and obtained from the self-consistency Eq. \ref{eq:gcc_consistency} via function iteration. In all results that follow, we use Eqs.
\ref{eq:spd_main}, \ref{eq:prob_connect_exact}, and \ref{eq:gcc_consistency} to yield the ``analytic form'' of the SPLD in the supercritical regime. \subsection{\label{sec:spd_subcritical}Subcritical regime} We now extend the SPLD formalism to the subcritical regime. A node pair $(i,j)$ is subcritical if asymptotically there cannot exist a giant component such that there exists a path from $i$ to $j$ that goes through it. This implies that asymptotically, either there cannot exist \emph{any} path between $i,j$, in which case $\avg{A}$ is reducible and the SPLD trivially has all probability mass at infinity, or $i,j$ can only exist on a small component, yielding the subcriticality condition \begin{equation} \label{eq:condition_subcritical} P(\phi_i)=P(\phi_j)=0, \end{equation} in which case we consider the SPLD on the small component containing $i,j$. Analogous to Eq. \ref{eq:def_omega_psi}, we can define the conditional PMF and survival function matrices, but without the conditioning on $\phi_i$ (which is an impossible event). Moreover, the recursive setup described in Sec. \ref{sec:spd_supercritical}, alongside the key result in Lemma \ref{lemma:1}, applies verbatim in the subcritical regime. The only difference is in the initial condition for the conditional PMF: \begin{equation} \label{eq:init_omega_subcritical} [\Omegaaf_1]_{ij}\triangleq P(\lambda_{ij}=1|\lambda_{ij}>0)=P(A_{ij}=1), \end{equation} which is completely defined under the ensemble-average model. That is, Eqs. \ref{eq:spd_main} and \ref{eq:init_omega_subcritical} yield the ``analytic form'' of the SPLD when $i,j$ are in the subcritical regime. In the sections that follow, we focus on the supercritical regime, but remark that results for the subcritical regime follow naturally by dropping the conditioning on $\phi_i$. In Fig. \ref{fig:spd_er_smallcomponent} in Appendix \ref{sec:apdx_finite_size}, we show that the empirics and analytics agree for subcritical ER graphs of varying mean degree. \subsection{\label{sec:closedform_bound}Closed-form bound of the SPLD} While the recursive formulation in Eq. \ref{eq:spd_main} is powerful, additional approximations allow for analytical progress on both the SPLD and the expectation of geodesic lengths. Since the network is sparse, in the infinite-size limit we can use Eq. \ref{eq:sparsity_constraint} to apply a first-order approximation to the logarithm in Eq. \ref{eq:spd_main_omega}. Let $\odot$ indicate the element-wise product; then \begin{equation} \label{eq:spd_analytic_omega_1} \Omegaaf_l \approx \boldsymbol{u}\boldsymbol{u}^T-\exp\left(-(\Omegaaf_{l-1}\odot\Psiaf_{l-2})\avg{A}\right). \end{equation} Let $l=2$, for which $[\Psiaf_{l-2}]_{iu}\triangleq P(\lambda_{iu}>l-2|\phi_i)= P(\lambda_{iu}>0|\phi_i)=1$. Then we can write from Eq. \ref{eq:spd_analytic_omega_1}: \begin{equation} \label{eq:spd_analytic_omega_2} \Omegaaf_2 = \boldsymbol{u}\boldsymbol{u}^T-\exp\left(-\Omegaaf_1\avg{A}\right). \end{equation} Due to sparsity, from Eqs. \ref{eq:sparsity_constraint} and \ref{eq:prob_connect_exact} we obtain: \begin{equation} \label{eq:spd_analytic_omega_3} \Omegaaf_1=\order{n^{-1}}\implies\Omegaaf_1\avg{A}=\order{n^{-1}}. \end{equation} Then application of a first-order approximation to the exponential in Eq. \ref{eq:spd_analytic_omega_2} yields: \begin{equation} \label{eq:spd_analytic_omega_4} \Omegaaf_2 \approx \Omegaaf_1\avg{A}. \end{equation} From Eqs.
\ref{eq:spd_main_psi} and \ref{eq:spd_analytic_omega_2} we obtain: \begin{equation} \label{eq:spd_analytic_omega_5} \Psiaf_1 =\exp\left(-\Omegaaf_1\avg{A}\right). \end{equation} Next, consider Eq. \ref{eq:spd_analytic_omega_1} with $l=3$, for which, using Eqs. \ref{eq:spd_analytic_omega_3} and \ref{eq:spd_analytic_omega_5}, we can assume $\Psiaf_{l-2}=\Psiaf_1\approx 1$. This yields an equation for $l=3$ similar to Eq. \ref{eq:spd_analytic_omega_2}: \begin{equation} \label{eq:spd_analytic_omega_6} \Omegaaf_3 = \boldsymbol{u}\boldsymbol{u}^T-\exp\left(-\Omegaaf_2\avg{A}\right)=\boldsymbol{u}\boldsymbol{u}^T-\exp\left(-\Omegaaf_1\avg{A}^2\right), \end{equation} where we have used Eq. \ref{eq:spd_analytic_omega_4}. Due to sparsity, we can apply identical arguments as above to obtain: \begin{subequations} \label{eq:spd_analytic_omega_7} \begin{align} &\Omegaaf_1\avg{A}^2=\order{n^{-1}}\\ \implies&\Omegaaf_3\approx\Omegaaf_1\avg{A}^2\\ \implies&\Psiaf_2=\exp\left(-\Omegaaf_1\left(\avg{A}+\avg{A}^2 \right)\right). \end{align} \end{subequations} In the infinite-size limit, by induction for any finite $l$, we can propagate the sparsity assumption \begin{equation} \label{eq:spd_analytic_omega_8} \Omegaaf_{l}=\order{n^{-1}}, \end{equation} which results in \begin{subequations} \label{eq:spd_analytic} \begin{align} \label{eq:spd_analytic_omega} \Omegaaf_l &\approx \Omegaaf_{l-1}\avg{A}, \\ \label{eq:spd_analytic_psi} \Psiaf_l &\approx \exp\left(-\sum_{k=1}^l\Omegaaf_k\right). \end{align} \end{subequations} We emphasize that this induction relies on assuming $[\Psiaf_{l-2}]_{iu}\triangleq P(\lambda_{iu}>l-2|\phi_i)\approx 1$, which is equivalent to assuming that the conditional PMF $P(\lambda_{ij}=l|\lambda_{ij}>l-1,\phi_i)$ approximates the PMF $P(\lambda_{ij}=l|\phi_i)$. In the subcritical regime (where the conditioning on $\phi_i$ is dropped), this holds for any value of $l$ in a network of finite (but large) size $n$, since $j$ is almost surely on a different component from $i$'s, i.e. $P(\lambda_{ij}>l-1)\approx 1$. In the supercritical regime, this holds for any bounded value of $l$ in the infinite-size limit, since the shortest path between any two arbitrary nodes is almost surely no shorter than $l$, and thus the event $\lambda_{ij}>l-1$ does not inform $\lambda_{ij}=l$. For finite-sized networks in the supercritical regime, this approximation will evidently work only for smaller path lengths. More precisely, for geodesic lengths longer than $\order{\log n}$---which is around the mode of the SPLD---boundary effects due to finite network size become apparent, Eq. \ref{eq:spd_analytic_omega_8} no longer holds, and Eq. \ref{eq:spd_analytic_omega} stops being a tight bound on the actual conditional PMF. (See Appendix \ref{sec:apdx_finite_size} for details.) In fact, the ``conditional PMF'' encoded by $\Omegaaf_l$ may not be a valid probability measure. However, it yields an expression for the survival function matrix from Eq. \ref{eq:spd_analytic_psi} in terms of a sum of powers of $\avg{A}$: \begin{equation} \label{eq:sf_avg} \Psicf_l \triangleq \exp\left(-\Omegaaf_1\sum_{k=1}^l\avg{A}^{k-1}\right), \end{equation} which, on its own terms, is a valid probability measure, and alongside Eqs. \ref{eq:prob_connect_exact}, \ref{eq:gcc_consistency} completely describes the SPLD. We refer to Eqs.
We refer to Eqs. \ref{eq:prob_connect_exact}, \ref{eq:gcc_consistency} and \ref{eq:sf_avg} as the ``closed-form'' (bound) of the SPLD, and emphasize that these approximations underestimate probability mass on longer geodesics: the closed-form of the survival (cumulative distribution) function of the SPLD is a lower (upper) bound on the analytic form, which is tight for shorter lengths in finite-size networks. These approximations will not be useful if we are interested in computing the size of the giant component, since the CDF corresponding to the closed-form in Eq. \ref{eq:sf_avg} always approaches $1$ in the supercritical regime (see Theorems \ref{lemma:perc_thresh}, \ref{lemma:nodespace_partition} in Appendix \ref{sec:apdx_perc_part}). However, if we are interested in geodesics shorter than the modal length, or in the expectation of geodesic lengths rather than the full SPLD, then these are reasonable approximations. We demonstrate this behaviour for ER graphs in Fig. \ref{fig:spd_er}, and for more general random graph models in Sec. \ref{sec:geodesics_specific}. \paragraph*{Approximate closed form of the SPLD.}Finally, we consider a scenario which produces a helpful interpretation for the survival function of the SPLD. Putting the na\"{i}ve initial condition from Eq. \ref{eq:prob_connect_apx} in Eq. \ref{eq:sf_avg} results in an ``approximate closed-form'' of the survival function of the SPLD: \begin{equation} \label{eq:sf_avg_uncorrected} \Psiacf_l \triangleq \exp\left(-\sum_{k=1}^l\avg{A}^{k}\right). \end{equation} We note that because using Eq. \ref{eq:prob_connect_apx} marginally underestimates probability mass for shorter geodesics, while the approximations used to arrive at Eq. \ref{eq:sf_avg} vanishingly overestimate it, Eq. \ref{eq:sf_avg_uncorrected} does not necessarily define a bound on the survival function. Regardless, Eq. \ref{eq:sf_avg_uncorrected} shows that the approximate closed-form of the survival function of the SPLD at length $l$ is encoded by the sum of powers of $\avg{A}$ from $1$ to $l$. This is reminiscent of the well-known result that the sum of powers of $A$ encodes the number of walks of length up to $l$ \cite{newman2018networks}. We emphasize that if all node pairs are in the subcritical regime, then Eq. \ref{eq:init_omega_subcritical} yields exactly $\Omegaaf_1=\avg{A}$, i.e. Eq. \ref{eq:sf_avg_uncorrected} is an exact and tight closed-form bound on the SPLD, as is evident in Fig. \ref{fig:spd_er_smallcomponent} in Appendix \ref{sec:apdx_finite_size}. \paragraph*{Interpretation.}For an alternative reading of the expression in Eq. \ref{eq:sf_avg_uncorrected}, apply a first-order approximation to its RHS---in the sparse infinite-size limit---to obtain for node pair $(i,j)$: $[\Psiacf_l]_{ij}\approx 1-\sum_{k=1}^l[\avg{A}^k]_{ij}$. \emph{$[\Psiacf_l]_{ij}$ can be approximated as the probability of $i$, which is on the giant component, failing to connect to $j$ (in the asymptotic limit) via any of the independent paths of lengths $1$ through $l$.} To see why, note that $\avg{A}=\order{n^{-1}}\implies\avg{A}^k=\order{n^{-1}}$. If all paths of length $k$ between $i,j$ are independent of one another---an independent-path assumption---then $[\avg{A}^k]_{ij}$ encodes the likelihood of a path of length $k$ between them, since higher-order terms cancel out due to the sparsity of $\avg{A}^k$ noted above. 
If paths of length $1,2,\cdots,l$ are independent of one another---an independent-path-length assumption---then $\sum_{k=1}^l[\avg{A}^k]_{ij}$ encodes the probability of a path existing between $i,j$ of any length up to $l$, i.e. $P(\lambda_{ij}\le l|\phi_i)$, since again, higher-order terms cancel out due to sparsity. Then $1-\sum_{k=1}^l[\avg{A}^k]_{ij}$ is $P(\lambda_{ij}>l|\phi_i)$, which is exactly what the LHS of Eq. \ref{eq:sf_avg_uncorrected} approximates. This interpretation provides a retrospective derivation of the approximate closed-form of the survival function of the SPLD. \begin{figure}[h] \centering \includegraphics[width=\columnwidth]{fig/spd_er_n1024.pdf} \caption{Empirical, analytic and closed-form cumulative distribution functions (CDFs) of shortest path lengths where the source node is on the giant component, for an ER graph with varying connectivity. Network size is fixed at $n=1024$, while mean degree varies as $\avg{\degree}\in\{1.25, 1.5, 2, 4, 8, 16\}$. Solid and dotted lines indicate analytic (Eqs. \ref{eq:spd_main}, \ref{eq:prob_connect_exact}, \ref{eq:gcc_consistency}) and closed-form solutions (Eqs. \ref{eq:prob_connect_exact}, \ref{eq:sf_avg}), respectively. Symbols and bars indicate empirical estimates: mean and standard error over 10 network samples. The dashed asymptote indicates the size of the giant component as estimated from the self-consistency Eq. \ref{eq:gcc_consistency}. While the analytic SPLD is in good agreement for all connectivities at all geodesic lengths, the closed-form SPLD is in good agreement for all connectivities at shorter lengths, while serving as an upper bound to the empirical CDF.} \label{fig:spd_er} \end{figure} \section{\label{sec:general_graphs}Geodesics in general random graphs with independent edges} Since $\avg{A}$ is representative of any underlying statistical network model (see Appendix \ref{sec:apdx_asymmetric}), we can generalize the equations for the SPLD to the case where we do not have access to a given network ensemble $\avg{A}$, but have knowledge of a (possibly inferred) random graph model that treats both edges and node identities as random variables. In this section, we first derive equations analogous to those of Sec. \ref{sec:spd} for the expected degree, SPLD and percolation probabilities in this general setting, which will appear as functions instead of vectors. We also define a linear operator corresponding to $\avg{A}$, which will determine key network properties. \paragraph*{Definitions.}In its most general form, consider a topological space $V$ of nodes, such as a discrete space for SBMs \cite{holland1983sbm}, a Euclidean space for RGGs \cite{penrose2003rgg}, or an inner-product space for RDPGs \cite{young2007rdpg}. Let $\mu$ be a probability measure on it that encodes the distribution of nodes in $V$, which we refer to as the ``node density'' in $V$. Consider also the product space $V\times V$ of edges, and a corresponding function $\nu$ that encodes the probability of an edge conditioned on a pair of node locations in $V$, e.g. the Euclidean co-ordinates of two nodes in case of an RGG. This function will be referred to as the ``connectivity kernel'' on $V\times V$. As before, we assume sparsity in the sense that $\nu=\order{n^{-1}}$ almost everywhere. Lower-case variables will be used to indicate nodes in $V$. 
We generate a network of $n$ nodes according to the node distribution $\mu$, yielding the collection $\mathcal{V}=\{x_i| x_i \sim \mu, i \in \{1,2,\cdots,n\}\}$, and add edges between nodes according to a sparse connectivity kernel $\nu$, yielding the collection $\mathcal{E}=\{(x_i, x_j)| (x_i, x_j) \sim \nu, (i, j)\in \{1,2,\cdots,n\}^2 \thinspace\mathrm{s.t.}\thinspace i\ne j\}$. The result is a graph without self-loops $\mathcal{G}=(\mathcal{V}, \mathcal{E})$ that represents the full network, and may be directed or undirected, contingent on the symmetry of $\nu$. For an undirected network we impose the additional constraints that (1) $\nu$ is symmetric, i.e. $\nu(x,y)=\nu(y,x)$, and (2) edges are generated assuming an (arbitrary) ordering of nodes such that $\overline{\mathcal{E}}\triangleq\{(x_i, x_j)| (x_i, x_j) \sim \nu, (i,j)\in \{1,2,\cdots,n\}^2\thinspace\mathrm{s.t.}\thinspace i<j\}$ and $\mathcal{E}=\overline{\mathcal{E}}\cup\{(x_j,x_i)|(x_i,x_j)\in\overline{\mathcal{E}}\}$. We emphasize that while both directed and undirected graphs can be generated using symmetric kernels, graphs generated from asymmetric kernels must necessarily be directed. We also remark that by permitting asymmetric kernels, this model is a moderate extension of the sparse inhomogeneous random graph model \cite{bollobas2007phase, bollobas2011sparsegraphs}, which we specifically consider in Sec. \ref{sec:graphons}. \paragraph*{Degree functions.}Some network statistics become immediately apparent for this model with independent edges, such as expected node degrees. For a node located at $x\in V$, let $\degreeout(x),\degreein(x)$ be its out- and in-degree. Recall from Eq. \ref{eq:degree_ensemble_node_out} that for the ensemble average model, the expected out-degree for node $i$ is given by $\avg{\degreeout_i}=\sum_{j\ne i}\avg{A}_{ij}$. In the general setting, the sum of the expectation over $n-1$ nodes asymptotically translates to $n$ times the expectation over the node space $V$. This yields expressions analogous to Eq. \ref{eq:degree_ensemble_node} for the expected out- and in-degree at $x$, and the expected network degree: \begin{subequations} \label{eq:general_degrees} \begin{align} \label{eq:general_degrees_out} \avg{\degreeout(x)}&\triangleq\meandegreeout(x) = n\int_V\nu(x,y)d\mu(y),\\ \label{eq:general_degrees_in} \avg{\degreein(x)}&\triangleq\meandegreein(x) = n\int_V\nu(y,x)d\mu(y),\\ \label{eq:general_degrees_mean} \avg{\degree}&\triangleq \expect[\mu]{\meandegreeout(x)}=\int_V\meandegreeout(x)d\mu(x), \end{align} \end{subequations} where we use the convention of defining the mean network degree as the mean network \emph{out}-degree, and the notation $\expect[\mu]{\cdot}$ when averaging over the node space $V$. For undirected networks $\nu(x,y)=\nu(y,x)$ almost everywhere, in which case we define the degree of a node at $x$ similarly to Eq. \ref{eq:def_degree_undirected} as $\degree(x)\triangleq\degreeout(x)= \degreein(x)$, yielding: \begin{equation} \label{eq:general_degrees_deg} \avg{\degree(x)}\triangleq\meandegree(x)=\meandegreeout(x)= \meandegreein(x). \end{equation} As before, the sparsity assumption asymptotically implies bounded node degrees and a Poisson degree distribution at $x$ (see Appendix \ref{sec:apdx_general_sparsity}): \begin{equation} \label{eq:degree_distribution} \degree(x)\sim\poisson\left(\meandegree(x)\right), \end{equation} with similar expressions for the out- and in-degrees, and the network degree distribution is a mixture of Poisson distributions with expectation $\avg{\degree}$. 
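To ground these definitions, the following is a minimal numerical sketch (in Python with \texttt{numpy}; all constants are arbitrary choices for illustration) that samples one undirected network from a hypothetical pair $(\mu,\nu)$: a uniform node density on $V=[0,1]$ with the sparse kernel $\nu(x,y)=c(1-\max(x,y))/n$, i.e. the max graphon revisited in Sec. \ref{sec:graphons}. It then checks the Poisson degree statistics of Eq. \ref{eq:degree_distribution} near a fixed location; for this kernel, Eq. \ref{eq:general_degrees_out} evaluates to $\meandegree(x)=c(1-x^2)/2$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n, c = 4096, 8.0

# Node density mu: uniform on V = [0, 1].
x = rng.uniform(size=n)

# Sparse connectivity kernel nu(x, y) = c (1 - max(x, y)) / n = O(1/n).
P = c * (1.0 - np.maximum(x[:, None], x[None, :])) / n
A = np.triu(rng.uniform(size=(n, n)) < P, k=1)
A = A | A.T                          # undirected, no self-loops
deg = A.sum(axis=1)

# Degree at x is ~ Poisson(kappa(x)) with kappa(x) = c (1 - x^2) / 2,
# so the empirical mean and variance near x = 0.5 should both be ~ 3.
sel = np.abs(x - 0.5) < 0.02
print("kappa(0.5) =", c * (1.0 - 0.25) / 2.0)
print("empirical mean, variance:", deg[sel].mean(), deg[sel].var())
\end{verbatim}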
\paragraph*{Analytic form of the SPLD.}Analogous to the survival function matrix $\Psiaf_l$ and conditional PMF matrix $\Omegaaf_l$ of the SPLD defined in Eq. \ref{eq:def_omega_psi}, we define the survival function and conditional PMF of the SPLD respectively: \begin{subequations} \label{eq:def_omega_psi_general} \begin{align} \label{eq:def_psi_general} \psiaf_l(x,y)&\triangleq P(\lambda_{xy}>l|\phi_x),\\ \label{eq:def_omega_general} \omegaaf_l(x,y)&\triangleq P(\lambda_{xy}=l|\lambda_{xy}>l-1,\phi_x), \end{align} \end{subequations} where the source node is at $x\in V$ and the target node at $y\in V$. Then, assuming large $n$, the sum over all nodes in Eq. \ref{eq:spd_main} becomes $n$ times the integral over the entire node space $V$: \begin{subequations} \label{eq:spd_main_general} \begin{align}\label{eq:spd_main_general_psi} \psiaf_l(x,y) =&[1 - \omegaaf_l(x,y)]\psiaf_{l-1}(x,y), \\\label{eq:spd_main_general_omega} \begin{split} \omegaaf_l(x,y) =& 1 - \exp\bigg(n\int_V\log\big(1\\ &-(\omegaaf_{l-1}\cdot\psiaf_{l-2})(x,z)\nu(z,y)\big)d\mu(z)\bigg). \end{split} \end{align} \end{subequations} The above set of recursive equations, together with the initial conditions $\omegaaf_1(x,y)\triangleq P(\lambda_{xy}=1|\lambda_{xy}>0,\phi_x)=P(A_{xy}=1|\phi_x)$ and $\psiaf_0(x,y)\triangleq P(\lambda_{xy}>0|\phi_x)=1$, completely define the distribution of shortest path lengths. We remark that $\psiaf_l(x,x)$ and $\omegaaf_l(x,x)$ encode the distribution of shortest path lengths between nodes with identical locations in $V$. To define the initial condition $\omegaaf_1$, we require percolation probabilities in $V$. Let $\rho(x)$ be the probability that a node located at $x$ is on the giant component. Following the same argument as for the ensemble average model, and replacing the sum over nodes by $n$ times the integral over node space, we can write $\rho(x)=1-\exp\left(n\int_V\log\left(1-\nu(x,y)\rho(y)\right)d\mu(y)\right)$. Since $\nu=\order{n^{-1}}$ almost everywhere, we can use a first-order approximation for the logarithm to write $\rho$ as the solution to a self-consistent integral equation: \begin{equation}\label{eq:gcc_consistency_general} \rho(x) = 1-\exp\left(-n\int_V\nu(x,y)\rho(y)d\mu(y)\right). \end{equation} This leads to the base case (analogous to Eq. \ref{eq:prob_connect_exact}): \begin{equation}\label{eq:init_omega_general} \omegaaf_1(x,y)=\nu(x,y)\left\{1+\left[\frac{1}{\rho(x)}-1\right]\rho(y)\right\}, \end{equation} needed to solve Eq. \ref{eq:spd_main_general}.
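Eqs. \ref{eq:spd_main_general}--\ref{eq:init_omega_general} can be solved numerically by discretizing the node space. The sketch below is a direct transcription of the recursion for a hypothetical uniform density on $V=[0,1]$ (quadrature weight $1/m$) and the max-graphon kernel used above; it is meant to illustrate the structure of the computation, not to be an optimized implementation.
\begin{verbatim}
import numpy as np

n, m, c = 1024, 100, 8.0
z = (np.arange(m) + 0.5) / m        # uniform grid on V = [0, 1]
w = 1.0 / m                         # quadrature weight for d mu
nu = c * (1.0 - np.maximum(z[:, None], z[None, :])) / n

# Percolation function rho(x): fixed point of the self-consistency.
rho = np.ones(m)
for _ in range(5000):
    rho = 1.0 - np.exp(-n * w * (nu @ rho))

# Initial conditions: omega_1 from the base case, and psi_0 = 1.
omega = nu * (1.0 + np.outer(1.0 / rho - 1.0, rho))
psi_prev2 = np.ones((m, m))         # psi_0(x, y)
psi = (1.0 - omega) * psi_prev2     # psi_1(x, y)

# Recursion for the analytic form, vectorized over grid pairs (x, y).
for l in range(2, 11):
    W = omega * psi_prev2           # (omega_{l-1} . psi_{l-2})(x, z)
    logint = np.log1p(-W[:, :, None] * nu[None, :, :]).sum(axis=1)
    omega = 1.0 - np.exp(n * w * logint)
    psi_prev2, psi = psi, (1.0 - omega) * psi
# psi[a, b] now approximates P(lambda_xy > 10 | phi_x) on the grid.
\end{verbatim}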
\paragraph*{Closed-form bound of the SPLD.} Under the approximations made for the ensemble average model, we obtain a set of equations analogous to Eq. \ref{eq:spd_analytic} for the closed-form (bound) of the SPLD: \begin{subequations} \label{eq:spd_general} \begin{align} \label{eq:spd_general_omega} \omegacf_l(x,y) &\triangleq n\int_V\omegacf_{l-1}(x,z)\nu(z,y)d\mu(z),\\ \label{eq:spd_general_psi} \psicf_l(x,y) &\triangleq \exp\left(-\sum_{k=1}^l\omegacf_k(x,y)\right). \end{align} \end{subequations} To our knowledge, there is no alternative means to analytically access the SPLD for sparse general random graph families with independent edges. Substituting the base case of $\omegaaf_1(x,y)$ in Eq. \ref{eq:spd_general_omega}, we can write: \begin{equation} \label{eq:spd_analytic_general_omega} \begin{split} \omegacf_l(x,y) = n^{l-1}\int_V\int_V\cdots\int_V\omegaaf_1(x,z_1)\nu(z_1,z_2)&\cdots\\ \times\nu(z_{l-1},y)d\mu(z_1)d\mu(z_2)\cdots& d\mu(z_{l-1}). \end{split} \end{equation} From Eq. \ref{eq:spd_general_psi}, the survival function at length $l$ is then encoded by a sum of iterated integrals. \paragraph*{Integral operator.} Due to sparsity, we can define a compact integral operator $T$ on the space of functions in $V$ (see Appendix \ref{sec:apdx_general_sparsity}): \begin{equation} \label{eq:integral_op} (Tf)(x)\triangleq n\int_V\nu(x,y)f(y)d\mu(y), \end{equation} which can be viewed as an analogue of the sparse expected adjacency matrix $\avg{A}$. It maps a function evaluated at node location $x$ to an expectation of the function evaluated at other node locations $y$, weighted by the node density at $y$ and the probability of a node at $y$ to connect to a node at $x$. For example, if $\forall x: f(x)=1$, then $Tf$ becomes the out-degree function from Eq. \ref{eq:general_degrees_out}. If one applies $T$ to the percolation probability function $\rho(x)$, then the self-consistency Eq. \ref{eq:gcc_consistency_general} can be written as $\rho(x)=1-\exp\left(-(T\rho)(x)\right)$. Therefore, many network quantities of interest can be extracted using $T$. For the rest of this section, we assume the kernel $\nu$ is symmetric, implying that $T$ is self-adjoint, and refer the reader to Appendix \ref{sec:asym_kernel} for a discussion on asymmetric kernels. This allows us to apply the spectral theorem for compact self-adjoint operators \cite{riesz1955hilbert}. Let $T$ have rank $N$; then there exists an orthonormal system of eigenfunctions of $T$ defined as $\{\varphi_i\}_{i=1}^N$, corresponding to ordered non-zero eigenvalues $\{\tau_i\}_{i=1}^N$, such that $\{|\tau_i|\}_{i=1}^N$ is monotonically non-increasing in the index $i$. Due to compactness of $T$, either $N$ is finite, or $\lim_{i\to\infty}\tau_i=0$. The following eigenfunction expansions hold: \begin{subequations} \begin{align} \label{eq:transform_rep} (Tf)(x) &= \sum_{i=1}^N\tau_i\left(\int_V f(y)\varphi_i(y)d\mu(y)\right)\varphi_i(x),\\ \label{eq:kernel_rep} \nu(x,y) &= \frac{1}{n}\sum_{i=1}^N\tau_i\varphi_i(x)\varphi_i(y). \end{align} \end{subequations} \paragraph*{Eigenvalues and homophily.}Given Eq. \ref{eq:kernel_rep}, we note that if an eigenvalue $\tau_i$ is positive (negative), then it raises the connection probability for node locations having the same (opposite) sign of the eigenfunction $\varphi_i$. Since nodes with the same (opposite) sign of an eigenfunction can be seen as being similar (dissimilar) along that ``dimension'', positive eigenvalues indicate \emph{homophily}: the phenomenon of similar nodes being more likely to connect to one another, which is widely observed in social networks \cite{mcpherson1987homophily, mcpherson2001birds}. Negative eigenvalues, by contrast, indicate \emph{heterophily}: dissimilar nodes being more likely to connect to one another, such as in multipartite graphs. \paragraph*{Closed-form of the SPLD.}Substituting Eq. \ref{eq:kernel_rep} in Eq. \ref{eq:spd_analytic_general_omega}, we can integrate out all intermediate variables $z_2,\dots,z_{l-1}$ by exploiting the orthonormality of $\{\varphi_i\}_{i=1}^N$: \begin{equation} \label{eq:orthonormal_basis} \int_V\varphi_i(x)\varphi_j(x)d\mu(x)=\delta_{ij}, \end{equation} where $\delta_{ij}$ is the Kronecker delta, which results in \begin{equation} \label{eq:spd_analytic_general_eig_omega} \omegacf_l(x,y) = \int_V\sum_{i=1}^N\tau_i^{l-1}\omegaaf_1(x,z)\varphi_i(z)\varphi_i(y)d\mu(z), \end{equation} where we have suppressed the subscript of $z_1$. Putting this in Eq. 
\ref{eq:spd_general_psi}, we can exchange the sum over lengths $k$ with the integral and the eigenfunction sum, owing to the compactness of $T$. Let $\widetilde{S}_l(a)\triangleq 1+a+\cdots+a^{l-1}$ be the geometric sum of $a$ starting at $1$ up to $l$ terms---where $\widetilde{S}_0(a)\triangleq 0$---then the closed-form of the survival function is described by: \begin{equation} \label{eq:spd_analytic_general_eig} \psicf_l(x,y) = \exp\left(-\int_V\sum_{i=1}^N\widetilde{S}_l(\tau_i)\omegaaf_1(x,z)\varphi_i(z)\varphi_i(y)d\mu(z)\right). \end{equation} \paragraph*{Approximate closed-form of the SPLD.}Similarly to Eq. \ref{eq:sf_avg_uncorrected}, we can solve Eqs. \ref{eq:spd_general_omega} and \ref{eq:spd_general_psi} with a na\"{i}ve initial condition analogous to Eq. \ref{eq:prob_connect_apx}: \begin{equation} \label{eq:prob_connect_apx_general} \omegaaf_1(x,y)=\nu(x,y). \end{equation} Let $S_l(a)\triangleq a+a^2+\cdots+a^{l}$ be the geometric sum of $a$ starting at $a$ up to $l$ terms---where $S_0(a)\triangleq 0$. Then from Eq. \ref{eq:spd_analytic_general_eig} we obtain the approximate closed-form of the survival function: \begin{equation} \label{eq:spd_analytic_general_eig_uncorrected} \psiacf_l(x,y) \triangleq \exp\left(-\frac{1}{n}\sum_{i=1}^NS_l(\tau_i)\varphi_i(x)\varphi_i(y)\right). \end{equation} \paragraph*{Interpretation.}Assuming $\tau_i\ne 1$ for all $i\in\{1,2,\cdots,N\}$, we can write $S_l(\tau_i)=\tau_i\frac{\tau_i^l-1}{\tau_i-1}$. Define $a_i(x,y)\triangleq\frac{\tau_i\varphi_i(x)\varphi_i(y)}{n(\tau_i-1)}$, $b_i\triangleq\log|\tau_i|$, and $\sgn(\cdot)$ as the sign function. Then we can rewrite Eq. \ref{eq:spd_analytic_general_eig_uncorrected} as a product of survival functions of a (discrete version of the) Gompertz distribution, wherein the $i^{th}$ term in the product only depends on the eigenpair $(\tau_i,\varphi_i)$: \begin{equation} \label{eq:sf_gompertz} \psiacf_l(x,y)=\prod_{i=1}^N\exp\left(-a_i(x,y)\left[\sgn(\tau_i)^le^{b_il}-1\right]\right). \end{equation} The Gompertz distribution is a reflection of the Gumbel distribution around the origin---one of the three extreme value distributions, with an exponential tail \cite{gnedenko1943distribution}. It has previously been shown to model lengths of self-avoiding walks in ER graphs \cite{tishby2016sarw}. \paragraph*{SPLD for a random node pair.}We can further define a distribution for the shortest path length between a randomly selected node pair, which should be informative about typical geodesic lengths in the network, by averaging over the source and target nodes: \begin{equation} \label{eq:spd_general_psi_agg} \psiacf(l) \triangleq \expect[\mu^2]{\psiacf_l(x,y)}=\int_{V}\int_{V}\psiacf_l(x,y)d\mu(x)d\mu(y). \end{equation} We use the notation $\expect[\mu^2]{\cdot}$ when averaging over $V\times V$. By applying Jensen's inequality \cite{jensen1906fonctions} to Eq. \ref{eq:spd_general_psi_agg}---in the form $\exp(\expect{Z})\le\expect{\exp(Z)}$ for some random variable $Z$---we can obtain a lower bound on $\psiacf(l)$, which will be tight if the variance in the survival function over different node pairs is small: \begin{equation} \label{eq:spd_general_psi_agg_bound} \psiacf(l) \ge \exp\left(-\frac{1}{n}\sum_{i=1}^NS_l(\tau_i)\expect[\mu]{\varphi_i(x)}^2\right), \end{equation} which permits a description of the network's expected geodesic length in terms of the eigenvalues and the expectations of the eigenfunctions of $T$. 
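These eigenfunction expressions are easy to evaluate once $T$ is discretized. The sketch below (again for a hypothetical uniform density on $[0,1]$ and the max-graphon kernel; the grid size $m$ is an arbitrary choice) obtains the eigenpairs of the discretized operator of Eq. \ref{eq:integral_op} and assembles the approximate closed-form survival function of Eq. \ref{eq:spd_analytic_general_eig_uncorrected}.
\begin{verbatim}
import numpy as np

n, m, c = 1024, 400, 8.0
z = (np.arange(m) + 0.5) / m
nu = c * (1.0 - np.maximum(z[:, None], z[None, :])) / n

# Discretized operator: (Tf)(z_a) ~ n sum_b nu(z_a, z_b) f(z_b) / m.
T = n * nu / m
tau, V = np.linalg.eigh(T)   # symmetric, since nu is and mu is uniform
phi = np.sqrt(m) * V         # eigenfunctions, orthonormal w.r.t. mu

def S(l, a):
    # Geometric sum a + a^2 + ... + a^l, evaluated termwise so that
    # it is valid for any a, including a = 1.
    return sum(a**k for k in range(1, l + 1))

def psi_acf(l):
    # Approximate closed-form survival function on the grid.
    return np.exp(-((phi * S(l, tau)) @ phi.T) / n)

print(psi_acf(4)[m // 4, 3 * m // 4])   # P(lambda_xy > 4) for two locations
\end{verbatim}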
\section{\label{sec:geodesics_specific}Geodesics in specific random graph families} In this section, we consider illustrative models that are special cases of the general setting considered in Sec. \ref{sec:general_graphs}. We focus on symmetric kernels, which coincide with the sparse inhomogeneous random graph models of Ref. \cite{bollobas2007phase}. In particular, we elaborate on the percolation probability of nodes, and determine the approximate closed-form of the survival function of the SPLD between node pairs---which is asymptotically a good approximation for finite lengths in the supercritical regime, and an exact tight bound for all lengths in the subcritical regime. \subsection{Stochastic block model}\label{sec:sbm} Stochastic block models (SBMs) have been widely used for modeling social networks \cite{holland1983sbm, karrer2011dcsbm}, since they directly capture notions of social homophily---that ``like befriends like'' \cite{mcpherson1987homophily, mcpherson2001birds}---and social segregation \cite{moody2001race}. They are a discrete-space model wherein nodes are divided into communities or ``blocks'', whose probabilistic connections are modeled by block-level parameters. In essence, SBMs are a form of ``graph-coarsening'' wherein edges between nodes can be aggregated into edges between blocks of nodes \cite{peixoto2014nestedsbm}. This property can be leveraged to study the SPLD in empirical graphs (see Sec. \ref{sec:graphcoarsening}). Theorem \ref{lemma:general_sbm_equiv} in Appendix \ref{sec:apdx_sbm_equiv} shows that there exists an $\epsilon$-equivalent SBM for any general random graph model with independent edges. This property can be used to approximate a continuous-space model by an SBM via discretization up to a desired level of accuracy. Establishing a framework for the SPLD in SBMs thus yields applications in a wide variety of settings. \paragraph*{Definitions.} Consider $n$ nodes, each belonging to one of $k$ blocks where $k<n$, according to a categorical distribution given by $\boldsymbol\pi\in(0,1]^k$ such that $\sum_i\pi_i=1$. We let $Z\in\{0,1\}^{n\times k}$ represent the assignment matrix, where exactly one entry in every row is $1$ and the rest are $0$. The probability of two nodes connecting to each other depends entirely on the blocks they belong to, i.e. for two nodes indexed by $i,j$: $P(A_{ij}=1|Z_{ix}=1,Z_{jy}=1)\triangleq\frac{B_{xy}}{n}$, where $B_{xy}\ge 0$ measures the ``affinity'' between blocks $x,y\in\{1,2,\cdots,k\}$. The ``block matrix'' $B$, along with the ``distribution vector'' $\boldsymbol\pi$, completely defines this probabilistic model. More succinctly, we can write \begin{equation} \label{eq:sbm} \begin{split} Z_i &\sim \categorical\left(\boldsymbol\pi\right), \\ A_{ij} &\sim \bernoulli\left(\frac{[ZB Z^T]_{ij}}{n}\right). \end{split} \end{equation} This parametrization ensures sparsity, i.e. $\nu=\order{n^{-1}}$, wherein the expected degree of any node remains constant for sufficiently large $n$---an assumption that holds particularly well for social networks.
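Sampling from Eq. \ref{eq:sbm} is straightforward; a minimal sketch follows, using (arbitrarily) the bipartite two-block parametrization of Fig. \ref{fig:bipartite_cdf} below.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n, k = 1024, 2
pi = np.array([0.2, 0.8])              # distribution vector
B = np.array([[0.0, 8.0],              # block matrix: bipartite, since
              [8.0, 0.0]])             # within-block affinities are zero

z = rng.choice(k, size=n, p=pi)        # categorical block labels
Z = np.eye(k)[z]                       # n x k assignment matrix
P = (Z @ B @ Z.T) / n                  # edge probabilities, Eq. (SBM)
A = np.triu(rng.uniform(size=(n, n)) < P, k=1)
A = (A | A.T).astype(int)              # undirected, no self-loops
print("empirical mean degree:", A.sum() / n)   # ~ pi^T B pi = 2.56
\end{verbatim}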
\paragraph*{Degree and percolation probability.}Functions for the expected degree $\meandegree$, percolation probability $\rho$, conditional PMF $\omegaaf_l$, and survival function $\psiaf_l$ of the SPLD also possess a block structure. This translates Eq. \ref{eq:general_degrees_deg} into an expression for the length-$k$ ``block degree vector'': \begin{equation} \label{eq:degree_sbm} \boldsymbol{\meandegree}=B\boldsymbol{\pi}, \end{equation} where $\meandegree_x$ is the expected degree for a node in block $x$, and therefore from Eq. \ref{eq:general_degrees_mean} the average network degree is given by $\avg{\degree}=\boldsymbol{\pi}^TB\boldsymbol{\pi}$. Next, from Eq. \ref{eq:gcc_consistency_general} we obtain an equation for the length-$k$ ``block percolation vector'' $\boldsymbol{\rho}$, where $\rho_x$ is the percolation probability for a node in block $x$: \begin{equation} \label{eq:gcc_consistency_sbm} \boldsymbol{\rho}=\boldsymbol{u}-\exp\left(-B\Pi\boldsymbol{\rho}\right), \end{equation} where we define $\Pi\triangleq\diag(\boldsymbol\pi)$ as the diagonal distribution matrix, and $\boldsymbol{u}$ is the all-ones vector of length $k$. \paragraph*{Analytic form of the SPLD.}From Eq. \ref{eq:spd_main_general} we get recursive equations for the $k\times k$ ``block matrices'' $\Psiaf_l, \Omegaaf_l$: \begin{equation} \label{eq:spd_main_sbm} \begin{split} \Psiaf_l &= (\boldsymbol{u}\boldsymbol{u}^T - \Omegaaf_l)\odot\Psiaf_{l-1}, \\ \Omegaaf_l &= \boldsymbol{u}\boldsymbol{u}^T - \exp \Bigg(n\sum_{x=1}^k\pi_x\\ &\times\log\left(\boldsymbol{u}\boldsymbol{u}^T-\frac{[\Omegaaf_{l-1}\odot\Psiaf_{l-2}]_{:x}[B]_{x:}}{n}\right)\Bigg), \end{split} \end{equation} with the initial condition from Eq. \ref{eq:init_omega_general} yielding: \begin{equation} \label{eq:spd_sbm_init} \Omegaaf_1=\frac{B + (R^{-1}-I)BR}{n}, \end{equation} where $R\triangleq\diag(\boldsymbol{\rho})$, and $\Psiaf_0=\boldsymbol{u}\boldsymbol{u}^T$. We have used the notation $[X]_{:i}$ to indicate the $i^{th}$ column vector of matrix $X$, and $[X]_{i:}$ to indicate the $i^{th}$ row vector of $X$. Fig. \ref{fig:bipartite_cdf} shows the distribution of shortest path lengths between nodes in a $2$-block SBM with a bipartite structure, obtained by solving Eqs. \ref{eq:spd_main_sbm}, \ref{eq:spd_sbm_init} and \ref{eq:gcc_consistency_sbm}, which is in good agreement with the empirical SPLD. Refer to Fig. \ref{fig:bipartite_cdf_smallcomponent} in Appendix \ref{sec:apdx_finite_size} for the SPLD of a bipartite SBM in a subcritical regime. \begin{figure}[h] \centering \includegraphics[width=\columnwidth]{fig/spd_sbm_d0880_n1024.pdf} \caption{Empirical, analytic, and approximate closed-form CDFs of shortest path lengths, where the source node is on the giant component, agree with each other for a bipartite SBM with block matrix $B=\big(\begin{smallmatrix} 0 & 8\\ 8 & 0 \end{smallmatrix}\big)$, distribution vector $\boldsymbol\pi=(0.2, 0.8)$, and $n=1024$. Rows correspond to the block membership of the source node. The left column depicts the PMF, which highlights the bipartitivity of the network, and the right column depicts the CDF, whose tail value agrees with the percolation probability of the target node, indicated by the dashed asymptote and calculated from Eq. \ref{eq:gcc_consistency_sbm}. Solid lines represent the analytic form using Eqs. \ref{eq:spd_main_sbm}, \ref{eq:spd_sbm_init}, \ref{eq:gcc_consistency_sbm}, while dash-dotted lines represent the approximate closed-form using Eq. \ref{eq:spd_sbm}, and dotted lines with bars represent empirics, i.e. mean and standard error over 10 samples.} \label{fig:bipartite_cdf} \end{figure} \paragraph*{Approximate closed-form of the SPLD.}Using Eq. 
\ref{eq:spd_analytic_general_omega} yields the survival function of the SPLD as a summation over matrix powers (see Appendix \ref{sec:apdx_sbm}), producing the $k\times k$ survival function block matrix: \begin{equation} \label{eq:spd_sbm} \Psiacf_l = \exp\left(-\frac{B}{n}\sum_{i=1}^l(\Pi B)^{i-1}\right). \end{equation} Fig. \ref{fig:bipartite_cdf} shows the approximate closed-form of the SPLD between nodes in a $2$-block bipartite SBM, obtained by solving Eq. \ref{eq:spd_sbm}. As previously discussed in Sec. \ref{sec:closedform_bound}, the approximate closed-form SPLD agrees with the empirical SPLD for shorter geodesic lengths in finite-size networks. If $\Pi B-I$ is non-singular, we can evaluate the expression in Eq. \ref{eq:spd_sbm} as a matrix series to obtain $$\Psiacf_l = \exp\left(-\frac{B}{n}((\Pi B)^l-I)(\Pi B-I)^{-1}\right).$$ For the special case of an ER graph, where the likelihood of an edge is identical across all node pairs, we have $k=1$, $B\triangleq\avg{\degree}$, $\Pi\triangleq 1$, and the survival function is scalar-valued: \begin{equation} \label{eq:spd_er} \psiacf(l) = \begin{cases} \exp\left(-\frac{\avg{\degree}(\avg{\degree}^l-1)}{n(\avg{\degree}-1)}\right) &\mbox{if }\avg{\degree}\ne 1,\\ \exp\left(-\frac{l}{n}\right) &\mbox{otherwise.} \end{cases} \end{equation} Evidently, the larger the mean degree, the shorter the geodesic lengths in the network. Previous work has demonstrated an analytic expression for the survival function of geodesic lengths in ER graphs, given by Eq. 14 of Ref. \cite{fronczak2004average} as $\psiacf(l) = \exp\left(-\frac{\avg{\degree}^l}{n}\right)$, which is in slight disagreement with Eq. \ref{eq:spd_er}. In particular, at $l=0$, Eq. \ref{eq:spd_er} correctly evaluates to $1$, while the other evaluates to $\exp(-n^{-1})$. \paragraph*{Illustrative examples.}For a less trivial example, we consider a $k$-block SBM with equi-sized blocks and constant mean degree $\avg{\degree}$, such that $B=\delta I+\left(\avg{\degree}-\delta/k\right)\boldsymbol{u}\boldsymbol{u}^T$, where $\delta\in[-\avg{\degree} k/(k-1),\avg{\degree} k]$ quantifies the amount of homophily---positive $\delta$, wherein nodes are more likely to connect to nodes from the same block---or heterophily---negative $\delta$, wherein nodes are more likely to connect to nodes from other blocks. As before, let $S_l(a)$ be the geometric sum of $a$ starting at $a$ up to $l$ terms; then from Eq. \ref{eq:spd_sbm} the survival function block matrix is given by (see Appendix \ref{sec:apdx_sbm}) \begin{equation} \label{eq:spd_sbm_k} \Psiacf_l = \exp\boldsymbol{\bigg(}-\frac{1}{n}\left\{S_l(\delta/k)kI+[S_l(\avg{\degree})-S_l(\delta/k)]\boldsymbol{u}\boldsymbol{u}^T\right\}\boldsymbol{\bigg)}. \end{equation} The form of $\Psiacf_l$ is analogous to that of $B$---naturally, the exponent of $\Psiacf_1$ is $-B/n$---but for larger $l$ the off-diagonal (inter-block) elements of the survival function block matrix evolve with the difference of the geometric sums $S_l(\avg{\degree})$ and $S_l(\delta/k)$. In particular, consider a 2-block perfectly heterophilous SBM, i.e. $k=2$ and $\delta/k=-\avg{\degree}$. This corresponds to a bipartite network, since nodes never connect directly with nodes of their own block. Consequently, all paths between nodes of different communities must be of odd length. The expression in Eq. 
\ref{eq:spd_sbm_k} correctly suggests that the inter-block survival function does not change for even values of $l$, because the even powers of $\avg{\degree}$ cancel; that is, there is no probability mass at even values of $l$. We next consider a general SBM with a symmetric block matrix $B$. Let $Q\Lambda Q^T$ be the eigendecomposition of the symmetric matrix $\Pi^\frac{1}{2}B\Pi^\frac{1}{2}$, such that the columns of the orthogonal matrix $Q$ encode the eigenvectors and the diagonal matrix $\Lambda$ encodes the corresponding eigenvalues. Then, we can write $(\Pi B)^{i-1}=\Pi^\frac{1}{2}Q\Lambda^{i-1}Q^T\Pi^{-\frac{1}{2}}$. Substituting into Eq. \ref{eq:spd_sbm}, we obtain \begin{equation} \label{eq:spd_sbm_eig} \Psiacf_l = \exp\left(-\frac{1}{n}\Pi^{-\frac{1}{2}}QS_l(\Lambda)Q^T\Pi^{-\frac{1}{2}}\right), \end{equation} which is the matrix analogue of Eqs. \ref{eq:spd_analytic_general_eig_uncorrected} and \ref{eq:spd_er}. Evidently, for general block matrices, a weighted geometric sum of the eigenvalues of $B\Pi$ governs the whole distribution, with positive eigenvalues---indicating homophily---and negative eigenvalues---indicating heterophily---contributing differently to the distribution. \subsection{\label{sec:graphcoarsening}Labeled empirical graphs} Previous work on estimating dissimilarity measures in empirical graphs has derived a Gibbs--Boltzmann distribution over picking a path between two nodes from a bag-of-paths in a given network \cite{franccoisse2017bagofpaths}. However, this bag-of-paths approach does not directly model the distribution of shortest path \emph{lengths} between node pairs. Our proposed approach provides the desired distribution, but for networks generated from some underlying random graph model with independent edges. One method to apply our approach is to first infer a model given the observed network. Inference can be performed in a myriad of ways \cite{newman2016inferenceannotated, newman2018inference, goldenberg2010inference}, but may induce computational overhead if the network is very large. Another method is to exploit the graph-coarsening property of SBMs described in Sec. \ref{sec:sbm}, wherein nodes are completely defined by their block membership, to determine the SPLD of empirical graphs. This has the advantage of reducing the parametrization from one in terms of a large number of nodes $n$ to one in terms of a small number of blocks $k\ll n$. If we have a network represented by the adjacency matrix $A$, with known node labels indicated by the assignment matrix $Z$, then assuming that $(A,Z)$ is generated by an SBM permits a maximum-likelihood estimate of its parameters from Eq. \ref{eq:sbm}, regardless of how the node labels were inferred, and with no computational cost beyond the summation of node and edge counts. We refer the reader to Appendix \ref{sec:apdx_gcsbm} for more details. The SPLD associated with the inferred SBM can then be used to study shortest path lengths in the original network; a minimal sketch of this estimate-then-solve pipeline is given below.
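The snippet below uses the standard count-based maximum-likelihood estimates $\pi_x=n_x/n$ and $B_{xy}=n\,e_{xy}/(n_xn_y)$, with $n_x$ the block sizes and $e_{xy}$ the inter-block edge counts (our reading of Eq. \ref{eq:sbm_mle}, which should be checked against Appendix \ref{sec:apdx_gcsbm}, and which ignores $\order{1/n_x}$ corrections on the diagonal), and then solves Eqs. \ref{eq:gcc_consistency_sbm}, \ref{eq:spd_sbm_init} and \ref{eq:spd_main_sbm}, assuming all blocks are supercritical.
\begin{verbatim}
import numpy as np

def sbm_from_labels(A, z, k):
    # Count-based estimates of the SBM parameters (pi, B) from an
    # adjacency matrix A and integer labels z in {0, ..., k-1}.
    n = A.shape[0]
    Z = np.eye(k)[z]
    sizes = Z.sum(axis=0)              # block sizes n_x
    E = Z.T @ A @ Z                    # inter-block edge counts
    return sizes / n, n * E / np.outer(sizes, sizes)

def spld_sbm(B, pi, n, l_max):
    # Analytic SPLD block matrices Psi_1, ..., Psi_lmax.
    k = len(pi)
    rho = np.ones(k)                   # block percolation vector
    for _ in range(10000):
        rho = 1.0 - np.exp(-B @ (pi * rho))
    Omega = (B + np.diag(1.0 / rho - 1.0) @ B @ np.diag(rho)) / n
    Psi_prev2 = np.ones((k, k))        # Psi_0
    Psi = (1.0 - Omega) * Psi_prev2    # Psi_1
    out = [Psi]
    for _ in range(2, l_max + 1):
        W = Omega * Psi_prev2          # Omega_{l-1} (elementwise) Psi_{l-2}
        logterm = sum(pi[x] * np.log1p(-np.outer(W[:, x], B[x, :]) / n)
                      for x in range(k))
        Omega = 1.0 - np.exp(n * logterm)
        Psi_prev2, Psi = Psi, (1.0 - Omega) * Psi
        out.append(Psi)
    return rho, out

# Usage: pi_hat, B_hat = sbm_from_labels(A, z, k)
#        rho, Psi = spld_sbm(B_hat, pi_hat, A.shape[0], l_max=10)
\end{verbatim}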
\paragraph*{Illustrative example.}We consider a real-world network of email communications between members of a European research institution \cite{snapeuemail, snapcollab-euemail-gnutella, snapnets}, denoted by the adjacency matrix $A_{eue}$ (see Appendix \ref{sec:apdx_datasets} for more details on the dataset). Each node has an attribute corresponding to one of the $k=42$ departments to which the individual belongs, which can serve to provide the assignment matrix $Z_{dep}$ at no additional cost. We can also derive more meaningful community labels for the nodes by applying community detection methods \cite{girvan2002community, newman2006modularity}, such as modularity maximization \cite{clauset2004modmax, hagberg2008networkx}, which provides a different assignment $Z_{mod}$ across $k=5$ modules. We also infer a hierarchical SBM for this network, which infers blocks at multiple levels of coarsening \cite{peixoto2014nestedsbm, peixoto2014graphtool}, generating a hierarchy of labels that yields assignments $Z_{sbm2}, Z_{sbm3}$ at levels 2 and 3 of the inferred hierarchical SBM, possessing $k=36$ and $k=10$ blocks respectively. Using Eq. \ref{eq:sbm_mle} we can derive the corresponding SBMs, and the analytic form of the SPLD between block pairs using Eq. \ref{eq:spd_main_sbm}. In Fig. \ref{fig:spl_statistics_empirical} we compare the empirical and analytic means of geodesic lengths for every block pair in this network, which are in good agreement for all assignment procedures considered. Notably, this includes $Z_{dep}$, which requires no additional computational overhead. The agreement is stronger for block pairs with shorter geodesics between them. The departure for longer geodesics is likely due to correlations within $A_{eue}$, wherein longer-than-expected geodesics would beget even longer geodesics, resulting in the analytics mostly underestimating the empirics. \begin{figure} \centering \includegraphics[width=\columnwidth]{fig/spd_euemail.pdf} \caption{The empirical average geodesic lengths of a real-world email network $A_{eue}$ (on the $x$-axis) are well-approximated by the mean obtained from the analytic form of the SPLD (on the $y$-axis, using Eqs. \ref{eq:gcc_consistency_sbm}, \ref{eq:spd_main_sbm}, \ref{eq:spd_sbm_init}), when nodes with the same ``label'' are grouped into a single block to form an SBM with $k$ blocks. Subplots correspond to different types of labeling: (top-left) $Z_{dep}$ leverages the homophily assumption by using a node attribute as the label---here, the department of the e-mailer; (top-right) $Z_{mod}$ uses modularity maximization \cite{clauset2004modmax, hagberg2008networkx} to infer network blocks assigned as the label; (bottom) $Z_{sbm2}, Z_{sbm3}$ use a hierarchical SBM \cite{peixoto2014nestedsbm, peixoto2014graphtool} to infer a hierarchy of blocks, which allows for multi-level coarsening. Symbols indicate the mean geodesic length between nodes of block pairs. Inset figures indicate the mean (black markers) and standard deviation (red bars) of geodesic lengths between block pairs, averaged over 10 samples of the corresponding SBM established via coarsening. Both the mean and standard deviation of the empirical SPLD are well approximated by the analytic SPLD.} \label{fig:spl_statistics_empirical} \end{figure} \subsection{\label{sec:rdpg:}Random dot-product graph} While SBMs are commonly used due to their simplicity and ability to model any community structure, they can be unrealistic in terms of other network attributes. For instance, their degree distribution is a mixture of Poisson distributions, which can be restrictive with regard to some real-world networks possessing heavy-tailed degree distributions like power laws \cite{albert2002networks}. This has led to the exploration of other models that can capture modularity alongside arbitrary degree distributions, of which the random dot-product graph (RDPG) \cite{kraetzl2005rdpg, young2007rdpg} is an important example. 
Here, the connectivity kernel is a function of the dot-product of latent vectors that represent nodes. Consider some $k$-dimensional bounded real vector space $X \subset\real^k$ wherein nodes are ``embedded'', such that the likelihood of $\boldsymbol{x},\boldsymbol{y}\in X$ connecting is proportional to a function of the dot-product of their positions $\boldsymbol{x}^T\boldsymbol{y}$. This method of ``graph embeddings'' is especially popular in statistical machine learning, wherein continuous representations of discrete objects such as graphs are learnt \cite{hamilton2017graphrepresentation, cai2018surveygraphembedding, goyal2018graphembedding}, making them amenable to downstream predictive tasks. Many approaches for generating such an embedding rely on the dot-product model. For instance, embedding techniques based on matrix factorization, like graph factorization \cite{ahmed2013graphfactorization}, assume that the likelihood of nodes located at $\boldsymbol{x}$ and $\boldsymbol{y}$ to connect is proportional to $\boldsymbol{x}^T\boldsymbol{y}$, while random-walk based techniques, like ``node2vec'' \cite{grover2016node2vec}, assume the likelihood is proportional to $\exp(\boldsymbol{x}^T\boldsymbol{y})$. \paragraph*{Symmetric \& positive semi-definite kernels.}We emphasize that the RDPG is a special case of the general random graph families described in Sec. \ref{sec:general_graphs} when the connectivity kernel is symmetric and positive semi-definite, i.e. all eigenvalues of $T$, defined in Eq. \ref{eq:integral_op}, are non-negative. (In contrast to SBMs, this excludes the possibility of heterophilous structures like bipartitivity, but results in uniform absolute convergence of the kernel's eigenexpansion in Eq. \ref{eq:kernel_rep} \cite{riesz1955hilbert, mercer1909kernel}.) In other words, any positive semi-definite kernel can be written as a dot-product in some feature space (see Appendix \ref{sec:apdx_rdpg_nonlin}). It is therefore sufficient here to consider the simplest setting of $V$ being Euclidean and the kernel being linear in the dot-product, giving rise to an RDPG. For a kernel that is a non-linear function of the dot-product, we can use random Fourier features \cite{rahimi2007random} to derive an explicit feature map, where a kernel that is linear in the dot-product can be assumed. We refer the reader to Appendix \ref{sec:apdx_rdpg_nonlin} for an illustrative example. \paragraph*{Definitions.}We consider a non-negative bounded subspace $X\subset\real^k_{\ge 0}$ with the connectivity kernel $\nu(\boldsymbol{x}, \boldsymbol{y}) = \beta\boldsymbol{x}^T\boldsymbol{y}$, such that $\beta>0$ and $\beta=\order{n^{-1}}$, which encodes sparsity. This is a common setting for RDPGs: in the canonical degree-configuration model where $k=1$, $X$ encodes precisely the expected degree of a node, with the node density $\mu$ governing the degree distribution \cite{young2007rdpg}. We define: \begin{subequations} \begin{align} \label{eq:def_rdpg_meanvec} \boldsymbol{\phi}&\triangleq\int_X \boldsymbol{x}d\mu, \\ \label{eq:def_rdpg_mommat} \Phi&\triangleq n\beta\int_X \boldsymbol{x}\boldsymbol{x}^Td\mu, \end{align} \end{subequations} where $\boldsymbol{\phi}$ is the length-$k$ mean vector in $X$, and $\Phi$ is the $k\times k$ matrix of second moments (also known as the autocorrelation matrix) scaled by $n\beta$; it encodes the covariance structure in $X$ as per the measure $\mu$ and is necessarily positive semi-definite. \paragraph*{Degree and percolation probability.}From Eq. 
\ref{eq:general_degrees_deg}, it can be seen that the expected degree at $\boldsymbol{x}$ is given by: \begin{equation} \label{eq:degree_rdpg} \meandegree(\boldsymbol{x}) = n\beta\boldsymbol{\phi}^T\boldsymbol{x}, \end{equation} and from Eq. \ref{eq:general_degrees_mean} the average network degree is given by $\avg{\degree}=n\beta\boldsymbol{\phi}^T\boldsymbol{\phi}$. Given that $\rho(\boldsymbol{x})$ encodes the percolation probability for a node at $\boldsymbol{x}$, define $\boldsymbol{\rho}\triangleq\int_X \boldsymbol{x}\rho(\boldsymbol{x})d\mu$ to be the ``mean percolation vector'' in $X$; then Eq. \ref{eq:gcc_consistency_general} yields: \begin{subequations} \label{eq:gcc_consistency_rdpg} \begin{align} \label{eq:gcc_consistency_rdpg_rhox} \begin{split} \rho(\boldsymbol{x})&=1-\exp\left(-n\beta\boldsymbol{x}^T\int_X\boldsymbol{y}\rho(\boldsymbol{y})d\mu(\boldsymbol{y})\right),\\ &=1-\exp\left(-n\beta\boldsymbol{x}^T\boldsymbol{\rho}\right), \end{split}\\ \label{eq:gcc_consistency_rdpg_rho} \begin{split} \int_X\boldsymbol{x}\rho(\boldsymbol{x})d\mu&= \int_X\boldsymbol{x}d\mu-\int_X\boldsymbol{x}\exp \left(-n\beta\boldsymbol{x}^T\boldsymbol{\rho}\right)d\mu,\\ \implies\boldsymbol{\rho}&=\boldsymbol{\phi}-\int_X\boldsymbol{x}\exp\left(-n\beta\boldsymbol{x}^T\boldsymbol{\rho}\right)d\mu, \end{split} \end{align} \end{subequations} where we apply the definition of $\boldsymbol\phi$ from Eq. \ref{eq:def_rdpg_meanvec}. Eq. \ref{eq:gcc_consistency_rdpg_rho} is a self-consistency vector equation for $\boldsymbol{\rho}$ which, once solved, can be used in the scalar self-consistency Eq. \ref{eq:gcc_consistency_rdpg_rhox} to obtain the percolation probability of any node location. To solve the former, we can make use of an $m$-block SBM approximation of the $k$-dimensional RDPG---as described in Appendix \ref{sec:apdx_sbm_apx_Rk}---and solve for $\boldsymbol{\rho}$ numerically via function iteration. \paragraph*{Approximate closed-form of the SPLD.}For this and the subsequent sections (Secs. \ref{sec:rgg} and \ref{sec:graphons}), we focus on the approximate closed-form of the survival function of the SPLD. In particular, Eq. \ref{eq:spd_general_omega} for the conditional PMF would read as $\omegaacf_l(\boldsymbol{x},\boldsymbol{y})=\beta\boldsymbol{x}^T\left[n\beta\int_X\boldsymbol{z}\boldsymbol{z}^Td\mu(\boldsymbol{z})\right]^{l-1}\boldsymbol{y}$, translating Eq. \ref{eq:spd_general_psi} for the survival function into \begin{equation} \label{eq:spd_rdpg} \psiacf_l(\boldsymbol{x}, \boldsymbol{y}) = \exp\left(-\beta\boldsymbol{x}^T\left(\sum_{k=0}^{l-1}\Phi^k\right)\boldsymbol{y}\right), \end{equation} where we use the definition of $\Phi$ from Eq. \ref{eq:def_rdpg_mommat}. Since $\Phi$ is symmetric, let $Q\Lambda Q^T$ be its eigendecomposition, such that the columns of the orthogonal matrix $Q$ encode the eigenvectors and the diagonal matrix $\Lambda$ encodes the corresponding eigenvalues. Let $\widetilde{S}_l(a)$ be the geometric sum of $a$ starting at $1$ up to $l$ terms. Then we obtain from Eq. \ref{eq:spd_rdpg}: \begin{equation} \label{eq:spd_rdpg_eig} \psiacf_l(\boldsymbol{x}, \boldsymbol{y}) = \exp\left(-\beta\boldsymbol{x}^TQ\widetilde{S}_l(\Lambda)Q^T\boldsymbol{y}\right). \end{equation} This is analogous to the expressions obtained via eigendecomposition for the general random graph family in Eq. \ref{eq:spd_analytic_general_eig_uncorrected}, for ER graphs in Eq. \ref{eq:spd_er}, and for SBMs in Eq. \ref{eq:spd_sbm_eig}.
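A short numerical sketch of Eq. \ref{eq:spd_rdpg} follows, in which the moments $\boldsymbol\phi$ and $\Phi$ of Eqs. \ref{eq:def_rdpg_meanvec} and \ref{eq:def_rdpg_mommat} are estimated by Monte Carlo from a hypothetical Dirichlet node density (anticipating the illustrative example below), with $\beta$ calibrated to a target mean degree via $\avg{\degree}=n\beta\boldsymbol{\phi}^T\boldsymbol{\phi}$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n, kdim = 512, 3
alpha = np.array([0.8, 0.8, 2.0])          # Dirichlet concentration

X = rng.dirichlet(alpha, size=100000)      # Monte Carlo samples from mu
phi = X.mean(axis=0)                       # mean vector phi
beta = 4.0 / (n * phi @ phi)               # calibrate <k> = n beta phi.phi
Phi = n * beta * (X.T @ X) / len(X)        # scaled second-moment matrix

def psi_acf(x, y, l):
    # Survival function via the matrix-power sum of the RDPG closed form.
    S, term = np.zeros((kdim, kdim)), np.eye(kdim)
    for _ in range(l):
        S += term
        term = term @ Phi
    return np.exp(-beta * x @ S @ y)

x, y = rng.dirichlet(alpha, size=2)
print(psi_acf(x, y, 4))                    # P(lambda_xy > 4)
\end{verbatim}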
Alternately, consider the expected survival function of the whole network in Eq. \ref{eq:spd_general_psi_agg}. By applying Jensen's inequality \cite{jensen1906fonctions} to Eq. \ref{eq:spd_rdpg}, we obtain a lower bound on $\psiacf(l)$, similar to the one in Eq. \ref{eq:spd_general_psi_agg_bound}, which will be tight if the variance in the survival function over different node pairs is small: \begin{equation} \label{eq:spd_rdpg_network} \psiacf(l) \ge \exp\left(-\beta\boldsymbol{\phi}^T\widetilde{S}_l(\Phi)\boldsymbol{\phi}\right), \end{equation} where we use the definition of $\boldsymbol\phi$ from Eq. \ref{eq:def_rdpg_meanvec}. This permits a description of the network's expected geodesic length entirely in terms of the first and second moments in $X$, which can be especially useful when we have access to just the sample mean and covariance, instead of the full distribution in $X$. \paragraph*{Illustrative example.} We consider $X$ restricted to the $(k-1)$-standard simplex in $\real^k$, and $\mu$ corresponding to the Dirichlet distribution on that simplex given by the concentration vector $\boldsymbol{\alpha}\in[0,\infty)^k$. This represents a node $\boldsymbol{x}\in[0,1]^k$ such that $\sum_ix_i=1$, which allows us to interpret the node's location as the likelihood of belonging to one of $k$ communities located at the corners of the simplex---a continuous analogue of the SBM. Let $\avg{\degree}$ be the mean degree of the network, and $\bar\alpha=\boldsymbol\alpha^T\boldsymbol{u}$; then it can be shown that the approximate closed-form of the conditional PMF of the SPLD is given by: $\omegaacf_l(\boldsymbol{x},\boldsymbol{y})=\frac{\avg{\degree}\bar\alpha^2}{n\left\lVert\boldsymbol{\alpha}\right\rVert^2}\boldsymbol{x}^T\left\{\frac{\avg{\degree}\bar\alpha}{\left\lVert\boldsymbol{\alpha}\right\rVert^2(1+\bar\alpha)}\left[\diag(\boldsymbol{\alpha})+\boldsymbol{\alpha}\boldsymbol{\alpha}^T\right]\right\}^{l-1}\boldsymbol{y}$ (see Appendix \ref{sec:apdx_rdpg}). In Fig. \ref{fig:spl_rdpg}, we plot various node and node pair statistics for a ``Dirichlet RDPG'' with $n=512$, $\avg{\degree}=4$, and $\boldsymbol\alpha=(0.8, 0.8, 2)$, using Eq. \ref{eq:degree_rdpg} for a node's degree, Eq. \ref{eq:gcc_consistency_rdpg_rhox} for a node's percolation probability, and the approximate closed-form of the SPLD in Eq. \ref{eq:spd_rdpg} to compute an analytic estimate of the expected geodesic length between node pairs. In Fig. \ref{fig:spl_emp_vs_ana} we further show that this analytic estimate is in good agreement with empirical estimates. \begin{figure*} \centering \subfloat[Dirichlet RDPG ($k=3$)]{\label{fig:spl_rdpg} \includegraphics[width=\columnwidth]{fig/spd_n512_rdpg.pdf}} \subfloat[Gaussian RGG ($k=2$)]{\label{fig:spl_grgg} \includegraphics[width=\columnwidth]{fig/spd_n512_grgg.pdf}}\\ \subfloat[Max graphon ($k=1$)]{\label{fig:spl_maxg} \includegraphics[width=\columnwidth]{fig/spd_n512_maxg.pdf}} \subfloat[Scale-free graphon ($k=1$)]{\label{fig:spl_sfg} \includegraphics[width=\columnwidth]{fig/spd_n512_sfg.pdf}} \caption{Node and node pair functions for various random graph models considered in Sec. \ref{sec:geodesics_specific}. 
Density function $\mu(x)$ refers to the distribution of nodes in node space $V$, connectivity kernel $\nu(x,y)$ refers to the likelihood of connection between node pairs, degree function $\meandegree(x)$ refers to the expected degree of a node at $x$, percolation function $\rho(x)$ indicates the probability of a node at $x$ being on the giant component, and geodesic function $\avg{\lambda_{xy}}$ refers to the expected shortest path length between node pairs. For one-dimensional models---(c) max graphon ($V=[0,1]$) and (d) scale-free graphon ($V=[0.01,1]$)---node functions ($\mu$, $\rho$, and $\meandegree$) are shown on $V$, while node pair functions ($\nu$ and $\lambda$) and a network sample are shown on $V\times V$. For higher-dimensional models---(a) Dirichlet RDPG (functions shown on the standard 2-simplex) and (b) Gaussian RGG (functions shown on $[-3,3]\times[-3,3]$)---node pair functions are shown between $x$ and the mean in $V$, indicated by $\avg{x}$, which is itself marked by red dashed lines or crosses. Descriptions of the model parameters and the equations used to compute these functions are given in the respective subsections. We note that $\lambda$ is estimated from the approximate closed-form of the SPLD.} \label{fig:spl_models} \end{figure*} \subsection{\label{sec:rgg}Gaussian random geometric graph} An inner-product space is a good abstraction when the space of nodes is latent, but for some real-world spaces equipped with distances over which the likelihood of connection decays---like \emph{spatial} networks \cite{barnett2007spatially}---it is sensible to consider a metric space of nodes. This notion is precisely captured by random geometric graph (RGG) models \cite{penrose2003rgg, penrose2016rgg, dettmann2016rgg}. The metric space usually depends on the nature of the networks being modeled, such as a Euclidean space for communication networks or a hyperbolic space for social networks \cite{krioukov2010hyperbolic, barthelemy2011spatial}. \paragraph*{Definitions.}In this section, we focus on \emph{soft} random geometric graphs, wherein the probability of connection decays smoothly with distance---specifically, we consider a $k$-dimensional Euclidean space $\real^k$ with a squared-exponential decay function \cite{penrose2016rgg}. This is akin to having an ellipsoidal connection ``bubble'' around every node $\boldsymbol{x}\in\real^k$, i.e. \begin{equation} \label{eq:grgg_nu} \nu(\boldsymbol{x},\boldsymbol{y}) = \beta\exp\left(-\frac{1}{2}(\boldsymbol{x}-\boldsymbol{y})^TR^{-1}(\boldsymbol{x}-\boldsymbol{y})\right), \end{equation} where $\beta=\order{n^{-1}}$ is the probability of connecting to a node with identical co-ordinates, and $R$ is a $k\times k$ symmetric positive-definite matrix encoding the scale of connections in this node space. For concreteness, assume a standard multivariate Gaussian distribution of nodes centered at the origin: \begin{equation} \label{eq:grgg_mu} \mu(\boldsymbol{x})=(2\pi)^{-\frac{k}{2}}\exp\left(-\frac{1}{2}\boldsymbol{x}^T\boldsymbol{x}\right). \end{equation} We remark that this formalism extends to a general multivariate Gaussian node distribution through an affine transformation of the node space $V$ and scale matrix $R$ (see Appendix \ref{sec:apdx_grgg}). This is what we refer to as the Gaussian random geometric graph (Gaussian RGG) \cite{garrod2018connectivity}. (Arguably, this should be termed a \emph{doubly} Gaussian RGG, where both the node distribution and the connectivity kernel are Gaussian.) 
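Sampling from this model is direct; the sketch below does so with the parameters of Fig. \ref{fig:spl_grgg}, calibrating $\beta$ to a target mean degree of $4$ by anticipating the determinant formula of Eq. \ref{eq:degree_grgg_mean} derived next.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n, k = 512, 2
R = np.array([[0.08, 0.04],
              [0.04, 0.08]])              # connection scale matrix
I = np.eye(k)

# Calibrate beta so that <k> = 4 (cf. the mean-degree formula below).
norm = np.sqrt(np.linalg.det(I + np.linalg.inv(R)) *
               np.linalg.det(I + np.linalg.inv(I + R)))
beta = 4.0 * norm / n

X = rng.standard_normal((n, k))           # mu: standard Gaussian nodes
D = X[:, None, :] - X[None, :, :]         # pairwise displacements
Q = np.einsum('ija,ab,ijb->ij', D, np.linalg.inv(R), D)
P = beta * np.exp(-0.5 * Q)               # squared-exponential kernel
A = np.triu(rng.uniform(size=(n, n)) < P, k=1)
A = A | A.T
print("empirical mean degree:", A.sum() / n)   # should be near 4
\end{verbatim}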
\paragraph*{Degree and percolation probability.}Let $|\cdot|$ indicate the matrix determinant. It can be shown that the expected degree at location $\boldsymbol{x}$, using Eq. \ref{eq:general_degrees_deg}, is given by a Gaussian curve: \begin{equation} \label{eq:degree_grgg} \meandegree(\boldsymbol{x})=\frac{n\beta}{|I+R^{-1}|^\frac{1}{2}}\exp\left(-\frac{1}{2}\boldsymbol{x}^T(I+R)^{-1}\boldsymbol{x}\right), \end{equation} and correspondingly from Eq. \ref{eq:general_degrees_mean} the average network degree is given by \begin{equation} \label{eq:degree_grgg_mean} \avg{\degree}=n\beta\left(|I+R^{-1}|\cdot|I+(I+R)^{-1}|\right)^{-\frac{1}{2}} \end{equation} (see Appendix \ref{sec:apdx_grgg}). If we assume infinitely large connection scales, i.e. $R^{-1}\to 0$, this results in $\avg{\degree}=n\beta$, which can be seen as the usual ER graph where all spatial structure is lost, since nodes connect to any other node with the same likelihood $\frac{\avg{\degree}}{n}$. We next consider the percolation probability at location $\boldsymbol{x}$. Similar to the discrete approximation used for RDPGs, we can discretize the node space into a fine grid to derive percolation probabilities (see Appendix \ref{sec:apdx_sbm_apx_Rk}). However, here we apply an ansatz that the percolation probability is given by a generalization of the Gaussian function: \begin{equation} \label{eq:rho_grgg} \rho(\boldsymbol{x}) = a\exp\left(-\left(\boldsymbol{x}^TC\boldsymbol{x}\right)^b\right), \end{equation} where $0\le a\le 1$ governs the percolation probability at the origin, $b\ge 1$ controls the shape of the percolation surface, and $C$ is a $k\times k$ symmetric positive semi-definite matrix that indicates the scale of the surface. As shown later in Sec. \ref{sec:percolation}, $C$ commutes with the scale matrix $R$, implying that $C$ and $R$ preserve each other's eigenspaces. Consequently, we need to infer $k$ non-negative eigenvalues of $C$, yielding a total of $k+2$ parameters to fit the percolation surface at grid locations via constrained optimization. We can then use Eq. \ref{eq:rho_grgg} to obtain percolation probabilities at any node location in $\real^k$. \paragraph*{Approximate closed-form of the SPLD.} Using the expression for the conditional PMF in Eq. \ref{eq:spd_general_omega}, we can write the approximate closed-form of the conditional PMF of the SPLD succinctly via a set of recursive coefficients: see Eq. \ref{eq:grgg_omega_ansatz} in Appendix \ref{sec:apdx_grgg}. To make further analytical progress, we consider a special scenario of high ``spatial homophily'', which is the opposite of the ER setting. Here, the spatial embedding contributes a lot, since the connection scales are very small: $R\to 0$. Taking this limit in Eq. \ref{eq:degree_grgg_mean}, the mean degree is given by $\avg{\degree}=n\beta\sqrt{\frac{|R|}{2^k}}$. Using Eq. \ref{eq:spd_general_omega}, the approximate closed-form of the conditional PMF of the SPLD can be written as \begin{equation} \label{eq:spd_grgg_homophily} \omegaacf_l(\boldsymbol{x},\boldsymbol{y}) = \frac{\left(\avg{\degree} 2^\frac{k}{2}\right)^l}{n\sqrt{|lR|}}\exp\left(-\frac{1}{2}(\boldsymbol{x}-\boldsymbol{y})^T(lR)^{-1}(\boldsymbol{x}-\boldsymbol{y})\right) \end{equation} (see Appendix \ref{sec:apdx_grgg}). 
Here, we can interpret $\omegaacf_l(\boldsymbol{x},\boldsymbol{y})\triangleq P(\lambda_{\boldsymbol{x}\boldsymbol{y}}=l|\lambda_{\boldsymbol{x}\boldsymbol{y}}>l-1)$ as a ``shortest path'' connectivity kernel, where a node ``inflates'' its bubble of nearest neighbours, as defined by the scale matrix $R$, by a factor of $l$ to potentially form shortest path connections of length $l$. (For $l=1$, this simply reduces to the connectivity kernel in Eq. \ref{eq:grgg_nu}.) It then follows that the approximate closed-form of the survival function of the SPLD is given by Eq. \ref{eq:spd_general_psi} as $\psiacf_l(\boldsymbol{x},\boldsymbol{y}) = \exp\left(-\sum_{q=1}^l\omegaacf_q(\boldsymbol{x},\boldsymbol{y})\right)$, with, as previously, an interpretation analogous to Eq. \ref{eq:sf_avg_uncorrected} in terms of independent geodesics. In Fig. \ref{fig:spl_grgg}, we plot various node and node pair statistics for a Gaussian RGG with $n=512$, $\avg{\degree}=4$, and scale matrix $R=\big(\begin{smallmatrix} 0.08 & 0.04\\ 0.04 & 0.08 \end{smallmatrix}\big)$, using Eq. \ref{eq:degree_grgg} for a node's degree, Eq. \ref{eq:rho_grgg} for a node's percolation probability, and the approximate closed-form of the SPLD in Eq. \ref{eq:grgg_omega_ansatz} for the analytic estimate of expected geodesic lengths between node pairs. We also show in Fig. \ref{fig:spl_emp_vs_ana} that this estimate is in good agreement with the empirics, with marginally increasing deviations for longer geodesics, likely because the closed-form of the SPLD overestimates probability mass at shorter geodesic lengths. \subsection{Sparse graphons}\label{sec:graphons} In the most general setting, any conditionally independent edge model with a symmetric connectivity kernel can be expressed by considering a sequence of graphs in some continuum limit, called graph functions or graphons \cite{lovasz2006graphon, lovasz2012graphon, orbanz2014graphons}. Typically, the node space for a graphon is restricted to the real interval $[0,1]$, where nodes are distributed according to the standard uniform distribution $\mathcal{U}(0,1)$. The entire burden of modeling edge probabilities is then transferred to the symmetric kernel $W:[0,1]^2\to[0,1]$, referred to as the ``$W$-graphon''. Given their flexibility, these functions can get arbitrarily complex. While $W$-graphons are usually formulated as the dense limit of a graph sequence with $\order{n^2}$ edges \cite{lovasz2006graphon}, here we are interested in the sparse limit with $\order{n}$ edges, also referred to as the ``inhomogeneous random graph model'' \cite{bollobas2011sparsegraphs}. For brevity, throughout this paper we refer to $W:[0,1]^2\to[0,1]$ such that $W=\order{n^{-1}}$ as a ``sparse graphon'', or simply as a ``graphon''. The SPLD framework of Sec. \ref{sec:general_graphs} translates immediately to sparse graphons, with the added simplicity, for numerical integration, that $W$ is symmetric and $x\sim\mathcal{U}(0,1)$. \paragraph*{Illustrative example: max graphon.}As an example, consider a sparse version of the ``max graphon'', which arises as the limit of a uniform attachment process \cite{borgs2011maxgraphon, klimm2021modularity}, given by $W(x,y)=\beta(1-\max(x,y))$, where $\beta>0$ and $\beta=\order{n^{-1}}$. In Fig. \ref{fig:spl_maxg} we show various node and node pair functions for this graphon with $n=512$, $n\beta=8$, wherein the expected degree, percolation probability, and expected geodesic lengths are all obtained via numerical integration of Eqs. 
\ref{eq:general_degrees_deg}, \ref{eq:gcc_consistency_general} and \ref{eq:spd_analytic_general_eig_uncorrected} respectively. In Fig. \ref{fig:spl_emp_vs_ana}, we show that the empirical and analytic estimates of the geodesic lengths are in good agreement, except for longer path lengths---likely due to the vanishingly low percolation probabilities of the tail-end of the node space as $x\to 1$. As previously noted, we can discretize any continuous graph model at a chosen scale to obtain an equivalent SBM representation of it---see Appendix \ref{sec:apdx_mle_sbm_apx_R} for a discussion on discretizing graphons in particular. In Fig. \ref{fig:spl_emp_vs_ana}, we also include empirics and analytics for the SBM corresponding to max graphons, which agree well with results obtained via numerical integration. \paragraph*{Sparse multiplicative graphons.} We next consider a scenario where the formalism simplifies further. Let $f:[0,1]\to[0,1]$ be a function such that $f=\order{n^{-\frac{1}{2}}}$. Then, we define a sparse multiplicative graphon $W_\times(x,y)$ to be one that factorizes as the product of this function applied to each node separately: \begin{equation} \label{eq:def_mult_graphon} W_\times(x,y)\triangleq f(x)f(y). \end{equation} We remark that an asymmetric and directed version of this graphon can be obtained by considering two separate functions $f$ and $g$, but we restrict our discussion here to symmetric multiplicative graphons. \paragraph*{Degree and percolation probability.} To derive network properties, it will be useful to define two statistics for the multiplicative graphon: \begin{subequations} \label{eq:mult_graphon_stats} \begin{align} \label{eq:mult_graphon_stats_zeta} \zeta&\triangleq\int_0^1f(x)dx,\\ \label{eq:mult_graphon_stats_eta} \eta&\triangleq n\int_0^1f(x)^2dx, \end{align} \end{subequations} which are indicative of the first and second moments of $f(x)$. From Eqs. \ref{eq:general_degrees_deg} and \ref{eq:general_degrees_mean}, and using the definition in Eq. \ref{eq:mult_graphon_stats_zeta}, the expected degree at $x$ and expected network degree are given by \begin{subequations} \label{eq:degree_r1g_overall} \begin{align} \label{eq:degree_r1g} \meandegree(x) &= nf(x)\int_0^1f(y)dy=n\zeta f(x),\\ \label{eq:degree_net_r1g} \avg{\degree}&=n\zeta\int_0^1f(x)dx=n\zeta^2. \end{align} \end{subequations} From Eq. \ref{eq:degree_r1g}, it is evident that $f(x)\propto \meandegree(x)$ encodes the expected degree at $x$, rendering multiplicative graphons equivalent to canonical degree-configuration models that lack any modularity structure \cite{klimm2021modularity}. However, they can still capture degree-related properties of real-world graphs, such as the power law degree distributions of ``scale-free'' networks. Using Eq. \ref{eq:degree_r1g_overall}, we can rewrite $f(x)$, $\zeta$ and $\eta$ from Eq. \ref{eq:mult_graphon_stats} in terms of degree statistics as: \begin{subequations} \label{eq:mult_graphon_stats_deg} \begin{align} \label{eq:mult_graphon_foo_deg} f(x)&=\frac{\meandegree(x)}{\sqrt{n\avg{\degree}}},\\ \label{eq:mult_graphon_stats_zeta_deg} \zeta&=\sqrt{\frac{\avg{\degree}}{n}},\\ \label{eq:mult_graphon_stats_eta_deg_2} \eta&=\frac{\expect[\mu]{\meandegree(x)^2}}{\avg{\degree}}. \end{align} \end{subequations} Recall from Eq. \ref{eq:degree_distribution} that the degree at $x$ is Poisson distributed with rate $\meandegree(x)$.
Then the second moment of the degree distribution is given by the law of total expectation: \begin{equation} \label{eq:degree_distribution_second_moment} \begin{split} &\avg{\degree^2}=\expect[\mu]{\avg{\degree(x)^2}}=\expect[\mu]{\meandegree(x)^2+\meandegree(x)}\\ \implies&\expect[\mu]{\meandegree(x)^2}=\avg{\degree^2}-\avg{\degree}, \end{split} \end{equation} where we apply the definition of mean degree from Eq. \ref{eq:general_degrees_mean}. Combining Eqs. \ref{eq:degree_distribution_second_moment} and \ref{eq:mult_graphon_stats_eta_deg_2} yields $\eta$ in terms of the first and second moments of the degree distribution: \begin{equation} \label{eq:mult_graphon_stats_eta_deg} \eta=\frac{\avg{\degree^2}}{\avg{\degree}}-1. \end{equation} We next consider the percolation probability at $x$. Defining $\rho\triangleq\int_0^1f(x)\rho(x)dx$, we obtain from the percolation self-consistency Eq. \ref{eq:gcc_consistency_general} and Eq. \ref{eq:mult_graphon_stats_zeta}: \begin{subequations} \label{eq:gcc_consistency_r1g} \begin{align} \label{eq:gcc_consistency_r1g_rhox} \begin{split} \rho(x)&=1-\exp\left(-n\int_0^1f(x)f(y)\rho(y)dy\right),\\ &=1-\exp\left(-n\rho f(x)\right), \end{split}\\ \label{eq:gcc_consistency_r1g_rho} \begin{split} \int_0^1f(x)\rho(x)dx&= \int_0^1f(x)\left[1-\exp\left(-n\rho f(x)\right)\right]dx,\\ \implies\rho&=\zeta-\int_0^1f(x)\exp\left(-n\rho f(x)\right)dx. \end{split} \end{align} \end{subequations} Eq. \ref{eq:gcc_consistency_r1g_rho} is a scalar self-consistency equation for $\rho$; once it is solved, Eq. \ref{eq:gcc_consistency_r1g_rhox} directly yields the percolation probability at any node location. \paragraph*{Approximate closed-form of the SPLD.} Exploiting the multiplicative nature of the kernel in Eq. \ref{eq:def_mult_graphon}, we obtain the conditional PMF $\omegaacf_l(x, y)$ and survival function $\psiacf_l(x,y)$ of the approximate closed-form SPLD from Eqs. \ref{eq:spd_general_omega}, \ref{eq:spd_general_psi} as \begin{subequations} \label{eq:spd_r1g_main} \begin{align} \label{eq:spd_r1g_omega} \omegaacf_l(x, y) &= \eta^{l-1}f(x)f(y)=\eta^{l-1}\frac{\meandegree(x)\meandegree(y)}{n\avg{\degree}},\\ \label{eq:spd_r1g_psi} \psiacf_l(x, y) &= \begin{cases} \exp\left(-\frac{\eta^l-1}{\eta-1}\frac{\meandegree(x)\meandegree(y)}{n\avg{\degree}}\right) &\mbox{if }\eta\ne 1,\\ \exp\left(-l\frac{\meandegree(x)\meandegree(y)}{n\avg{\degree}}\right) &\mbox{otherwise,} \end{cases} \end{align} \end{subequations} where we express $f(x)$ in terms of degree using Eq. \ref{eq:mult_graphon_foo_deg}. From Eq. \ref{eq:spd_r1g_psi} we note that the distribution of shortest path length between two nodes in a degree-configuration model is encoded by the product of their expected degrees. We also observe that a larger variance in the degree distribution yields a larger value of $\eta$ from Eq. \ref{eq:mult_graphon_stats_eta_deg}, and therefore shorter geodesic lengths \cite{vanderhofstad2005distanceconfigmodel}. If we consider the expected survival function of the SPLD for the whole network $\psiacf(l)$, as defined in Eq. \ref{eq:spd_general_psi_agg}, then applying Jensen's inequality \cite{jensen1906fonctions} to Eq. \ref{eq:spd_r1g_psi} yields a lower bound on $\psiacf(l)$, analogous to the bounds in Eqs.
\ref{eq:spd_general_psi_agg_bound}, \ref{eq:spd_rdpg_network}: \begin{equation} \label{eq:spd_r1g_network} \psiacf(l) \ge \begin{cases} \exp\left(-\frac{\avg{\degree}(\eta^l-1)}{n(\eta-1)}\right) &\mbox{if }\eta\ne 1,\\ \exp\left(-l\frac{\avg{\degree}}{n}\right) &\mbox{otherwise,} \end{cases} \end{equation} where we have used the definition of mean degree in Eq. \ref{eq:general_degrees_mean}. Together, Eqs. \ref{eq:spd_r1g_network} and \ref{eq:mult_graphon_stats_eta_deg} provide a bound for the SPLD in a degree-configuration model entirely in terms of the first and second moments of the degree distribution. This bound is tight when the variance in the survival function across node pairs is small. For instance, in an ER graph where every node has the same expected degree $\avg{\degree}$, there is no variance in the survival function across node pairs. Each node has a Poisson degree distribution yielding $\eta=\avg{\degree}$ from Eq. \ref{eq:mult_graphon_stats_eta_deg}, which substituted in Eq. \ref{eq:spd_r1g_network} leads precisely to the expression we previously obtained in Eq. \ref{eq:spd_er}. \paragraph*{Illustrative example: random regular graphs.} Since the SPLD depends only on the first two moments of the degree distribution, we can consider an extreme case where the degree distribution has zero variance: the example of random $\degree$-regular graphs, wherein every node has the same degree $\degree$, and $\degree\in\integerpos$ such that $n\degree$ is even, but connections are otherwise random between node pairs \cite{bollobas2001random}. Evidently, the degree constraint on random \emph{regular} graphs prohibits conditionally independent edges, whereas the framework used to derive Eq. \ref{eq:spd_r1g_psi} assumes a conditionally independent edge model. Remarkably, since Eq. \ref{eq:spd_r1g_psi} is based only on degree moments, we can still derive an SPLD for random regular graphs. Because every node has the same degree $\degree$, we have \begin{subequations} \label{eq:def_random_reg_graph} \begin{align} &\forall x\in V: \meandegree(x)=\avg{\degree}=\degree,\\ &\avg{\degree^2}=\degree^2. \end{align} \end{subequations} Then Eqs. \ref{eq:mult_graphon_stats_eta_deg} and \ref{eq:def_random_reg_graph} yield $\eta=\degree-1$, and we can rewrite Eq. \ref{eq:spd_r1g_psi} as: \begin{equation} \label{eq:spd_randomregular_psi} \psiacf_l(x, y) = \begin{cases} \exp\left(-\frac{\degree[(\degree-1)^l-1]}{n(\degree-2)}\right) &\mbox{if }\degree\ne 2,\\ \exp\left(-\frac{2l}{n}\right) &\mbox{otherwise.} \end{cases} \end{equation} It is worth analyzing Eq. \ref{eq:spd_randomregular_psi} when $\degree=1$: every node in the random regular graph is attached to exactly one other node, thus the graph is composed of $n/2$ disconnected edges and does not have a giant component. Picking a node at random, the likelihood that another random node is directly connected to it is asymptotically $n^{-1}$, and the probability mass at other shortest path lengths is zero, yielding the survival function of the SPLD as $1-n^{-1}$ for any $l\ge 1$. This is precisely what we obtain from Eq. \ref{eq:spd_randomregular_psi} by setting $\degree=1$ and applying a first-order approximation to the exponential. For $\degree=2$, the network is composed of one or more disconnected cycles. Picking a node at random on a cycle of asymptotically large length, there are exactly $2$ nodes at a distance of $l$ from it, on either side.
Thus, the probability mass at length $l$ is asymptotically $2n^{-1}$, yielding the survival function of the SPLD as $1-2ln^{-1}$. As before, we obtain this expression by applying a first-order approximation to the exponential in Eq. \ref{eq:spd_randomregular_psi}. In Fig. \ref{fig:spd_randomregular}, we show that Eq. \ref{eq:spd_randomregular_psi} is a very good approximation of the SPLD for other degrees too. \begin{figure}[h] \centering \includegraphics[width=\columnwidth]{fig/spd_randomregular_n1024.pdf} \caption{Empirical and approximate closed-form cumulative distribution functions (CDF) of shortest path lengths for a random $\degree$-regular graph. Network size is fixed at $n=1024$, while degree varies as $\degree\in\{2, 3, 4, 8, 16, 32\}$. Solid line indicates the approximate closed-form solution (Eq. \ref{eq:spd_randomregular_psi}). Symbols and bars indicate empirical estimates: mean and standard error over 10 network samples. (The variation over samples is negligible.) For $\degree=2$, the network is at the phase transition (see Sec. \ref{sec:perc_rank1}), and symbols are shown at every fifth geodesic length for clarity.} \label{fig:spd_randomregular} \end{figure} \paragraph*{Illustrative example: scale-free networks.} We next consider the other extreme, where the degree distribution has infinite variance. A typical example is that of ``scale-free'' networks, whose nodes follow a power law degree distribution. Although scale-free networks are usually generated by a dynamic process like preferential attachment \cite{albert2002networks, barabasi1999bagraph}, here we consider a \emph{static} version based on vertex-fitness that permits a model with conditionally independent edges, similar to the stochastic fitness model in Ref. \cite{caldarelli2002scale}. We define a multiplicative scale-free graphon with $f(x)=\sqrt{\beta}\left(\frac{x}{h}\right)^{-\alpha}$, i.e. $W_\times(x,y)=\beta\left(\frac{xy}{h^2}\right)^{-\alpha}$, for some scalars $0<\alpha\le 1$, $0< h \ll 1$, $\beta>0$, with the node space restricted to the real interval $[h,1]$, i.e. $\mu(x)=\frac{1}{1-h}$ if $x\in[h,1]$ and $0$ otherwise---a minor modification to the usual assumption of a standard uniform distribution. Here, $\alpha$ controls the exponent of the power law governing the degree distribution: using Eq. \ref{eq:degree_r1g}, the expected degree at $x$ is $\meandegree(x)\propto x^{-\alpha}$, which implies a degree distribution $P(\degree)\propto \degree^{-\theta}$, where $\theta\triangleq 1+\frac{1}{\alpha}$. (The distribution is not a \emph{pure} power law, and might show departures from usually studied scale-free graphs in the small $\theta$ regime, as shown in the derivation of the degree distribution in Eq. \ref{eq:sfg_degree_true} in Appendix \ref{sec:apdx_graphons}.) The definition of $\eta$ in Eq. \ref{eq:mult_graphon_stats_eta} yields: \begin{equation} \label{eq:sfg_eta} \eta = \begin{cases} n\beta\frac{h\log h^{-1}}{1-h} &\mbox{if }\alpha=1/2,\\ n\beta\frac{h^{2\alpha}-h}{(1-h)(1-2\alpha)} &\mbox{otherwise,} \end{cases} \end{equation} and that of $\zeta$ in Eq. \ref{eq:mult_graphon_stats_zeta} yields: \begin{equation} \label{eq:sfg_gamma} \zeta=\begin{cases} \sqrt{\beta}\frac{h\log h^{-1}}{1-h} &\mbox{if }\alpha=1,\\ \sqrt{\beta}\frac{h^{\alpha}-h}{(1-h)(1-\alpha)} &\mbox{otherwise}, \end{cases} \end{equation} which can be inserted into Eqs. \ref{eq:degree_r1g_overall}, \ref{eq:spd_r1g_psi} to obtain the approximate closed-form of the survival function of the SPLD in scale-free graphons.
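As a minimal numerical sketch (in Python, with assumed parameter values; variable names are ours and hypothetical), the following snippet assembles $\zeta$ and $\eta$ from Eqs. \ref{eq:sfg_gamma}, \ref{eq:sfg_eta} and evaluates the survival function of Eq. \ref{eq:spd_r1g_psi} for a pair of node locations:
\begin{verbatim}
import numpy as np

def zeta(alpha, beta, h):
    """First-moment statistic of f, Eq. (sfg_gamma)."""
    if np.isclose(alpha, 1.0):
        return np.sqrt(beta) * h * np.log(1 / h) / (1 - h)
    return np.sqrt(beta) * (h**alpha - h) / ((1 - h) * (1 - alpha))

def eta(alpha, beta, h, n):
    """Second-moment statistic of f, Eq. (sfg_eta)."""
    if np.isclose(alpha, 0.5):
        return n * beta * h * np.log(1 / h) / (1 - h)
    return n * beta * (h**(2 * alpha) - h) / ((1 - h) * (1 - 2 * alpha))

def survival(l, kx, ky, n, mean_deg, e):
    """Approximate closed-form survival function of the SPLD,
    Eq. (spd_r1g_psi), for expected degrees kx and ky."""
    if np.isclose(e, 1.0):
        return np.exp(-l * kx * ky / (n * mean_deg))
    return np.exp(-(e**l - 1) / (e - 1) * kx * ky / (n * mean_deg))

# BA-like case (alpha = 1/2) with n = 512, beta = 1 and <d> = 4,
# taking h = <d>/(4 n beta), anticipating Eq. (sfg_h) below
n, beta, alpha = 512, 1.0, 0.5
h = 4 / (4 * n * beta)
z, e = zeta(alpha, beta, h), eta(alpha, beta, h, n)
mean_deg = n * z**2                              # Eq. (degree_net_r1g)
f = lambda x: np.sqrt(beta) * (x / h)**(-alpha)  # node function
kx, ky = n * z * f(0.1), n * z * f(0.5)          # Eq. (degree_r1g)
print(mean_deg, e)
print([survival(l, kx, ky, n, mean_deg, e) for l in range(1, 6)])
\end{verbatim}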
We illustrate with three special cases based on the power law exponent of the degree distribution: (1) an ER graph (where the power law exponent $\theta\to\infty\implies\alpha\to 0$), (2) a BA scale-free graphon (where the power law exponent $\theta=3\implies\alpha=\frac{1}{2}$ \cite{barabasi1999bagraph}), and (3) a ``highly scale-free'' graphon (for which $\theta=2\implies\alpha=1$). We express the degree-controlling parameter $h$ in terms of the mean degree $\avg{\degree}$ (see Appendix \ref{sec:apdx_sfg_spl}). Assuming $h\ll 1$, from Eqs. \ref{eq:degree_net_r1g}, \ref{eq:sfg_eta}, \ref{eq:sfg_gamma} we obtain: \begin{equation} \label{eq:sfg_h} h = \begin{cases} \frac{\avg{\degree}}{4n\beta} &\mbox{if }\alpha=\frac{1}{2},\\ \left(\sqrt{\frac{n\beta}{\avg{\degree}}}\log\sqrt{\frac{n\beta}{\avg{\degree}}}\right)^{-1} &\mbox{if }\alpha=1, \end{cases} \end{equation} and for the ER graph ($\alpha=0$) Eqs. \ref{eq:degree_net_r1g}, \ref{eq:sfg_eta} yield: \begin{equation} \label{eq:sfg_er_degree} \avg{\degree}=n\beta. \end{equation} Assuming $h\ll 1$, we obtain from Eqs. \ref{eq:sfg_eta}, \ref{eq:sfg_h}, \ref{eq:sfg_er_degree}: \begin{equation} \label{eq:sfg_eta_} \eta = \begin{cases} \avg{\degree} &\mbox{if }\alpha=0,\\ \frac{\avg{\degree}}{4}\log\left(\frac{4n\beta}{\avg{\degree}}\right) &\mbox{if }\alpha=\frac{1}{2},\\ \frac{\sqrt{n\beta \avg{\degree}}}{\log \sqrt{\frac{n\beta}{\avg{\degree}}}} &\mbox{if }\alpha=1. \end{cases} \end{equation} From Eq. \ref{eq:sfg_eta_} we note that assuming sparsity---of the form $f(x)=\order{n^{-\frac{1}{2}}}\implies\beta=\order{n^{-1}}$---yields asymptotically bounded values of $\eta$, and thus of the degree variance, for the BA and highly scale-free graphons, unlike typical scale-free networks. Therefore, we do not assume sparsity here and set $\beta=1$, while maintaining finite mean degree $\avg{\degree}$, which yields asymptotically unbounded $\eta$ and degree variance, and permits a check of our formalism when sparsity is not enforced. In Fig. \ref{fig:spl_sfg} we show various node and node pair functions for the BA scale-free graphon with $n=512$, $\avg{\degree}=4$ and $\alpha=1/2$. The expected degree, percolation probability, and expected geodesic lengths are obtained via Eqs. \ref{eq:degree_r1g}, \ref{eq:gcc_consistency_r1g} and \ref{eq:spd_r1g_psi} respectively. In Fig. \ref{fig:spl_emp_vs_ana}, we show that the empirical and analytic estimates of expected geodesic lengths are in good agreement for this scale-free graphon, and its discretized SBM counterpart. \begin{figure} \centering \includegraphics[width=\columnwidth]{fig/spd_n512_emp_vs_ana_stddev.pdf} \caption{Estimates of geodesic lengths from the approximate closed-form of the SPLD (on the $x$-axis) agree with empirical estimates (on the $y$-axis) for the random graph models considered in Sec. \ref{sec:geodesics_specific}. For each model, 10 network samples were generated. For each network sample, the empirical geodesic length and the one derived from their respective closed-forms were computed for every node pair. To prevent clutter, the range of closed-form estimates so derived was divided into 20 equal bins, and each node pair of each network sample was placed in the corresponding bin. Then, the mean and standard deviation of closed-form and empirical estimates for each bin were calculated, per network sample. Consequently, each symbol $\circ$ indicates the mean over the 10 samples, with error bars indicating the expected standard deviation.
} \label{fig:spl_emp_vs_ana} \end{figure} \subsection{Summary of results on the SPLD} In Sec. \ref{sec:geodesics_specific}, we have shown how geodesic statistics can be extracted for sparse versions of many popular random graph models, and of empirical networks when coarsened into SBMs. The approximate closed-form of the SPLD reveals further insight for the graph models considered: we showed for \begin{enumerate} \item SBMs that the survival function is expressed via an eigendecomposition of the block matrix (Eq. \ref{eq:spd_sbm_eig}). \item RDPGs that the mean vector and covariance matrix of the node distribution can specify a lower bound of the survival function for geodesics between a random node pair (Eq. \ref{eq:spd_rdpg_network}). \item Gaussian RGGs with high spatial homophily, that the conditional PMF at length $l$ can be interpreted as a ``shortest-path'' connectivity kernel when changing the connection scales by a factor of $l$ (Eq. \ref{eq:spd_grgg_homophily}). \item multiplicative graphons (equivalent to canonical degree-configuration models), where the connectivity kernel of a node pair is a product of node functions, that the product of expected degrees of the source and target nodes specifies the survival function (Eq. \ref{eq:spd_r1g_psi}), and the survival function of the geodesic length between a random node pair can be bounded from below using the first and second moments of the degree distribution (Eqs. \ref{eq:spd_r1g_network}, \ref{eq:mult_graphon_stats_eta_deg}). \end{enumerate} \section{\label{sec:percolation}Size and existence of giant component} Once the distribution of geodesic lengths is established, a suite of network properties can be inferred. To our knowledge, there has not yet been an approach that explicitly relates the SPLD to percolation behaviour: here, we draw this direct connection. We show how to estimate the size of the giant component, and obtain the bond percolation threshold which determines whether a giant component exists in the network, using only the distribution of shortest path lengths. \paragraph*{Size of the giant component.} For an undirected network with $n$ nodes, the giant component refers to the largest connected component in the graph and scales in size as $\order{n}$. As before, let $\phi_i$ be the event that node $i$ is on the giant component of the network; then the expected number of nodes on the giant component, denoted by $n_{gc}$, is given by $n_{gc}=\sum_iP(\phi_i).$ This generalizes to any graph model by replacing the sum with $n$ times the expectation of the percolation probability over the node space: \begin{equation}\label{eq:gcc_size_general} n_{gc}=n\int_V\rho(x)d\mu(x), \end{equation} which can be solved using the self-consistency Eq. \ref{eq:gcc_consistency_general} for the percolation probability. This is evident in Fig. \ref{fig:percolation_thresholds}, where we plot the analytic mean percolation probability and empirical proportion of nodes on the giant component for three different models in various parameter regimes. However, as described in Sec. \ref{sec:perc_prob}, the SPLD is sufficient to estimate percolation probabilities using Eqs. \ref{eq:perc_prob_limit}, \ref{eq:perc_prob_pinf}. Let $x$ and $y$ indicate locations of two nodes in the network such that the node at $y$ is on the giant component. The steady state of Eq. \ref{eq:spd_main_general_psi} for the survival function of the SPLD between nodes at $y$ and $x$ is indicative of $\rho(x)$, which from Eq.
\ref{eq:perc_prob_pinf} results in: \begin{equation}\label{eq:gcc_size_general_psi} n_{gc}=n\left[1-\int_V\psiaf_\infty(y,x)d\mu(x)\right], \end{equation} where we define \begin{equation} \label{eq:def_psi_inf_general} \begin{split} \psiaf_{\infty}(x,y)&\triangleq P(\lambda_{xy}=\infty|\phi_x)\\&=\lim_{l\to\infty}P(\lambda_{xy}>l|\phi_x)=\lim_{l\to\infty}\psiaf_l(x,y), \end{split} \end{equation} which can be computed as the limit value of $\psiaf_l$. (Since nodes are completely identified by their location, with a slight abuse of notation we use $\lambda_{xy}$ to refer to the geodesic length between nodes at $x,y$, and $\phi_x$ to indicate the node at $x$ being on the giant component.) Note that although the RHS of Eq. \ref{eq:gcc_size_general_psi} appears to depend on the source node at $y$, the limit value $\psiaf_\infty(y,x)$ should be independent of it, and only depend on the target node at $x$. Computationally, we do observe close concordance in the limit values regardless of source nodes---as indicated in Fig. \ref{fig:bipartite_cdf} for an SBM---but we can take an expectation of Eq. \ref{eq:gcc_size_general_psi} over $y$ for a more robust computation. In practice, the limit is easy to compute as $\psiaf_l$ saturates quickly with $l$ for a wide range of models. We note that in directed networks, the limiting value of the survival function of the SPLD will similarly yield the size of a node's out-component (see Appendix \ref{sec:spd_directed} for details). \begin{figure} \centering \includegraphics[width=\columnwidth]{fig/perc_sbm_grgg_sfg_new.pdf} \caption{Percolation thresholds for (a) a 2-block SBM with block matrix $B=\big(\begin{smallmatrix}c_{\mathrm{in}} & c_{\mathrm{out}}\\c_{\mathrm{out}} & c_{\mathrm{in}} \end{smallmatrix}\big)$ for different values of the proportion of minority community $\pi$, computed from Eq. \ref{eq:percolation_sbm}, (b) a 2-dimensional Gaussian RGG with scale matrix $R=\Big(\begin{smallmatrix}r_1^{-1} & 0\\0 & r_2^{-1}\end{smallmatrix}\Big)$ for different probabilities of connecting to a node with identical co-ordinates $\beta$, computed from Eq. \ref{eq:percolation_grgg}, and (c) a scale-free graphon with mean degree $\avg{\degree}$ and network size $n$ for different values of the exponent $\alpha$ where the power law exponent is given by $\theta\triangleq 1+\frac{1}{\alpha}$, computed by setting $\eta=1$ from Eq. \ref{eq:eigval_mul_graphon}. The middle column shows the threshold (in red; also marked in black in the corresponding left subplot) alongside the variation in mean percolation probability (in blue) for given values of $\pi,\beta,\alpha$ respectively---estimated by taking the expectation of Eqs. \ref{eq:gcc_consistency_sbm}, \ref{eq:rho_grgg} and \ref{eq:gcc_consistency_r1g_rhox} over their respective node spaces. For the BA graphon with $\alpha=0.5$, the solid and dotted red lines indicate the exact and asymptotic conditions from Eqs. \ref{eq:perc_bagraphon} and \ref{eq:perc_bagraphon_inf}, respectively. Similarly, the right column shows the empirical proportion of nodes on the giant component---using $1$ sample of the model per parameter tuple given by the $x$ and $y$ axes. Evidently, parameter regions indicated by the percolation threshold coincide with those having vanishing mean percolation likelihood $\avg{\rho}$ and a vanishing proportion of nodes on the giant component.
We remark that the empirical contour plots of all models, and the analytic contour plot for the Gaussian RGG, have been smoothed with a Gaussian filter with unit standard deviation for visual clarity.} \label{fig:percolation_thresholds} \end{figure} \paragraph*{Bond percolation threshold.} There have been a variety of approaches in the literature to find the percolation threshold, which indicates whether a giant component exists in the network. For a graph with a given degree sequence, Ref. \cite{molloy1995critical} established the criterion that a giant component exists when nodes are expected to have more neighbours-of-neighbours than neighbours. For inhomogeneous graphs with hidden colors, which are equivalent to SBMs, Refs. \cite{soderberg2002general, soderberg2003properties} use a branching process to ``reveal'' the giant component starting from a source node, and locate the percolation threshold where the trivial solution becomes unstable and the process yields infinite trees, giving the criterion that the largest eigenvalue of a relevant matrix exceeds unity. For graphs with symmetric kernels more generally, Ref. \cite{bollobas2007phase} uses a similar strategy to establish that percolation occurs when the norm of the integral operator related to the connectivity kernel is greater than unity. More recent work on sparse empirical networks \cite{karrer2014percolationsparse} has developed a message passing scheme to determine that the percolation threshold is given by the inverse of the largest eigenvalue of the Hashimoto or nonbacktracking matrix \cite{hashimoto1989nonbacktracking}. In this section, we contribute to the understanding of percolation in graphs---both undirected and directed---generated by sparse connectivity kernels---both symmetric and asymmetric---in the asymptotic limit, by making use of the closed-form bound of the SPLD. We first establish an equivalence between non-existence of the giant component and the closed-form of the conditional PMF of the SPLD. Then, we show that this condition is further equivalent to the spectral radius (the largest absolute eigenvalue) of the integral operator $T$ (which, as defined by Eq. \ref{eq:integral_op}, is analogous to $\avg{A}$) being less than unity. Finally, we derive percolation thresholds for two classes of random graph models in Sec. \ref{sec:perc_rank1} and Sec. \ref{sec:perc_higher_rank}. \begin{theorem}[Geodesic condition for percolation]\label{lemma:perc_thresh} Consider a network with $n$ nodes in $V$. Let $\omegacf_l(x,y;n)$ be the closed-form of the conditional PMF of the SPLD between nodes at $x,y\in V$, as given by Eq. \ref{eq:spd_general_omega}, where we make explicit the dependence on network size $n$. Then in the asymptotic limit, a giant component does not exist if and only if $\forall x,y\in V:\displaystyle{\lim_{n\to\infty}}\limsup_{l\to\infty}\omegacf_l(x,y;n)=0$. \end{theorem} A proof is enclosed in Appendix \ref{sec:apdx_perc_phenomena}. In essence, Theorem \ref{lemma:perc_thresh} states that the network percolates iff the support of the closed-form of the conditional PMF of the SPLD, defined in Eq. \ref{eq:spd_general_omega}, between a set of nodes (of non-zero measure) is unbounded in the geodesic length. This is in analogy to the study of percolation by setting up an equivalent branching process, and noting that in the supercritical regime the process yields infinite-size trees \cite{bollobas2007phase}. From Eq.
\ref{eq:spd_analytic_general_eig_omega}, we can express the conditional PMF of the SPLD for a symmetric connectivity kernel as a function of the eigenvalues of the integral operator defined in Eq. \ref{eq:integral_op}. This leads to a more useful condition for percolation. \begin{theorem}[Spectral condition for percolation]\label{lemma:spectral_condition} Let $T$ be the integral operator related to a symmetric connectivity kernel, as defined by Eq. \ref{eq:integral_op}, and $r(T)$ be its spectral radius, i.e. the largest absolute value of its eigenvalues. Then, the network has a giant component if and only if \begin{equation} \label{eq:spectral_condition} r(T)>1, \end{equation} with the phase transition at $r(T)=1$.\end{theorem} A proof is enclosed in Appendix \ref{sec:apdx_perc_phenomena}. While Theorem \ref{lemma:spectral_condition} makes use of the symmetry of the connectivity kernel, this result generalizes to asymmetric connectivity kernels, which can lead to more interesting versions of directed graphs. That is, a directed network generated by an asymmetric kernel has a giant in-/out-component iff $r(T)>1$---see Theorem \ref{lemma:spectral_asym} in Appendix \ref{sec:apdx_asymmetric}. Given the spectral condition, we can derive the bond percolation threshold as $r(T)^{-1}$. The spectral radius of $T$ (as defined in Eq. \ref{eq:integral_op}) can be solved for using the eigenvalue equation: \begin{equation} \label{eq:eigenvalue} Tf = \tau f, \end{equation} where $f\in F$ is an eigenfunction in an appropriate function space $F$, and $\tau$ the corresponding eigenvalue. In the following sections, we show what this general condition for percolation entails for specific graph models, with results on the bond percolation threshold summarized in Tab. \ref{tab:results}. \begin{table} \caption{\label{tab:results}Bond percolation threshold for different random graph models considered in the text: $n$ refers to the number of nodes, $g$ refers to a node function which can be a scalar ($g$), vector ($\boldsymbol{g}$ of appropriate length) or otherwise ($g(x)$ or $g(\boldsymbol{x})$) depending on the node space $V$. $T$ refers to the integral operator in Eq. \ref{eq:integral_op}, $r(\cdot)$ indicates the spectral radius, and the bond percolation threshold is given by $r(T)^{-1}$ from Theorem \ref{lemma:spectral_condition}.} \begin{ruledtabular} \begin{tabular}{lcc} Model & $(Tg)(x)$ & $r(T)$ \\ \hline ER graph\footnotemark[1] & $\avg{\degree} g$ & $\avg{\degree}$ \\ SBM\footnotemark[2] & $B\Pi\boldsymbol{g}$ & $r\left(B\Pi\right)$ \\ $2$-block SBM\footnotemark[3] & $\Big(\begin{smallmatrix}c_\mathrm{in}\pi & c_\mathrm{out}(1-\pi)\\c_\mathrm{out}\pi & c_\mathrm{in}(1-\pi) \end{smallmatrix}\Big)\boldsymbol{g}$ & $\delta\left(c_\mathrm{out}^2-c_\mathrm{in}^2\right)+c_\mathrm{in}$ \\\hline Ensemble avg.
& $\avg{A}\boldsymbol{g}$ & $r\left(\avg{A}\right)$ \\ \makecell[l]{Directed rank-\\$1$ ensemble\footnotemark[1]\footnotemark[4]} & $\frac{1}{n}\boldsymbol{a}\boldsymbol{b}^T\boldsymbol{g}$ & $\frac{\avg{\degreein\degreeout}}{\avg{\degree}}$ \\\hline RDPG\footnotemark[5]\footnotemark[6] & $n\beta\int_X\boldsymbol{x}^T\boldsymbol{y}g(\boldsymbol{y})d\mu$ & $r\left(\Phi\right)$\\\hline \makecell[l]{Gaussian\\RGG\footnotemark[5]\footnotemark[7]\footnotemark[8]} & $n\int_{\real^k}\nu\left(\boldsymbol{x},\boldsymbol{y};R\right)g(\boldsymbol{y})d\mu$ & $\frac{n\beta}{\prod_{i=1}^k\sqrt{\tau_i+\frac{\sqrt{1+4\tau_i}+1}{2}}}$\\ \makecell[l]{Unit-scale Gau-\\ssian RGG\footnotemark[5]\footnotemark[7]\footnotemark[9]} & $n\int_{\real^k}\nu\left(\boldsymbol{x},\boldsymbol{y};I\right)g(\boldsymbol{y})d\mu$ & $n\beta\varphi^{-k}$\\\hline \makecell[l]{Multiplicative\\graphon\footnotemark[10]} & $nf(x)\int_0^1f(y)g(y)dy$ & $n\int_0^1f(x)^2dx$ \\ \makecell[l]{Degree-config.\\graphon\footnotemark[1]\footnotemark[11]} & $\frac{\meandegree(x)}{\avg{\degree}}\int_0^1\meandegree(y)g(y)dy$ & $\frac{\avg{\degree^2}}{\avg{\degree}}-1$\\ BA graphon\footnotemark[1]\footnotemark[12] & $\frac{nh}{\sqrt{x}}\int_h^1\frac{g(y)}{\sqrt{y}}dy$ & $\frac{\avg{\degree}(\log n+\log\log n)}{4}$\\ \makecell[l]{Highly scale-\\free graphon\footnotemark[1]\footnotemark[12]} & $\frac{nh^2}{x}\int_h^1\frac{g(y)}{y}dy$ & $\frac{\avg{\degree} n}{\left(\log n-\log\log n\right)^2}$\\ \end{tabular} \footnotetext[1]{mean degree $\avg{\degree}$} \footnotetext[2]{$k\times k$ block matrix $B$, distribution vector $\boldsymbol\pi$ and $\Pi\triangleq\diag(\boldsymbol{\pi})$} \footnotetext[3]{$\pi$ is minority proportion and dispersion $\delta\triangleq\pi(1-\pi)$} \footnotetext[4]{expected product of in- and out-degrees $\avg{\degreein\degreeout}$} \footnotetext[5]{$\beta=\order{n^{-1}}$ scales the connectivity kernel} \footnotetext[6]{$k\times k$ scaled second-moment matrix $\Phi\triangleq n\beta\int_X\boldsymbol{x}\boldsymbol{x}^Td\mu$} \footnotetext[7]{$\nu(\boldsymbol{x},\boldsymbol{y};R)=\beta\exp\left(-\frac{1}{2}(\boldsymbol{x}-\boldsymbol{y})^TR^{-1}(\boldsymbol{x}-\boldsymbol{y})\right)$} \footnotetext[8]{$\{\tau_i\}_{i=1}^k$ are the eigenvalues of $R^{-1}$} \footnotetext[9]{$\varphi\triangleq\frac{1+\sqrt{5}}{2}$ is the golden ratio} \footnotetext[10]{$\nu(x,y)=W_\times(x,y)\triangleq f(x)f(y)$} \footnotetext[11]{mean degree at $x:\meandegree(x)$, $2^{nd}$ moment of degree distribution $\avg{\degree^2}$} \footnotetext[12]{$h\ll 1$ controls mean degree $\avg{\degree}$} \end{ruledtabular} \end{table} \subsection{\label{sec:perc_rank1}Percolation in rank-1 models} We first consider the case when the integral operator $T$ has exactly one non-zero eigenvalue: what we refer to as rank-1 models. Special cases of various models discussed in Sec. \ref{sec:geodesics_specific} are examples of rank-1 models. In particular, the multiplicative graphon defined in Eq. \ref{eq:def_mult_graphon} of Sec. \ref{sec:graphons}, where the connectivity kernel is given by $W_\times(x,y)=f(x)f(y)$, is of rank 1. The spectral radius is given by solving the eigenvalue Eq. \ref{eq:eigenvalue} for the eigenpair $(\tau,g)$, using the definition of $T$ from Eq. \ref{eq:integral_op} and of the multiplicative graphon kernel from Eq.
\ref{eq:def_mult_graphon}: \begin{equation} \label{eq:eigval_mul_graphon} \begin{split} &(Tg)(x) = n f(x)\int_0^1f(y)g(y)dy = \tau g(x)\\ \implies &g=f, \textrm{ and } \tau=n\int_0^1f(y)^2dy=\eta, \end{split} \end{equation} where we apply the definition of $\eta$ from Eq. \ref{eq:mult_graphon_stats_eta}. Thus, the eigenvalue is given by $\eta$, and the eigenfunction by $f(x)$. Note that the normalized eigenfunction is given by $\frac{f(x)}{\sqrt{\int_0^1f(x)^2dx}}=\sqrt{\frac{n}{\eta}}f(x)$, which can be put into the kernel's eigenfunction expansion of Eq. \ref{eq:kernel_rep} to verify that $\eta$ is the sole eigenvalue: multiplicative graphons are rank-1 models. \paragraph*{Equivalence of rank-1 models and multiplicative graphons.} The converse is also true: any rank-1 model has an equivalent multiplicative graphon formulation (see Theorem \ref{lemma:rank1_multiplicative} in Appendix \ref{sec:apdx_rank1_equiv}). We previously observed in Sec. \ref{sec:graphons} that multiplicative graphons are equivalent to canonical degree-configuration models. This allows us to establish results for multiplicative graphons and apply them to any rank-1 or degree-configuration model. For example, the spectral condition for percolation in multiplicative graphons yields: \begin{equation} \label{eq:percolation_thresh_r1g} \eta>1, \end{equation} with $\eta$ defined in Eq. \ref{eq:mult_graphon_stats_eta}. Using Eq. \ref{eq:mult_graphon_stats_eta_deg}, we obtain the percolation condition in degree-configuration models in terms of the first and second moments of the degree distribution \cite{molloy1995critical, cohen2001breakdown} as \begin{equation} \label{eq:percolation_thresh_deg_config} \frac{\avg{\degree^2}}{\avg{\degree}}-1>1\implies\frac{\avg{\degree^2}}{\avg{\degree}}>2. \end{equation} This result was first obtained using the classic percolation criterion of Molloy and Reed \cite{molloy1995critical}, which states that a giant component exists in a graph with a given degree sequence when nodes have a higher expected number of neighbours at length $2$ (``neighbours-of-neighbours'') than at length $1$ (``neighbours''). Here, we have shown that this criterion coincides with the spectral condition (Theorem \ref{lemma:spectral_condition}) for the simplest class of rank-1 models, but for higher rank models there is a more complicated relationship between Molloy and Reed's criterion and the spectral condition for percolation (see Appendix \ref{sec:apdx_rank1_perc}). Interestingly, although the result in Eq. \ref{eq:percolation_thresh_deg_config} has been obtained assuming conditionally independent edges, it also holds for the random $\degree$-regular graph considered in Sec. \ref{sec:graphons}. Here, $\avg{\degree}=\degree$ and $\avg{\degree^2}=\degree^2$, yielding the percolation condition $\degree>2$ from Eq. \ref{eq:percolation_thresh_deg_config}, previously arrived at through other means \cite{cohen2001breakdown, karrer2014percolationsparse}, and evident in Fig. \ref{fig:spd_randomregular}. \paragraph*{Illustrative example: scale-free graphon.} For scale-free graphons defined in Sec. \ref{sec:graphons}, the percolation condition is obtained by using the value of $\eta$ from Eq. \ref{eq:sfg_eta}. Consider the three illustrative examples from Sec. \ref{sec:graphons}, for which we previously expressed $\eta$ in terms of the mean degree and network size in Eq. \ref{eq:sfg_eta_}.
Setting the free parameter $\beta=1$ as before, it follows that the ER, BA and highly scale-free graphons respectively percolate when \begin{subequations} \label{eq:perc_sfg} \begin{align} \label{eq:perc_ergraphon} &\avg{\degree}>1 &\quad\mbox{if }\alpha=0,\\ \label{eq:perc_bagraphon} &\frac{\avg{\degree}}{4}\log\left(\frac{4n}{\avg{\degree}}\right)>1 &\quad\mbox{if }\alpha=\frac{1}{2},\\ \label{eq:perc_sfgraphon} &\frac{\sqrt{n\avg{\degree}}}{\log \sqrt{\frac{n}{\avg{\degree}}}}>1 &\quad\mbox{if }\alpha=1. \end{align} \end{subequations} In Fig. \ref{fig:percolation_thresholds} we plot the condition for the BA graphon. For the ER graph, Eq. \ref{eq:perc_ergraphon} is the well-studied percolation criterion $\avg{\degree}>1$ \cite{erdos1960evolution}. For scale-free graphons, assuming large $n$ gives an asymptotic condition on $\avg{\degree}$ from Eqs. \ref{eq:perc_bagraphon} and \ref{eq:perc_sfgraphon} (see Appendix \ref{sec:apdx_sfg_spl}) as follows: \begin{subequations} \label{eq:perc_sfg_deg} \begin{align} \label{eq:perc_bagraphon_inf} &\avg{\degree}>\frac{4}{\log n+\log\log n} &\quad\mbox{if }\alpha=\frac{1}{2},\\ \label{eq:perc_sfgraphon_inf} &\avg{\degree}>\frac{(\log n-\log\log n)^2}{n} &\quad\mbox{if }\alpha=1. \end{align} \end{subequations} Both of these conditions are decreasing functions of $n$ for large $n$, converging to zero asymptotically, unlike the constraint for an ER graph, which is independent of the network size. We remark that for a BA graph, the RHS of Eq. \ref{eq:perc_bagraphon_inf} is a slowly-varying function of $n$ of $\order{(\log n)^{-1}}$, while for a ``highly scale-free'' graph the RHS of Eq. \ref{eq:perc_sfgraphon_inf} is a regularly-varying function of $n$ of $\order{n^{-1}(\log n)^2}$ with a faster convergence \cite{bingham1989regular}. This phenomenon is evident in the percolation constraints on $\avg{\degree}$ against $n$ for different values of $\alpha$, as shown in Fig. \ref{fig:percolation_thresholds}. Altogether, the conditions in Eqs. \ref{eq:perc_bagraphon_inf} and \ref{eq:perc_sfgraphon_inf} recapitulate previous results on the resilience of scale-free networks with power law exponent $2<\theta<3$ to failure in terms of an asymptotically null percolation threshold \cite{cohen2000percolation, cohen2001breakdown, callaway2000percolation}. \paragraph*{Illustrative example: directed degree-configuration model.} We consider a rank-1 ensemble average model where the matrix $\avg{A}$ is equivalent to the operator $T$, when it corresponds to a canonical directed degree-configuration model. Let $\avg{A}\triangleq\frac{\boldsymbol{a}\boldsymbol{b}^T}{n}$ where $\boldsymbol{a},\boldsymbol{b}\in\realnonneg[n]$, i.e. $\avg{A}_{ij}=\frac{a_ib_j}{n}$. Evidently, this is an asymmetric connectivity kernel, for which the spectral condition also holds (see Theorem \ref{lemma:spectral_asym} in Appendix \ref{sec:apdx_asymmetric}). From the eigenvalue Eq. \ref{eq:eigenvalue} we get for the eigenvalue $\tau$ and (unnormalized) right eigenvector $\boldsymbol{v}$: \begin{equation} \label{eq:eigval_ensemble} \begin{split} &\avg{A}\boldsymbol{v} = \frac{1}{n}\boldsymbol{a}\boldsymbol{b}^T\boldsymbol{v} = \tau \boldsymbol{v}\\ \implies &\boldsymbol{v}=\boldsymbol{a}, \textrm{ and } \tau=\frac{\boldsymbol{a}^T\boldsymbol{b}}{n}. \end{split} \end{equation} From Eqs.
\ref{eq:degree_ensemble_node_out}, \ref{eq:degree_ensemble_node_in}, the expected out-degree of node $i$ is given by $\avg{\degreeout_i}=\frac{a_i\sum_jb_j}{n}$, and the expected in-degree by $\avg{\degreein_i}=\frac{b_i\sum_ja_j}{n}$. From Eq. \ref{eq:degree_ensemble_network}, the expected network degree is given by $\avg{\degree}=\frac{\sum_ia_i\sum_jb_j}{n^2}$. Consider the expectation of the product of in- and out-degrees via the law of total expectation: $\avg{\degreein\degreeout}=\expect{\avg{\degreein\degreeout}}=\expect{\avg{\degreein}\avg{\degreeout}}$, where we use the conditional independence of in- and out-degrees given the node identity. We can further write: \begin{equation} \label{eq:directed_config_model_eig} \begin{split} \expect{\avg{\degreein}\avg{\degreeout}}&=\sum_i\avg{\degreein_i}\avg{\degreeout_i}/n\\ &=\sum_ia_ib_i\sum_ja_j\sum_kb_k/n^3=\tau\avg{\degree}\\\implies\tau&=\frac{\avg{\degreein\degreeout}}{\avg{\degree}} \end{split} \end{equation}where we have used the expression for $\tau$ in Eq. \ref{eq:eigval_ensemble}. By setting $\tau>1$ for percolation, we recover the percolation condition for the directed configuration model in terms of the expected product of the in- and out-degrees and expected degree \cite{newman2001random, cooper2004percdirectedconfig} as $$\frac{\avg{\degreein\degreeout}}{\avg{\degree}}>1.$$ \subsection{\label{sec:perc_higher_rank}Percolation in higher rank models} When the model's integral operator $T$ has more than one non-zero eigenvalue, we refer to it as a higher rank model. Most higher dimensional independent edge models we have previously considered---the ensemble average model, SBM, RDPG, RGG, and non-multiplicative graphons like the max graphon---fall into this category. Notable results in percolation theory for models with a symmetric kernel can be recovered from our result on the spectral condition in Theorem \ref{lemma:spectral_condition}. Prior work on the sparse stochastic block model with a symmetric block matrix $B$ and diagonal distribution matrix $\Pi$ has determined that the network percolates when the largest eigenvalue of $B\Pi$, which we note below coincides with the definition of $T$ in Eq. \ref{eq:integral_op}, is greater than unity \cite{soderberg2002general, soderberg2003properties}. In general, sparse inhomogeneous graphs with symmetric kernels have been shown to percolate when the 2-norm of an integral operator that describes the model, and coincides with the definition of $T$ in Eq. \ref{eq:integral_op}, is greater than unity \cite{bollobas2007phase}. Since the kernel is symmetric, $T$ is self-adjoint, and its 2-norm coincides with its spectral radius. Therefore, Theorem \ref{lemma:spectral_condition} recovers the known percolation conditions for sparse symmetric kernels. More generally, when $T$ may not be self-adjoint, its 2-norm is bounded from below by its spectral radius: $\norm[2]{T}\ge r(T)$. Therefore $r(T)>1\implies\norm[2]{T}>1$, yielding the spectral condition as the ``stronger'' (and accurate) condition relative to the 2-norm condition when considering asymmetric connectivity kernels (see Appendix \ref{sec:apdx_asymmetric} for an example). Integral operators for various finite-dimensional models considered in this text can be written succinctly. Below, we derive the percolation thresholds for SBMs, ensemble average models, and RDPGs.
We also find the threshold for Gaussian RGGs, which disproves a conjecture about the existence of a percolation threshold for RGGs with a non-uniform node density \cite{barnett2007spatially}. \paragraph*{Illustrative example: SBM.} Consider a $k$-block SBM from Sec. \ref{sec:sbm}, for which the class of eigenfunctions $F$ becomes the $k$-dimensional Euclidean vector space. From Eq. \ref{eq:integral_op}, the integral operator takes the form of a matrix: $T\triangleq B\Pi$, and $f\in F$ is an eigenvector of $T$. The spectral condition for percolation then reads: \begin{equation} \label{eq:percolation_thresh_sbm} r(B\Pi)>1. \end{equation} For example, consider a $2$-block SBM where the minority community occupies a share $0\le \pi\le 0.5$ of the nodes, and a ``planted-partition'' block matrix given by $B=\big(\begin{smallmatrix}c_{\mathrm{in}} & c_{\mathrm{out}}\\c_{\mathrm{out}} & c_{\mathrm{in}} \end{smallmatrix}\big)$, where $c_{\mathrm{in}}$ accounts for intra-group affinity and $c_{\mathrm{out}}$ for inter-group affinity. Then it can be shown (see Appendix \ref{sec:apdx_sbm_perc}) that $T$ has precisely two (real) eigenvalues: $\frac{c_{\mathrm{in}}}{2}\left\{1\pm\sqrt{1+4\pi(1-\pi)\left[\left(\frac{c_{\mathrm{out}}}{c_{\mathrm{in}}}\right)^2-1\right]}\right\}$. Setting the larger eigenvalue to be greater than unity, we obtain: \begin{equation} \label{eq:percolation_sbm} \pi(1-\pi)\left(c_{\mathrm{out}}^2-c_{\mathrm{in}}^2\right)+c_{\mathrm{in}}>1, \end{equation} when $c_{\mathrm{in}}\le 2$, which is a hyperbolic constraint on the affinities, while for $c_{\mathrm{in}}>2$, the network percolates regardless. If $\pi=0.5$, i.e. there is no minority community, then we obtain the linear constraint $c_{\mathrm{in}}+c_{\mathrm{out}}>2$ \cite{schawe2020gccsbm}. Whereas if $\pi=0$, i.e. the minority community vanishes, making this a $1$-block SBM equivalent to an ER graph, then we obtain the trivial constraint $c_{\mathrm{in}}>1$ \cite{erdos1960evolution}. In Fig. \ref{fig:percolation_thresholds}, we show the percolation thresholds for various values of $\pi$. \paragraph*{Illustrative example: ensemble average model.} For the ensemble average model we have $T\triangleq\avg{A}$, resulting in the spectral condition \begin{equation} \label{eq:percolation_thresh_ensembleavg} r\left(\avg{A}\right)>1. \end{equation} It is known that the spectral radius of the adjacency matrix $r(A)$ has implications for dynamics of processes on the graph, such as the sharp epidemic threshold being given by $r(A)^{-1}$ \cite{van2008virus}. Recent work on \emph{dense} graphs has shown the bond percolation threshold to be given by $r(A)^{-1}$ \cite{bollobas2010percolationdense}. For \emph{sparse} (and locally tree-like) graphs, the percolation threshold is slightly higher and given by the inverse of the largest eigenvalue of the Hashimoto or nonbacktracking matrix \cite{hashimoto1989nonbacktracking}. Eq. \ref{eq:percolation_thresh_ensembleavg} adds to this literature by showing that for sparse graphs the inverse of the largest eigenvalue of the \emph{expected} adjacency matrix $r(\avg{A})^{-1}$ gives the bond percolation threshold. Notably, our result yields an asymptotic equivalence between the spectral radius of the nonbacktracking matrix and that of the corresponding expected adjacency matrix in sparse independent edge models. \paragraph*{Illustrative example: RDPG.} We next consider the eigenvalue problem for an RDPG defined in Sec.
\ref{sec:rdpg}, for which the integral operator is: $(Tf)(\boldsymbol{x})=n\beta\int_X\boldsymbol{x}^T\boldsymbol{y}f(\boldsymbol{y})d\mu(\boldsymbol{y}).$ Let $F$ be the space of dot-product functions $f:X\to\real$, i.e. $f$ has the form $f(\boldsymbol{x}; \boldsymbol{v})=\boldsymbol{x}^T\boldsymbol{v}$ for some $\boldsymbol{v}\in\real^k$, which, when substituted into the definition of $T$, leads to: \begin{equation} \label{eq:percolation_rdpg} \begin{split} (Tf)(\boldsymbol{x})&=n\beta\boldsymbol{x}^T\int_X\boldsymbol{y}\boldsymbol{y}^T\boldsymbol{v}d\mu(\boldsymbol{y})\\ &=\boldsymbol{x}^T\Phi\boldsymbol{v}, \end{split} \end{equation} where we apply the definition of $\Phi$ from Eq. \ref{eq:def_rdpg_mommat}. From the eigenvalue Eq. \ref{eq:eigenvalue} we obtain for the eigenfunction: $(Tf)(\boldsymbol{x})=\tau\boldsymbol{x}^T\boldsymbol{v}$. Comparing this to the RHS of Eq. \ref{eq:percolation_rdpg}, it must be the case that $\Phi\boldsymbol{v}=\tau\boldsymbol{v}$, i.e. $\boldsymbol{v}$ is an eigenvector of $\Phi$ with eigenvalue $\tau$. If $\Phi$ has an eigenpair $(\tau_i,\boldsymbol{v}_i)$, then $T$ has the eigenpair $(\tau_i,\boldsymbol{v}_i^T\boldsymbol{x})$: $T$ has the same spectrum as $\Phi$, which leads to the spectral condition \begin{equation} \label{eq:percolation_thresh_rdpg} r(\Phi)>1. \end{equation} \paragraph*{Percolation via the approximate closed-form SPLD.} For higher rank models considered above, the spectral condition can also be considered through the approximate closed-form of the SPLD. From Theorem \ref{lemma:perc_thresh}, the network percolates when the closed-form of the conditional PMF of the SPLD $\omegacf_l(x,y)$ diverges for some $x,y\in V$. The proof for Theorem \ref{lemma:perc_thresh} holds just as well when using the \emph{approximate} closed-form of the conditional PMF of the SPLD $\omegaacf_l$, since it only differs from the closed-form $\omegacf_l$ in its initial condition (see Corollary \ref{lemma:perc_thresh_apx} in Appendix \ref{sec:apdx_asymmetric}). Given the definition for the approximate closed-form of the survival function of the SPLD $\psiacf_l$ in Eq. \ref{eq:spd_general_psi}, we can extract the condition for divergence by inspecting the expressions for $\psiacf_l$. In particular, Eq. \ref{eq:sf_avg_uncorrected} for ensemble average models, Eq. \ref{eq:spd_sbm} for SBMs and Eq. \ref{eq:spd_rdpg} for RDPGs, immediately yield the divergence conditions $r\left(\avg{A}\right)>1$, $r\left(B\Pi\right)>1$ and $r\left(\Phi\right)>1$, recapitulating percolation conditions in Eqs. \ref{eq:percolation_thresh_ensembleavg}, \ref{eq:percolation_thresh_sbm} and \ref{eq:percolation_thresh_rdpg} respectively. \paragraph*{Illustrative example: Gaussian RGG.} For other models, the percolation threshold may not become apparent from the SPLD alone---e.g. by scrutinizing Eq. \ref{eq:spd_grgg_homophily} for spatially homophilous Gaussian RGGs. Thus, we must consider the eigenvalue problem in Eq. \ref{eq:eigenvalue} for Gaussian RGGs. Given that the node distribution is Gaussian, let $F$ be a space of functions $f:\real^k\to\real$ such that the eigenfunction has the form $f(\boldsymbol{x}; \alpha,C)=\alpha\exp\left(-\frac{1}{2}\boldsymbol{x}^TC^{-1}\boldsymbol{x}\right)$ for some $\alpha\in\real$ and $C\in\real^{k\times k}$, where $C$ is positive-definite, which ensures that $f$ vanishes at infinity. Then it can be shown using the eigenvalue Eq.
\ref{eq:eigenvalue} (see Appendix \ref{sec:apdx_grgg_perc}) that the spectral radius of $T$ is given by \begin{equation} \label{eq:eig_grgg_tau} r(T)=\frac{n\beta}{\left|I+C^{-1}+R^{-1}\right|^\frac{1}{2}} \end{equation} where $|\cdot|$ indicates the matrix determinant, and $C$ satisfies $C^2-RC-R=0$. Let $\{(\tau_i, \boldsymbol{v}_i)\}_{i=1}^k$ be the eigenpairs of $R^{-1}$; then we can write $\left|I+C^{-1}+R^{-1}\right|=\prod_{i=1}^k\left(\tau_i+\frac{\sqrt{1+4\tau_i}+1}{2}\right)$ (see Appendix \ref{sec:apdx_grgg_perc}). Plugging this into the spectral condition, we obtain from Eq. \ref{eq:eig_grgg_tau}: \begin{equation} \label{eq:percolation_grgg} n\beta>\prod_{i=1}^k\left(\tau_i+\frac{\sqrt{1+4\tau_i}+1}{2}\right)^{\frac{1}{2}}, \end{equation} which expresses a percolation constraint on the connectivity parameter $\beta$---which from Eq. \ref{eq:grgg_nu} indicates the likelihood of connection to a node with identical co-ordinates---in terms of the connectivity scales. Since $R$ is positive-definite, each factor in the product of this inequality can be no smaller than $1$, leading to the minimal requirement that $n\beta>1$. This also implies that every additional dimension can only raise the percolation threshold in terms of $\beta$. The mean degree from Eq. \ref{eq:degree_grgg_mean} can be written in terms of $\beta$ and eigenvalues of $R^{-1}$ as $\avg{\degree}=n\beta\prod_{i=1}^k(2\tau_i+1)^{-\frac{1}{2}}$, which, when put in Eq. \ref{eq:percolation_grgg}, results in a percolation threshold for the mean degree: \begin{equation} \label{eq:percolation_grgg_deg} \avg{\degree}>\prod_{i=1}^k\left(\frac{1}{2}+\frac{\sqrt{1+4\tau_i}}{2(1+2\tau_i)}\right)^\frac{1}{2}, \end{equation} where each factor on the RHS of this inequality lies within the range $\left[\sqrt{2}^{-1},1\right]$, with the boundary values attained at the largest and smallest connection scales, $\tau_i=0$ and $\tau_i\to\infty$, respectively. This implies that every additional dimension can only lower the percolation threshold in terms of the mean degree $\avg{\degree}$. For very high-dimensional spaces (large $k$), the mean-degree threshold decays exponentially towards $0$, suggesting increasing robustness of a Gaussian RGG network to failure. This is in sharp contrast to the result for RGGs with a uniform node distribution \cite{dall2002rgg}, wherein, although the mean degree constraint decreases with $k$, it only decreases to the fiducial ER-graph limit of $1$. This difference can be attributed to the nature of higher-dimensional Gaussians, since the expected degree itself is a Gaussian function in the node space, causing nodes close to the origin in $\real^k$ to ``bear the burden'' of percolation in lieu of peripheral nodes (see Appendix \ref{sec:apdx_grgg_perc_deg}). For an illustrative example, consider a ``unit-scale'' Gaussian RGG where the scale matrix in Eq. \ref{eq:grgg_nu} is $R=I$, i.e. node connection scales are of the same order as the node distribution's variances along each dimension, which from Eq. \ref{eq:grgg_mu} are equal to $1$. This sets all eigenvalues of $R^{-1}$ to $1$, yielding from Eqs. \ref{eq:percolation_grgg}, \ref{eq:percolation_grgg_deg} the percolation conditions: \begin{equation*} \label{eq:percolation_grgg_unit} \begin{split} n\beta&>\varphi^k,\\ \avg{\degree}&>\left(\frac{\varphi}{\sqrt{3}}\right)^k, \end{split} \end{equation*} where $\varphi\triangleq\frac{1+\sqrt{5}}{2}$ is the golden ratio. In Fig.
\ref{fig:percolation_thresholds}, we show the percolation thresholds for a $2$-dimensional Gaussian RGG with a diagonal scale matrix $R=\Big(\begin{smallmatrix}r_1^{-1} & 0\\0 & r_2^{-1}\end{smallmatrix}\Big)$ for various values of $\beta$---with $\tau_1=r_1, \tau_2=r_2$. Evidently, for any given value of $\beta$, longer connection scales encourage percolation. In particular, we remark that our results reject the conjecture in Ref. \cite{barnett2007spatially} regarding spatially embedded networks, which suggests that ``there is a phase transition only in the case of uniform ensembles'', i.e. where the node distribution is uniform. Here, we provide a counterexample in the form of a Gaussian RGG, by showing that for a non-uniform (Gaussian) node distribution, there still exists a critical value of the connectivity parameter above which a giant component exists. \section{\label{sec:path_stats}Path-based statistics} Access to the full geodesic length distribution permits us to compute path-based statistics for the network, such as the mean geodesic length, which we describe in Sec. \ref{sec:aspl}. Besides network-level properties, there are some node-level functions---like node centralities measuring the importance of nodes in a network \cite{borgatti2006centrality}---which require the computation of shortest paths. In Sec. \ref{sec:closeness} and Sec. \ref{sec:betweenness}, we show how the general graph framework facilitates estimation of the expectation of two important centrality measures: closeness and betweenness. \subsection{\label{sec:aspl}Mean geodesic length} Since some nodes may have a non-zero likelihood of not being on the giant component, any estimate of the mean geodesic length must condition on the source and target nodes being on the giant component. We refer the reader to Appendix \ref{sec:apdx_aspl_analytic} for obtaining this ``analytic estimate'' of the mean geodesic length, using the analytic form of the SPLD. In this section, we focus on the general random graph model of Sec. \ref{sec:general_graphs}, while describing the use of the approximate closed-form of its SPLD, which (1) facilitates easy computation without numerical integration, and (2) for rank-1 models leads to a closed-form expression for the mean geodesic length. \paragraph*{Mean geodesic length using approximate closed-form of the SPLD.} If the network is almost surely connected, then the analytic estimate of the mean geodesic length in Eq. \ref{eq:avgspl_general_exact} simplifies to the usual mean of a discrete random variable, given by the sum of the survival function of the SPLD: \begin{equation} \label{eq:avgspl_general} \avg{\lambda} =\int_V\int_V\sum_{l=0}^\infty\psiaf_l(x,y)d\mu(x)d\mu(y). \end{equation} Naturally, the RHS of Eq. \ref{eq:avgspl_general} can be finite only if $\lim_{l\to\infty}\psiaf_l(x,y)=0$ almost everywhere, signifying that almost all node pairs have finite geodesic lengths, which holds by the assumption that the network is almost surely connected. If the network is not almost surely connected, we can still obtain a finite (albeit approximate) estimate from Eq. \ref{eq:avgspl_general}, when using the (approximate) closed-form of the SPLD, whose survival function $\psiacf_l(x,y)$ we previously derived for various random graph models. Assuming that every node location pair $(x,y)\in V\times V$ is in the supercritical regime, it can be shown that the approximate closed-form of the survival function of the SPLD, as defined in Eq.
\ref{eq:spd_analytic_general_eig_uncorrected}, has a limiting value of zero: $\lim_{l\to\infty}\psiacf_l(x,y)=0$. We refer the reader to Appendix \ref{sec:apdx_perc_part} for more details (including the scenario where not every location pair in $V\times V$ may be supercritical). For nodes at $x, y$, we define the expectation of their distance $\lambda_{xy}$, and consequently the expectation of the mean network distance $\lambda$, as
\begin{subequations}
\label{eq:avgspl_closedform}
\begin{align}\label{eq:avgspl_closedform_xy}
\avg{\lambda_{xy}}&\triangleq\sum_{l=0}^\infty\psiacf_l(x,y),\\\label{eq:avgspl_closedform_total}
\avg{\lambda}&\triangleq\expect[\mu^2]{\avg{\lambda_{xy}}}=\int_V\int_V\avg{\lambda_{xy}}d\mu(x)d\mu(y),
\end{align}
\end{subequations}
which must be finite. In what follows, we focus on computing mean geodesic lengths from Eq. \ref{eq:avgspl_closedform}, which we refer to as the ``approximate closed-form of mean geodesic length''. Since the closed-form for the survival function of the SPLD is a lower bound, it can be used to obtain a lower bound on the mean geodesic length. Also, because it underestimates probability mass at longer lengths, the bound will be tighter for networks with smaller diameters.
\paragraph*{Closed-form expression for multiplicative graphons.}For sparse graphs in the asymptotic limit, we can make use of the Poisson summation formula---see Eq. \ref{eq:psf_sf_main} in Appendix \ref{sec:apdx_apx_spl}---to write the RHS of Eq. \ref{eq:avgspl_closedform_xy} as $\sum_{l=0}^\infty\psiacf_l(x,y)\approx\frac{1}{2}+\int_0^\infty\psiacf_l(x,y)dl$. Then plugging into Eq. \ref{eq:avgspl_closedform_total}, we get
\begin{subequations}
\begin{align}
\label{eq:avgspl_general_network}
\avg{\lambda} &=\frac{1}{2}+ \int_V\int_V\psiacf_{0\to\infty}(x,y) d\mu(x)d\mu(y),\\
\label{eq:avgspl_general_xy}
\psiacf_{0\to\infty}(x,y) &\triangleq \int_0^\infty\psiacf_l(x,y)\thinspace dl.
\end{align}
\end{subequations}
To make further analytical progress, we consider the setup of multiplicative graphons with the connectivity kernel in Eq. \ref{eq:def_mult_graphon}. In the supercritical regime, where from Eq. \ref{eq:eigval_mul_graphon} $\eta>1$, Eq. \ref{eq:spd_r1g_psi} yields a closed-form for $\psiacf_{0\to\infty}(x,y)$---see Appendix \ref{sec:apdx_apx_spl} for details---as
\begin{equation}
\label{eq:aspl_separable}
\psiacf_{0\to\infty}(x,y) = \frac{\log(\eta-1)-\gamma-\log\left(f(x)f(y)\right)}{\log\eta},
\end{equation}
where $\gamma\approx 0.57722$ is the Euler-Mascheroni constant. It then follows from Eq. \ref{eq:avgspl_general_network} that:
\begin{equation}
\label{eq:aspl_nu_phi}
\avg{\lambda} = \frac{1}{2}+\frac{\log(\eta-1)-\gamma-2\expect[\mu]{\log f(x)}}{\log\eta}.
\end{equation}
As described in Eqs. \ref{eq:mult_graphon_stats_deg} and \ref{eq:mult_graphon_stats_eta_deg}, multiplicative graphons are equivalent to a canonical degree-configuration model:
\begin{equation}
\label{eq:aspl_degconfig}
\avg{\lambda}=\frac{1}{2}+\frac{\log\left(n\left(\avg{\degree^2}-2\avg{\degree}\right)\right)-\gamma-2\expect[\mu]{\log\left(\meandegree(x)\right)}}{\log\left(\frac{\avg{\degree^2}}{\avg{\degree}}-1\right)}.
\end{equation}
This recapitulates prior results showing that the average distance in the degree-configuration model varies as $\log_b n$, where $b\triangleq\frac{\avg{\degree^2}}{\avg{\degree}}-1$ \cite{vanderhofstad2005distanceconfigmodel}.
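To make Eq. \ref{eq:aspl_degconfig} concrete, the following is a minimal numerical sketch (in Python, assuming only \texttt{numpy}; the function name is ours, and the model-dependent term $\expect[\mu]{\log\meandegree(x)}$ is supplied by the caller):
\begin{verbatim}
import numpy as np

EULER_GAMMA = 0.5772156649015329

def aspl_config_model(n, deg_mean, deg2_mean, e_log_kappa):
    # Eq. (aspl_degconfig): closed-form mean geodesic length of the
    # canonical degree-configuration model from the first two moments
    # of the degree distribution and E_mu[log kappa(x)].
    b = deg2_mean / deg_mean - 1.0  # branching factor; requires b > 1
    num = (np.log(n * (deg2_mean - 2.0 * deg_mean))
           - EULER_GAMMA - 2.0 * e_log_kappa)
    return 0.5 + num / np.log(b)

# Sanity check: for a degree-regular distribution (second moment d**2,
# mean-degree function kappa(x) = d) the expression reduces to
# 1/2 + (log(n (1 - 2/d)) - gamma) / log(d - 1).
n, d = 1024, 4
print(aspl_config_model(n, d, d ** 2, np.log(d)))
print(0.5 + (np.log(n * (1.0 - 2.0 / d)) - EULER_GAMMA) / np.log(d - 1))
\end{verbatim}
The two printed values coincide, anticipating the $\degree$-regular special case considered next.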
\begin{figure*} \centering \subfloat[Random $\degree$-regular graphs]{\label{fig:aspl_rrg}\includegraphics[width=0.7\columnwidth]{fig/spd_randomregular_d.pdf}} \subfloat[Scale-free graphon]{\label{fig:aspl_sf}\includegraphics[width=0.7\columnwidth]{fig/aspl_sfg_d2.pdf}} \subfloat[Planted-partition $2$-block SBM]{\label{fig:aspl_sbm}\includegraphics[width=0.7\columnwidth]{fig/aspl_sbm_d4.pdf}} \caption{Mean geodesic length for various random graph models. (a) Closed-form (solid line; Eq. \ref{eq:aspl_random_regular}) and empirical ($\circ$) estimates for random $\degree$-regular graphs show good agreement across different values of $\degree$ and network size $n$. (b) Closed-form (solid line; Eqs. \ref{eq:aspl_nu_phi}, \ref{eq:sfg_intlogfoo}) and empirical ($\circ$) estimates for scale-free graphons show good agreement across different values of the power law exponent $\theta=(1+\alpha^{-1})$ and network size $n$, with fixed mean degree $\avg{\degree}=2$. (c) Analytic ($\circ$; Eqs. \ref{eq:avgspl_closedform_total}, \ref{eq:spd_sbm}) and rank-1 closed-form ($\times$; Eq. \ref{eq:aspl_rank1}) estimates for a planted-partition $2$-block SBM, where the $x$-axis corresponds to different levels of homophily scaled to $[-1,1]$ to facilitate interpretation across different values of the minority community proportion $\pi$, with fixed network size $n=512$ and mean degree $\avg{\degree}=4$.} \label{fig:aspl} \end{figure*}
\paragraph*{Illustrative example: random regular graphs.} First, we illustrate this result with the random $\degree$-regular graphs considered in Eq. \ref{eq:def_random_reg_graph}, in the supercritical regime $\degree>2$:
\begin{equation}
\label{eq:aspl_random_regular}
\avg{\lambda}=\frac{1}{2}+\frac{\log\left(n\left(1-\frac{2}{\degree}\right)\right)-\gamma}{\log\left(\degree-1\right)},
\end{equation}
which exemplifies the logarithmic dependence on $n$ \cite{cerf1974spdrandomregular}. There is also a close connection here to the degree-diameter problem in graph theory: a graph with maximum degree $\degree$ and diameter (\emph{longest} shortest path length) $k$ can have no more than
\begin{equation}
\label{eq:def_moore_graph}
M_{\degree,k}=1+\frac{\degree\left[(\degree-1)^k-1\right]}{\degree-2}
\end{equation}
nodes---an upper bound that is met rarely, and exactly only for Moore graphs, which are necessarily $\degree$-regular \cite{hoffman1960mooregraph, miller2012moore}. Then, a $\degree$-regular random graph of the (asymptotically large) size $n\approx e^\gamma M_{\degree,k}$ (where $e^\gamma\approx 1.78107$) will have, from Eqs. \ref{eq:aspl_random_regular} and \ref{eq:def_moore_graph}, a mean geodesic length of
\begin{equation}
\label{eq:aspl_moore_graph}
\avg{\lambda}=\frac{1}{2}+k.
\end{equation}
In Fig. \ref{fig:aspl_rrg}, we plot the closed-form expression of the average geodesic length using Eq. \ref{eq:aspl_random_regular} alongside empirical estimates, which agree well.
\paragraph*{Illustrative example: scale-free graphon.}We next illustrate with the ``scale-free'' graphons described in Sec. \ref{sec:graphons}, where $f(x)=\sqrt{\beta}\left(\frac{x}{h}\right)^{-\alpha}$. This leads to:
\begin{equation}
\label{eq:sfg_intlogfoo}
\begin{split}
\expect[\mu]{\log f(x)} &= \int_h^1\log f(x)d\mu(x)\\&=\log\sqrt{\beta}+\alpha\left(1+\frac{\log h}{1-h}\right),
\end{split}
\end{equation}
which can be inserted into Eq. \ref{eq:aspl_nu_phi}. In Fig. \ref{fig:aspl_sf} we plot the closed-form expression of the average geodesic length using Eqs.
\ref{eq:aspl_nu_phi} and \ref{eq:sfg_intlogfoo}, alongside empirical estimates, for different values of $\alpha$ and network size $n$, while setting the free parameter $\beta=1$ to capture asymptotically unbounded degree variance. Focussing on the three scale-free graphons of interest, and assuming $h\ll 1$, we can write for the ER, BA and highly-scale-free graphons using Eqs. \ref{eq:sfg_h} and \ref{eq:sfg_intlogfoo}:
\begin{equation}
\label{eq:sfg_intlogfoo_eg}
\expect[\mu]{\log f(x)} =
\begin{cases}
-\log\sqrt{\frac{n}{\avg{\degree}}} &\mbox{if }\alpha=0,\\
\frac{1}{2}-\log\sqrt{\frac{4n}{\avg{\degree}}} &\mbox{if }\alpha=\frac{1}{2},\\
1 - \log\left(\sqrt{\frac{n}{\avg{\degree}}}\log\sqrt{\frac{n}{\avg{\degree}}}\right) &\mbox{if }\alpha=1.
\end{cases}
\end{equation}
Then substituting in Eq. \ref{eq:aspl_nu_phi} alongside the value for $\eta$ from Eq. \ref{eq:sfg_eta}, and taking the asymptotic limit for $n$ while holding $\avg{\degree}$ constant (see Appendix \ref{sec:apdx_sfg_spl}), we obtain the order of expected geodesic lengths as:
\begin{equation}
\label{eq:sfg_aspl_eg}
\avg{\lambda} =
\begin{cases}
\order{\log n} &\mbox{if }\alpha=0,\\
\order{\frac{\log n}{\log\log n}} &\mbox{if }\alpha=\frac{1}{2},\\
\order{1} &\mbox{if }\alpha=1.
\end{cases}
\end{equation}
For ER graphons, we obtain the $\log n$ dependence which exemplifies the small-world property: on average, the distance between nodes scales logarithmically with the network size \cite{albert2002networks, fronczak2004average}. For BA graphons, we obtain the $\frac{\log n}{\log\log n}$ dependence which marks ``ultra'' small-worldness \cite{cohen2003ultrasmall, fronczak2004average}. Finally, for ``highly scale-free'' graphons so-defined, the average geodesic length asymptotically does not scale with the network size, in contrast to typical behaviour for highly scale-free graphs \cite{chung2002avgdistpowerlaw, fronczak2004average}.
\paragraph*{Rank-1 models.} We next consider rank-1 models more generally, described in Sec. \ref{sec:perc_rank1}, for which the corresponding integral operator $T$ in Eq. \ref{eq:integral_op} has only one non-zero eigenvalue $\tau$, with corresponding eigenfunction $\varphi$. From Theorem \ref{lemma:rank1_multiplicative} in Appendix \ref{sec:apdx_rank1_equiv}, for any rank-1 graph model the average geodesic length can be written asymptotically by setting $\eta=\tau$ and $f(x)=\sqrt{\frac{\tau}{n}}\varphi(x)$ in Eq. \ref{eq:aspl_nu_phi}:
\begin{equation}
\label{eq:aspl_rank1}
\avg{\lambda} = \frac{1}{2}+\frac{\log\left(n\left(1-\tau^{-1}\right)\right)-\gamma-2\expect[\mu]{\log \varphi}}{\log\tau}.
\end{equation}
For a large eigenvalue $\tau$, the average geodesic length is governed by the logarithm of the eigenvalue ($\log\tau$) and the expectation of the logarithm of the corresponding eigenfunction ($\expect[\mu]{\log\varphi}$). Furthermore, this result also generalizes to the asymmetric kernel setting (see Appendix \ref{sec:apdx_apx_spl}).
\paragraph*{Rank-1 approximations.}When the graph model is of higher rank, there is no closed-form expression for the average geodesic length in terms of standard mathematical functions. However, if all but the leading eigenvalue are ``small enough'' in absolute value, it may be reasonable to use Eq. \ref{eq:aspl_rank1} through a ``rank-1 approximation'' of the random graph model, by plugging in the leading eigenvalue and eigenfunction of $T$. We refer to this as the ``rank-1 approximate closed-form'' of mean geodesic length.
Specifically, it can be shown that when the non-leading eigenvalues of $T$ are small and positive, asymptotically the additive correction to Eq. \ref{eq:aspl_rank1} tends to zero (see Appendix \ref{sec:apdx_apx_spl}). This allows us to write
$$\avg{\lambda}=\order{\left(\log r(T)\right)^{-1}},$$
where $r(T)$ is the spectral radius. In particular, for the ensemble average model we obtain $\avg{\lambda}=\order{\left(\log r(\avg{A})\right)^{-1}}$. Prior work has established $\log r(A)$---where $A$ is the adjacency matrix of the graph---as the topological entropy of the graph \cite{delvenne2011centrality}: the maximal entropy rate that can be achieved by a (biased) random walker on the graph. That is, the distribution over the walker's paths of a fixed length $l$ is uniform up to a constant \cite{burda2009maxent}; in other words, all paths of the same length are equi-probable \cite{lambiotte2014random}. Here, we show that the expected geodesic length can scale as the inverse of $\log r(\avg{A})$.
\paragraph*{Illustrative example: SBM.}For a synthetic example, if the random graph model is an SBM, then $\tau$ corresponds to the spectral radius of $B\Pi$, while $\boldsymbol{\varphi}$ corresponds to its leading eigenvector---see Appendix \ref{sec:apdx_sbm_perc}. In Fig. \ref{fig:aspl_sbm}, we plot the mean geodesic length from Eq. \ref{eq:avgspl_closedform_total}---which makes use of the approximate closed-form of the survival function from Eq. \ref{eq:spd_sbm}---and the rank-1 approximate closed-form of the mean geodesic length from Eq. \ref{eq:aspl_rank1}, for the planted-partition $2$-block SBM with varying proportions of the minority community $\pi$, and varying levels of homophily which scale in proportion to $c_{\mathrm{in}}-c_{\mathrm{out}}$. This is done while holding mean degree and network size constant, so that any variation in geodesic lengths is entirely due to variation in homophily---see Appendix \ref{sec:apdx_sbm_perc}. We observe that an increase in heterophily induces shorter geodesics in the network under block imbalance, while neither extreme homophily nor extreme heterophily is optimal for minimizing the average geodesic length, at any level of $\pi$. The two estimates are in very good agreement when the SBM is close to being of rank 1, with discrepancies arising for extreme levels of homophily or heterophily, when the second eigenvalue and its eigenvector---indicative of block membership---become important.
\paragraph*{Illustrative example: empirical networks.} In Fig. \ref{fig:aspl_snap}, we show the applicability of the rank-1 approximation for empirical networks: first, by coarsening them into corresponding SBMs using the method described in Appendix \ref{sec:apdx_gcsbm}; then, by comparing the empirical mean geodesic length to (1) the analytic form obtained from Eq. \ref{eq:avgspl_closedform_total}---computed by using the approximate closed-form of the survival function of the SPLD for an SBM from Eq. \ref{eq:spd_sbm}---and to (2) the rank-1 approximate closed-form of the mean geodesic length from Eq. \ref{eq:aspl_rank1}. We observe good agreement for a variety of the real-world networks considered---see Appendix \ref{sec:apdx_datasets}---although the level of agreement depends on selecting an ``appropriate'' level of coarsening. This demonstrates the power of our approach of coarsening real-world networks, and applying a subsequent rank-1 approximation to obtain average geodesic lengths in closed-form.
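As a concrete sketch of this rank-1 recipe, the following computes Eq. \ref{eq:aspl_rank1} for a $2$-block SBM from the leading eigenpair of $B\Pi$ (in Python with \texttt{numpy}; the block matrix, block proportions and the normalization convention $\expect[\mu]{\varphi^2}=1$ for the eigenvector are our illustrative assumptions):
\begin{verbatim}
import numpy as np

EULER_GAMMA = 0.5772156649015329

def aspl_rank1(n, tau, phi, pi):
    # Rank-1 approximate closed-form, Eq. (aspl_rank1), with the
    # expectation E_mu[log phi] taken over the block proportions pi.
    e_log_phi = np.dot(pi, np.log(phi))
    return 0.5 + (np.log(n * (1.0 - 1.0 / tau))
                  - EULER_GAMMA - 2.0 * e_log_phi) / np.log(tau)

n = 512
pi = np.array([0.3, 0.7])                  # block proportions (Pi)
B = np.array([[8.0, 2.0], [2.0, 4.0]])     # illustrative block matrix
vals, vecs = np.linalg.eig(B @ np.diag(pi))
lead = np.argmax(vals.real)
tau = vals[lead].real                      # spectral radius of B.Pi
phi = np.abs(vecs[:, lead].real)           # leading eigenvector
phi /= np.sqrt(np.dot(pi, phi ** 2))       # assumed: E_mu[phi^2] = 1
print(tau, aspl_rank1(n, tau, phi, pi))
\end{verbatim}
The quality of the approximation degrades as the magnitude of the second eigenvalue of $B\Pi$ grows, consistent with the discrepancies noted above for extreme homophily or heterophily.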
\begin{figure} \includegraphics[width=1\columnwidth]{fig/aspl_snap.pdf} \caption{For multiple real-world networks, the empirical mean geodesic lengths (black bars) are in agreement with the approximate closed-form estimates when the networks are ``coarsened'' into SBMs ($\circ$; Eq. \ref{eq:avgspl_closedform_total}), and with the estimates obtained when applying a rank-1 approximation ($\times$; Eq. \ref{eq:aspl_rank1}). Colors indicate different levels of coarsening obtained from a hierarchical SBM \cite{peixoto2014nestedsbm}, i.e. the number of blocks, ranging from $k=1$ (blue), corresponding to an ER graph, to larger SBMs (red), with a cutoff of $64$ blocks. We observe that larger SBMs typically estimate the mean geodesic length better---indicating that a na\"{i}ve approximation of the network as an ER graph loses information about geodesics in the networks considered. This holds when using either the approximate closed-form SPLD of the corresponding SBM or the rank-1 approximation, with the latter overestimating the length more when excessive homophily (or modularity) is to be expected, such as in the Facebook friendship network \cite{snapfb}.} \label{fig:aspl_snap} \end{figure}
\subsection{Closeness centrality}\label{sec:closeness}
One centrality which can be computed directly from the SPLD is ``closeness'', which is motivated by the question of how close, on average, a node is to every other node in the network. Here, we consider the expectation of node closeness in the general random graph model of Sec. \ref{sec:general_graphs}. We also establish a closed-form of closeness for rank-1 graph models, which can credibly approximate closeness for higher-rank models.
\paragraph*{Closeness from the SPLD.} Following the harmonic definition of closeness \cite{marchiori2000harmony, rochat2009closeness}---which naturally extends to disconnected graphs with infinite distances---the normalized closeness centrality of node $k$ is defined by the expectation of reciprocal distances to the node:
\begin{equation}
\label{eq:def_closeness_rochat}
\gamma_k = \frac{1}{n}\sum_{i\ne k} \frac{1}{\lambda_{ik}}=\expect{\lambda_{ik}^{-1}},
\end{equation}
where $\lambda_{ik}$ refers to the length of the shortest path from $i$ to $k$. Given that we have a random graph model, the closeness centrality of a node is itself a random variable. In Appendix \ref{sec:apdx_closeness_analytic}, we derive the ``analytic expectation'' of closeness using the analytic form of the survival function of the SPLD in Eq. \ref{eq:spd_main_general_psi}. To make further analytical progress, we use the approximate closed-form of the survival function of the SPLD from Eq. \ref{eq:spd_analytic_general_eig_uncorrected} in the general random graph setting of Sec. \ref{sec:general_graphs} (in analogy to the closed-form estimate of the mean geodesic length in Sec. \ref{sec:aspl}). Then, assuming every node pair in $V$ is in the supercritical regime and the network is almost surely connected, we define for a node at $z\in V$ its expected closeness:
\begin{equation}
\label{eq:closeness2}
\bar{\gamma}(z) \triangleq\expect[\mu]{\avg{\lambda_{xz}^{-1}}},
\end{equation}
where the inner expectation over the inverse of the shortest path length between $x,z$ is taken using the approximate closed-form of the survival function in Eq. \ref{eq:spd_analytic_general_eig_uncorrected}.
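In practice, this inner expectation can be evaluated directly from any tabulated survival function of the SPLD, since the probability mass of the distribution at length $l$ is $\psiacf_{l-1}-\psiacf_l$. A minimal sketch (in Python with \texttt{numpy}; the geometric survival function below is a toy stand-in, not derived from our closed-form):
\begin{verbatim}
import numpy as np

def expected_reciprocal_distance(psi):
    # psi[l] = P(lambda > l) for l = 0, 1, ..., L, with psi[0] = 1 for
    # distinct nodes. The mass of the SPLD at length l >= 1 is
    # psi[l-1] - psi[l]; residual mass (disconnected pairs) contributes
    # zero under the harmonic convention 1/infinity = 0.
    psi = np.asarray(psi, dtype=float)
    pmf = psi[:-1] - psi[1:]
    lengths = np.arange(1, len(psi))
    return np.sum(pmf / lengths)

# Toy example with a geometrically decaying survival function:
psi = 0.6 ** np.arange(30)
print(expected_reciprocal_distance(psi))
\end{verbatim}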
Since the inverse is a convex function for positive variables, Jensen's inequality \cite{jensen1906fonctions} gives, for a positive random variable $Z$, $\expect{Z^{-1}}\ge \expect{Z}^{-1}$, which allows us to propagate the inverse in Eq. \ref{eq:closeness2} outwards and define a lower bound on the closeness of Eq. \ref{eq:closeness2}:
\begin{equation}
\label{eq:closeness3}
\underline{\gamma}(z) \triangleq\expect[\mu]{\avg{\lambda_{xz}}}^{-1},
\end{equation}
which should be tight if the variance of geodesic lengths in the network is not too high. We note that this is exactly the form we would arrive at by following the original definition of closeness for connected networks by Bavelas \cite{bavelas1950closeness} as the inverse of farness. Substituting for $\avg{\lambda_{xz}}$ in Eq. \ref{eq:closeness3} using Eq. \ref{eq:avgspl_closedform_xy}, we obtain in the general setting an approximate expected closeness for a node at $z$:
\begin{equation}
\label{eq:closeness_apx}
\underline{\gamma}(z) =\left[\int_V\sum_{l=0}^\infty\psiacf_l(x,z)d\mu(x)\right]^{-1},
\end{equation}
which closely resembles Eq. \ref{eq:avgspl_closedform_total} for the mean geodesic length, as the inverse of the expected distance to $z$ marginalized over only the source node space.
\paragraph*{Rank-1 models and approximations.}For rank-1 models with the eigenpair $(\tau,\varphi)$, we can follow the same approach as in Sec. \ref{sec:aspl}, to obtain the expected closeness function in closed-form:
\begin{equation}
\label{eq:closeness_rank1}
\underline{\gamma}(z) = \left[\frac{1}{2}+\frac{\log\left(n (1-\tau^{-1})\right)-\gamma-\expect[\mu]{\log \varphi}-\log\varphi(z)}{\log\tau}\right]^{-1},
\end{equation}
where $\gamma$ is the Euler-Mascheroni constant, analogous to Eq. \ref{eq:aspl_rank1} for expected geodesic lengths, except marginalized only over the source node space. Since $\varphi(z)$ is the leading eigenfunction evaluated at $z$, it can be interpreted as defining the eigenvector centrality at $z$ \cite{avella2018centralitygraphon}. From Eq. \ref{eq:closeness_rank1} we show that for rank-1 models, the logarithm of the inverse of a node's eigenvector centrality is proportional to the inverse of its closeness:
$$\underline{\gamma}(z)^{-1} = \order{\log\left(\varphi(z)^{-1}\right)}.$$
Similar to Sec. \ref{sec:aspl}, we can here apply a rank-1 approximation to higher-rank models and obtain a closed-form estimate of expected node closeness using Eq. \ref{eq:closeness_rank1} (not presented here).
\paragraph*{Illustrative example: Gaussian RGG.}To demonstrate the applicability of this framework in computing node closeness, we consider a $1$-dimensional Gaussian RGG with different connectivity scales $R$, and compute its expected closeness centrality after discretization into a $32$-block SBM---using the method described in Appendix \ref{sec:apdx_mle_sbm_apx_R}. We plot the empirical as well as the various analytic estimates of closeness described here in Fig. \ref{fig:centrality}. We note good agreement between the empirical and analytic estimates of expected closeness, even when using the rank-1 approximate closed-form.
\begin{figure*} \centering \includegraphics[width=\textwidth]{fig/cent_n512_grgg1.pdf} \caption{Analytic and empirical estimates of geodesic-based centralities agree for an exemplar $1$-dimensional Gaussian RGG (with fixed mean degree $\avg{\degree}=2$, network size $n=512$, varying connectivity scale $R$, and discretized into $32$-block SBMs).
The left column shows the analytic (solid), approximate analytic (dotted) and rank-1 closed-form (crosses) estimates of the expectation of node centralities, where the node of interest is conditioned to be on the giant component. The $x$-axis indicates block index from $1$ to $32$; successive block indices encode contiguous segments of $\real$ such that nodes, distributed by a standard Gaussian distribution on $\real$, are distributed uniformly across the $32$ blocks. For ``harmonic'' closeness (top row), the analytic estimate is given by Eq. \ref{eq:closeness_general}, the approximate analytic estimate by Eq. \ref{eq:closeness_apx}, and the rank-1 approximate closed-form estimate by Eq. \ref{eq:closeness_rank1}. Evidently, the closeness centrality of peripheral nodes declines sharply as connection scales become smaller. Also, the rank-1 approximation estimate remains in reasonable agreement with the analytic form. For betweenness (bottom row), the analytic estimate is given by Eqs. \ref{eq:btw_def} and \ref{eq:prob_bridge}---where the bridging probability is given by Eq. \ref{eq:bridge_prob} of Lemma \ref{lemma:bridge}---and the approximate analytic estimate by Eqs. \ref{eq:btw_def} and \ref{eq:prob_bridge}---where the bridging probability is given by Eq. \ref{eq:bridge_prob_apx} of Lemma \ref{lemma:bridge_apx}. Here too we see qualitatively similar behaviour: nodes on the periphery (at the center) decrease (increase) in betweenness as connection scales become smaller. We note that the approximate analytic value overestimates betweenness of central nodes, and that there is no closed-form estimate for betweenness.} \label{fig:centrality} \end{figure*}
\subsection{Betweenness centrality}\label{sec:betweenness}
We now consider the node betweenness centrality, which is a measure of how important a node is based on how many other nodes it forms a geodesic ``bridge'' between. We show how a generalization of Lemma \ref{lemma:1} enables us to analytically estimate expected node betweenness using the analytic form of the SPLD.
\paragraph*{Betweenness from the SPLD.}We formulate betweenness for the ensemble average model of Sec. \ref{sec:spd}, but note that the framework can be extended to the general random graph models of Sec. \ref{sec:general_graphs}. Consider a given network of $n$ nodes, let $\zeta_{ij}$ indicate the number of shortest paths between nodes $i$ and $j$, and let $\zeta_{ij}(k)$ be the number of shortest paths between $i,j$ that pass through a given node $k$. Then asymptotically, the normalized node betweenness centrality of $k$ as defined by Freeman \cite{freeman1977set} is given by
$$\beta_k=\frac{1}{n^2}\sum_{(i,j)\ne k}\frac{\zeta_{ij}(k)}{\zeta_{ij}}=\expect{\frac{\zeta_{ij}(k)}{\zeta_{ij}}},$$
where the expectation is over source and target nodes $i,j$. The computation of $\beta_k$ would typically necessitate a description of the \emph{number} of shortest paths $\zeta_{ij}$ between two nodes $i,j$. However, the original motivation for betweenness is to compute the ``probability that $k$ falls on a randomly selected geodesic linking $i$ and $j$'' \cite{freeman1977set}. In our random graph framework, through the shortest path length distribution, we have access to this probability on average. Let $\chi_{ijk}$ indicate the probability that a shortest path between nodes $i$ and $j$ passes through $k$.
We thus propose a probabilistic definition for the expected betweenness centrality as
\begin{equation}
\label{eq:btw_def}
\bar{\beta}_k \triangleq \expect{\chi_{ijk}},
\end{equation}
which in turn requires an expression for $\chi_{ijk}$. Asymptotically, since the size of the giant component scales with network size, $k$ is expected to form a geodesic ``bridge'' between $i$ and $j$ only if the nodes $i,j,k$ are on the giant component. If we consider $k$ to form a bridge of length $l$, where $l$ is the geodesic length between $i,j$, then $l$ can take a single value from $\{2,3,\cdots\}$, since all shortest paths between $i,j$ must be of the same length. Let $\overline\chi_{ijk}(l)$ be the probability that the shortest path between $i,j$ is of length $l$ and passes through $k$. Also, if the bridge is of length $l$, then $k$ must be placed at a distance of $p$ from $i$ and $l-p$ from $j$, where $p$ takes precisely one value from $\{1,2,\cdots, l-1\}$, since the paths from $i$ to $k$ (and similarly from $k$ to $j$) must be the shortest between them and therefore all of the same length $p$ (and similarly of the same length $l-p$). Let $\widetilde\chi_{ijk}(p;l)$ be the ``$l,p$-bridging'' probability that the shortest path between $i,j$ is of length $l$, and that it passes through $k$ such that $k$ is at a distance $p$ from $i$. Then, $\chi_{ijk}$ can be written as the sum over these mutually exclusive and exhaustive events:
\begin{subequations}
\label{eq:prob_bridge}
\begin{align}
\label{eq:prob_bridge_1}
\chi_{ijk} &=\sum_{l=2}^\infty\overline{\chi}_{ijk}(l), \textrm{ where}\\
\label{eq:prob_bridge_2}
\overline{\chi}_{ijk}(l)&= \sum_{p=1}^{l-1}\widetilde{\chi}_{ijk}(p;l)P(\phi_i)P(\phi_j)P(\phi_k),\\
\label{eq:prob_bridge_3}
\widetilde{\chi}_{ijk}(p;l)&\triangleq P(\lambda_{ik}=p,\lambda_{kj}=l-p,\lambda_{ij}=l|\phi_i,\phi_j,\phi_k).
\end{align}
\end{subequations}
To compute the RHS of Eq. \ref{eq:prob_bridge_3}, it is useful to prove an ``$l,p$-bridging probability'' lemma that generalizes Lemma \ref{lemma:1} and estimates $\widetilde{\chi}_{ijk}(p;l)$ using the SPLD, which can be inserted in Eq. \ref{eq:prob_bridge} to compute the expected betweenness of any node from Eq. \ref{eq:btw_def}. We refer the reader to Lemma \ref{lemma:bridge} in Appendix \ref{sec:apdx_lemma1}, and note here that estimating $\widetilde{\chi}_{ijk}(p;l)$ from the SPLD involves a recursion over values of $p$ given $l$. We also consider an approximation to this lemma, and refer the reader to Lemma \ref{lemma:bridge_apx} in Appendix \ref{sec:apdx_lemma1}. The approximation is asymptotically tight for finite ``bridge'' lengths in infinite-size networks, works well for shorter ``bridge'' lengths in finite-size networks, and yields a succinct closed-form of the bridging probabilities, thus avoiding the recursion.
\begin{figure} \centering \subfloat[ER Graphs of varying mean degree $\avg{\degree}$, $n=1024$]{\label{fig:bridge_er}\includegraphics[width=\columnwidth]{fig/spd_er_n1024_bridge.pdf}}\\ \subfloat[Bipartite SBM with $B=\big(\begin{smallmatrix} 0 & 8\\ 8 & 0 \end{smallmatrix}\big)$, $\boldsymbol\pi=(0.2, 0.8)$, $n=1024$]{\label{fig:bridge_sbm}\includegraphics[width=\columnwidth]{fig/spd_sbm_d0880_n1024_bridge.pdf}} \caption{Analytic and empirical estimates of the expected bridging probability (BP) $\expect[\mu^2]{\overline{\chi}_{ijk}(l)}$, i.e.
the likelihood that node $k$ lies on a shortest path of length $l$ in the graph, are in good agreement, as shown here for two random graph models: (a) ER graphs with varying mean degree, where the $y$-axis indicates cumulative BP, i.e. the likelihood that $k$ lies on a shortest path of length up to $l$, and (b) a bipartite SBM, where the main $y$-axis indicates BP while the inset plot indicates cumulative BP. Solid (dotted) lines correspond to the (approximate) analytic form as given by Lemma \ref{lemma:bridge} (Lemma \ref{lemma:bridge_apx}), while the dashed line [in (a)] and markers $\circ$ [in (b)] indicate empirics. Bars in (b) represent the standard deviation over 10 network samples, and have been excluded from (a) for clarity. We remark that the saturating value of the cumulative BP defines betweenness from Eqs. \ref{eq:btw_def}, \ref{eq:prob_bridge}.} \label{fig:bridge} \end{figure}
Lemmas \ref{lemma:bridge}, \ref{lemma:bridge_apx}, and consequently the expression for betweenness from Eqs. \ref{eq:btw_def}, \ref{eq:prob_bridge}, can be re-written in the notation of the general random graph framework of Sec. \ref{sec:general_graphs}, where we estimate the expected betweenness at a node location $x\in V$, the expectation is taken over $V$, and sums over nodes are replaced by integrals over $V$ scaled by $n$. However, unlike for closeness, we do not obtain closed-form approximations for betweenness centrality. Regardless, we emphasize that the recursive method remains computationally easy to apply in practice, as the mode of the SPLD is not too large for finite-sized networks. In Fig. \ref{fig:bridge}, we plot the expected bridging probability for a node $k$, given by $\expect{\overline{\chi}_{ijk}(l)}$, in (1) ER graphs of varying degree, and (2) a bipartite SBM---computed both empirically and analytically using the bridging probability Lemma \ref{lemma:bridge} and its approximate version Lemma \ref{lemma:bridge_apx}.
\paragraph*{Illustrative example: Gaussian RGG.}Using Eq. \ref{eq:prob_bridge} and Lemma \ref{lemma:bridge}, we can find the expected betweenness of any node from Eq. \ref{eq:btw_def}, which is given exactly by the cumulative expected bridging probability. In Fig. \ref{fig:centrality}, we demonstrate this approach to compute expected node betweenness for the $1$-dimensional Gaussian RGG with different connectivity scales $R$ that was previously considered in Sec. \ref{sec:closeness}. We note good agreement between empirical and analytic expected betweenness, especially when the connection scales are not too small. We also note that the \emph{approximate} bridging probability (from Eq. \ref{eq:bridge_prob_apx}, Lemma \ref{lemma:bridge_apx}) consistently overestimates betweenness of nodes, and more so for nodes with higher betweenness. This is especially apparent at the smallest connection scale of $R=0.1$, likely because smaller scales induce longer geodesic lengths on average, and the approximation in Lemma \ref{lemma:bridge_apx} works better for shorter geodesic lengths in a finite-size network. However, if the application is only to obtain a ranking of nodes by betweenness, then it may suffice.
\paragraph*{Relationship to centralities based on matrix functions.}In this section, we have shown how computationally intensive measures of closeness and betweenness can be analytically estimated for large sparse independent-edge models using the SPLD framework. We close by drawing a theoretical connection to the centralities literature. From Eq.
\ref{eq:sf_avg}, the approximate closed-form survival function of the SPLD for the ensemble average model is given by a function of sums of powers of $\avg{A}$. This is reminiscent of well-studied centralities which can be expressed as matrix functions of the adjacency matrix $A$ \cite{estrada2010matrixfunction}, i.e. as weighted sums of $A^l$ (and/or $(A^T)^l$, for directed networks). In particular, in the asymptotic limit, we can define weighted versions of closeness and betweenness whose form involves a weighted sum of $\avg{A}^l$, akin to the down-weighting of $A^l$ when computing measures like Katz centrality \cite{katz1953centrality}, subgraph centrality \cite{estrada2005subgraph}, and other bi-directional measures \cite{cooper2010role}. We refer the reader to Appendix \ref{sec:apdx_centralities} for more details.
\section{\label{sec:conclusion}Conclusion}
In this work, we have derived an analytic distribution of shortest path lengths (SPLD) for networks, directed or undirected, generated by a sparse random graph model, symmetric or asymmetric, with conditionally independent edges in the asymptotic limit. The distribution describes shortest paths on the giant component when it exists (the supercritical regime), and on the small components otherwise (the subcritical regime). The SPLD is given by a pair of recursive equations which can be easily solved with initial conditions supplied by the form of the random graph model. We have obtained a closed-form lower bound on the survival function of the SPLD. In the supercritical regime, the bound is tight for finite lengths in the asymptotic limit, and for shorter lengths in finite-size networks. In the subcritical regime, it is tight for all lengths in asymptotically large networks. The lower bound provides an approximate closed-form of the survival function of the SPLD up to length $l$ that resembles the process of hitting a target node $j$ from a source node $i$ via any of the independent geodesics of independent lengths up to $l$, i.e. it is given by an exponential of the negative likelihood of independent geodesics up to length $l$. This generalizes previous analytic \cite{blondel2007distance, katzav2015analytical} and closed-form approaches \cite{fronczak2004average} to model the SPLD in ER graphs and scalar latent variable models. Tab. \ref{tab:summary_of_results} provides an index of these analytical results on the SPLD. We have shown that it is possible to analytically, and therefore cheaply, compute the expectation of key node-level statistics that use shortest path lengths, namely node closeness and node betweenness centralities. For large real-world graphs like social networks, which may have millions of nodes, the ground truth graph or simulated networks can be prohibitively large. Computation of all pairs of shortest paths is then intractable, but our approach makes such computations much easier.
\paragraph*{SPLD in general random graph families.}Transitioning away from the ensemble average setting, where one has access to the expected adjacency matrix $\avg{A}$, we have defined a general framework of random graph models in some node space $V$ which generalizes inhomogeneous random graphs \cite{bollobas2007phase} to the asymmetric setting, permitting interesting behaviours in directed networks. This encompasses a diverse set of models like stochastic block models (SBM), random geometric graphs (RGG), random dot-product graphs (RDPG) and (sparse) graphons.
We have derived a closed-form bound on the survival function of the SPLD in this general setting, whose expression is determined by an iterated integral operator $T$ defined over functions on $V$, where the connectivity kernel represents the likelihood of an edge existing between two nodes in $V$. The operator $T$ is analogous to $\avg{A}$ in the ensemble average model. For symmetric kernels, this yields an expression for the SPLD in terms of the spectral decomposition of $T$. In particular, we show for SBMs that the SPLD is expressed as an eigendecomposition of the block matrix. For illustrative examples of each of the above-mentioned models, we have derived the approximate closed-form of the SPLD, revealing novel insights, particularly for higher-dimensional models whose shortest paths have not been previously studied analytically. Despite the assumptions involved, we have shown for various models---including ``Gaussian RGG'', ``Dirichlet RDPG'', random $\degree$-regular graphs, and ``scale-free graphons''---that there is good agreement between the approximate closed-form and empirical estimates of expected shortest path lengths between node pairs.
\paragraph*{Scope of applications.}From an applied perspective, we provide empirical corroboration of our framework by demonstrating how real-world networks can be cheaply ``coarsened'' into SBMs to compute their expected SPLD analytically (see Figs. \ref{fig:aspl_snap} and \ref{fig:spl_statistics_empirical}). Our results on RDPGs find relevance in the burgeoning field of statistical machine learning on graphs, wherein nodes are typically embedded in a Euclidean space $\real^k$ equipped with the dot-product. We have shown that the matrix of second moments in $\real^k$ (or in a corresponding feature space $\real^d$ when the kernel is not linear in the dot-product) completely defines the SPLD for nodes located at $\boldsymbol{x},\boldsymbol{y}\in \real^k$. This is particularly useful for sequential learning or querying for distances from individual nodes in prohibitively large networks. More generally, our theoretical framework to analytically estimate the SPLD can aid recent advances in graph representation learning based on graph neural networks (GNN), which incorporate knowledge of inter-node distances beyond immediate neighbours, and are provably more expressive than GNNs which do not \cite{li2020distanceencoding}.
\paragraph*{Bond percolation threshold.}Many local and global properties of interest can be extracted from the full SPLD. For instance, the expected size of the giant component (and of the in-/out-components for directed networks) can be estimated as the limit of the cumulative distribution function of the SPLD. Notably, there are wider theoretical implications as well. We have shown how the bond percolation threshold, at which the giant component appears, can be determined in the general setting of a conditionally independent edge model using the SPLD, regardless of whether the network is directed, and regardless of the symmetry of the connectivity kernel. Specifically, we have shown that asymptotically a giant (in-/out-)component exists if and only if the spectral radius of the integral operator is greater than unity, i.e. $r(T)>1$, with the phase transition at $r(T)=1$. This draws a connection between the SPLD and percolation behaviour, and extends previous results on the percolation threshold for inhomogeneous graphs with symmetric kernels \cite{soderberg2002general, soderberg2003properties, bollobas2007phase}.
We have validated this result for both discrete space (2-block SBM) and continuous space network models (2-dimensional Gaussian RGG, scale-free graphon). By proving the existence of a percolation threshold in terms of the connectivity parameter for a non-uniform continuous space ensemble like the Gaussian RGG, we have provided a counterexample to the conjecture in Ref. \cite{barnett2007spatially} regarding spatially embedded networks, which had suggested that ``there is a phase transition only in the case of uniform ensembles.'' In the context of scale-free graphons, our approach yields expressions for the critical mean network degree $\avg{\degree}_c$ as a function of the network size $n$ for percolation to occur, which adds to our understanding of the robustness of scale-free networks to failure \cite{cohen2000percolation, cohen2001breakdown, callaway2000percolation}. In particular, $\avg{\degree}_c(n)$ is a slowly-decreasing function of $n$ for BA graphs and a regularly-decreasing function of $n$ for ``highly scale-free'' networks, unlike for ER graphs where $\avg{\degree}_c(n)=1$ \cite{erdos1960evolution}. We have introduced the term ``rank-1'' for a class of random graph models whose integral operator $T$ has exactly one non-zero (positive) eigenvalue, which is also its spectral radius. We have shown that for any rank-1 graphon, the spectral condition for percolation is equivalent to the well-known Molloy and Reed criterion \cite{molloy1995critical} for percolation, which says that percolation occurs when the expected number of neighbors-of-neighbors of a node is higher than the expected number of neighbors itself. Tab. \ref{tab:results} summarizes our analytical results on the bond percolation threshold.
\paragraph*{Path-based statistics for nodes and networks.}Given the SPLD, we can obtain an estimate of its first moment, i.e. the mean geodesic length. We have shown that the approximate closed-form of the SPLD provides a useful and accurate approximation of the average shortest path length. For rank-1 models, which are equivalent to the canonical degree-configuration model, it is possible to obtain the mean in closed-form from knowledge of the eigenvalue and the expectation of the logarithm of its corresponding eigenfunction. We have demonstrated this approach for random $\degree$-regular graphs of varying degree, and for scale-free graphons spanning all levels of ``scale-freeness''. Synthesizing a closed-form estimate of the mean geodesic length for random $\degree$-regular graphs also draws a connection to the degree-diameter problem in graph theory via the Moore bound \cite{hoffman1960mooregraph}. For higher-rank models, we can make a suitable rank-1 approximation and still obtain a good closed-form estimate of the mean geodesic length, as we have shown for 2-block SBMs, and for a wide variety of real-world networks. More generally, for models where the non-leading eigenvalues are small and positive, this leads to the expected geodesic length scaling as $\order{\left(\log r(T)\right)^{-1}}$. In a similar vein, we have shown how the expected closeness centrality at a node location is related to the logarithm of the leading eigenfunction of $T$ evaluated at that location.
\paragraph*{Limitations.}The assumptions involved impose limitations on where our approach is applicable, and suggest extensions to be considered for future work.
The conditionally-independent-edges assumption precludes the tight local clustering commonly found in some graphs like social networks \cite{holland1971transitivity, holland1976local}, and sparsity constraints may not hold for broad degree distributions over finite network sizes---both being important assumptions for deriving the recursive equations for the SPLD. The approximations involved in obtaining the closed-form lower bound discount probability mass at longer path lengths, and thus may not hold well for some models, particularly for finite-size networks with very large diameters. In practice, however, we have observed good agreement between analytics and empirics for the statistics we considered, both for Gaussian RGGs, which are highly spatial and thus exhibit some degree of local clustering, and for scale-free graphons, which have heavy-tailed degree distributions.
\acknowledgements
This work has been supported by EPSRC grant EP/N014529/1. The authors would like to thank Mauricio Barahona, Asher Mullokandov, George Cantwell, Florian Klimm, Till Hoffmann and Matthew Garrod for insightful discussions and helpful comments on the manuscript.
\section{Introduction}
A global interaction in a system takes place when all its elements share a common influence or source of information. Global interactions occur in many physical, chemical, biological, social, and economic systems, such as parallel electric circuits, coupled oscillators \cite{Kuramoto,Naka}, charge density waves \cite{Gruner}, Josephson junction arrays \cite{Wie}, multimode lasers \cite{Wie2}, neural networks, evolution models, ecological systems \cite{Kan1}, social networks \cite{Newman}, economic exchange \cite{Yako}, mass media influence \cite{Media1,Media2}, and cross-cultural interactions \cite{Cross}. A variety of phenomena can occur in systems subject to global interactions; for example, chaos synchronization, dynamical clustering, nontrivial collective behavior, chaotic itinerancy \cite{Kan2,Manrubia}, chimera states \cite{Sethia,Yedel}, and quorum sensing \cite{Ojalvo}. These behaviors have been investigated in arrays of globally coupled oscillators in diverse experiments \cite{Wang,Monte,Taylor,Tinsley,Scholl}. Global interactions can provide relevant descriptions in networks possessing highly interconnected elements or long-range interactions. Systems with global and local interactions have also been studied \cite{Kan3}. A global interaction field may consist of an external influence acting on all the elements of a system, as in a driven dynamical system \cite{Parra}; or it may arise from the interactions between the elements, such as a mean field \cite{Piko}, in which case we have an autonomous dynamical system. So far, systems subject to either type of global field have mostly been studied separately. In this article, we investigate the dynamics of systems subject to the simultaneous influence of external and autonomous global interaction fields. As a simple model for such systems, we study a network of coupled maps with coexisting external and autonomous global fields. Specifically, we focus on the important phenomenon of chaos synchronization. With this aim, we consider local map units possessing robust chaotic dynamics. A chaotic attractor is robust if there exists a neighborhood in its parameter space where windows of periodic orbits are absent \cite{Grebogi}. Robust chaos is an advantageous property in applications that require reliable functioning in a chaotic regime, since the chaotic behavior cannot be destroyed by arbitrarily small perturbations of the parameters. In Section~2, we present the model for a system subject to coexisting autonomous and external global fields. We define two types of synchronization states for this system in relation to the external field---complete synchronization and generalized synchronization---and characterize them through statistical quantities. In Section~3, we carry out a stability analysis of the synchronization states and derive conditions for their stable behavior in terms of the system parameters. Section~4 contains applications of the model with coexisting global fields for maps exhibiting robust chaos. The states of chaos synchronization for the system and their stability boundaries are characterized on the space of parameters expressing the strength of the coupling to the global fields. In particular, we show that the state of generalized synchronization of chaos can appear even when the functional forms of the external field and the local maps are equal, a situation that does not occur for this family of maps if only one global field is present.
This behavior represents a collective ordering of the system in a state alternative to that of the driving field. Conclusions are given in Section~5.
\section{Coupled map network with autonomous and external global fields}
As a model of a system of chaotic oscillators with coexisting autonomous and external global fields, we consider a network of coupled maps in the form
\begin{eqnarray}
y_{t+1} &=& g(y_t) , \label{eq1.Modelo} \\
x_{t+1}^i &=& (1-\epsilon_1-\epsilon_2)f(x_t^i) + \epsilon_1 h_t + \epsilon_2 g(y_t) , \label{eq2.Modelo} \\
h_t &=& \frac{1}{N}\sum_{j=1}^{N}f(x_t^j) , \label{eq3.Modelo}
\end{eqnarray}
where $x_t^i$ represents the state variable of element $i$ $(i=1,2, \ldots,N)$ at discrete time $t$; $N$ is the size of the system; $f(x_t^i)$ describes the local chaotic dynamics; $y_{t+1}=g(y_t)$ is an external global field that acts as a homogeneous drive with independent chaotic dynamics; $h_t$ is an autonomous global field that corresponds to the mean field of the system; $\epsilon_{1}$ is a parameter measuring the strength of the coupling of the elements to the mean field $h_t$; and $\epsilon_{2}$ expresses the intensity of the coupling to the external field. We assume a diffusive form of the coupling for both fields. A synchronization state for the system Eqs.~(\ref{eq1.Modelo})-(\ref{eq3.Modelo}) occurs when the $N$ elements share the same state; that is, $x_t^i=x_t^j$, $\forall i,j$. Then, the mean field becomes $h_t=f(x_t^i)$, $\forall i$. Two types of synchronization states can be defined in relation to the external global field $g(y_t)$: (i) complete synchronization, where the $N$ elements in the system are synchronized among themselves and also to the external driving field; i. e., $x_t^i=x_{t}^{j}=y_t$, $\forall i,j$, or $h_t=f(x_t^i)=g(y_t)$; and (ii) internal or generalized synchronization, where the $N$ elements get synchronized among themselves but not to the external global field; i. e., $x_t^i=x_t^j \neq y_t$, $\forall i,j$, or $h_t=f(x_t^i)\neq g(y_t)$. To characterize the synchronization states of the system Eqs.~(\ref{eq1.Modelo})-(\ref{eq3.Modelo}), we calculate the asymptotic time average $\langle\sigma\rangle$ (after discarding transients) of the instantaneous standard deviation $\sigma_{t}$ of the distribution of state variables, defined as
\begin{equation}
\sigma_{t}=\left[\frac{1}{N}\sum_{i=1}^{N}(x_{t}^{i}-\bar{x}_{t})^{2}\right]^{1/2},
\end{equation}
where
\begin{equation}
\bar{x}_{t}=\frac{1}{N}\sum_{i=1}^{N}x_t^i.
\end{equation}
Additionally, we calculate the asymptotic time average $\langle\delta\rangle$ (after discarding transients) of the instantaneous difference
\begin{equation}
\delta_t=\left|\bar{x}_t-y_t \right|.
\end{equation}
Then, a complete synchronization state $x_t^i=x_{t}^{j}=y_t$, $\forall i,j$, where the maps are synchronized to the external global field, is characterized by the values $\langle \sigma \rangle=0$ and $\langle \delta \rangle=0$. An internal or generalized synchronization state $x_t^i=x_t^j \neq y_t$, $\forall i,j$, where the maps are synchronized among themselves but not to the external field, corresponds to $\langle \sigma \rangle=0$ and $\langle \delta \rangle\neq 0$. In practice, we adopt the numerical conditions $\langle \sigma \rangle < 10^{-7}$ and $\langle \delta \rangle < 10^{-7}$ as the zero values of these statistical quantities to characterize the above synchronization states.
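These statistical diagnostics are straightforward to evaluate numerically. The following is a minimal simulation sketch of Eqs.~(\ref{eq1.Modelo})-(\ref{eq3.Modelo}) (in Python with \texttt{numpy}; the tent map $f(x)=\frac{r}{2}|1-2x|$ with $r=2$ anticipates the application of Section~4, and the specific parameter values are illustrative):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def tent(x, r=2.0):
    # Local tent map f(x) = (r/2)|1 - 2x|, robustly chaotic for r in (1, 2].
    return 0.5 * r * np.abs(1.0 - 2.0 * x)

def sync_statistics(eps1, eps2, N=1000, T=25000, trans=5000, f=tent, g=tent):
    # Iterate Eqs. (1)-(3) and return the time averages <sigma>, <delta>.
    x = rng.random(N)
    y = rng.random()
    sig, dlt = [], []
    for t in range(T):
        fx = f(x)
        h = fx.mean()                       # autonomous mean field h_t
        x = (1.0 - eps1 - eps2) * fx + eps1 * h + eps2 * g(y)
        y = g(y)                            # external drive iterates freely
        if t >= trans:
            sig.append(x.std())             # sigma_t
            dlt.append(abs(x.mean() - y))   # delta_t
    return np.mean(sig), np.mean(dlt)

for eps2 in (0.2, 0.4, 0.6):
    print(eps2, sync_statistics(eps1=0.2, eps2=eps2))
\end{verbatim}
With $\epsilon_1=0.2$, the three values of $\epsilon_2$ should respectively yield $\langle\sigma\rangle$ and $\langle\delta\rangle$ both non-zero (desynchronization), $\langle\sigma\rangle$ numerically zero with $\langle\delta\rangle\neq 0$ (generalized synchronization), and both numerically zero (complete synchronization), consistent with the regions found in Section~4.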
\section{Stability analysis of synchronized states}
The system Eqs.~(\ref{eq1.Modelo})-(\ref{eq3.Modelo}) can be written in vector form as
\begin{equation}
\mathbf{x}_{t+1}=\textbf{M}\mathbf{f}(\mathbf{x}_{t}),
\label{eq.ModeloVectorial}
\end{equation}
where $\mathbf{x}_t$ and $\mathbf{f}(\mathbf{x}_t)$ are $(N+1)$-dimensional state vectors expressed as
\begin{equation}
\mathbf{x}_{t}=\begin{pmatrix} y_t \\ x_{t}^1 \\ x_{t}^2 \\ \vdots \\ x_{t}^N\end{pmatrix}, \qquad \mathbf{f}(\mathbf{x}_{t})=\begin{pmatrix} g(y_t) \\ f(x_{t}^1) \\ f(x_{t}^2) \\ \vdots \\ f(x_{t}^N) \end{pmatrix},
\end{equation}
and $\textbf{M}$ is the $(N+1)\times(N+1)$ matrix
\begin{equation}
\textbf{M}=(1-\epsilon_1-\epsilon_2)\textbf{I} + \frac{1}{N}\textbf{C},
\end{equation}
where $\mathbf{I}$ is the $(N+1)\times(N+1)$ identity matrix and $\mathbf{C}$ is the $(N+1)\times(N+1)$ matrix that represents the coupling to the global fields, given by
\begin{equation}
\textbf{C}=\begin{pmatrix} (\epsilon_1+\epsilon_2) N & 0 & \cdots & 0 \\ \epsilon_{2}N & \epsilon_1 & \cdots & \epsilon_1\\ \vdots & \vdots & \ddots & \vdots \\ \epsilon_{2}N & \epsilon_1 & \cdots & \epsilon_1 \end{pmatrix}.
\end{equation}
The linear stability condition for synchronization can be expressed in terms of the Lyapunov exponents of the system Eq.~(\ref{eq.ModeloVectorial}). This requires the knowledge of the $(N+1)$ eigenvalues of matrix $\mathbf{M}$, given by
\begin{equation}
\mu_{k}=(1-\epsilon_1-\epsilon_2) + \frac{1}{N}c_{k}, \qquad k=1,2,\ldots,N+1,
\end{equation}
where $c_k$ are the eigenvalues of matrix $\mathbf{C}$, corresponding to $c_1=(\epsilon_1 +\epsilon_2) N$, $c_2=\epsilon_1 N$, and $c_k=0$ for $k>2$, which is $(N-1)$-fold degenerate. Then, the eigenvalues of matrix $\mathbf{M}$ are
\begin{eqnarray}
\mu_{1} &= & 1, \label{e1} \\
\mu_{2} &= & 1-\epsilon_{2}, \label{e2} \\
\mu_{k} &= & 1-\epsilon_1-\epsilon_2, \quad k>2, \\
& & (N-1)-\mbox{fold degenerate}. \nonumber \label{e3}
\end{eqnarray}
The eigenvectors of the matrix $\textbf{M}$ satisfying $\textbf{M}\mathbf{u}_k=\mu_k\mathbf{u}_k$ are also eigenvectors of the matrix $\textbf{C}$, and they are given by
\begin{equation}
\mathbf{u}_1=\begin{pmatrix} 1 \\ 1 \\ \vdots \\ 1 \\ 1 \end{pmatrix}, \qquad \mathbf{u}_2=\begin{pmatrix} 0 \\ 1 \\ \vdots \\ 1 \\ 1 \end{pmatrix}, \qquad \mathbf{u}_k=\begin{pmatrix} 0 \\ a_1 \\ a_2 \\ \vdots\\ a_N \end{pmatrix},
\end{equation}
where the components $a_i$ of the eigenvectors $\mathbf{u}_k$, $k>2$, satisfy the condition $\sum_{i=1}^N a_{i}=0$, since the eigenvectors $\mathbf{u}_k$ are orthogonal to the eigenvectors $\mathbf{u}_1$ and $\mathbf{u}_2$. The eigenvectors of the matrix $\mathbf{M}$ constitute a complete basis in which the state $\mathbf{x}_t$ of the system Eq.~(\ref{eq.ModeloVectorial}) can be expressed as a linear combination. In particular, the complete synchronization state of $\mathbf{x}_t$ is associated with the eigenvector $\mathbf{u}_1$, while the generalized synchronization state is represented by the eigenvector $\mathbf{u}_2$.
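This eigenvalue structure is easy to verify numerically; below is a minimal check (in Python with \texttt{numpy}; the assembly follows the definition of $\textbf{M}$ above, and the helper name is ours):
\begin{verbatim}
import numpy as np

def coupling_matrix(N, eps1, eps2):
    # M = (1 - eps1 - eps2) I + C/N: row 0 propagates the drive y_t,
    # rows 1..N the coupled maps x_t^i.
    M = np.zeros((N + 1, N + 1))
    M[0, 0] = 1.0                                  # y_{t+1} = g(y_t)
    M[1:, 0] = eps2                                # coupling to the drive
    M[1:, 1:] = eps1 / N                           # mean-field coupling
    M[1:, 1:] += (1.0 - eps1 - eps2) * np.eye(N)   # local term
    return M

N, eps1, eps2 = 6, 0.2, 0.4
mu = np.sort(np.linalg.eigvals(coupling_matrix(N, eps1, eps2)).real)
print(mu)  # N-1 copies of 1-eps1-eps2, then 1-eps2, then 1
\end{verbatim}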
The $(N+1)$ Lyapunov exponents $(\Lambda_1,\Lambda_2,\ldots,\Lambda_{N+1})$ of the system Eq.~(\ref{eq.ModeloVectorial}) are defined as
\begin{eqnarray}
&(e^{\Lambda_{1}},e^{\Lambda_{2}},\cdots, e^{\Lambda_{N+1}})= & \nonumber\\
& \lim_{T \to \infty}\left(\mbox{magnitudes of the eigenvalues of} \; \prod_{t=0}^{T-1}\mathbf{J}(\mathbf{x}_t)\right)^{1/T}, & \nonumber
\label{DefExpLyap}
\end{eqnarray}
where $\textbf{J}$ is the Jacobian matrix of the system Eq.~(\ref{eq.ModeloVectorial}), whose components are
\begin{equation}
J_{ij}=\left[(1-\epsilon_1-\epsilon_2)\delta_{ij} + \frac{1}{N}c_{ij}\right]\frac{\partial{[\mathbf{f}(\mathbf{x}_t)]_i}}{\partial{x_j}},
\end{equation}
where $c_{ij}$ are the $ij$-components of matrix $\mathbf{C}$ and $[\mathbf{f}(\mathbf{x}_t)]_i$ is the $i$-component of vector $\mathbf{f}(\mathbf{x}_t)$. Then, we obtain
\begin{equation}
e^{\Lambda_{k}}=\lim_{T \to \infty}\left|\mu_{k}^{T}\prod_{t=0}^{T-1}f'(x_t^k)\right|^{1/T}, \qquad k=1,\ldots,N+1,
\end{equation}
where $\mu_k$, $k=1,2,\ldots,N+1$, are the eigenvalues of matrix $\mathbf{M}$. Substitution of the eigenvalues $\mu_{k}$ gives the Lyapunov exponents for the system Eq.~(\ref{eq.ModeloVectorial}),
\begin{eqnarray}
\Lambda_{1}&=&\lambda_g, \label{Lyap}\\
\Lambda_{2}&=&\ln(1-\epsilon_{2}) +\lim_{T\to \infty}\frac{1}{T}\sum_{t=0}^{T-1}\ln\left|f'(x_t^1)\right|, \label{Lyap1}\\
\Lambda_{k}&=&\ln(1-\epsilon_1-\epsilon_2) + \lim_{T\to \infty}\frac{1}{T}\sum_{t=0}^{T-1}\ln\left|f'(x_t^k)\right|,\nonumber \\
& & \hspace{5.1cm} k >2, \label{Lyap2}
\end{eqnarray}
where $\lambda_{g}$ is the Lyapunov exponent of the drive map $g(y_t)$, which is positive since $g(y_t)$ is assumed chaotic. Note that, in general, the limit terms in Eqs.~(\ref{Lyap1}) and (\ref{Lyap2}) depend on $\epsilon_1$ and $\epsilon_2$, since the iterates $x_t^i$ are obtained from the coupled system Eqs.~(\ref{eq1.Modelo})-(\ref{eq3.Modelo}). At synchronization, these terms are equal and we denote them by $\lambda_f$. The stability of the synchronized state is given by the condition
\begin{equation}
e^{\Lambda_k}=\left|\mu_k e^{\lambda_k}\right|<1,
\label{cond-mod}
\end{equation}
where $\lambda_1=\lambda_g$ and $\lambda_k=\lambda_f$ for $k>1$. Perturbations of the state $\mathbf{x}_t$ along the homogeneous eigenvector $\mathbf{u}_1=(1,1,\ldots,1)$ do not affect the coherence of the system; thus the stability condition corresponding to the eigenvalue $\mu_1$ is irrelevant for the complete synchronized state. Then, condition Eq.~(\ref{cond-mod}) with the next eigenvalue $\mu_2$ provides the range of parameter values where the complete synchronized state $x_t^i=y_t$, $\forall i$, is stable; i. e.,
\begin{equation}
\left|(1-\epsilon_2)e^{\lambda_f}\right|<1 \Rightarrow \quad 1-\frac{1}{e^{\lambda_f}} < \epsilon_2 < 1+\frac{1}{e^{\lambda_f}}.
\label{cs}
\end{equation}
Equivalently, complete synchronization takes place when $\Lambda_2 <0$. On the other hand, the internal or generalized synchronization state $\mathbf{x}_t$ of the system is proportional to the eigenvector $\mathbf{u}_2=(0,1,\ldots,1)$. The stability condition of this state is given by the next, degenerate eigenvalue $\mu_k$, $k>2$; that is,
\begin{eqnarray}
\left|(1-\epsilon_1-\epsilon_2)e^{\lambda_f}\right|<1 \quad & \nonumber \\
\Rightarrow \; 1-\epsilon_2-\frac{1}{e^{\lambda_f}} <\epsilon_1 < 1-\epsilon_2+\frac{1}{e^{\lambda_f}} .& \label{gs}
\end{eqnarray}
The condition for stable generalized synchronization can also be expressed as $\Lambda_k <0$, for $k>2$.
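The stability conditions Eqs.~(\ref{cs}) and (\ref{gs}) translate directly into a classifier of the synchronization regions on the $(\epsilon_1,\epsilon_2)$ plane. A minimal sketch (in Python with \texttt{numpy}; we take $\lambda_f=\ln 2$, which holds for the tent-map example with $r=2$ considered in the next section, and label a point CS when Eq.~(\ref{cs}) holds, GS when only Eq.~(\ref{gs}) holds, and D otherwise):
\begin{verbatim}
import numpy as np

def classify(eps1, eps2, lam_f):
    # Eq. (cs): |(1 - eps2) e^{lam_f}| < 1  (complete synchronization)
    # Eq. (gs): |(1 - eps1 - eps2) e^{lam_f}| < 1  (generalized synchr.)
    cs = abs((1.0 - eps2) * np.exp(lam_f)) < 1.0
    gs = abs((1.0 - eps1 - eps2) * np.exp(lam_f)) < 1.0
    return "CS" if cs else ("GS" if gs else "D")

lam_f = np.log(2.0)  # tent map with r = 2 at synchronization
for eps2 in (0.2, 0.4, 0.6):
    print(f"eps1=0.2, eps2={eps2}: {classify(0.2, eps2, lam_f)}")
\end{verbatim}
For $\epsilon_1=0.2$ this reproduces the D, GS and CS labels of the regions along $\epsilon_2$ discussed next.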
Because of the eigenvalues Eqs.~(\ref{e1})-(\ref{e3}), the stability conditions Eqs.~(\ref{cs}) and (\ref{gs}) can be achieved for any system size $N \geq 2$. Equations~(\ref{cs}) and (\ref{gs}) describe the regions on the space of the coupling parameters $(\epsilon_1,\epsilon_2)$ where complete and internal synchronization can respectively occur in the system with coexisting global fields, Eqs.~(\ref{eq1.Modelo})-(\ref{eq3.Modelo}).
\section{Applications}
We consider the system Eqs.~(\ref{eq1.Modelo})-(\ref{eq3.Modelo}) with local dynamics described by the tent map
\begin{equation}
f(x_t^i) = \dfrac{r}{2}\left|1-2 \,x_t^i\right|,
\label{eq.Tienda}
\end{equation}
which exhibits robust chaos for $r \in (1,2]$ with $x_t^i \in [0,1]$. We fix the local parameter at the value $r=2$ and assume the external driving field equal to the local dynamics, i.e., $g=f$. Figure~\ref{Tent}(a) shows the statistical quantities $\langle \sigma \rangle$ and $\langle \delta \rangle$ that characterize the collective synchronization states for this system as functions of the coupling parameter $\epsilon_2$, with fixed $\epsilon_1=0.2$. The system size is $N=5000$. Labels indicate the regions of the parameter $\epsilon_2$ where different synchronization states take place: D (desynchronized state), where $\langle \sigma \rangle \neq 0$ and $\langle \delta \rangle \neq 0$; GS (generalized or internal synchronization), corresponding to $\langle \sigma \rangle=0$ and $\langle \delta \rangle \neq 0$; and CS (complete synchronization), characterized by $\langle \sigma \rangle=0$ and $\langle \delta \rangle=0$. Figure~\ref{Tent}(b) shows the Lyapunov exponents $\Lambda_1$, $\Lambda_2$, and $\Lambda_3$ as functions of $\epsilon_2$, with fixed $\epsilon_1=0.2$, for a system of minimum size $N=2$, since the stability conditions for the synchronized states are satisfied for $N \geq 2$. The Lyapunov exponent $\Lambda_1=\lambda_g=\ln 2$ is positive. The transition of the exponent $\Lambda_3$ from positive to negative values signals the onset of stable generalized synchronization (GS), while $\Lambda_2=0$ indicates the boundary of the complete synchronization state (CS), which is stable for $\Lambda_2 < 0$.
\begin{figure}[h]
\centering
\includegraphics[scale=.75]{Fig1a.eps}\\
\vspace{0.4cm}
\includegraphics[scale=.75]{Fig1b.eps}
\caption{(a) Statistical quantities $\langle\sigma\rangle$ (red line) and $\langle\delta\rangle$ (blue line) as functions of $\epsilon_2$ for the system Eqs.~(\ref{eq1.Modelo})-(\ref{eq3.Modelo}) with local tent map Eq.~(\ref{eq.Tienda}) with $r=2$, and external field $g=f$. Fixed $\epsilon_1=0.2$, size $N=5000$. For each value of $\epsilon_2$ both quantities are averaged over $20000$ iterates after discarding $5000$ transients. (b) Lyapunov exponents $\Lambda_1=\lambda_g$ (black line), $\Lambda_2$ (blue line) and $\Lambda_3$ (red line) as functions of $\epsilon_2$ for the system Eqs.~(\ref{eq1.Modelo})-(\ref{eq3.Modelo}) with minimum size $N=2$ and $g=f$, with fixed $\epsilon_1=0.2$. For each value of $\epsilon_2$ the Lyapunov exponents were calculated with $25000$ iterations after discarding $5000$ transients. In this case, $\lambda_f=\lambda_g=\ln 2$.
Labels on both figures indicate D: desynchronized or incoherent state; GS: generalized or internal synchronization; CS: complete synchronization.} \label{Tent} \end{figure} Figure~\ref{Curvas_Tienda} shows the collective synchronization states of the system Eqs.~(\ref{eq1.Modelo})-(\ref{eq3.Modelo}) with the local tent map and external drive $g=f$ with $r=2$ on the space of parameters $(\epsilon_1,\epsilon_2)$. The regions on this space where the different states occur are indicated by labels. The boundaries of the synchronized states are calculated analytically from the conditions Eqs.~(\ref{cs}) and (\ref{gs}). These boundaries coincide with the criteria for the quantities $\langle\sigma\rangle$ and $\langle\delta\rangle$ characterizing each state, as explained above. \begin{figure}[h] \centering \includegraphics[scale=.72]{Fig2.eps} \caption{Synchronization states for the system Eqs.~(\ref{eq1.Modelo})-(\ref{eq3.Modelo}) with local tent map and external drive $g=f$ on the space of parameters $(\epsilon_1,\epsilon_2)$. Fixed parameters: $r=2, N=5000$. Labels indicate the regions where these states can be found: D: desynchronization; GS: generalized synchronization; CS: complete synchronization; E: escape. The boundaries determined analytically with Eqs.~(\ref{cs}) and (\ref{gs}) and by the quantities $\langle\sigma\rangle$ and $\langle\delta\rangle$ coincide exactly. The region labeled E corresponds to coupling parameter values $(\epsilon_1,\epsilon_2)$ for which the state variables of the system escape to infinity. The boundary of region E is the upper stability boundary of the generalized synchronization state in Eq.~(\ref{gs}).} \label{Curvas_Tienda} \end{figure} To understand the nature of the collective behaviors, Fig.~\ref{Attractor} shows the attractors corresponding to the different synchronization states for the reduced-size system Eqs.~(\ref{eq1.Modelo})-(\ref{eq3.Modelo}), with the local tent map and external drive $g=f$. The desynchronized state (D) in Fig.~\ref{Attractor}(a) has all positive Lyapunov exponents and shows no definite structure. In the internal synchronization state (GS), with $\Lambda_1 >0$, $\Lambda_2>0$, and $\Lambda_3 <0$, displayed in Fig.~\ref{Attractor}(b), the dynamics collapses onto an attractor lying on the plane $x_t^1=x_t^2$. This plane constitutes the synchronization manifold where $x_t^1=x_t^2=\bar{x}_t$. Thus, the chaotic attractor on this plane represents a nontrivial functional relation, different from the identity, between $\bar{x}_t$ and the drive $y_t$. In general, for the state of generalized synchronization with $N>2$, a chaotic attractor arises between the time series of the mean field $\bar{x}_t$ and that of the drive signal $y_t$. The completely synchronized state (CS), possessing $\Lambda_1 >0$, $\Lambda_2<0$, $\Lambda_3 <0$, is characterized by the attractor lying along the diagonal line $x_t^1=x_t^2=y_t$, as shown in Fig.~\ref{Attractor}(c). In this situation, $\bar{x}_t=y_t$. \begin{figure}[h] \centering \includegraphics[scale=.75]{Fig3a.eps} \includegraphics[scale=.75]{Fig3b.eps} \includegraphics[scale=.75]{Fig3c.eps} \caption{Attractors in the three-dimensional phase space of the reduced-size system Eqs.~(\ref{eq1.Modelo})-(\ref{eq3.Modelo}) with the local tent map and external drive $g=f$. Fixed parameters: $r=2$, $\epsilon_1=0.2$. (a) Desynchronized state (D), $\epsilon_2=0.2$. (b) Generalized synchronization (GS), $\epsilon_2=0.4$.
(c) Complete synchronization (CS), $\epsilon_2=0.6$.} \label{Attractor} \end{figure} The emergence of a chaotic attractor is a characteristic feature of generalized synchronization in a drive-response system when the drive function $g$ is different from the response system function $f$. The generalized or internal synchronization state does not arise if only the external drive $g=f$ acts on the system of tent maps Eqs.~(\ref{eq1.Modelo})-(\ref{eq3.Modelo}), i.e., if $\epsilon_1=0$; nor can it appear with mean field coupling alone. The emergence of the GS state in this system when $g=f$ requires the coexistence of both the autonomous global field and the external drive. Thus, we have a situation where the presence of an autonomous global interaction allows the synchronization of the maps in a state alternative to that of the forcing external field. As another example, we consider a local chaotic dynamics given by the logarithmic map \begin{equation} f(x_t^i) = b + \ln\left|x_t^i\right|. \label{eq.Log} \end{equation} This map is unbounded and possesses robust chaos, with no windows of periodicity, for the parameter interval $b\in[-1,1]$ \cite{Jap}. \begin{figure}[h] \centering \includegraphics[scale=.74]{Fig4a.eps} \\ \vspace{0.3cm} \includegraphics[scale=.74]{Fig4b.eps} \caption{Synchronization states for the system Eqs.~(\ref{eq1.Modelo})-(\ref{eq3.Modelo}) with local logarithmic map Eq.~(\ref{eq.Log}) and $N=5000$ on the space of parameters $(\epsilon_1,\epsilon_2)$. (a) External drive function equal to the local dynamics, $g=f=-0.7+\ln|x|$. (b) External drive different from the local map, with $g= 0.5+\ln|x|$ and $f=-0.7+\ln|x|$. Labels indicate the states D: desynchronization; GS: generalized synchronization; CS: complete synchronization. The boundaries determined from conditions Eqs.~(\ref{cs}) (blue line) and (\ref{gs}) (red lines) coincide with those obtained with the quantities $\langle\sigma\rangle$ and $\langle\delta\rangle$.} \label{CL-Loga} \end{figure} Figure~\ref{CL-Loga}(a) shows the synchronization states of the system Eqs.~(\ref{eq1.Modelo})-(\ref{eq3.Modelo}) with the local chaotic map Eq.~(\ref{eq.Log}) and external drive $g=f$ on the plane $(\epsilon_1,\epsilon_2)$. Labels indicate the regions where the different synchronization states take place. The boundaries of the synchronized states are calculated numerically from Eqs.~(\ref{cs}) and (\ref{gs}). The lower boundary of the GS state is calculated from Eq.~(\ref{gs}) where $\lambda_f \neq \lambda_g$. For the upper boundary of the CS state, the local maps are already synchronized to the drive $y_t$ and therefore $\lambda_f =\lambda_g$; thus Eq.~(\ref{gs}) gives a straight line on the plane $(\epsilon_1,\epsilon_2)$. Figure~\ref{CL-Loga}(b) shows the synchronization states corresponding to an external drive $g \neq f$; only generalized synchronization (GS) can occur in this case. There is no escape in either situation, since the map dynamics is unbounded. \section{Conclusions} We have studied a coupled map model for a system subject to coexisting autonomous and external global fields. We have investigated the states of chaos synchronization in this system, consisting of (i) complete synchronization, where the maps synchronize among themselves and to the external global field, and (ii) generalized or internal synchronization, where the maps synchronize among themselves but not to the external field.
The generalized synchronization state can be described by the appearance of a chaotic attractor between the time series of the mean field of the system and the external driving field. We have performed the stability analysis for both synchronization states and found that the stability conditions can be achieved for a system of minimum size of two maps subject to a common drive. The equivalence of the dynamics for a minimum-size system is a characteristic feature of systems with global interactions \cite{Parra2}. By considering local tent maps and logarithmic maps that possess robust chaotic dynamics, we have focused on the chaotic synchronization behavior of the system. We have characterized the synchronization states on the space of the coupling parameters by using the stability conditions of these states as well as statistical quantities, with complete agreement in all cases. The emergence of the state of generalized synchronization of chaos, when the drive and the local maps have the same functional form, requires the presence of both global fields. This behavior is similar to the phenomenon of spontaneous ordering against an external field found in nonequilibrium systems \cite{Media1}. Our results suggest that, in addition to chaos synchronization, other collective behaviors, whether observed or absent in a system with a single type of global interaction, can be modified when both autonomous and external global fields are present. \section*{Acknowledgment} This work was supported by ViceCanciller\'ia de Investigaci\'on e Innovaci\'on, Universidad Yachay Tech, Ecuador.
\section{Introduction} If $K$ is a convex body in $\mathbb{R}^{n+m}$ and $\pi:\mathbb{R}^{n+m}\to V$ is the orthogonal projection onto a subspace $V\subset\mathbb{R}^{n+m}$ of dimension $n$, the fiber body of $K$ with respect to $\pi$ is the \emph{average} of the fibers of $K$ under this projection: \begin{equation}\label{eq:intro} \Sigma_\pi K= \int_{\pi(K)} \left(K\cap\pi^{-1}(x)\right) \mathrm{d} x. \end{equation} This expression will be made rigorous in Proposition~\ref{prop:supportaverage}. This notion was introduced for polytopes by Billera and Sturmfels in \cite{fiberpolytopes}. It has been investigated in many different contexts, from combinatorics, as in \cite{athanasiadis2000fiber}, to algebraic geometry and even tropical geometry in the context of polynomial systems \cite{esterovkhovanskii, esterovmixedfiber, sturmfels2008tropical}. Notably, recent studies concern the particular case of monotone path polytopes \cite{black2021monotone}. This paper is dedicated to the study of the fiber body of convex bodies that are not polytopes. In Section~\ref{sec:Generalities} the general properties of fiber bodies are stated, with particular focus on the description of the faces of $\Sigma_\pi K$ (Corollary~\ref{cor:sectionrepdirection}). In the rest of the paper, each section concerns the fiber body of a particular class of convex bodies. Section~\ref{sec:Puffed} directly applies the description of the faces to certain convex bodies that we call \emph{puffed polytopes}. These are convex bodies obtained from polytopes by taking the ``derivative'' of their algebraic boundary (see Definition~\ref{def:puffed}). Propositions~\ref{prop:puff1}, \ref{prop:puff2} and \ref{prop:puffn} describe the strict convexity of the fiber body of a puffed polytope. As a concrete example we study the case of the elliptope with a particular projection. In Section~\ref{sec:Smooth} we investigate the class of curved convex bodies. Namely, we consider convex bodies whose boundary is a $C^2$ hypersurface with no ``flat'' directions, i.e., with strictly positive curvature. In that case Theorem~\ref{thm:supofsmooth} gives an explicit formula for the support function of $\Sigma_\pi K$ directly in terms of the support function of $K$. This is an improvement on equation~\eqref{eq:supportaverage}, which involves the support functions of the fibers. We immediately give an example in which the support function of the fiber body is easily computed using Theorem~\ref{thm:supofsmooth}. The last section is dedicated to the case of zonoids. Zonoids arise as limits of finite Minkowski sums of segments. We prove that the fiber body of a zonoid is a zonoid, and give an explicit formula to ``compute'' it in Theorem~\ref{thm:Fiberofzonoids}. We then focus on a particular class of zonoids that are finite Minkowski sums of discs in $3$--space, called \emph{discotopes}. After giving a general description of discotopes as algebraic bodies, we illustrate our formula for zonoids by computing the fiber body of a specific discotope. \subsection*{Acknowledgments} The authors wish to thank Antonio Lerario and Bernd Sturmfels without whom this project would not have existed, and Rainer Sinn for his helpful comments. We also want to thank Fulvio Gesmundo for interesting discussions. \section{Generalities}\label{sec:Generalities} \subsection{Main definitions} Consider the Euclidean vector space $\mathbb{R}^{n+m}$ endowed with the standard Euclidean structure and let $V\subset \mathbb{R}^{n+m}$ be a subspace of dimension $n$.
Denote by $W$ its orthogonal complement, such that $\mathbb{R}^{n+m}=V\oplus W$. Let $\pi : \mathbb{R}^{n+m}\to V$ be the orthogonal projection onto $V$. Throughout this article we will canonically identify the Euclidean space with its dual. However the notation is meant to be consistent: $x,y,z$ will denote vectors, whereas we will use $u,v,w$ for dual vectors. We call \emph{convex bodies} the non--empty compact convex subsets of a vector space. The space of convex bodies in a vector space $E$ is denoted by $\mathscr{K}(E)$. The \emph{support function} of a convex body $K\in\mathscr{K}(\mathbb{R}^{n+m})$ is the function $h_K:\mathbb{R}^{n+m}\to \mathbb{R}$ given for all $u\in \mathbb{R}^{n+m}$ by \begin{equation}\label{eq:defsupp} h_K(u):=\max\set{\langle u, x\rangle}{x\in K}, \end{equation} where $\langle \cdot , \cdot \rangle$ is the standard Euclidean scalar product. This map comes in handy when manipulating convex bodies, as it satisfies some useful properties (see~\cite[Section~$1.7.1$]{bible} for proofs and more details). \begin{proposition}\label{prop:propertieshK}Let $K,L\in\mathscr{K}(\mathbb{R}^{n+m})$ with their respective support functions $h_K,h_L$. Then \begin{enumerate}[label=(\roman*)] \item $h_K=h_L$ if and only if $K=L$; \item If $T:\mathbb{R}^{n+m}\to \mathbb{R}^k$ is a linear map then $h_{TK}=h_K\circ T^{t}$; \item $h_K$ is differentiable at $u\in\mathbb{R}^{n+m}$ if and only if the point $x$ realizing the maximum in~\eqref{eq:defsupp} is unique. In that case $x=\nabla h_K (u).$ \end{enumerate} \end{proposition} If $K\in \mathscr{K}(\mathbb{R}^{n+m})$ we write $K_x$ for the orthogonal projection onto $W$ of the fiber of $\pi|_K$ over $x$, namely \begin{equation} K_x := \set{y \in W}{(x,y)\in K}. \end{equation} \begin{definition} Consider $\gamma : \pi(K) \to W$ such that for all $x\in \pi(K)$, $\gamma (x) \in K_x$. Such a map is called a \emph{section of $\pi$}, or just a \emph{section} when there is no ambiguity. \end{definition} Using this notion we are now able to define our main object of study. In this paper \emph{measurable} is always understood with respect to the Borel sets. \begin{definition}\label{def:fiberbody} The \emph{fiber body} of $K$ with respect to the projection $\pi$ is the convex body \begin{equation} \Sp{K}:=\set{\int_{\pi(K)}\gamma(x) \mathrm{d} x}{\gamma :\pi(K) \to W \hbox{ measurable section}} \in \mathscr{K}(W). \end{equation} Here $\mathrm{d} x$ denotes integration with respect to the $n$--dimensional Lebesgue measure on $V$. \end{definition} \begin{remark} Note that, with this setting, if $\pi(K)$ is of dimension $<n$, then its fiber body is $\Sp{K}=\{0\}$. \end{remark} This definition of fiber bodies, which can be found for example in \cite{esterovmixedfiber} under the name \emph{Minkowski integral}, extends the classic construction of fiber polytopes \cite{fiberpolytopes}, up to a constant. Here, we choose to omit the normalization $\frac{1}{\Vol(\pi(K))}$ in front of the integral used by Billera and Sturmfels in order to make apparent the \emph{degree} of the map $\Sp$ seen in \eqref{eq:degreeofSp}. This degree becomes clear with the notion of \emph{mixed fiber body}, see~\cite[Theorem~$1.2$]{esterovmixedfiber}. \begin{proposition}\label{prop:n+1hom} For any $\lambda\in\mathbb{R}$ we have $\Sp{(\lambda K)} =\lambda |\lambda|^{n} \Sp{K}$. In particular if $\lambda\geq 0$ \begin{equation}\label{eq:degreeofSp} \Sp{(\lambda K)} =\lambda^{n+1} \Sp{K}.
\end{equation} \end{proposition} \begin{proof} If $\lambda=0$ it is clear that the fiber body of $\{0\}$ is $\{0\}$. Suppose now that $\lambda\neq 0$ and let $\gamma:\pi(K)\to W$ be a section. We can define another section $\tilde{\gamma}:\lambda \pi(K) \to W$ by $\tilde{\gamma}(x):=\lambda \gamma \left( \frac{x}{\lambda} \right)$. Using the change of variables $y=x/\lambda$, we get that \begin{equation} \int_{\lambda \pi(K)}\tilde{\gamma}(x)\ \mathrm{d} x = \lambda |\lambda|^{n} \int_{\pi(K)}\gamma(y)\ \mathrm{d} y. \end{equation} This proves that $\Sp{\lambda K} \subseteq \lambda |\lambda|^{n} \Sp{K}$. Repeating the same argument for $\lambda^{-1}$ instead of $\lambda$, the other inclusion follows. \end{proof} \begin{corollary} If $K$ is centrally symmetric then so is $\Sp{K}$. \begin{proof} Apply the previous proposition with $\lambda=-1$ to get $\Sigma_\pi\left((-1) K\right)=(-1)\Sigma_\pi K$. If $K$ is centrally symmetric with respect to the origin then $(-1)K=K$ and the result follows. The general case is obtained by a translation. \end{proof} \end{corollary} As a consequence of the definition, it is possible to deduce a formula for the support function of the fiber body. This is the rigorous version of equation~\eqref{eq:intro}. \begin{proposition} \label{prop:supportaverage} For any $u\in W$ we have \begin{equation}\label{eq:supportaverage} h_{\Sp{K}}(u)=\int_{\pi(K)} h_{K_x}(u) \mathrm{d} x. \end{equation} \end{proposition} \begin{proof} By definition \begin{equation} h_{\Sp{K}}(u)=\sup\set{\int_{\pi(K)} \langle u, \gamma(x) \rangle \ \mathrm{d} x}{\gamma\hbox{ measurable section}} \leq \int_{\pi(K)} h_{K_x}(u) \mathrm{d} x. \end{equation} To obtain the equality, it is enough to note that there exists a measurable section $\gamma_u:\pi(K)\to W$ with the following property: for all $x\in \pi(K)$ the point $\gamma_u(x)$ maximizes the linear form $\langle u, \cdot \rangle$ on $K_x$. In other words, for all $x\in \pi(K)$, $\langle u, \gamma_u(x)\rangle=h_{K_x}(u)$. \end{proof} More generally, the fiber body behaves well under the action of $\hbox{GL}(V)\oplus\hbox{GL}(W)$ as a subgroup of $\hbox{GL}(\mathbb{R}^{n+m}).$ \begin{proposition} Let $g_n\in\hbox{GL}(V)$, $g_m\in \hbox{GL}(W)$ and $K\in \mathscr{K}(\mathbb{R}^{n+m})$. Then \begin{equation} \Sp{\Big( (g_n \oplus g_m)(K) \Big)} = |\det(g_n)| \cdot g_m \Big( \Sp{K} \Big). \end{equation} \end{proposition} \begin{proof} This is a quite straightforward consequence of the definitions. After observing that \begin{equation} \Big( (g_n \oplus g_m)(K) \Big)_x = g_m \Big( K_{g_n^{-1}(x)} \Big) \end{equation} and $\pi\left( (g_n \oplus g_m)(K)\right)= g_n \pi(K)$, use equation~\eqref{eq:supportaverage} with the change of variables $x\mapsto g_n^{-1}x$. By Proposition~\ref{prop:propertieshK}--$\mathit{(ii)}$ we have $h_{g_m K_x}(u)=h_{K_x}(g_m^T u)$, so the claim follows. \end{proof} \subsection{Properties of the sections} By definition, a point $y$ of the fiber body $\Sp{K}$ is the integral $y=\int_{\pi(K)}\gamma(x) \mathrm{d} x$ of a \emph{measurable} section $\gamma$. Thus $\gamma$ can be modified on a set of measure zero without changing the point $y$, i.e. $y$ only depends on the $L^1$ class of $\gamma$. It is natural to ask what our favourite representative in this $L^1$ class will be. In the case where $K$ is a polytope, $\gamma$ can always be chosen continuous. However if $K$ is not a polytope and if $y$ belongs to the boundary of $\Sp{K}$, a continuous representative may not exist.
This is due to the fact that in general the map $x\mapsto K_x$ is only upper semicontinuous, see~\cite[Section~$6$]{KhovanskiiFamily}. \begin{example}\label{ex:counterex_continuity} Consider the function $f : S^1 \to \mathbb{R}$ such that \begin{equation} f(x,y) = \begin{cases} 0 & x<0 \\ 1 & x\geq 0 \end{cases} \end{equation} and let $K := \conv(\text{graph}(f)) \subset \mathbb{R}^3$ in Figure~\ref{fig:counterex_continuity}. This is a semialgebraic convex body, whose boundary may be subdivided into $8$ distinct pieces: two half--discs lying on the planes $\{z=0\}$ and $\{z=1\}$, two triangles with vertices $(-1,0,0),(0,\pm 1,1)$ and $(1,0,1),(0,\pm 1,0)$ respectively, and four cones with vertices $(0,\pm 1,0),(0,\pm 1, 1)$. \begin{figure} \centering \includegraphics[width=0.4\textwidth]{counterex.png} \caption{The convex body of Example~\ref{ex:counterex_continuity}. In its boundary there are $2$ green half--discs, $2$ red triangles and $4$ blue cones.} \label{fig:counterex_continuity} \end{figure} Let $\pi : \mathbb{R}^3\to \mathbb{R}$ be the projection onto the first coordinate $\pi(x,y,z)=x$. Then the point $p\in\Sp{K}\subset \mathbb{R}^2$ maximizing the linear form associated with $(y,z)=(1,0)$ admits only non--continuous sections. This can be proved using the representation of a face given by~\eqref{eq:repoffacetrue} below. \end{example} However if $y$ belongs to a face of the fiber body, $\gamma$ can be chosen in such a way that it always belongs to the face of the fiber maximizing the same linear form. Let us clarify this. \begin{definition} Let $y\in\Sp{K}$. We say that a section $\gamma$ \emph{represents} $y$ if $y=\int_{\pi(K)}\gamma(x) \mathrm{d} x$. \end{definition} \begin{proposition} Let $K\in \mathscr{K}(\mathbb{R}^{n+m})$ and let $\Sp{K}$ be its fiber body. The set of its points that can be represented by a continuous section is convex and dense. In particular, all interior points of $\Sp{K}$ can be represented by a continuous section. \end{proposition} \begin{proof} Consider the set \begin{equation} C = \set{\int_{\pi(K)}\gamma(x) \mathrm{d} x}{\gamma :\pi(K) \to K \text{ continuous section}} \end{equation} which is clearly contained in the fiber body $\Sp{K}$. It is convex: take $a,b \in C$ represented by continuous sections $\alpha,\beta :\pi(K) \to K$ respectively. Then any convex combination can be written as $c = t a + (1-t) b = \int_{\pi(K)} \Big( t \alpha(x) + (1-t)\beta(x)\Big) dx$. Since $t \alpha + (1-t)\beta$ is a continuous section for any $t\in[0,1]$, $C$ is convex. We now need to prove that the set $C$ is also dense in $\Sp{K}$. Let $\gamma$ be a measurable section; by definition it is a measurable function $\gamma:\pi(K) \to W$, such that $\gamma(x)\in K_x$ for all $x\in\pi(K)$. For every $\epsilon>0$ there exists a continuous function $g:\pi(K) \to W$ with $\| \gamma - g\|_{L^1}<\epsilon$, but this is not necessarily a section of $K$, since a priori $g(x)$ can be outside $K_x$. Hence define $\tilde{\gamma}:\pi(K) \to W$ such that \begin{equation} \tilde{\gamma}(x) = p\Big( K_x, g(x) \Big) \end{equation} where $p(A,a)$ is the nearest point map at $a$ with respect to the convex set $A$. By \cite[Lemma~$1.8.11$]{bible} $\tilde{\gamma}$ is continuous and by definition $\text{graph}(\tilde{\gamma}) \subset K$. Therefore $\int_{\pi(K)} \tilde{\gamma} \in C$. Moreover, \begin{equation} \| \gamma - \tilde{\gamma}\|_{L^1} \leq \| \gamma - g\|_{L^1} <\epsilon \end{equation} hence the density is proved.
As a consequence we get that $ \text{int}\Sp{K} \subseteq C \subseteq \Sp{K}$, so all the interior points of the fiber body have a continuous representative. \end{proof} If $y$ is not an interior point then it belongs to a face of the body. Let us focus on that case. \begin{definition} Let $K\in \mathscr{K}(\mathbb{R}^{n+m})$ and let $u\in \mathbb{R}^{n+m}$. We denote by $K^u$ the face of $K$ in direction $u$, that is, all the points of $K$ that maximize the linear form $\langle u, \cdot \rangle$: \begin{equation} K^u:=\set{y\in K}{\langle u, y\rangle=h_K(u)}. \end{equation} Moreover, if $\mathcal{U}=\{u_1,\ldots,u_k\}$ is an ordered family of vectors of $\mathbb{R}^{n+m}$, we write \begin{equation} K^{\mathcal{U}}:=\left(\cdots\left(K^{u_1}\right)^{u_2}\cdots\right)^{u_k}. \end{equation} \end{definition} The first step is to show that the faces of the fiber body can be described using only particular sections, that belong to corresponding faces \emph{almost everywhere}. \begin{lemma}\label{lem:sectionrepdirection} Let $\mathcal{U}=\{u_1,\ldots,u_k\}$ be an ordered family of linearly independent vectors of $W$, take $y\in\Sp{K} $ and let $\gamma: \pi(K)\to W$ be a section that represents $y$. Then $y\in\left(\Sp{K}\right)^\mathcal{U}$ if and only if $\gamma (x)\in \left(K_x\right)^\mathcal{U}$ for almost all $x\in \pi(K)$. In particular we have that \begin{equation}\label{eq:repofface} \left( \Sp{K}\right)^{\mathcal{U}}=\set{\int_{\pi(K)}\gamma(x) \mathrm{d} x}{\gamma \text{ section such that} \ \gamma(x)\in \left(K_x\right)^{\mathcal{U}} \text{ for almost all }x}. \end{equation} \begin{proof} Suppose first that $\mathcal{U}=\{u\}.$ Assume that $\gamma(x)$ is not in $\left(K_x\right)^u$ for all $x$ in a set of non--zero measure $\mathscr{O}\subset \pi(K)$. Then there exists a measurable function $\xi:\pi(K)\to W$ with $\langle u,\xi\rangle\geq0$ and $\langle u,\xi(x)\rangle>0$ for all $x\in\mathscr{O}$, such that $\tilde{\gamma}:=\gamma+\xi$ is a section (for example one can take $\tilde{\gamma}(x)$ to be the nearest point on $K_x$ to $\gamma(x)+u$). Let $\tilde{y}:=\int_{\pi(K)}\tilde{\gamma}$. Then $\langle u,\tilde{y}\rangle =\langle u, y\rangle + \int_{\pi(K)}\langle u, \xi\rangle >\langle u, y\rangle$. Thus $y$ does not belong to the face $\left(\Sp{K}\right)^u$. Suppose now that $y$ is not in the face $\left(\Sp{K}\right)^u$. Then there exists $\tilde{y}\in \Sp{K}$ such that $\langle u, \tilde{y}\rangle > \langle u, y\rangle$. Let $\tilde{\gamma}$ be a section that represents $\tilde{y}$. It follows that $\int_{\pi(K)}\langle u, \tilde{\gamma}\rangle > \int_{\pi(K)}\langle u, \gamma \rangle$. This implies the existence of a set $\mathscr{O}\subset \pi(K)$ of non--zero measure where $\langle u, \tilde{\gamma}(x)\rangle > \langle u, \gamma(x) \rangle$ for all $x\in\mathscr{O}$. Thus for all $x\in\mathscr{O}$, $\gamma(x)$ does not belong to the face $\left(K_x\right)^u$. In the case $\mathcal{U}=\{u_1,\ldots,u_{k+1}\}$ we can apply the same argument inductively. Replace $\Sp{K}$ by $\left(\Sp{K}\right)^{\{u_1,\ldots,u_k\}}$ and $u$ by $u_{k+1}$, and use the representation of $\left(\Sp{K}\right)^{\{u_1,\ldots,u_k\}}$ given by~\eqref{eq:repofface}. \end{proof} \end{lemma} A particular case is when the family is a basis. Note that taking the face in a direction $u$ decreases the dimension at least by $1$ (if the starting convex body is of full dimension). We can deduce that if $\mathcal{B}$ is a basis, then $K^\mathcal{B}$ consists of only one point $\left\{ z\right\}$.
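The iterated faces $K^{\mathcal{U}}$ are easy to experiment with numerically when $K$ is a polytope given by its vertex list. The following is a minimal sketch (the function name and the tolerance are ours, purely illustrative):
\begin{verbatim}
import numpy as np

def face(vertices, u, tol=1e-9):
    """Vertices spanning the face K^u, i.e. the argmax of <u,.> on K."""
    vals = vertices @ u
    return vertices[vals >= vals.max() - tol]

# K = conv{(+-1, +-1)}; iterate the faces along a basis B of R^2
square = np.array([[1., 1.], [1., -1.], [-1., 1.], [-1., -1.]])
F = square
for u in (np.array([1., 0.]), np.array([0., 1.])):
    F = face(F, u)
print(F)   # [[1. 1.]]: for a basis B, K^B is a single extreme point
\end{verbatim}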
Recall that a point $z\in K$ is called \emph{extreme} if it cannot be written as a non--trivial convex combination of two other points of $K$ (see \cite[Section~II.$3$]{barvinok2002course}). \begin{lemma}\label{lem:extremeBasis} Let $z\in K$ be an extreme point, then there exists a basis $\mathcal{B}$ such that $K^\mathcal{B}=\left\{ z\right\}$. \begin{proof} Consider $u_1$ such that $z\in K^{u_1}$. Either $K^{u_1} = \{z\}$ or it is a convex body of non--zero dimension in $u_1^{\perp}$. In the first case any basis with first vector $u_1$ satisfies the required condition. In the other case, $\{z\}$ is an extreme point of $K^{u_1}$. In particular there exists $u_2$, linearly independent of $u_1$, such that $\{z\}\in (K^{u_1})^{u_2}$. We can iterate this process at most $n+m$ times in total, hence we find a basis $\mathcal{B}$ such that $K^\mathcal{B}=\left\{ z\right\}$. \end{proof} \end{lemma} \begin{lemma} Let $y\in \Sp{K}$ be an extreme point. Then the section $\gamma$ that represents $y$ is unique in $L^1$. In other words, two sections representing $y$ must coincide almost everywhere. \end{lemma} \begin{proof} By Lemma~\ref{lem:extremeBasis} there exists a basis $\mathcal{B}$ of $W$ such that $\left(\Sp{K}\right)^\mathcal{B}=\{y\}$. By Lemma~\ref{lem:sectionrepdirection} any section $\gamma$ that represents $y$ must be such that $\gamma(x)\in \left(K_x\right)^\mathcal{B}$ for almost all $x$. But $\left(K_x\right)^\mathcal{B}$ consists of only one point, and is thus the graph of a section $\gamma_\mathcal{B}$. This means that any section $\gamma$ that represents $y$ must be equal to $\gamma_\mathcal{B}$ almost everywhere, and this concludes the proof. \end{proof} In what follows, we would like to avoid saying things like ``almost everywhere'' and ``up to a set of measure zero''. We will see that a section that represents a point $y$ in the boundary of $\Sp{K}$ can be chosen to maximize at every point (and not only at almost every point) the same linear form(s) as $y$. \begin{proposition}\label{prop:faithfulsec} Let $y\in \left( \Sp{K}\right)^u$, then there exists a section $\gamma$ that represents $y$ such that $\gamma(x)\in K_x^u$ for \emph{all} $x\in\pi(K)$. We say that $\gamma$ represents $y$ \emph{faithfully}. \begin{proof} Suppose first that $y$ is an extreme point. Then there is a basis $\mathcal{B}$ of $W$ such that $\{y\}=\Sp{K}^\mathcal{B}.$ Since $y\in\Sp{K}^u$ we can choose $\mathcal{B}=\left\{ u, u_2,\ldots, u_m\right\}$. Then we consider the section $\gamma_\mathcal{B}$ such that for every $x\in \pi(K)$, $\{\gamma_\mathcal{B}(x)\}=K_x^\mathcal{B}$. This section represents $y$ and, since $K_x^\mathcal{B}\subset K_x^u$, it is as required. On the other hand if $y$ is not extreme, by definition there exist extreme points $y_1,\ldots, y_k\in\left( \Sp{K}\right)^u$ such that $y=\sum_{i=1}^k\lambda_i y_i$, with $\lambda_i\geq0$, $\sum_{i=1}^k\lambda_i=1$. For every $i$ take a section $\gamma_i$ that faithfully represents $y_i$. Then $\gamma:=\sum_{i=1}^k\lambda_i \gamma_i$ is a section that represents $y$ faithfully. \end{proof} \end{proposition} From this together with Lemma~\ref{lem:sectionrepdirection} we obtain a more handy description of the faces of the fiber body than~\eqref{eq:repofface}, without the unpleasant word ``almost''. \begin{corollary}\label{cor:sectionrepdirection} For any $u\in W$, \begin{equation}\label{eq:repoffacetrue} \left( \Sp{K}\right)^{u}=\set{\int_{\pi(K)}\gamma(x) \mathrm{d} x}{\gamma \text{ section such that} \ \gamma(x)\in \left(K_x\right)^{u}\, \forall x}.
\end{equation} \end{corollary} As a consequence, following the proof of Proposition~\ref{prop:supportaverage}, it is possible to describe the faces of the fiber body in terms of their support function. \begin{lemma}\label{lem:segment} For every $u,v\in W$, $h_{ (\Sp{K})^u}(v)=\int_{\pi(K)} h_{(K_x)^u}(v)\ \mathrm{d} x$. \end{lemma} \subsection{Strict convexity} In the case where $K^u$ consists of only one point we say that $K$ is \emph{strictly convex in direction $u$}. Moreover, a convex body is said to be \emph{strictly convex} if it is strictly convex in every direction. We now investigate this property for fiber bodies. \begin{proposition}\label{prop:strictlyconvex} Let $K\in\mathscr{K}(\mathbb{R}^{n+m})$ and let us fix a vector $u\in W$. The following are equivalent: \begin{enumerate} \item $\Sp{K}$ is strictly convex in direction $u$; \\ \item almost all the fibers $K_x$ are strictly convex in direction $u$. \end{enumerate} \end{proposition} \begin{proof} By Proposition~\ref{prop:propertieshK}--$\mathit{(iii)}$, a convex body is strictly convex in direction $u$ if and only if its support function is $\mathcal{C}^1$ at $u$. Therefore, if almost all the fibers $K_x$ are strictly convex in $u$, then, since the convex body is compact, the support function $h_{\Sp{K}}(u)=\int_{\pi(K)} h_{K_x}(u) \mathrm{d} x$ is $\mathcal{C}^1$ at $u$, i.e. the fiber body is strictly convex in that direction. Now suppose that $\Sp{K}$ is strictly convex in direction $u$, i.e. $\left(\Sp{K}\right)^u$ consists of just one point $y$. This means that the support function of this face is linear and it is given by $\langle y, \cdot \rangle$. We now prove that the support function of $K_x^u$ is linear for almost all $x$, and this will conclude the proof. Lemma~\ref{lem:segment} implies that \begin{equation} h_{\left(\Sp{K}\right)^u} = \int_{\pi(K)}h_{K_x^u} \mathrm{d} x= \langle y, \cdot \rangle. \end{equation} For any two vectors $v_1,v_2$, we have \begin{equation} \langle y, v_1 + v_2 \rangle = \int_{\pi(K)} h_{K_x^u}(v_1+v_2) dx \leq \int_{\pi(K)} h_{K_x^u}(v_1) dx + \int_{\pi(K)} h_{K_x^u}(v_2) dx = \langle y, v_1 \rangle + \langle y, v_2 \rangle \end{equation} thus the inequality in the middle must be an equality. But since $h_{K_x^u}(v_1+v_2) \leq h_{K_x^u}(v_1) + h_{K_x^u}(v_2)$, we get that this is an equality for almost all $x$, i.e. the support function of $K_x^u$ is linear for almost every $x\in {\pi(K)}$. Therefore almost all the fibers are strictly convex. \end{proof} \section{Puffed polytopes}\label{sec:Puffed} In this section we introduce a particular class of convex bodies arising from polytopes. A known concept in the context of hyperbolic polynomials and hyperbolicity cones is that of the \emph{derivative cone}; see \cite{renegarhyperbolic} or \cite{sanyalderivativecones}. Since we are dealing with compact objects, we will repeat the same construction in affine coordinates, i.e. for polytopes instead of polyhedral cones. Let $P$ be a full--dimensional polytope in $\mathbb{R}^N$, containing the origin, with $d$ facets given by affine equations $l_1(x_1,\ldots ,x_N)=a_1, \ldots ,l_d(x_1,\ldots,x_N)=a_d$. Consider the polynomial \begin{equation}\label{eq:polytope} p(x_1, \ldots ,x_N) = \prod_{i=1}^d \left(l_i(x_1,\ldots ,x_N) -a_i\right). \end{equation} Its zero locus is the algebraic boundary of $P$, i.e. the closure of the boundary in the Zariski topology, as in \cite{sinn_alg_bound}.
Consider the homogenization of $p$, that is $\tilde{p}(x_1, \ldots ,x_N,w)= \prod_{i=1}^d \left(l_i(x_1,\ldots ,x_N) -a_i w\right)$. Its zero locus is the algebraic boundary of a polyhedral cone, and $\tilde{p}$ is hyperbolic with respect to the direction $(0,\ldots,0,1)\in \mathbb{R}^{N+1}$. Then for all $i<d$ the polynomial \begin{equation}\label{eq:der_puff} \left( \frac{\partial^i}{\partial w^i} \tilde{p} \right) (x_1,\ldots ,x_N, 1) \end{equation} defines the algebraic boundary of a convex set containing the origin, see \cite{sanyalderivativecones}. \begin{definition}\label{def:puffed} Let $Z_i$ be the zero locus of~\eqref{eq:der_puff} in $\mathbb{R}^N$. The \emph{$i$-th puffed $P$} is the closure of the connected component of the origin in $\mathbb{R}^N\setminus Z_i$. We denote it by $\puff{i}{P}$. \end{definition} In particular the first puffed polytope is always a spectrahedron \cite{sanyalderivativecones}. As the name suggests, the puffed polytopes $\puff{i}{P}$ are fat, inflated versions of the polytope $P$ and in fact contain $P$. On the other hand, although the definition involves differentiation, the operation of ``taking the puffed polytope'' does not behave like a derivative. In particular $\puff{i}{P} \neq \puff{1}{\puff{i-1}{P}}$, since for a convex body which is not a polytope this operation is not even defined; moreover, it does not commute with Minkowski sums, as the next example shows. \begin{proposition} In general for polytopes $P_1, P_2$ \begin{equation} \puff{1}{P_1+P_2}\neq \puff{1}{P_1} + \puff{1}{P_2}. \end{equation} \end{proposition} \begin{proof} We build a counterexample in dimension $N=2$. Let us consider two squares $P_1 = \conv\{(\pm 1, \pm 1)\}$, $P_2 = \conv\{(0, \pm 1), (\pm 1, 0)\} \subset \mathbb{R}^2$. The first puffed square is a disc with radius equal to half of the diagonal, so $\puff{1}{P_1}$ has radius $\sqrt{2}$ and $\puff{1}{P_2}$ has radius $1$. Therefore $\puff{1}{P_1} + \puff{1}{P_2}$ is a disc centered at the origin of radius $1+\sqrt{2}$. On the other hand $P_1 + P_2$ is an octagon. Its associated polynomial in \eqref{eq:polytope} is \begin{equation} p(x,y) = ((x+y)^2-9)((x-y)^2-9)(x^2-4)(y^2-4). \end{equation} Via the procedure explained above we obtain the boundary of this puffed octagon as the zero locus of the following irreducible polynomial \begin{equation} 2 x^6+7 x^4 y^2+7 x^2 y^4+2 y^6-88 x^4-193 x^2 y^2-88 y^4+918 x^2+918 y^2-2592. \end{equation} This is a curve with three real connected components, shown in violet in Figure~\ref{fig:puff_octa}. Clearly the puffed octagon is not a circle, hence $\puff{1}{P_1} + \puff{1}{P_2} \neq \puff{1}{P_1+P_2}$. \begin{figure} \centering \includegraphics[width=0.5\textwidth]{puffocta.png} \caption{The octagon, in blue, and (the algebraic boundary of) its puffed octagon, in violet.} \label{fig:puff_octa} \end{figure} \end{proof} \subsection{Strict convexity of the puffed polytopes} Our aim is to study the strict convexity of the fiber body of a puffed polytope. In order to do so, we shall first say something more about the boundary structure of a puffed polytope itself. \begin{lemma}\label{lem:faces_puffedP} Let $P\subset \mathbb{R}^N$ be a full--dimensional polytope. Then all faces $F$ of $P$ of dimension $k<N-i$ are contained in the boundary of $\puff{i}{P}$. \end{lemma} \begin{proof} Let $F$ be a $k$--face of $P$; it is contained in the zero set of the polynomial \eqref{eq:polytope}. Moreover $F$ arises as the intersection of at least $N-k$ facets (i.e. faces of dimension $N-1$), thus its points are zeros of multiplicity at least $N-k$.
Hence, if $N-k>i$ the face $F$ is still in the zero set of \eqref{eq:der_puff}, i.e. it belongs to the boundary of $\puff{i}{P}$. \end{proof} The converse is not always true: there may be $k$--faces of $P$, with $k\geq N-i$, whose points are zeros of \eqref{eq:der_puff} of multiplicity higher than $i$, and hence faces of $\puff{i}{P}$. However there are two cases in which this is not possible. \begin{lemma}\label{lem:facesofpuff12} Let $P\subset \mathbb{R}^N$ be a full--dimensional polytope. \begin{itemize} \item{$i=1$:} the flat faces in the boundary of $\puff{1}{P}$ are exactly the faces of dimension $k<N-1$; \\ \item{$i=2$:} the flat faces in the boundary of $\puff{2}{P}$ are exactly the faces of dimension $k<N-2.$ \end{itemize} \begin{proof} The first point is clear because the facets (faces of dimension $N-1$) are the only zeros of multiplicity one. The second point follows from the so-called ``diamond property'' of polytopes \cite{ziegler2012lectures}. \end{proof} \end{lemma} \begin{remark} By \cite[Proposition~$24$]{renegarhyperbolic} we can deduce that the flat faces of a puffed polytope must be faces of the polytope itself. The remaining points in the boundary of $\puff{i}{P}$ are exposed points. \end{remark} Using this result we can deduce conditions for the strict convexity of the fiber body of a puffed polytope. \begin{proposition}[Fiber $1$st puffed polytope]\label{prop:puff1} Let $P\subset \mathbb{R}^{n+m}$ be a full--dimensional polytope, $n\geq 1$, $m\geq 2$, and take any projection $\pi : \mathbb{R}^{n+m} \to \mathbb{R}^n$. The fiber puffed polytope $\Sigma_{\pi}\left( \puff{1}{P} \right)$ is strictly convex if and only if $m=2$. \begin{proof} By Lemma~\ref{lem:facesofpuff12}, the flat faces in the boundary of $\puff{1}{P}$ are the faces of $P$ of dimension $k<n+m-1$. Suppose first that $m>2$ and let $F$ be an $(n+m-2)$--face of $P$. Take a point $p$ in the relative interior of $F$ and let $x_p:=\pi(p)$. Then the dimension of $F\cap \pi^{-1}(x_p)$ is at least $m-2\geq 1$; we can also assume without loss of generality that \begin{equation}\label{eq:dim} 1\leq \dim \left( F\cap \pi^{-1}(x_p) \right) < n+m-2. \end{equation} Furthermore there is a whole neighborhood $U$ of $x_p$ such that condition \eqref{eq:dim} holds, so for every $x\in U$ the convex body $\left( \puff{1}{P} \right)_x$ is not strictly convex. By Proposition~\ref{prop:strictlyconvex}, $\Sp{\left( \puff{1}{P}\right) }$ is then not strictly convex. Suppose now that $m=2$ and fix a flat face $F$ of $\puff{1}{P}$. Its dimension is at most $n$, so $\left( F\cap \pi^{-1}(x_p) \right)$ is either one point or a face of positive dimension. In the latter case $\dim \pi(F)\leq n-1$, i.e. it is a set of measure zero in $\pi \left( \puff{1}{P} \right)$. Because there are only finitely many flat faces, we can conclude that almost all the fibers are strictly convex and thus, by Proposition~\ref{prop:strictlyconvex}, $\Sp{\left( \puff{1}{P}\right) }$ is strictly convex. \end{proof} \end{proposition} A similar result holds for the second fiber puffed polytope, using Lemma~\ref{lem:facesofpuff12}. \begin{proposition}[Fiber $2$nd puffed polytope]\label{prop:puff2} Let $P\subset \mathbb{R}^{n+m}$ be a full--dimensional polytope, $n\geq 1$, $m\geq 2$, and take any projection $\pi : \mathbb{R}^{n+m} \to \mathbb{R}^n$. The fiber puffed polytope $\Sigma_{\pi}\left( \puff{2}{P} \right)$ is strictly convex if and only if $m\leq 3$, i.e. $m=2$ or $3$.
\end{proposition} \begin{proof} We can use the previous strategy again. If $m>3$, there always exists a face of $\puff{2}{P}$ of dimension $n+m-3$ whose non--empty intersection with the fibers of $\pi$ has dimension at least $1$ and strictly less than $n+m-3$. So in this case we get a non strictly convex fiber body. On the other hand, when $m=2$ or $3$ the intersection of the fibers with the flat faces has positive dimension only on a measure zero subset of $\mathbb{R}^n$, hence almost all the fibers are strictly convex and the claim follows. \end{proof} Can we generalize this result to the $i$-th puffed polytope? In general no, and the reason is precisely that a $k$-face may be contained in more than $(n+m-k)$ facets, when $k<n+m-2$. The polytopes $P$ for which this does not happen are called \emph{simple polytopes}. Thus with the same proof as above we obtain the following. \begin{proposition}[Fiber $i$-th puffed simple polytope]\label{prop:puffn} Let $P\subset \mathbb{R}^{n+m}$ be a full--dimensional simple polytope, $n\geq 1$, $m\geq 2$, and take any projection $\pi : \mathbb{R}^{n+m} \to \mathbb{R}^n$. The fiber puffed polytope $\Sigma_{\pi}\left( \puff{i}{P} \right)$ is strictly convex if and only if $m\leq i+1$. \end{proposition} In the case where $P$ is not simple, one has to take into account the number of facets in which each face of dimension $k \geq n+m-i$ is contained, in order to understand whether or not they are part of the boundary of $\puff{i}{P}$. \subsection{A case study: the elliptope} Take the tetrahedron $\mathcal{T}$ in $\mathbb{R}^3$ realized as \begin{equation} \conv\{ (1,1,1), (1,-1,-1), (-1,1,-1), (-1,-1,1) \}. \end{equation} The first puffed tetrahedron (for the rest of the subsection we will omit the word ``first'') is the semialgebraic convex body called \emph{elliptope}, which is the set of points $(x,y,z)\in[-1,1]^3$ such that $x^2+y^2+z^2-2xyz\leq 1$. Let $\pi$ be the projection onto the first coordinate: $\pi(x,y,z)=x$. The fibers of the elliptope at $x$ for $x\in(-1,1)$ are the ellipses defined by \begin{equation} \mathcal{E}_x=\set{(y,z)}{\left(\frac{y-xz}{\sqrt{1-x^2}}\right)^2+z^2\leq 1}. \end{equation} Introducing the matrix \begin{equation} M_x:= \begin{pmatrix} \frac{1}{\sqrt{1-x^2}} & \frac{-x}{\sqrt{1-x^2}} \\ 0 & 1 \end{pmatrix} \end{equation} it turns out that $\mathcal{E}_x = \set{(y,z)}{\|M_x(y,z)\|^2\leq 1} = (M_x)^{-1}B^2$, where $B^2$ is the unit $2$--disc. We obtain \begin{equation} h_{\mathcal{E}_x}(u,v)=h_{B^2}\left((M_x)^{-T}(u,v)\right)=\left\|(M_x)^{-T}(u,v)\right\| = \sqrt{u^2+v^2+2 x u v}. \end{equation} By~\eqref{eq:supportaverage} we need to compute the integral of $h_{\mathcal{E}_x}$ between $x=-1$ and $x=1$ to obtain the support function of the fiber body of the elliptope. We get \begin{equation} h_{\Sigma_{\pi}\mathcal{E}}(u,v)= \frac{1}{3 u v}\left( |u+v|^3-|u-v|^3 \right). \end{equation} Hence the fiber body is semialgebraic and its algebraic boundary is the zero set of the four parabolas $3y^2 + 8z - 16$, $3y^2 - 8z - 16$, $8y + 3z^2 - 16$, $8y - 3z^2 + 16$, displayed in Figure~\ref{fig:fiber_ell}.
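This closed form is easy to check numerically. The following minimal sketch (ours, purely for illustration) integrates the support functions of the fibers as in~\eqref{eq:supportaverage} and compares the result with the formula above:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def h_numeric(u, v):
    # integral of h_{E_x}(u,v) = sqrt(u^2 + v^2 + 2xuv) over x in (-1,1)
    return quad(lambda x: np.sqrt(u*u + v*v + 2.0*x*u*v), -1.0, 1.0)[0]

def h_closed(u, v):
    return (abs(u + v)**3 - abs(u - v)**3) / (3.0 * u * v)

for u, v in [(1.0, 0.5), (-0.3, 0.8), (2.0, -1.0)]:
    print(f"{h_numeric(u, v):.6f} vs {h_closed(u, v):.6f}")  # values agree
\end{verbatim}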
\begin{figure} \begin{subfigure}{0.6\textwidth} \centering \includegraphics[width = 0.9\textwidth]{parabolas_noaxes.png} \caption{} \label{fig:fiber_ell} \end{subfigure} \begin{subfigure}{0.39\textwidth} \centering \includegraphics[width=0.6\textwidth]{fiber_sandwich_noaxes.png} \caption{} \label{fig:3_fiberbodies} \end{subfigure} \caption{Left: the four green parabolas meet in the four black points on the boundary of the fiber elliptope, which lie on the diagonals $y=z$ and $y=-z$.\\ Right: sandwiched fiber bodies. The blue rhombus is the fiber tetrahedron $\Sigma_{\pi}\mathcal{T}$; the green convex body is the fiber elliptope $\Sigma_{\pi}\mathcal{E}$; the grey square is the fiber cube $\Sigma_{\pi}\left( [-1,1]^3\right)$.} \end{figure} As anticipated in Proposition~\ref{prop:puff1}, the fiber elliptope is strictly convex. Notice that the elliptope is naturally sandwiched between two polytopes: the tetrahedron $\mathcal{T}$ and the cube $[-1,1]^3$. Therefore, as a natural consequence of the definition, the same chain of inclusions holds for their fiber bodies: \begin{equation} \Sigma_{\pi}\mathcal{T} \subset \Sigma_{\pi}\mathcal{E} \subset \Sigma_{\pi}\left([-1,1]^3\right) \end{equation} as shown in Figure~\ref{fig:3_fiberbodies}. \begin{remark} From this example it is clear that the operation of ``taking the fiber body'' does not commute with the operation of ``taking the puffed polytope''. In fact the puffed polytope of the blue rhombus in Figure~\ref{fig:3_fiberbodies} is not the green convex body bounded by the four parabolas: it is the disc $y^2 + z^2 \leq 4$. \end{remark} \section{Curved convex bodies}\label{sec:Smooth} In this section we are interested in the case where the boundary of the convex body $K$ is highly regular. We derive a formula to compute the support function of the fiber body directly in terms of the support function of $K$, without having to compute those of the fibers. \begin{definition} We say that a convex body $K$ is \emph{curved} if its support function $h_K$ is $C^2$ and the gradient $\nabla h_K$ restricted to the sphere is a $C^1$ diffeomorphism. \end{definition} In that case $K$ is full--dimensional and its boundary is a $C^2$ hypersurface. Moreover we have the following. \begin{lemma} Let $K\subset \mathbb{R}^{n+m}$ be a curved convex body and let $v\in S^{n+m-1}$. Then the differential $\mathrm{d}_v\nabla h_K$ is a symmetric positive definite automorphism of $v^\perp$. \begin{proof} This is proved in~\cite[p.$116$]{bible}, where curved convex bodies are said to be ``of class $C^2_+$'' and $\mathrm{d}_v\nabla h_K$ is denoted by $\overline{W}_v$. \end{proof} \end{lemma} The following gives an expression for the face of the fiber body. This is to be compared with the case of polytopes, which is given in~\cite[Lemma~$11$]{esterovkhovanskii}. \begin{lemma}\label{lemma:supofsmooth} If $K$ is a curved convex body and $u\in W$ with $\| u \|=1$, then \begin{equation} \nabla h_{\Sigma_\pi K}(u)=\int_V \nabla h_K(u+\xi)\cdot J_{\psi_u}(\xi)\ \mathrm{d}\xi \end{equation} where $\psi_u:V\to V$ is given by $\psi_u(\xi)=\left(\pi\circ\nabla h_K\right)(u+\xi)$ and $J_{\psi_u}(\xi)$ denotes its Jacobian (i.e. the determinant of its differential) at the point $\xi$. \begin{proof} From~\eqref{eq:repoffacetrue} we have that $\nabla h_{\Sp{K}}(u)=\int_V \gamma_u(x) \mathrm{d} x$, where $\gamma_u(x)=\nabla h_{K_x}(u).$ Assume for the moment that $x=\psi_u(\xi)$ is a valid change of variables. We get $\gamma_u(x)=(\gamma_u\circ\pi\circ\nabla h_K) (u+\xi)=\nabla h_K(u+\xi)$ and the result follows.
It remains to prove that it is indeed a change of variables. Note that $\nabla h_K(u+\xi)=\nabla h_K (v)$ where $v=\tfrac{u+\xi}{\|u+\xi\|}\in S^{n+m-1}$. The differential of the map $\xi\mapsto v$ maps $V$ to $\left(V+\mathbb{R} u\right)\cap v^\perp$. Moreover $\nabla h_K$ restricted to the sphere is a $C^1$ diffeomorphism by assumption. Thus it only remains to prove that its differential $\mathrm{d}_v\nabla h_K$ sends $\left(V+\mathbb{R} u\right)\cap v^\perp$ to a subspace that does not intersect $\ker \left(\restr{\pi}{v^\perp}\right)$. To see this, note that $\ker \left(\restr{\pi}{v^\perp}\right)^\perp=\left(V+\mathbb{R} u\right)\cap v^\perp$. Moreover, by the previous lemma, we have that $\langle w, \mathrm{d}_v\nabla h_K \cdot w\rangle =0$ if and only if $w=0$. Thus if $w\in\ker \left(\restr{\pi}{v^\perp}\right)^\perp$ and $w\neq 0$, then $\pi \left(\mathrm{d}_v\nabla h_K \cdot w\right)\neq 0$. Putting everything together, this proves that $\mathrm{d}_\xi \psi_u$ has trivial kernel, which is what we wanted. \end{proof} \end{lemma} As a direct consequence we derive a formula for the support function. \begin{theorem}\label{thm:supofsmooth} Let $K\subset \mathbb{R}^{n+m}$ be a curved convex body. Then the support function of $\Sp{K}$ is for all $u\in W$ \begin{equation}\label{eq:supofsmooth} h_{\Sp{K}}(u)=\int_V \langle u, \nabla h_K(u+\xi)\rangle \cdot J_{\psi_u}(\xi)\ \mathrm{d}\xi \end{equation} where $\psi_u:V\to V$ is given by $\psi_u(\xi)=\left(\pi\circ\nabla h_K\right)(u+\xi)$ and $J_{\psi_u}(\xi)$ denotes its Jacobian at the point $\xi$. \end{theorem} \begin{proof} Apply the previous lemma to $h_{\Sp{K}}(u)=\langle u, \nabla h_{\Sp{K}}(u)\rangle$. \end{proof} Assume that the support function $h_K$ is \emph{algebraic}, i.e. it is a root of some polynomial equation. Then the integrand in Lemma~\ref{lemma:supofsmooth} and in Theorem~\ref{thm:supofsmooth} is also algebraic. Indeed it is simply $\nabla h_K(u+\xi)$ times the Jacobian of $\psi_u$, which is a composition of algebraic functions. We can generalize this concept in the direction of the so-called $D$--modules \cite{Holonomic}. One can define what it means for a $D$--ideal of the Weyl algebra $D$ to be \emph{holonomic}. Then a function is holonomic if its annihilator, a $D$--ideal, is holonomic. Intuitively this means that such a function satisfies a system of linear homogeneous (in the function and its derivatives) differential equations with polynomial coefficients, plus a suitable dimension condition. It can be proved that holonomicity is a generalization of algebraicity. We say that a convex body $K$ is \emph{holonomic} if its support function $h_K$ is holonomic. In this setting, the fiber body satisfies the following property. \begin{corollary}\label{cor:holonomiccurved} If $K$ is a curved holonomic convex body, then its fiber body is again holonomic. \begin{proof} We want to prove that the integrand in Theorem~\ref{thm:supofsmooth} is a holonomic function of $u$ and $\xi$. Then the result follows from the fact that the integral of a holonomic function is holonomic \cite[Proposition~$2.11$]{Holonomic}. If $h_K$ is holonomic then $\nabla h_K (u+\xi)$ is a holonomic function of $u$ and $\xi$, as well as its scalar product with $u$. It remains to prove that the Jacobian of $\psi_u$ is holonomic. But $\psi_u$ is the projection of a holonomic function and thus holonomic, so the result follows.
\end{proof} \end{corollary} \subsection{A case study: Schneider's polynomial body} In~\cite[p.203]{bible} Schneider exhibits an example of a one-parameter family of semialgebraic centrally symmetric convex bodies that are not zonoids (see Section~\ref{sec:Zonoids} for a definition of zonoids). Their support function is polynomial when restricted to the sphere. We will show how in that case Theorem~\ref{thm:supofsmooth} makes the computation of the fiber body relatively easy. \begin{definition} Schneider's polynomial body is the convex body $\mathcal{S}_\alpha\in \mathscr{K}(\mathbb{R}^3)$ whose support function is given by (see~\cite[p.203]{bible}) \begin{equation} h_{\mathcal{S}_\alpha}(u)=\|u\|\left(1+\frac{\alpha}{2}\left(\frac{3(u_3)^2}{\|u\|^2}-1\right)\right) \end{equation} for $\alpha\in[-8/20,-5/20].$ \end{definition} Let $\pi:=\langle e_1, \cdot \rangle:\mathbb{R}\oplus\mathbb{R}^2\to \mathbb{R}$ be the projection onto the first coordinate. We want to apply Theorem~\ref{thm:supofsmooth} to compute the support function of $\Sp{\mathcal{S}_\alpha}$. For the gradient we obtain: \begin{equation} \nabla h_{\mathcal{S}_\alpha} (u)=\frac{1}{2\|u\|^3}\begin{pmatrix} -u_1\left((u_1)^2(\alpha-2)+ (u_2)^2(\alpha-2)+2(u_3)^2(2\alpha-1)\right) \\ -u_2\left((u_1)^2(\alpha-2)+ (u_2)^2(\alpha-2)+2(u_3)^2(2\alpha-1)\right) \\ \tfrac{u_3}{\|u\|^2}\left((u_1)^2(5\alpha+2)+ (u_2)^2(5\alpha+2)+2(u_3)^2(2\alpha+1)\right) \end{pmatrix}. \end{equation} For $u=(0,u_2,u_3)$, the Jacobian is $J_{\psi_u}(t)=\frac{\mathrm{d}}{\mathrm{d} t}\left(\pi\circ\nabla h_{\mathcal{S}_\alpha}(t,u_2,u_3)\right)$, which gives \begin{equation} J_{\psi_u}(t)=\frac{ t^2(-(u_2)^2(\alpha-2)+(u_3)^2(5\alpha+2))-\|u\|^2((u_2)^2(\alpha-2)+2(u_3)^2(2\alpha-1)) }{ 2(t^2+\|u\|^2)^\frac{5}{2} }. \end{equation} Substituting in~\eqref{eq:supofsmooth}, we integrate $\langle u, \nabla h_{\mathcal{S}_\alpha} (t,u_2,u_3)\rangle J_{\psi_u}(t)$ and get the support function of the fiber body (see Figure~\ref{fig:fiberofpoly}), which is again polynomial when restricted to the sphere: \begin{equation}\label{eq:suppofpoly} h_{\Sp{\mathcal{S}_\alpha}}(u)=\frac{\pi}{64\|u\|^3} \left({8(\alpha-2)(u_2)^4-8(\alpha^2+2\alpha-8)(u_2)^2(u_3)^2+(-25\alpha^2+16\alpha+32)(u_3)^4}\right). \end{equation} \begin{figure} \centering \includegraphics[width=0.4\textwidth]{fiberofpolyless.png} \caption{Fiber body of Schneider's polynomial body for $\alpha=-i/20$ with $i=5,6$ and $7$.} \label{fig:fiberofpoly} \end{figure} \section{Zonoids}\label{sec:Zonoids} In this section, we focus on the class of \emph{zonoids}. Let us first recall some definitions and introduce some notation. For more details we refer to~\cite[Section~$3.5$]{bible}. We will use the following notation for centered segments: for any $x\in\mathbb{R}^{n+m}$ we write \begin{equation}\label{eq:defseg} \underline{x}:=\frac{1}{2}\left[-x,x\right]. \end{equation} \begin{definition} A convex body $K\in\mathscr{K}(\mathbb{R}^{n+m})$ is called a \emph{zonotope} if there exist $x_1,\ldots, x_N\in\mathbb{R}^{n+m}$ such that, with the notation introduced above, $K=\underline{x_1}+\cdots+\underline{x_N}$. A \emph{zonoid} is a limit (in the Hausdorff distance) of zonotopes. The space of zonoids of $\mathbb{R}^{n+m}$ will be denoted by $\mathscr{Z}_0(\mathbb{R}^{n+m})$. \end{definition} \begin{remark} It follows immediately from the definition that all zonoids are centrally symmetric and centered at the origin, i.e. if $K\in \mathscr{Z}_0(\mathbb{R}^{n+m})$ then $(-1)K=K$.
In general the definition of zonoids may also include translations of such bodies. The elements of $\mathscr{Z}_0(\mathbb{R}^{n+m})$ are then called \emph{centered} zonoids. For simplicity here we choose to omit the term ``centered''. \end{remark} We introduce the approach of Vitale in~\cite{vitale} using random vectors. The following is~\cite[Theorem~$3.1$]{vitale} rewritten in our context. \begin{proposition}\label{prop:VitaleZon} A convex body $K\in\mathscr{K}(\mathbb{R}^{n+m})$ is a zonoid if and only if there is a random vector $X\in \mathbb{R}^{n+m}$ with $\mathbb{E}\|X\|<\infty$ such that for all $u\in\mathbb{R}^{n+m}$ \begin{equation}\label{eq:hofVitZon} h_K(u)=\frac{1}{2} \mathbb{E}\left|\langle u, X\rangle\right|. \end{equation} We call such a zonoid the \emph{Vitale zonoid} associated to the random vector $X$, and denote it by $K_0(X)$. \end{proposition} \subsection{The fiber body of a zonoid}\label{sec:fiber_of_zonoid} We now show that the fiber body of a zonoid is a zonoid and give a formula to compute it. Let us first introduce some of the tools used by Esterov in~\cite{esterovmixedfiber}. \begin{definition} For any $u\in W$ define $T_u:=Id_V\oplus \langle u, \cdot\rangle : V\oplus W \to V \oplus \mathbb{R}$. \end{definition} \begin{definition} Let $C\in\mathscr{K}(V\oplus \mathbb{R})$. The \emph{shadow volume} $V_+(C)$ of $C$ is defined to be the integral of the maximal function on $\pi(C)\subset V$ whose graph is contained in $C$, i.e. \begin{equation} V_+(C)=\int_{\pi(C)}\varphi(x)\mathrm{d} x, \end{equation} where $\varphi(x)=\sup\set{t}{(x,t)\in C}$. In particular if $(-1)C=C$, then the shadow volume is $V_+(C)=\tfrac{1}{2}\Vol_{n+1}(C)$. \end{definition} The \emph{shadow volume} can then be used to express the support function of the fiber body. \begin{lemma} For $u\in W$ and $K\in\mathscr{K}(\mathbb{R}^{n+m})$, we have \begin{equation} h_{\Sigma_\pi K}(u)=V_+\left(T_u(K)\right). \end{equation} In particular if $(-1)K=K$, \begin{equation}\label{eq:suppShadow2} h_{\Sigma_\pi K}(u)=\frac{1}{2}\Vol_{n+1}\left(T_u(K)\right). \end{equation} \begin{proof} We also denote by $\pi: V\oplus \mathbb{R}\to V$ the projection onto $V$. The shadow volume is the integral on $\pi(T_u(K))=\pi(K)$ of the function $\varphi(x)=\sup\set{t}{(x,t)\in T_u(K)}=\sup\set{\langle u, y \rangle}{(x,y)\in K}=h_{K_x}(u)$. Thus the result follows from Proposition~\ref{prop:supportaverage}. \end{proof} \end{lemma} \begin{remark}\label{rmk:projectionbody} Note that if $m=2$ then $T_u$ is the projection onto the hyperplane spanned by $V$ and $u$. In that case~\eqref{eq:suppShadow2} is the formula for the support function of the $\Pi$--body of $K$ at $Ju$, where $J$ is a rotation by $\pi/2$ in $W$. Thus in that case, $\Sigma_\pi K$ is the projection of $\Pi K$ onto $W$, rotated by $\pi/2$. \end{remark} We will show that the mixed fiber body of zonoids comes from a multilinear map defined directly on the vector spaces. \begin{definition} We define the following (completely skew-symmetric) multilinear map: \begin{align} F_\pi:(V\oplus W)^{n+1} &\to W \\ (x_1+y_1,\ldots,x_{n+1}+y_{n+1}) &\mapsto \frac{1}{(n+1)!}\sum_{i=1}^{n+1}(-1)^{n+1-i} (x_1\wedge\cdots\wedge \widehat{x_i} \wedge \cdots \wedge x_{n+1}) y_{i} \end{align} where $x_1\wedge\cdots\wedge \widehat{x_i} \wedge \cdots \wedge x_{n+1}$ denotes the determinant of the chosen vectors omitting $x_i$.
\end{definition} We are now able to prove the main result of this section, here stated in the language of the Vitale zonoids introduced in Proposition~\ref{prop:VitaleZon}. \begin{theorem}\label{thm:Fiberofzonoids} The fiber body of a zonoid is a zonoid. Moreover, if $X\in \mathbb{R}^{n+m}$ is a random vector such that $\mathbb{E}\|X\|<\infty$ and $K:=K_0(X)$ is the associated Vitale zonoid, then \begin{equation}\label{eq:FormulaZon} \Sigma_\pi K=K_0(F_\pi(X_1,\ldots,X_{n+1})) \end{equation} where $X_1,\ldots,X_{n+1}\in \mathbb{R}^{n+m}$ are i.i.d.\ copies of $X$. In other words, the support function of the fiber body $\Sigma_\pi K$ is given for all $u\in W$ by \begin{equation}\label{eq:FormulaZonsupp} h_{\Sigma_\pi K}(u)=\frac{1}{2}\mathbb{E}|\langle u, Y\rangle| \end{equation} where $Y\in W$ is the random vector defined by $Y:=F_\pi(X_1,\ldots,X_{n+1})$. \begin{proof} Suppose that $K=K_0(X)$ and let $u\in W$. Note that by~\eqref{eq:hofVitZon} and Proposition~\ref{prop:propertieshK}--$\mathit{(ii)}$, $T_u(K)=K_0\left(T_u(X)\right)$. Thus by~\eqref{eq:suppShadow2} and \cite[Theorem~$3.2$]{vitale} we get \begin{equation}\label{eq:spkinproof} h_{\Sp{K}}(u)=\tfrac{1}{2}\Vol\left(K_0(T_u(X))\right)=\tfrac{1}{2}\tfrac{1}{(n+1)!}\mathbb{E}|T_u(X_1)\wedge\cdots\wedge T_u(X_{n+1})| \end{equation} where $X_1,\ldots,X_{n+1}\in \mathbb{R}^{n+m}$ are i.i.d.\ copies of $X$. Now let us write $X_i:=\alpha_i+\beta_i$ with $\alpha_i\in V$ and $\beta_i\in W$. Then \begin{align} \left|T_u(X_1)\wedge\cdots\wedge T_u(X_{n+1}) \right| &= \left|\left(\alpha_1+\langle u, \beta_1\rangle\right)\wedge \cdots \wedge \left(\alpha_{n+1}+\langle u, \beta_{n+1}\rangle\right) \right| \\ &=\left| \sum_{i=1}^{n+1}(-1)^{n+1-i} (\alpha_1\wedge\cdots\wedge \widehat{\alpha_i} \wedge \cdots \wedge \alpha_{n+1}) \langle u,\beta_{i}\rangle\right| \\ &=\left|\langle u,(n+1)! F_\pi(\alpha_1+\beta_1,\ldots,\alpha_{n+1}+\beta_{n+1}) \rangle \right|. \end{align} Reintroducing this in~\eqref{eq:spkinproof} we obtain~\eqref{eq:FormulaZonsupp}. \end{proof} \end{theorem} This allows us to generalize~\cite[Theorem~$4.1$]{fiberpolytopes} to all zonotopes. \begin{corollary}\label{cor:fiberofzonotopes} For all $z_1,\ldots, z_{N}\in\mathbb{R}^{n+m}$, the fiber body of the zonotope $\sum_{i=1}^N\underline{z_i}$ is the zonotope given by \begin{equation} \Sigma_\pi\left(\sum_{i=1}^N\underline{z_i}\right)=(n+1)!\sum_{1\leq i_1<\cdots<i_{n+1}\leq N} \underline{F_\pi(z_{i_1},\ldots, z_{i_{n+1}})} \end{equation} where we used the notation of~\eqref{eq:defseg}, writing $\underline{x}$ for the segment $[-x/2,x/2]$. \begin{proof} We apply Theorem~\ref{thm:Fiberofzonoids} to the discrete random vector $X$ which is equal to $N z_i$ with probability $1/N$ for each $i=1,\ldots,N$. In that case one can check from~\eqref{eq:hofVitZon} that the Vitale zonoid $K_0(X)$ is precisely the zonotope $\sum_{i=1}^N\underline{z_i}$, and the result follows from~\eqref{eq:FormulaZonsupp}. \end{proof} \end{corollary} Esterov shows in~\cite{esterovmixedfiber} that the map $\Sp:\mathscr{K}(\mathbb{R}^{n+m})\to \mathscr{K}(W)$ comes from another map, which is (Minkowski) multilinear in each variable: the so-called \emph{mixed fiber body}. The following is~\cite[Theorem~$1.2$]{esterovmixedfiber}. \begin{proposition} There is a unique continuous multilinear map \begin{equation} \mathrm{M}\Sigma_\pi:\left(\mathscr{K}(\mathbb{R}^{n+m})\right)^{n+1}\to \mathscr{K}(W) \end{equation} such that for all $K\in\mathscr{K}(\mathbb{R}^{n+m})$, $\mathrm{M}\Sigma_\pi(K,\ldots,K)=\Sigma_\pi(K)$.
\end{proposition} Once its existence is proved, one can see that the mixed fiber body $\mathrm{M}\Sigma_\pi(K_1,\ldots,K_{n+1})$ is the coefficient of $t_1\cdots t_{n+1}$, divided by $(n+1)!$, in the expansion of $\Sp{\left(t_1 K_1+\cdots+t_{n+1}K_{n+1}\right)}$. Using this \emph{polarization formula}, one can deduce from Theorem~\ref{thm:Fiberofzonoids} a similar statement for the mixed fiber body of zonoids. \begin{proposition}\label{prop:mixedfiberofzon} The mixed fiber body of zonoids is a zonoid. Moreover, if $X_1,\ldots,X_{n+1}\in \mathbb{R}^{n+m}$ are independent (not necessarily identically distributed) random vectors such that $\mathbb{E}\|X_i\|$ is finite, and $K_i:=K_0(X_i)$ are the associated Vitale zonoids, then \begin{equation} \mathrm{M}\Sigma_\pi(K_1,\ldots,K_{n+1})=K_0(F_\pi(X_1,\ldots,X_{n+1})). \end{equation} \begin{proof} Let us show the case of $n+1=2$ variables; the general case is done in a similar way. Let $\tilde{X}:=2t_1\alpha X_1+ 2t_2 (1-\alpha) X_2$ where $\alpha$ is a Bernoulli random variable of parameter $1/2$ independent of $X_1$ and $X_2$. Using~\eqref{eq:hofVitZon}, one can check that $K_0(\tilde{X})=t_1 K_1+t_2 K_2.$ Now let $Y_1$ (respectively $Y_2$) be an i.i.d. copy of $X_1$ (respectively $X_2$) independent of all the other variables. Define $\tilde{Y}:=2t_1\beta Y_1+ 2t_2 (1-\beta) Y_2$ where $\beta$ is a Bernoulli random variable of parameter $1/2$ independent of all the other variables. By Theorem~\ref{thm:Fiberofzonoids} we have that $\Sp(t_1K_1+t_2K_2)=K_0(F_\pi(\tilde{X},\tilde{Y})).$ By~\eqref{eq:hofVitZon}, using the independence assumptions, it can be deduced that for all $t_1,t_2\geq 0$ \begin{equation} h_{K_0(F_\pi(\tilde{X},\tilde{Y}))}=t_1^2 h_{\Sp K_1}+t_2^2 h_{\Sp K_2}+t_1t_2 (h_{K_0(F_\pi(X_1,Y_2))}+h_{K_0(F_\pi(X_2,Y_1))}). \end{equation} The claim follows from the fact that $K_0(F_\pi(X_1,Y_2))=K_0(F_\pi(X_2,Y_1))=K_0(F_\pi(X_1,X_2))$. \end{proof} \end{proposition} \subsection{Discotopes} In this section, we investigate the fiber bodies of finite Minkowski sums of discs in $\mathbb{R}^3$, called \emph{discotopes}. They also appear in the literature, see~\cite{sanyal_discotopes} for example. Discotopes are zonoids (because discs are zonoids, see Lemma~\ref{lem:discotopeeq} below) that are neither polytopes nor curved (see Section~\ref{sec:Smooth}), but they still have simple combinatorial properties and a simple support function. We will see how in this case formula~\eqref{eq:FormulaZonsupp} can be useful to compute the fiber body. \begin{definition} For $v\in \mathbb{R}^3$ we denote by $D_v$ the disc in $v^\perp$ centered at $0$ of radius $\|v\|$. \end{definition} \begin{lemma}\label{lem:discotopeeq} Discs are zonoids. If $a,b$ is an orthonormal basis of $v^\perp$, we define the random vector $\sigma (\theta):= \|v\| (\cos(\theta)a+\sin(\theta)b)$ with $\theta\in[0,2\pi]$ uniformly distributed. Then we have \begin{equation}\label{eq:DvasKofs} D_v=\pi\cdot K_0\left(\sigma (\theta)\right) \end{equation} where we recall the definition of the Vitale zonoid associated to a random vector in Proposition~\ref{prop:VitaleZon}. In other words we have: \begin{equation}\label{eq:DiscRandomVar} h_{D_v}(u)=\|v\|\sqrt{\langle u,a\rangle^2+\langle u,b\rangle^2}=\frac{\pi}{2}\mathbb{E}|\langle u, \sigma (\theta)\rangle|. \end{equation} \end{lemma} \begin{proof} Consider the zonoid $K_0\left(\sigma (\theta)\right)$. We will prove that it is a disc contained in $v^\perp$ centered at $0$ of radius $\|v\|/\pi$.
First of all, since $\sigma (\theta)\in v^\perp$ almost surely, we have $h_{K_0\left(\sigma (\theta)\right)}(\pm v)=0$. Thus $K_0\left(\sigma (\theta)\right)$ is contained in the plane $v^\perp$. Moreover, let $O(v^\perp)$ denote the stabilizer of $v$ in the orthogonal group $O(3)$. The zonoid $K_0\left(\sigma (\theta)\right)$ is invariant under the action of $O(v^\perp)$, thus it is a disc centered at $0$. To compute its radius it is enough to compute the support function at one point: $h_{K_0\left(\sigma (\theta)\right)}(a)=\tfrac{1}{2}\|v\|\,\mathbb{E}|\cos(\theta)|=\|v\|/\pi$, and this concludes the proof. \end{proof} \begin{remark} Note that the law of the random vector $\sigma(\theta)$ does not depend on the choice of the orthonormal basis $a,b$. It only depends on the line spanned by $v$ and the norm $\|v\|$. \end{remark} \begin{definition} A convex body $K\subset \mathbb{R}^3$ is called a \emph{discotope} if it can be expressed as a finite Minkowski sum of discs, i.e. if there exist $v_1,\ldots, v_N\in\mathbb{R}^3$ such that $K=D_{v_1}+\cdots+D_{v_N}$. In particular discotopes are zonoids. Moreover we can and will assume without loss of generality that \begin{equation} \frac{v_i}{\|v_i\|} \neq \pm \frac{v_j}{\|v_j\|} \qquad \hbox{for } i\neq j. \end{equation} \end{definition} What is the shape of a discotope? In order to answer this question we are going to study the boundary structure of such a convex body, when $N\geq 2$. \begin{lemma}\label{lem:boundary_discs} Consider the discotope $K = D_{v_1} + \ldots + D_{v_N}$, fix $q\in \partial (D_{v_2}+\ldots + D_{v_N})$ and take the Minkowski sum $D_{v_1}+\{q\}$. Then this disc is part of the boundary of the discotope if and only if \begin{equation}\label{eq:cond_qmax} \langle q , v_1 \rangle = \pm \max\set{ \langle \tilde{q} , v_1 \rangle}{\tilde{q}\in D_{v_2}+\ldots+D_{v_N}}. \end{equation} \end{lemma} \begin{proof} We do the proof for $N=2$; the general case is then given by a straightforward induction. Let $r:S^2 \to \mathbb{R}_{\geq 0}$ be the radial function of the discotope, namely $r(x) := \max \set{\lambda \geq 0}{\lambda x \in K}$. A point $x$ belongs to $\partial K$ if and only if $r\left(\frac{x}{\| x \|}\right) = \| x \|$. So we claim that for all $p\in D_{v_1}$ \begin{equation} r\left( \frac{p+q}{\| p+q \|}\right) = \| p+q \| \end{equation} where $q\in D_{v_2}$ satisfies $\langle q , v_1 \rangle = \pm \max\set{ \langle \tilde{q} , v_1 \rangle}{\tilde{q}\in D_{v_2}}$. Assume first that $q$ realizes the maximum. Let $ r\left( \frac{p+q}{\| p+q \|}\right) = \lambda$. Then we have: \begin{equation} \lambda \left( \frac{p+q}{\| p+q \|}\right) = p' + q' \in \partial K \end{equation} for some $p'\in D_{v_1}$ and $q'\in D_{v_2}$. By taking the scalar product with $v_1$ we get: \begin{equation} \frac{\lambda}{\| p+q \|} \langle q, v_1 \rangle = \langle q' , v_1 \rangle \leq \langle q, v_1 \rangle \end{equation} therefore $\lambda \leq \| p+q \|$. Since $p+q$ is a point of $K$, $\lambda \geq \| p+q \|$ and the claim follows.\\ The other case, where $q$ realizes the minimum, is analogous. \end{proof} Since we assumed that no two of the $v_i$ are parallel, for every $i$ there are exactly two points $q_i$ satisfying \eqref{eq:cond_qmax}, which we will denote by $q_i^+$ and $q_i^-$ respectively. Lemma~\ref{lem:boundary_discs} then says that in the boundary of the discotope there are exactly $2N$ discs, namely \begin{equation} D_{v_1} + \{q_1^+\},\; D_{v_1} + \{q_1^-\}, \;\ldots,\; D_{v_N} + \{q_N^+\},\; D_{v_N} + \{q_N^-\}.
\end{equation} The rest of the boundary of the discotope is the open surface $\mathcal{S}:=\partial K \setminus \cup_{i=1}^N (D_{v_i}+\{q_i^{\pm}\})$ made of exposed points. Moreover we show in the next proposition that $\mathcal{S}$ has either one or two connected components. \begin{proposition}\label{prop:componentsS} Consider the discotope $K = D_{v_1} + \ldots + D_{v_N}$. Then $\mathcal{S}$ has two connected components if and only if $v_1, \ldots, v_N$ all lie in the same plane. Otherwise it is connected and no two discs intersect. \end{proposition} \begin{proof} Assume first that $v_1, \ldots, v_N \in H$, where without loss of generality $H$ is the plane defined by $\{z=0\}$. Then we claim that all the discs in $\partial K$ meet on $H$ in a very precise configuration. Trivially the Minkowski sum $\left( D_{v_1} \cap H \right) + \ldots + \left( D_{v_N} \cap H \right)$ is contained in $K\cap H$. On the other hand let $p\in K\cap H$, then \begin{equation} p = (\alpha_1,\beta_1,\gamma_1) + \ldots + (\alpha_N,\beta_N,\gamma_N) \end{equation} where $(\alpha_i,\beta_i,\gamma_i)\in D_{v_i}$ and $\sum \gamma_i = 0$. But since $v_i \in H$, also $(\alpha_i,\beta_i,0)\in D_{v_i}$, so we can write $p$ as \begin{equation} p = (\alpha_1,\beta_1,0) + \ldots + (\alpha_N,\beta_N,0) \end{equation} hence $p\in \left( D_{v_1} \cap H \right) + \ldots + \left( D_{v_N} \cap H \right)$. This implies that $K\cap H$ is a $2$--dimensional zonotope with $2N$ edges, as in Figure~\ref{fig:touching_discs}; its vertices are exactly the points of intersection of the discs in the boundary. Hence the boundary discs divide $\mathcal{S}$ into exactly $2$ connected components. \begin{figure} \centering \def0.7\textwidth{0.7\textwidth} \input{touching_disks.pdf_tex} \caption{The $6$ blue discs are part of the boundary of the discotope $K = D_{v_1} + D_{v_2} + D_{v_3}$, where $v_1,v_2,v_3$ belong to the red-shaded plane $H$, which separates the two connected components of $\mathcal{S}$. In particular the intersection $\partial K\cap H$ is the red hexagon. } \label{fig:touching_discs} \end{figure} For the converse, notice that if there are two connected components, then at least two boundary discs must intersect. Without loss of generality assume that there is an intersection point $p$ between a copy of $D_{v_1}$ and a copy of $D_{v_2}$ and consider the plane $H = \text{span}(v_1,v_2)$. Let $\pi (K)$ be the projection of the discotope on $H$; clearly $\pi(p)\in \partial \pi(K)$ is a vertex. Then for $u\in S^1\hookrightarrow H$ \begin{align} h_{\pi(K)}(u) &= h_{D_{v_1}}(u) + \ldots + h_{D_{v_N}}(u) \\ &\overset{\eqref{eq:DiscRandomVar}}{=} \sum_{i=1}^N \|v_i\|\sqrt{\langle u,a_i\rangle^2+\langle u,b_i\rangle^2} \\ &= \sum_{i=1}^N \|v_i\|\sqrt{\langle u,\pi(a_i)\rangle^2+\langle u,\pi(b_i)\rangle^2} \end{align} where $\{\frac{v_i}{\|v_i\|},a_i,b_i\}$ is an orthonormal basis for every $i$. There are two possibilities now: either $\pi(a_i)$ and $\pi(b_i)$ are linearly independent, or they are linearly dependent and possibly zero. The latter case corresponds to discs such that $v_i\in H$, and the corresponding summand above becomes the absolute value of a linear form. So, up to relabeling, we can rewrite the support function splitting these cases: \begin{equation} h_{\pi(K)}(u) = \sum_{i=1}^k |\langle u,\alpha_i \rangle | + \sum_{j=k+1}^N \|v_j\|\sqrt{\langle u,\pi(a_j)\rangle^2+\langle u,\pi(b_j)\rangle^2} \end{equation} for some $\alpha_i \in H$ and $2\leq k \leq N$. Therefore $\pi(K)$ is the Minkowski sum of $k$ line segments and $N-k$ ellipses.
The boundary contains a vertex if and only if there are no ellipses in the sum, hence $k=N$, i.e. $v_i \in H$ for every $i$. \end{proof} \begin{remark} The previous result can be interpreted with the notion of \emph{patches}. These geometric objects were first introduced in \cite{convexhull_trajectories} and allow one to subdivide the boundary of a convex body. According to their definition, in the discotope we find $2N$ $2$--patches, corresponding to the boundary discs, and either one or two $0$--patches when $\mathcal{S}$ has one or two connected components respectively. Recently Plaumann, Sinn and Wesner \cite{sinn_patches} refined the definition of patches for a semialgebraic convex body. In this setting it is more subtle to count the number of patches of our discotopes, because this requires the knowledge of the number of irreducible components of $\mathcal{S}$. \end{remark} \subsection{A case study: the dice} \begin{definition} Let $e_1,e_2,e_3$ be the standard basis of $\mathbb{R}^3$ and let $D_i:=D_{e_i}$. We define the \emph{dice} to be the discotope $\mathscr{D}:=D_1+D_2+D_3$. See Figure~\ref{fig:Dice}. \end{definition} The boundary of the dice consists of $6$ two--dimensional discs of radius $1$, lying in the center of the facets of the cube $[-2,2]^3$, and a connected surface. The latter is the zero locus of the polynomial of degree 24: \begin{equation} \varphi(x,y,z) = x^{24}+4 x^{22} y^2+2 x^{20} y^4+ \ldots +728 z^4-160 x^2-160 y^2-160 z^2+16 \end{equation} which is too long to fit on a page (it is made of $91+78+66+55+45+36+28+21+15+10+6+3+1=455$ monomials, here distinguished by their degree). Consider the projection $\pi:=\langle e_1, \cdot\rangle :\mathbb{R}\oplus \mathbb{R}^2\to \mathbb{R}$. Even in this simple example the fibers of the dice under this projection can be tricky to describe. However, using the formula for zonoids one can compute the fiber body explicitly (see Figure~\ref{fig:FiberOfDice}). \begin{figure}[ht] \begin{subfigure}{0.49\textwidth} \centering \includegraphics[width=0.6\textwidth]{dice_red_discs.png} \caption{} \label{fig:Dice} \end{subfigure} \begin{subfigure}{0.49\textwidth} \centering \includegraphics[width=0.5\textwidth]{fiber_dice.png} \caption{} \label{fig:FiberOfDice} \end{subfigure} \caption{Left: the dice. Right: its fiber body.} \end{figure} \begin{proposition} With respect to this projection $\pi$, the fiber body of $\mathscr{D}$ is \begin{equation} \Sigma_\pi(\mathscr{D})= 4D_1+\pi \left(\underline{e_2}+\underline{e_3}\right)+ 2\Lambda \end{equation} where $\Lambda$ is the convex body whose support function is given by \begin{equation} h_\Lambda(u_2,u_3)=\frac{1}{2}\int_0^{\pi}\sqrt{\cos(\theta)^2 \left(u_2\right)^2+\sin(\theta)^2 \left(u_3\right)^2} \ \mathrm{d} \theta, \end{equation} and where we recall the notation~\eqref{eq:defseg} for segments. \begin{proof} First of all let us note that by expanding the mixed fiber body $\mathrm{M}\Sigma_\pi(\mathscr{D},\mathscr{D})$ we have \begin{equation}\label{eq:sumfibdice} \Sigma_\pi(\mathscr{D})=\Sigma_\pi(D_1)+\Sigma_\pi(D_2)+\Sigma_\pi(D_3)+2\left( \mathrm{M}\Sigma_\pi(D_1,D_2)+\mathrm{M}\Sigma_\pi(D_1,D_3)+\mathrm{M}\Sigma_\pi(D_2,D_3)\right). \end{equation} Now let $\sigma_1(\theta):=(0,\cos(\theta),\sin(\theta))$, $\sigma_2(\theta):=(\cos(\theta),0,\sin(\theta))$ and $\sigma_3(\theta):=(\cos(\theta),\sin(\theta),0)$ in such a way that $h_{D_i}(u)=\frac{\pi}{2}\mathbb{E}|\langle u, \sigma_i(\theta)\rangle|$.
We then want to use Theorem~\ref{thm:Fiberofzonoids} and Proposition~\ref{prop:mixedfiberofzon} to compute all the summands of the expansion of $\Sigma_\pi(\mathscr{D})$. Using~\eqref{eq:DvasKofs} we have that $\mathrm{M}\Sigma_\pi(D_i,D_j)=\pi^2 K_0 \left(F_\pi(\sigma_i(\theta),\sigma_j(\phi))\right)$ with $\theta,\phi\in S^1$ uniform and independent. In our case, $F_\pi(x,y)=(x_1y_2-y_1x_2, x_1y_3-y_1x_3)/2$. We obtain \begin{align} F_\pi(\sigma_1(\theta),\sigma_1(\phi)) =0, \;\: &F_\pi(\sigma_2(\theta),\sigma_2(\phi)) =\frac{1}{2}(0,\sin(\phi-\theta)), \\ F_\pi(\sigma_3(\theta),\sigma_3(\phi))=\frac{1}{2}(\sin(\phi-\theta),0), \;\: &F_\pi(\sigma_1(\theta),\sigma_2(\phi)) =\frac{-\cos(\phi)}{2}(\cos(\theta),\sin(\theta)), \\ F_\pi(\sigma_1(\theta),\sigma_3(\phi)) =\frac{-\cos(\phi)}{2}(\cos(\theta),\sin(\theta)), \;\: &F_\pi(\sigma_2(\theta),\sigma_3(\phi)) =\frac{1}{2}(\cos(\theta)\sin(\phi),-\sin(\theta)\cos(\phi)). \end{align} Computing the support function $h_{\pi^2K_0 (F_\pi(\sigma_i(\theta),\sigma_j(\phi)))}=(\pi^2/2)\mathbb{E}|\langle u, F_\pi(\sigma_i(\theta),\sigma_j(\phi))\rangle|$ and using that $\mathbb{E}|\cos(\phi)|=\mathbb{E}|\sin(\phi-\theta)|=2/\pi$, we get \begin{align} &\Sigma_\pi(D_1)=0; \quad \Sigma_\pi (D_2)=\pi\ \underline{e_3}; \quad \Sigma_\pi (D_3)=\pi\ \underline{e_2} ; \\ &\mathrm{M}\Sigma_\pi(D_1,D_2)=\mathrm{M}\Sigma_\pi(D_1,D_3)= D_1 . \end{align} It only remains to compute $\mathrm{M}\Sigma_\pi(D_2,D_3)$. We have \begin{equation} h_{\mathrm{M}\Sigma_\pi(D_2,D_3)}(u)=\frac{\pi^2}{2}\,\mathbb{E}|\langle u, F_\pi(\sigma_2(\theta),\sigma_3(\phi))\rangle|=\frac{\pi^2}{4}\,\mathbb{E}|u_2 \cos(\theta)\sin(\phi)-u_3\sin(\theta)\cos(\phi)|. \end{equation} We then use the independence of $\theta$ and $\phi$ together with~\eqref{eq:DiscRandomVar} to find \begin{equation} h_{\mathrm{M}\Sigma_\pi(D_2,D_3)}(u)=\frac{\pi}{2}\,\mathbb{E}\sqrt{\cos(\theta)^2 \left(u_2\right)^2+\sin(\theta)^2 \left(u_3\right)^2}=h_\Lambda(u). \end{equation} Putting everything back together we obtain the result. \end{proof} \end{proposition} \begin{remark} It is worth noticing that the convex body $\Lambda$ also appears, up to a multiple, in~\cite[Section~$5.1$]{PSC} where it is called $D(2)$, with no apparent link to fiber bodies. In the case where $u_2\neq 0$ we have \begin{equation} h_\Lambda(u)= |u_2| E\left(\sqrt{1-\left(\frac{u_3}{u_2}\right)^2}\right) \end{equation} where $E(s)=\int_0^{\pi/2}\sqrt{1-s^2\sin(\theta)^2}\,\mathrm{d} \theta$ is the so-called complete elliptic integral of the second kind. This function is not semialgebraic; thus the example of the dice shows that the fiber body of a semialgebraic convex body is not necessarily semialgebraic. However $E$ is holonomic. This suggests that the curved assumption in Corollary~\ref{cor:holonomiccurved} may not be needed. \end{remark}
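The computations of this section can also be validated numerically. The following short script (ours, not part of the paper; it assumes NumPy and SciPy are available) checks~\eqref{eq:suppofpoly} by quadrature, approximates each disc of the dice by a zonotope to test the proposition above via Corollary~\ref{cor:fiberofzonotopes}, and verifies the elliptic-integral identity of the remark:
\begin{verbatim}
import numpy as np
from itertools import combinations
from scipy.integrate import quad
from scipy.special import ellipe

alpha = -0.3                       # any alpha in [-8/20, -5/20]

def schneider_integrand(t, u2, u3):
    # <u, grad h(t,u2,u3)> * J_{psi_u}(t) for u = (0, u2, u3)
    q = t*t + u2*u2 + u3*u3
    c = (alpha - 2)*(t*t + u2*u2) + 2*(2*alpha - 1)*u3*u3
    g2 = -u2*c/(2*q**1.5)
    g3 = u3*((5*alpha + 2)*(t*t + u2*u2) + 2*(alpha + 1)*u3*u3)/(2*q**1.5)
    s2 = u2*u2 + u3*u3
    J = (t*t*(-(alpha - 2)*u2*u2 + (5*alpha + 2)*u3*u3)
         - s2*((alpha - 2)*u2*u2 + 2*(2*alpha - 1)*u3*u3))/(2*q**2.5)
    return (u2*g2 + u3*g3)*J

def schneider_closed_form(u2, u3):
    s = np.hypot(u2, u3)
    return np.pi/(64*s**3)*(8*(alpha - 2)**2*u2**4
                            - 8*(alpha**2 + 2*alpha - 8)*u2**2*u3**2
                            + (-25*alpha**2 + 16*alpha + 32)*u3**4)

u2, u3 = 0.8, -0.5
print(quad(schneider_integrand, -np.inf, np.inf, args=(u2, u3))[0],
      schneider_closed_form(u2, u3))          # the two values agree

def F(x, y):                                  # F_pi for pi = <e_1, .>, n = 1
    return 0.5*np.array([x[0]*y[1] - y[0]*x[1], x[0]*y[2] - y[0]*x[2]])

def disc(a, b, M):                            # M-segment zonotope approximating a disc
    th = 2*np.pi*np.arange(M)/M
    return [(np.pi/M)*(np.cos(t)*a + np.sin(t)*b) for t in th]

e1, e2, e3 = np.eye(3)
gens = disc(e2, e3, 60) + disc(e1, e3, 60) + disc(e1, e2, 60)

def h_Lambda(u):
    f = lambda t: np.sqrt(np.cos(t)**2*u[0]**2 + np.sin(t)**2*u[1]**2)
    return 0.5*quad(f, 0, np.pi)[0]

for u in [np.array([1.0, 0.0]), np.array([0.6, 1.0])]:
    # Corollary on zonotopes: h = sum_{i<j} |<u, F(z_i, z_j)>|
    h_zonotope = sum(abs(u @ F(zi, zj)) for zi, zj in combinations(gens, 2))
    h_claimed = 4*np.hypot(u[0], u[1]) + np.pi/2*(abs(u[0]) + abs(u[1])) + 2*h_Lambda(u)
    print(h_zonotope, h_claimed)              # agree up to the discretization error
    if u[0] != 0:                             # scipy's ellipe takes the parameter m = s^2
        print(h_Lambda(u), abs(u[0])*ellipe(1 - (u[1]/u[0])**2))
\end{verbatim}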
\begin{document} \title{GRAVITATIONAL EXCITONS FROM EXTRA DIMENSIONS} \author{ U. G\"UNTHER, A.
ZHUK } \address{Department of Physics, University of Odessa, 2 Petra Velikogo Street, Odessa 270100,\\ UKRAINE} \maketitle\abstracts{ We study inhomogeneous multidimensional cosmological models with a higher dimensional space-time manifold $M = M_0\times\prod\nolimits_{i=1} ^nM_i$ $( n \ge 1 )$ under dimensional reduction to $D_0$-dimensional effective models and show that small inhomogeneous excitations of the scale factors of the internal spaces near minima of effective potentials should be observable as massive scalar particles (gravitational excitons) in the external space-time. } \mbox{} \vspace{-6ex} \section*{Gravitational excitons} We consider a multidimensional space-time manifold \begin{equation} \label{2.1}M = M_0 \times M_1 \times \dots \times M_n \end{equation} with metric \begin{equation} \label{2.2}g = g_{MN}(X) dX^M \otimes dX^N = g^{(0)}+\sum_{i=1}^ne^{2\beta ^i(x)}g^{(i)}, \end{equation} where $x$ are coordinates on the $(D_0 = d_0+1)$-dimensional manifold $M_0$. Let the manifolds $M_i$ be $d_i$-dimensional Einstein spaces with metric $g^{(i)} $, i.e., $R\left[ g^{(i)}\right] =\lambda ^id_i\equiv R_i $. The internal spaces $M_i \quad (i=1,\dots ,n) $ may have nontrivial global topology, being compact (i.e.\ closed and bounded) for any sign of the spatial curvature. With total dimension $D=1+\sum_{i=0}^nd_i$, $\kappa ^2$ a $D$-dimensional gravitational constant, $\Lambda $ a $D$-dimensional bare cosmological constant and $S_{YGH}$ the standard York-Gibbons-Hawking boundary term, we consider an action of the form \begin{equation} \label{2.6}S=\frac 1{2\kappa ^2}\int\limits_Md^DX\sqrt{|g|}\left\{ R[g]-2\Lambda \right\} +S_{add}+S_{YGH}. \end{equation} The additional potential term \begin{equation} \label{2.7}S_{add}=-\int\limits_Md^DX\sqrt{|g|}\rho (x) \end{equation} is not specified and is left in its general form, so as to take into account, e.g., the Casimir effect, the Freund-Rubin monopole ansatz, or a perfect fluid. In all these cases $\rho $ depends on the external coordinates through the scale factors $a_i(x)=e^{\beta ^i(x)}\ (i=1,\ldots ,n)$ of the internal spaces. After dimensional reduction the action reads $$ S=\frac 1{2\kappa _0^2}\int\limits_{M_0}d^{D_0}x\sqrt{|g^{(0)}|}% \prod_{i=1}^ne^{d_i\beta ^i}\left\{ R\left[ g^{(0)}\right] -G_{ij}g^{(0)\mu \nu }\partial _\mu \beta ^i\,\partial _\nu \beta ^j+\right. $$ \begin{equation} \label{2.8}+\sum_{i=1}^n\left. R\left[ g^{(i)}\right] e^{-2\beta ^i}-2\Lambda -2\kappa ^2\rho \right\} , \end{equation} where $\kappa _0^2=\kappa ^2/V_I $ and $ V_I =\prod_{i=1}^nv_i=\prod_{i=1}^n\int\limits_{M_i}d^{d_i}y \sqrt{|g^{(i)}|}$ are the $D_0$-dimensional gravitational constant and the internal space volume, respectively. $G_{ij}=d_i\delta _{ij}-d_id_j$ $(i,j=1,\ldots ,n)$ defines the midisuperspace metric. Action (\ref{2.8}) is written in the Brans-Dicke frame.
Conformal transformation to the Einstein frame \begin{equation} \label{2.10}g_{\mu \nu }^{(0)}=\Omega^2 \tilde g^{(0)}_{\mu \nu } =\exp\left( -\frac 2{D_0-2} \sum_{i=1}^n d_i\beta^i\right) \tilde g^{(0)}_{\mu \nu } \end{equation} yields \begin{equation} \label{2.12}S=\frac 1{2\kappa _0^2}\int\limits_{M_0}d^{D_0}x\sqrt{|\tilde g^{(0)}|}\left\{ \tilde R\left[ \tilde g^{(0)}\right] -\bar G_{ij}\tilde g^{(0)\mu \nu }\partial _\mu \beta ^i\,\partial _\nu \beta ^j-2U_{eff}\right\} , \end{equation} where $\bar G_{ij} =d_i\delta _{ij}+\frac 1{D_0-2}d_id_j,\ (i,j=1,\ldots ,n)$, and the effective potential reads \begin{equation} \label{2.15}U_{eff}={\left( \prod_{i=1}^ne^{d_i\beta ^i}\right) }^{-\frac 2{D_0-2}}\left[ -\frac 12\sum_{i=1}^nR_ie^{-2\beta ^i}+\Lambda +\kappa ^2\rho \right] . \end{equation} We recall that $\rho $ depends on the scale factors of the internal spaces: $% \rho =\rho \left( \beta ^1,\ldots ,\beta ^n\right) $. Thus, we are led to the action of a self-gravitating $\sigma -$model with flat target space and self-interaction described by the potential (\ref{2.15}). It is easily seen that the problem of stable compactification of the internal spaces is now reduced to the search for models that provide minima of the effective potential (\ref{2.15}). By a regular coordinate transformation $\varphi =Q\beta$, $\beta =Q^{-1}\varphi$, the midisuperspace metric (target space metric) can be turned into a pure Euclidean form \begin{equation} \label{2.20}\bar G_{ij}d\beta ^i\otimes d\beta ^j= \sigma _{ij}d\varphi ^i\otimes d\varphi ^j=\sum_{i=1}^nd\varphi ^i\otimes d\varphi ^i\, . \end{equation} An appropriate transformation $Q:\ \beta ^i\mapsto \varphi ^j=Q_i^j\beta ^i$ is given e.g.\ by \begin{equation} \label{2.21} \begin{array}{ll} \varphi ^1 & =-A\sum_{i=1}^nd_i\beta ^i ,\\ & \\ \varphi ^i & =\left[ d_{i-1}/\Sigma _{i-1}\Sigma _i\right] ^{1/2}\sum_{j=i}^nd_j(\beta ^j-\beta ^{i-1}) ,\quad i=2,\ldots ,n \, , \end{array} \end{equation} where $\Sigma _i=\sum_{j=i}^nd_j$, $A=\pm {\left[ \frac 1{D^{\prime }}\frac{D-2}{D_0-2}\right] }^{1/2}$ and $D^{\prime }=\sum_{i=1}^nd_i$. So we can write action (\ref{2.12}) as \begin{equation} \label{2.23}S=\frac 1{2\kappa _0^2}\int\limits_{M_0}d^{D_0}x\sqrt{|\tilde g^{(0)}|}\left\{ \tilde R\left[ \tilde g^{(0)}\right] -\sigma _{ik}\tilde g^{(0)\mu \nu }\partial _\mu \varphi ^i\,\partial _\nu \varphi ^k-2U_{eff}\right\} \end{equation} with effective potential \begin{equation} \label{2.24}U_{eff}=e^{\frac 2{A(D_0-2)}\varphi ^1}\left( -\frac 12\sum_{i=1}^nR_ie^{-2{(Q^{-1})^i}_k\varphi ^k}+\Lambda +\kappa ^2\rho \right) . \end{equation} Let us suppose that this potential has minima which are localized at points $\vec \varphi_c$, $c=1,\ldots,m$: $\left. \frac {\partial U_{eff}}{\partial \varphi^i} \right|_{\vec \varphi_c} = 0$. Then, for small field fluctuations $\xi^i \equiv \varphi^i - \varphi^i_{(c)}$ around the minima the potential (\ref{2.24}) reads \begin{equation} \label{2.25}U_{eff}= U_{eff}\left( \vec \varphi _c\right) +\frac 12\sum_{i,k=1}^n\bar a_{(c)ik}\xi ^i\xi ^k+O(\xi ^i\xi ^k\xi ^l) \, , \end{equation} where the Hessians $\bar a_{(c)ik}:=\left. \frac{\partial ^2U_{eff}}{\partial \xi ^i\,\partial \xi ^k}\right| _{\vec \varphi _c} $ are assumed not to vanish identically.
The action functional (\ref{2.23}) now reduces to a family of action functionals for the fluctuation fields $\xi ^i$% \begin{equation} \label{2.27} \begin{array}{ll} S= & \frac 1{2\kappa _0^2}\int\limits_{M_0}d^{D_0}x \sqrt{|\tilde g^{(0)}|}\left\{ \tilde R\left[ \tilde g^{(0)}\right] -2U_{eff}\left( \vec \varphi _c\right) -\right. \\ & \\ & \left. -\sigma _{ik}\tilde g^{(0)\mu \nu }\partial _\mu \xi ^i\,\partial _\nu \xi ^k-\bar a_{(c)ik}\xi ^i\xi ^k\right\} ,\ c=1,...,m. \end{array} \end{equation} It remains to diagonalize the Hessians $\bar a_{(c)ik}$ by appropriate $SO(n)$-rotations $S_c:\ \xi \mapsto \psi =S_c\xi ,\quad S_c^{\prime }=S_c^{-1}$% \begin{equation} \label{2.28}\bar A_c=S_c^{\prime }M_c^2S_c,\quad M_c^2={\rm diag\ }% (m_{(c)1}^2,m_{(c)2}^2,\ldots ,m_{(c)n}^2), \end{equation} leaving the kinetic term $\sigma _{ik}\tilde g^{(0)\mu \nu }\partial _\mu \xi ^i\,\partial _\nu \xi ^k$ invariant \begin{equation} \label{2.29}\sigma _{ik}\tilde g^{(0)\mu \nu }\partial _\mu \xi ^i\,\partial _\nu \xi ^k=\sigma _{ik}\tilde g^{(0)\mu \nu }\partial _\mu \psi ^i\,\partial _\nu \psi ^k, \end{equation} and we arrive at action functionals for decoupled normal modes of linear $% \sigma -$models in the background metric $\tilde g^{(0)}$ of the external \mbox{space-time:} \begin{eqnarray}\label{2.30} S & = & \frac{1}{2\kappa _0^2}\int \limits_{M_0}d^{D_0}x \sqrt {|\tilde g^{(0)}|}\left\{\tilde R\left[\tilde g^{(0)}\right] - 2\Lambda _{(c)eff}\right\} + \nonumber\\ \ & + & \sum_{i=1}^{n}\frac{1}{2}\int \limits_{M_0}d^{D_0}x \sqrt {|\tilde g^{(0)}|}\left\{-\tilde g^{(0)\mu \nu}\psi ^i_{,\mu}\psi ^i_{,\nu} - m_{(c)i}^2\psi ^i\psi ^i\right\},\ c=1,\ldots ,m, \end{eqnarray} where $\Lambda _{(c)eff}\equiv U_{eff}\left( \vec \varphi _c\right) $ is the $D_0$-dimensional effective cosmological constant and the factor $\sqrt{V_I /\kappa ^2}$ has been included into $\psi $ for convenience: $\sqrt{V_I /\kappa ^2}\psi \rightarrow \psi $. Thus, conformal excitations of the metric of the internal spaces behave as massive scalar fields propagating in the background of the external space-time. By analogy with excitons in solid state physics, which are excitations of the electronic subsystem of a crystal, the excitations of the internal spaces were called gravitational excitons \cite{gz}.
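As a small aside, the flattening property (\ref{2.20}) of the coordinate transformation (\ref{2.21}) is straightforward to verify numerically. The following sketch (ours, not part of the original text; the internal dimensions $d_i$ below are arbitrary sample values) checks that $Q^{\prime}Q=\bar G$ for the matrix $Q$ read off from (\ref{2.21}):
\begin{verbatim}
import numpy as np

d0 = 3                                  # external dimension, D0 = d0 + 1 (assumption)
d = np.array([2, 3, 6])                 # sample internal dimensions d_1..d_n (assumption)
n, D0 = len(d), d0 + 1
Dprime = d.sum()                        # D' = sum_i d_i
D = 1 + d0 + Dprime                     # total dimension

# midisuperspace metric G-bar_ij = d_i delta_ij + d_i d_j / (D0 - 2), cf. (2.12)
Gbar = np.diag(d) + np.outer(d, d)/(D0 - 2)

A = np.sqrt((D - 2)/(Dprime*(D0 - 2)))  # take the + sign of A
Q = np.zeros((n, n))
Q[0, :] = -A*d                          # varphi^1 = -A sum_i d_i beta^i
Sigma = [d[i:].sum() for i in range(n)] # Sigma_i = sum_{j >= i} d_j (0-based here)
for r in range(1, n):                   # rows varphi^i, i = 2..n, of eq. (2.21)
    c = np.sqrt(d[r-1]/(Sigma[r-1]*Sigma[r]))
    Q[r, r:] = c*d[r:]
    Q[r, r-1] = -c*Sigma[r]

print(np.allclose(Q.T @ Q, Gbar))       # True: the target-space metric is flat
\end{verbatim}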
\newcommand{\refer}[1]{(\ref{#1})} \newcommand{\wtd}[1]{\widetilde{#1}} \newcommand{\ft}[2]{{\textstyle\frac{#1}{#2}}} \title{\vspace{-3truecm} {\small \rightline{THU-97-44} \rightline{NIKHEF 97-045} \rightline{hep-th/9710215}} \vspace{1truecm} Open and Closed Supermembranes with Winding} \author{Bernard de Wit\address{Institute for Theoretical Physics, Utrecht University\\ Princetonplein 5, 3508 TA Utrecht, The Netherlands\\ }, Kasper Peeters$^{\rm b}$ and Jan C.\ Plef\/ka\address{NIKHEF, P.O. Box 41882, 1009 DB Amsterdam, The Netherlands\\ } } \begin{document} \begin{abstract} Motivated by manifest Lorentz symmetry and a well-defined large-$N$ limit prescription, we study the supersymmetric quantum mechanics proposed as a model for the collective dynamics of D0-branes from the point of view of the 11-dimensional supermembrane. We argue that the continuity of the spectrum persists irrespective of the presence of winding around compact target-space directions and discuss the central charges in the superalgebra arising from winding membrane configurations. Along the way we comment on the structure of open supermembranes. \end{abstract} \maketitle M-theory is defined as the strong-coupling limit of type-IIA string theory and is supposed to capture all the relevant degrees of freedom of all known string theories, both at the perturbative and the nonperturbative level \cite{Townsend,witten3,horvw,Mtheory}. In this description the various string-string dualities play a central role. At large distances M-theory is described by 11-dimensional supergravity. In \cite{bergs87} it was shown that elementary supermembranes can live in a superspace background that is a solution of the source-free supergravity field equations in 11 dimensions. In the light-cone gauge (in a flat target space) it was subsequently shown \cite{dWHN} that the supermembrane theory takes the form of a supersymmetric quantum-mechanical model, which coincides with the zero-volume reduction of supersymmetric Yang-Mills theory based on 16 supercharges. For the supermembrane the underlying gauge group is the group of area-preserving diffeomorphisms of the two-dimensional membrane surface. This group can be described by the $N\to\infty$ limit of SU$(N)$ (the role of the membrane topology is subtle, as we will discuss in due course). At finite $N$ the phase-space variables take the form of matrices, associated with the Lie algebra of the group in question (in this case U$(N)$ or SU$(N)$). For this reason, these models are commonly referred to as {\it matrix} models. It has been discussed that the possible massless ground states of the supermembrane coincide with the physical states of 11-dimensional supergravity\footnote{% For discussions on the existence of massless states, see \cite{dWHN,DWN,FH,mboundstates}. According to \cite{mboundstates} such states do indeed exist in eleven dimensions. }. More recently it was shown that the same quantum-mechanical matrix models based on U$(N)$ describe the short-distance dynamics of $N$ D0-branes \cite{boundst,Dbranes}. Subsequently there has been a large number of studies of these models for finite $N$ \cite{Dpart,BBPT} and some of them have been reported at this conference.
These studies were further motivated by a conjecture according to which the degrees of freedom captured in M-theory are in fact described by the U$(N)$ super-matrix models in the $N\to \infty$ limit \cite{BFSS}. A further conjecture, also discussed at this conference, is that the finite-$N$ matrix model coincides with M-theory compactified on a light-like circle \cite{susskind}. So it turns out that M-theory, supermembranes and super-matrix models are intricately related. A direct relation between supermembranes and type-IIA theory was emphasized in particular in \cite{Townsend}, based on the relation between $d=10$ extremal black holes in 10-dimensional supergravity and the Kaluza-Klein states of 11-dimensional supergravity. From the string point of view these states carry Ramond-Ramond charges, just as the D0-branes \cite{Polchinski}. Strings can arise from membranes by a so-called double-dimensional reduction \cite{DHIS}. Similarly supermembranes were employed to provide evidence for the duality of M-theory on $\mbox{\bf R}^{10}\times S_1/\mbox{\bf Z}_2$ and 10-dimensional $E_8\times E_8$ heterotic strings \cite{horvw}. Here we choose the supermembrane perspective, motivated by its manifest Lorentz invariance and well-defined large-$N$ limit (but not necessarily committing ourselves to the view that M-theory is a theory of fundamental membranes). Let us nevertheless first consider the supersymmetric matrix models, whose Hamiltonian equals \begin{equation} H = \frac{1}{g}{\rm Tr}\Big[ \ft{1}{2}{\bf P}^2 + \ft{1}{4}[X^a,X^b]^2 + g\theta^{\rm T} \gamma_a [ X^a, \theta ]\Big] \, . \end{equation} Here, $\bf X$, $\bf P$ and $\theta$ take values in the Lie algebra of the gauge group. From the supermembrane point of view they are vectors and spinors of the `transverse' SO(9) rotation group. This Hamiltonian can alternatively be interpreted as the zero-volume limit of supersymmetric Yang-Mills theory with some arbitrary gauge group. For the supermembrane one must choose the (infinite-dimensional) group of area-preserving diffeomorphisms (which also plays an important role in selfdual gravity and $N=2$ strings) and put $g$ equal to the light-cone momentum $P^+_0$. The matrix model has 16 supercharges, but additional charges can be obtained by splitting off an abelian factor of the U$(N)$ gauge group, \begin{eqnarray} Q^+ &=& {\rm Tr} \Big[(2P^a \gamma_a + [X^a, X^b]\gamma_{ab})\theta\Big]\,, \nonumber\\ Q^- &=& {g}\; {\rm Tr} \left[ \theta \right] \, . \end{eqnarray} For the supermembrane the second charge is associated with the center-of-mass superalgebra (we return to the membrane supersymmetry algebra shortly). The form of the area-preserving diffeomorphisms that remain as an invariance of the model depends in general on the topology of the membrane surface. For spherical and toroidal topologies, it has been shown that the algebra can be approximated by SU$(N)$ in the large-$N$ limit. This limit is subtle. However, once one assumes an infinite-dimensional gauge group, one can describe a large variety of different theories. For instance, the gauge group $[{\rm U}(N)]^M$, which is a subgroup of U$(N\cdot M)$, leads in the limit $M\to \infty$ to the possibility of describing the collective dynamics of D0-branes on a circle by supersymmetric Yang-Mills theories in $1+1$ dimensions \cite{Tdual}. Hence, it is possible to extract extra dimensions from a suitable infinite-dimensional gauge group. Obviously this can be generalized to a hypertorus.
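As an aside, the flat directions of the matrix-model potential are easy to exhibit numerically. The sketch below (ours, not taken from the literature; we take the $X^a$ Hermitian and traceless and write the bosonic potential with the sign that makes it nonnegative, which may differ from the conventions above) shows that generic configurations feel the potential, while mutually commuting, arbitrarily separated configurations cost no energy -- these are the string-like zero-area directions behind the continuity of the spectrum discussed below:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def su_n_random(N):
    # random Hermitian traceless N x N matrix (an su(N) element)
    a = rng.normal(size=(N, N)) + 1j*rng.normal(size=(N, N))
    h = (a + a.conj().T)/2
    return h - np.trace(h).real/N*np.eye(N)

def potential(Xs):
    # V = -(1/4) sum_{a,b} Tr([X^a,X^b][X^a,X^b]) >= 0 for Hermitian X^a
    V = 0.0
    for Xa in Xs:
        for Xb in Xs:
            C = Xa @ Xb - Xb @ Xa
            V += -0.25*np.trace(C @ C).real
    return V

N, dims = 4, 9                           # 9 transverse matrices, SU(4) for illustration
generic = [su_n_random(N) for _ in range(dims)]
flat = [np.diag(rng.normal(size=N)).astype(complex) for _ in range(dims)]
flat = [X - np.trace(X)/N*np.eye(N) for X in flat]   # diagonal, hence commuting

print(potential(generic) > 0.0)          # True: generic matrices feel the potential
print(abs(potential(flat)) < 1e-12)      # True: commuting configurations are flat
\end{verbatim}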
Therefore models based on (subgroups of) U$(\infty)$ can describe an enormous variety of models. Using the SU$(N)$ regularisation it was shown in \cite{dWLN} that the spectrum of the supermembrane is continuous, with no mass gap. This result is expected when the supermembrane Hamiltonian is viewed as the Hamiltonian for the collective dynamics of arbitrarily large numbers of D-particles. From the supermembrane point of view, these instabilities arise because arbitrarily long, string-like (zero-area) configurations can form. There are thus no asymptotic membrane states, but rather multimembrane configurations connected by these infinitely thin strings. The massless ground states, which correspond to the states of 11-dimensional supergravity, thus appear in a continuum of multimembrane states. The connection with M-theory and finite-N matrix models is made by compactification of the 11-dimensional action on a circle or higher-dimensional tori. However, this compactification has only recently been studied \cite{us} from the point of view of the supermembrane. In this talk we discuss the supermembrane with winding, paying attention to the extension of the supersymmetric gauge theory, the Lorentz invariance and the effect of winding on the mass spectrum. Along the way we shall simultaneously develop the theory of open supermembranes, which has recently received some attention \cite{BB,BM,EMM2}. The actions of fundamental supermembranes are of the Green-Schwarz type \cite{bergs87}. In flat target space, they read \begin{eqnarray}\label{action} {\cal L}&=&\sqrt{-g(X,\theta)} \nonumber\\ && -\epsilon^{ijk}\, [ \ft{1}{2}\partial_iX^\mu \, (\partial_jX^\nu+\bar{\q} \g^\nu\partial_j\q)\nn\\ &&\hspace{12mm} +\ft{1}{6}\,\bar{\q}\g^\mu\partial_i\q\, \bar{\q}\g^\nu\partial_j\q]\, \bar{\q}\g_{\mu\nu} \partial_k\q\, , \end{eqnarray} where $X^\mu$ denote the 11-dimensional target-space embedding coordinates lying in $T^d\times{\mbox{\bf R}}^{1,10-d}$ and thus permitting us to have winding on the $d$-dimensional torus $T^d$. Moreover we have the fermionic variables $\q$, which are 32-component Majorana spinors and $g=\mbox{det}\, g_{ij}$ with the induced metric \begin{equation} g_{ij}=(\partial_iX^\mu+ \bar{\q}\gamma^\mu\partial_i\q)\, (\partial_jX^\nu+ \bar{\q}\gamma^\nu\partial_j\q)\, \eta_{\mu\nu}. \ee{indmet} Next to supersymmetry the action \refer{action} exhibits an additional local fermionic symmetry called $\kappa$-symmetry. In the case of the open supermembrane, $\kappa$-symmetry imposes boundary conditions on the fields. They must ensure that the following integral over the boundary of the membrane world volume vanishes, \begin{eqnarray} \int_{\del M} \Big[&& \ft12{\rm d} X^\m \wedge ( {\rm d}X^\n + \bar\theta\g^\nu {\rm d}\theta)\, \bar \theta \g_{\m\n} \d_\kappa \theta \nn\\ &&+\ft12( {\rm d}X^\m - \ft13 \bar\theta\g^\mu {\rm d}\theta ) \wedge \bar\theta\g_{\m\nu} {\rm d}\theta\; \bar\theta\g^\n \d_{\kappa}\theta \nn \\[1mm] && +\ft16\bar\theta\g^\mu {\rm d}\theta\wedge \bar\theta\g^\nu {\rm d}\theta\, \bar\theta\g_{\m\nu} \d_{\kappa}\theta\Big ]\,. \end{eqnarray} This can be achieved by having a ``membrane D-$p$-brane'' at the boundary with $p=1,5$, or 9, which is defined in terms of $(p+1)$ Neumann and $(10-p)$ Dirichlet boundary conditions for the $X^\mu$, together with corresponding boundary conditions on the fermionic coordinates\footnote{% Here our conclusions concur with those of \cite{EMM2} but not with those of \cite{BM}.}. 
More explicitly, we define projection operators \begin{equation} {\cal P}_\pm=\ft12\Big({\bf 1} \pm \g^{p+1}\, \g^{p+2}\cdots \g^{10}\Big)\,, \ee{projectors} and impose the Dirichlet boundary conditions \begin{eqnarray} \del_\parallel \, X^M\big|&=& 0\,, \qquad M=p+1,\ldots,10\,, \nn\\ {\cal P}_- \q\big|&=&0\, , \la{boundcond} \end{eqnarray} where $\del_\perp$ and $\del_\parallel$ define the world-volume derivatives perpendicular or tangential to the surface swept out by the membrane boundary in the target space. Note that the fermionic boundary condition implies that ${\cal P}_- \del_\parallel\q=0$. Furthermore, it implies that spacetime supersymmetry is reduced to only 16 supercharges associated with spinor parameters ${\cal P}_+\epsilon$, which are {\it chiral} with respect to the ($p+1$)-dimensional world volume of the D-$p$-brane at the boundary. With respect to this reduced supersymmetry, the superspace coordinates decompose into two parts, one corresponding to $(X^M, {\cal P}_-\theta)$ and the other corresponding to $(X^m, {\cal P}_+\theta)$ where $m=0,1,\ldots,p$. While for the five-brane these superspaces exhibit a somewhat balanced decomposition in terms of an equal number of bosonic and fermionic coordinates, the situation for $p=1,9$ shows heterotic features in that one space has an excess of fermionic and the other an excess of bosonic coordinates. Moreover, we note that supersymmetry may be further broken by e.g.\ choosing different Dirichlet conditions on non-connected segments of the supermembrane boundary. The Dirichlet boundary conditions can be supplemented by the following Neumann boundary conditions, \begin{eqnarray} \del_\perp \, X^m\big|&=& 0 \qquad m=0,1,\ldots,p \,,\nn\\ {\cal P}_+ \del_\perp \q \big|&=&0 \,. \la{Nboundcond} \end{eqnarray} These do not lead to a further breakdown of the rigid spacetime symmetries. We now continue and follow the light-cone quantization described in \cite{dWHN}. The supermembrane Hamiltonian takes the form \begin{eqnarray} \label{memham} \displaystyle {\cal H}&=& \displaystyle\frac{1}{P_0^+}\, \int {\rm d}^2\s \, \sqrt{w}\, \bigg[ \, \frac{P^a\, P_a }{2\,w} + \ft{1}{4} \{\, X^a,X^b\,\}^2\nonumber\\ \displaystyle &&\hspace{18mm}\quad\quad \displaystyle-P^+_0\, \bar{\q}\,\g_- \g_a\, \{\, X^a , \q\,\}\, \bigg]\, . \end{eqnarray} Here the integral runs over the spatial components of the worldvolume denoted by $\s^1$ and $\s^2$, while $P^a(\s)$ ($a=2,\ldots,10$) are the momenta conjugate to the transverse $X^a$. In this gauge the light-cone coordinate $X^+=(X^1+X^0)/\sqrt2$ is linearly related to the world-volume time. The momentum $P^+$ is time independent and proportional to the center-of-mass value $P^+_0$ times some density ${\sqrt{w(\s)}}$ of the spacesheet, whose spacesheet integral is normalized to unity. The center-of-mass momentum $P_0^-$ is equal to minus the Hamiltonian \refer{memham} subject to the gauge condition \tmath{\g_+\, \q=0}. Finally, we made use of the Poisson bracket \tmath{\{ A,B\} } defined by \begin{equation} \{ A(\s ),B(\s )\} = \frac{1}{\sqrt{w(\s)}}\, \e^{rs}\, \del_r A(\s )\, \del_s B(\s ). \ee{poisbrak} Note that the coordinate $X^-=(X^1-X^0)/\sqrt2$ itself does not appear in the Hamiltonian \refer{memham}. It is defined via \begin{equation} P^+_0\, \del_rX^-= - \frac{{\bf P} \cdot \del_r{\bf X}}{\sqrt{w}} - P^+_0\, \bar{\q}\g_-\del_r\q\,, \ee{delxminus} and implies a number of constraints that will be important in the following. First of all, the right-hand side must be closed.
If there is no winding in $X^-$, it must moreover be exact. The equivalence of the large-$N$ limit of SU$(N)$ quantum mechanics with the closed supermembrane model is based on the residual invariance of the supermembrane action in the light-cone gauge. It is given by the area-preserving diffeomorphisms of the membrane surface. These are defined by transformations of the worldsheet coordinates \begin{equation} \label{APD} \s^r \rightarrow \s^r + \x^r(\s) \,, \end{equation} with \begin{equation} \del_r(\sqrt{w(\s)}\, \x^r(\s)\, )=0. \end{equation} We wish to rewrite this condition in terms of dual spacesheet vectors by \begin{equation} \sqrt{w(\s)}\,\x^r(\s)= \e^{rs}\, F_s(\s)\, . \ee{1form} In the language of differential forms the condition \refer{APD} may then be simply recast as \tmath{{\rm d}F=0}. The trivial solutions are the exact forms \tmath{F={\rm d}\x}, or in components \begin{equation} F_s=\del_s\x(\s), \ee{exact} for any globally defined function $\x(\s)$. The nontrivial solutions are the closed forms which are not exact. On a Riemann surface of genus $g$ there are precisely $2g$ linearly independent non-exact closed forms, whose integrals along the homology cycles are normalized to unity\footnote{% In the mathematical literature the globally defined exact forms are called ``hamiltonian vector fields'', whereas the closed but not exact forms which are not globally defined go under the name ``locally hamiltonian vector fields''.}. % In components we write \begin{equation} F_s=\f_{(\l)\, s}\;, \qquad \l=1,\ldots,2g\,. \ee{harm} The commutator of two infinitesimal area-preserving diffeomorphisms is determined by the product rule \begin{equation} \xi_r^{(3)} = \partial_r \left( \frac{\epsilon^{st}}{\sqrt{w}} \xi_s^{(2)}\xi_t^{(1)}\right) \,, \end{equation} where both $\xi_r^{(1,2)}$ are closed vectors. Because $\xi_r^{(3)}$ is exact, the exact vectors generate an invariant subgroup of the area-preserving diffeomorphisms, which can be approximated by SU$(N)$ in the large-$N$ limit in the case of closed membranes. For open membranes the boundary conditions on the fields \refer{boundcond} lead to an SO($N$) group structure, as we shall see in the sequel. The presence of the closed but non-exact forms is crucial for the winding of the embedding coordinates. More precisely, while the momenta ${\bf P}(\s)$ and the fermionic coordinates $\theta(\s)$ remain single valued on the spacesheet, the embedding coordinates, written as one-forms with components $\del_r {\bf X}(\s)$ and $\del_r X^-(\s)$, are decomposed into closed forms. Their non-exact contributions are multiplied by an integer times the length of the compact direction. The constraint alluded to above amounts to the condition that the right-hand side of \refer{delxminus} is closed. Under the full group of area-preserving diffeomorphisms the fields $X^a$, $X^-$ and $\q$ transform according to \begin{eqnarray} \label{APDtrafoXtheta} &\d X^a= \displaystyle{\e^{rs}\over \sqrt{w}}\, \x_r\, \del_s X^a\,, \quad \d X^-= \displaystyle {\e^{rs}\over \sqrt{w}}\, \x_r\, \del_s X^-\,,\nonumber\\ &\quad\quad\d \q^a= \displaystyle {\e^{rs}\over \sqrt{w}}\, \x_r\, \del_s \q\,, \end{eqnarray} where the time-dependent reparametrization $\x_r$ consists of closed exact and non-exact parts. 
Accordingly there is a gauge field $\w_r$, which is therefore closed as well, transforming as \begin{equation} \d\w_r=\del_0\x_r + \del_r \bigg( {\e^{st}\over\sqrt{w}}\,\x_s\,\w_t\bigg), \ee{APDtrafoomega} and corresponding covariant derivatives \begin{eqnarray} \label{covderiv} D_0 X^a&=& \displaystyle\del_0X^a - {\e^{rs}\over \sqrt{w}}\, \w_r\, \del_s X^a\,, \nonumber\\ D_0 \q &=&\displaystyle \del_0\q - {\e^{rs}\over \sqrt{w}}\, \w_r\, \del_s\q\,, \end{eqnarray} and similarly for \tmath{D_0 X^-}. The action corresponding to the following Lagrangian density is then gauge invariant under the transformations \refer{APDtrafoXtheta} and \refer{APDtrafoomega}, \begin{eqnarray} \label{gtlagrangian} {\cal L}&=&P^+_0\,\sqrt{w}\, \Big[\, \ft{1}{2}\,(D_0{\bf X})^2 + \bar{\q}\,\g_-\, D_0\q \nonumber\\ && \hspace{13mm} - \ft{1}{4}\,(P^+_0)^{-2}\, \{ X^a,X^b\}^2 \\ && \hspace{13mm} + (P^+_0)^{-1}\, \bar{\q}\,\g_-\,\g_a\,\{X^a,\q\} + D_0 X^-\Big]\, ,\nn \end{eqnarray} where we draw attention to the last term proportional to $X^-$, which can be dropped in the absence of winding and did not appear in \cite{dWHN}. Moreover, we note that for the open supermembranes, \refer{gtlagrangian} is invariant under the transformations \refer{APDtrafoXtheta} and \refer{APDtrafoomega} only if $\xi_\parallel=0$ holds on the boundary. This condition defines a subgroup of the group of area-preserving transformations, which is consistent with the Dirichlet conditions \refer{boundcond}. Observe that $\del_\parallel$ and $\del_\perp$ will now refer to the {\it spacesheet} derivatives tangential and perpendicular to the membrane boundary\footnote{% Consistency of the Neumann boundary conditions \refer{Nboundcond} with the area-preserving diffeomorphisms \refer{APDtrafoXtheta} further imposes $\partial_\perp\xi^\parallel=0$ on the boundary, where indices are raised according to \refer{1form}.}. The action corresponding to \refer{gtlagrangian} is also invariant under the supersymmetry transformations \begin{eqnarray} \d X^a &=& -2\, \bar{\e}\, \g^a\, \q\,, \nn\\ \d \q &=& \ft{1}{2} \g_+\, (D_0 X^a\, \g_a + \g_- )\, \e \nonumber\\ &&\quad\quad +\ft{1}{4}(P^+_0)^{-1} \, \{ X^a,X^b \}\, \g_+\, \g_{ab}\, \e ,\nn\\ \d \w_r &=& -2\,(P^+_0)^{-1}\, \bar{\e}\,\del_r\q\,. \la{susytrafos} \end{eqnarray} The supersymmetry variation of $X^-$ is not relevant and may be set to zero. For the open case one finds that the boundary conditions $\omega_\parallel=0$ and \mbox{$\epsilon={\cal P}_+\,\epsilon$} must be fulfilled in order for \refer{susytrafos} to be a symmetry of the action. In that case the theory takes the form of a gauge theory coupled to matter. The pure gauge theory is associated with the Dirichlet and the matter with the Neumann (bosonic and fermionic) coordinates. In the case of a `membrane D-$9$-brane' one now sees that the degrees of freedom on the `end-of-the world' $9$-brane precisely match those of 10-dimensional heterotic strings. {\it On} the boundary we are left with eight propagating bosons $X^m$ (with $m=2, \ldots,9$), as $X^{10}$ is constant on the boundary due to \refer{boundcond}, paired with the 8-dimensional chiral spinors $\theta$ (subject to $\g_+ \theta= {\cal P}_-\theta=0$), i.e., the scenario of Ho\u{r}ava-Witten \cite{horvw}. The full equivalence with the membrane Hamiltonian is now established by choosing the \tmath{\w_r=0} gauge and passing to the Hamiltonian formalism. 
The field equations for $\w_r$ then lead to the membrane constraint \refer{delxminus} (up to exact contributions), partially defining \tmath{X^-}. Moreover the Hamiltonian corresponding to the gauge theory Lagrangian of \refer{gtlagrangian} is nothing but the light-cone supermembrane Hamiltonian \refer{memham}. Observe that in the above gauge theoretical construction the space-sheet metric $w_{rs}$ enters only through its density $\sqrt{w}$ and hence vanishing or singular metric components do not pose problems. We are now in a position to study the full 11-dimensional supersymmetry algebra of the winding supermembrane. For this we decompose the supersymmetry charge $Q$ associated with the transformations \refer{susytrafos}, into two 16-component spinors, \begin{equation} Q= Q^+ + Q^- , \quad \mbox{where}\quad Q^\pm = \ft{1}{2}\, \g_\pm\,\g_\mp\, Q, \ee{Qdecomposition} to obtain \begin{eqnarray} Q^+&=&\int {\rm d}^2 \s \, \Big(\, 2\, P^a\, \g_a + \sqrt{w}\, \{\, X^a, X^b\, \} \, \g_{ab}\, \Big) \, \q \,, \nn \\ Q^-&=& 2\, P^+_0\, \int {\rm d}^2\s\, \sqrt{w}\, \g_-\, \q . \la{Q-cont} \end{eqnarray} The canonical Dirac brackets are derived by the standard methods and read \begin{eqnarray} (\, X^a(\s), P^b(\s^\prime)\, )_{\mbox{\tiny DB}} &=& \d^{ab}\, \d^2(\s-\s^\prime)\,, \nn\\ (\, \q_\a(\s), \bar{\q}_\b(\s^\prime)\, )_{\mbox{\tiny DB}} &=& \nonumber\\ &&\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\stupidskip \ft{1}{4}\,(P^+_0)^{-1} \,w^{-1/2} \, (\g_+)_{\a\b}\,\d^2(\s-\s^\prime)\,. \end{eqnarray} In the presence of winding the results given in \cite{dWHN} yield the supersymmetry algebra \begin{eqnarray} \la{contsusy} (\, Q^+_\a, \bar{Q}^+_\b\, )_{\mbox{\tiny DB}} &=& 2\, (\g_+)_{\a\b}\, {\cal H}\nonumber\\ &&\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! - 2\, (\g_a\, \g_+)_{\a\b}\, \int{\rm d}^2\s\, \sqrt{w}\, \{\, X^a, X^-\,\}\, , \nn \\ (\, Q^+_\a, \bar{Q}^-_\b\, )_{\mbox{\tiny DB}} &=& -(\g_a\,\g_+\,\g_- )_{\a\b}\, P^a_0 \nonumber\\ &&\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! - \ft{1}{2}\,(\g_{ab}\, \g_+\g_- )_{\a\b}\, \int {\rm d}^2\s\,\sqrt{w}\, \{\, X^a,X^b\,\}\,,\nn\\ (\, Q^-_\a, \bar{Q}^-_\b\, )_{\mbox{\tiny DB}} &=& -2\, (\g_- )_{\a\b}\, P^+_0\, , \end{eqnarray} where use has been made of the defining equation \refer{delxminus} for $X^-$. The new feature of this supersymmetry algebra is the emergence of the central charges in the first two anticommutators, which are generated through the winding contributions. They represent topological quantities obtained by integrating the winding densities \begin{equation} z^{a}(\s)=\e^{rs}\,\del_r X^a\,\del_s X^- \end{equation} and \begin{equation} z^{ab}(\s) =\e^{rs}\,\del_r X^a\,\del_s X^b \end{equation} over the space-sheet. It is gratifying to observe the manifest Lorentz invariance of \refer{contsusy}. Here we should point out that, in adopting the light-cone gauge, we assumed that there was no winding for the coordinate $X^+$. In \cite{BSS} the corresponding algebra for the matrix regularization was studied. The result obtained in \cite{BSS} coincides with ours in the large-$N$ limit, in which an additional longitudinal five-brane charge vanishes, provided that one identifies the longitudinal two-brane charge with the central charge in the first line of \refer{contsusy}. This requires the definition of $X^-$ in the matrix regularization, a topic that was dealt with in \cite{dWMN}. 
We observe that the discrepancy noted in \cite{BSS} between the matrix calculation and certain surface terms derived in \cite{dWHN} seems to have no consequences for the supersymmetry algebra. A possible reason for this could be that certain Schwinger terms have not been treated correctly in the matrix computation, as was claimed in a recent paper \cite{EMM}. The form of the algebra is another indication of the consistency of the supermembrane-supergravity system. In order to define a matrix approximation one introduces a complete orthonormal basis of functions $Y_A(\s)$ for the globally defined $\x(\s)$ of \refer{exact}. One may then write down the following mode expansions for the phase space variables of the supermembrane, \begin{eqnarray} \del_r{\bf X}(\s) &=& {\bf X}^\l\, \f_{(\l)\, r} + \sum_A {\bf X}^A\, \del_r Y_A(\s),\nn \\ {\bf P}(\s) &=& \sum_A \sqrt{w}\, {\bf P}^A\, Y_A(\s), \nn\\ \q(\s) &=& \sum_A \q^A\, Y_A(\s) , \la{modeexp} \end{eqnarray} introducing winding modes for the transverse $X^a$. A similar expansion exists for $X^-$. One then naturally introduces the structure constants of the group of area-preserving diffeomorphisms by \cite{dWMN} \begin{eqnarray} f_{ABC} &=& \int {\rm d}^2\s\, \e^{rs}\,\del_r Y_A\, \del_s Y_B\, Y_C\,, \nn \\ f_{\l BC} &=& \int {\rm d}^2\s\, \e^{rs}\,\f_{(\l)\, r}\, \del_s Y_B\, Y_C\,, \nn \\ f_{\l {\l^\prime} C} &=& \int {\rm d}^2\s\, \e^{rs}\,\f_{(\l)\, r}\, \f_{({\l^\prime})\, s}\, Y_C \, . \end{eqnarray} Note that with $Y_0=1$, we have \tmath{f_{AB0}=f_{\l B0}=0}. The raising and lowering of the $A$ indices is performed with the invariant metric \begin{equation} \h_{AB}= \int {\rm d}^2\s\, \sqrt{w}\, Y_A(\s)\, Y_B(\s) \end{equation} and there is no need to introduce a metric for the $\l$ indices. By plugging the mode expansions \refer{modeexp} into the Hamiltonian \refer{memham} one obtains the decomposition \begin{eqnarray} {\cal H} &=& \ft{1}{2}\, {\bf P}_0\cdot{\bf P}_{0}\nonumber\\ &&+ \ft{1}{4}\, {f_{\l {\l^\prime}}}^0\, f_{{\l^{\prime\prime}} {\l^{\prime\prime\prime}} 0}\, X^{a\, \l}\, X^{b\, {\l^\prime}}\, X^{\l^{\prime\prime}}_a\, X^{\l^{\prime\prime\prime}}_b \nn \\ &&+ \ft{1}{2}\, {\bf P}^{A}\cdot {\bf P}_{A} - f_{ABC}\, \bar{\q}^C\, \g_-\,\g_a\, \q^B\, X^{a\, A} \nonumber\\ && - f_{\l BC}\, \bar{\q}^C\, \g_-\,\g_a\,\q^B\, X^{a\, \l} \nn\\ && + \ft{1}{4}\, {f_{AB}}^E\, f_{CDE}\, X^{a\, A}\, X^{b\, B}\, X^C_a\, X^D_b \nn\\ &&+ {f_{\l B}}^E\, f_{CDE}\, X^{a\, \l}\, X^{b\, B}\, X^C_a\, X^D_b \nn\\ && +\ft{1}{2}\, {f_{\l B}}^E\, f_{{\l^\prime} DE}\, X^{a\, \l}\, X^{b\, B}\, X^{\l^\prime}_a\, X^D_b \nn\\ &&+\ft{1}{2}\, {f_{\l B}}^E\, f_{C {\l^\prime} E}\, X^{a\, \l}\, X^{b\, B}\, X^C_a\, X^{\l^\prime}_b \nn\\ && + \ft{1}{2}\, {f_{\l {\l^\prime}}}^E\, f_{C DE}\, X^{a\, \l}\, X^{b\, {\l^\prime}}\, X^C_a\, X^D_b \nn\\ &&+ {f_{\l {\l^\prime}}}^E\, f_{{\l^{\prime\prime}} DE}\, X^{a\, \l}\, X^{b\, {\l^\prime}}\, X^{\l^{\prime\prime}}_a\, X^D_b \nn\\ && + \ft{1}{4}\, {f_{\l {\l^\prime}}}^E\, f_{{\l^{\prime\prime}} {\l^{\prime\prime\prime}} E}\, X^{a\, \l}\, X^{b\, {\l^\prime}} \, X^{\l^{\prime\prime}}_a\, X^{\l^{\prime\prime\prime}}_b, \la{memhammode} \end{eqnarray} where here and henceforth we spell out the zero-mode dependence explicitly, i.e.\ the range of values for $A$ no longer includes $A=0$. Note that for the toroidal supermembrane \tmath{f_{\l{\l^\prime} A}=0} and thus the last three terms in \refer{memhammode} vanish. The second term in the first line represents the winding number squared.
We have scaled the Hamiltonian by a factor of $P^+_0$ and the fermionic variables by a factor $(P^+_0)^{-1/2}$. Supercharges will be rescaled as well, so as to eliminate explicit factors of $P^+_0$. The constraint equation \refer{delxminus} is translated into mode language by contracting it with \tmath{\e^{rs}\, \f_{(\l)\, s}} and \tmath{\e^{rs}\, \del_s Y_C}, respectively, and integrating the result over the space-sheet to obtain the two constraints \begin{eqnarray} \vf_\l &=& f_{\l{\l^\prime} 0}\,(\, {\bf X}^{\l^\prime}\cdot{\bf P}_0 +X^{-\, {\l^\prime}}\, P^+_0\, ) \nonumber\\ &&\quad+ f_{\l{\l^\prime} C}\, {\bf X}^{\l^\prime}\cdot{\bf P}^C \nn \\ &&\quad+ f_{\l BC}\,(\, {\bf X}^B\cdot{\bf P}^C +\bar{\q}^C\,\g_-\, \q^B\, ) =0 ,\nn \\ \vf_A &=& f_{ABC}\, (\, {\bf X}^B\cdot{\bf P}^C + \bar{\q}^C\,\g_-\,\q^B\, ) \nonumber\\ &&\quad + f_{A\l C} \, {\bf X}^\l\cdot{\bf P}^C =0 , \end{eqnarray} taking also possible winding in the $X^-$ direction into account. Note that even for the non-winding case \tmath{X^{a\,\l}=0} there remain the extra $\vf_\l$ constraints. These have so far not played any role in the matrix formulation. The zero-mode contributions completely decouple in the Hamiltonian and the supercharges. We thus perform a split in $Q^+$, treating zero modes and fluctuations separately, to obtain the mode expansions, \begin{equation} Q^- = 2\,\g_-\, \q^0 \,, \qquad Q^+ = Q^+_{(0)} + {\widehat{Q}}^+\,, \ee{split} where \begin{eqnarray} Q^+_{(0)} &=& \Bigl (\,2\, P^a_0\,\g_a + f_{\l{\l^\prime} 0}\, X^{a\,\l}\, X^{b\,{\l^\prime}}\, \g_{ab}\, \Bigr )\, \q_0 \,, \nn \\ \widehat{Q}^+ &=& \Bigl (\, 2\, P^a_C\,\g_a + f_{ABC}\, X^{a\, A}\, X^{b\, B}\,\g_{ab} \nn\\ &&\quad + 2\, f_{\l BC}\, X^{a\, \l}\, X^{b\, B}\,\g_{ab}\nn\\ &&\quad + f_{\l{\l^\prime} C}\, X^{a\, \l}\, X^{b\, {\l^\prime}}\,\g_{ab}\, \Bigr )\, \q^C \,. \end{eqnarray} Upon introducing the supermembrane mass operator by \begin{equation} {\cal M}^2=2\, {\cal H} - {\bf P}_0\cdot{\bf P}_{0}- \ft{1}{2}\,( f_{\l{\l^\prime} 0}\, X^{a\, \l}\, X^{b\, {\l^\prime}})^2 , \ee{mass2} the supersymmetry algebra \refer{contsusy} then takes the form \begin{eqnarray} \{{\widehat{Q}}{}^+_\a, \bar{\widehat{Q}}{}^+_\b\, \} &=& (\g_+)_{\a\b} \, {\cal M}^2 -2\, (\g_a\,\g_+ )_{\a\b}\, f_{\l{\l^\prime} 0}\, X^{a\,\l}\, ( X^{-\, {\l^\prime}}\, P^+_0 + {\bf X}^{\l^\prime}\cdot {\bf P}_0 ) \,, \label{s1}\\ \{ Q^+_{(0)\,\a }, \bar{Q}^+_{(0)\,\b }\, \} &=& (\g_+)_{\a\b}\,\Big(\, {\bf P}_0\cdot{\bf P}_{0} +\ft{1}{2}\, (f_{\l{\l^\prime} 0}\,X^{a\, \l}\, X^{b\, {\l^\prime}})^2\Big) +2\, (\g_a\,\g_+ )_{\a\b}\, f_{\l{\l^\prime} 0}\, X^{a\,\l}\, {\bf X}^{\l^\prime}\cdot {\bf P}_0 \,, \label{w3}\\ \{ Q^+_{(0)\,\a } , \bar{Q}^-_\b\, \} &=& -(\g_a\,\g_+\,\g_-)_{\a\b}\, P^a_0 - \ft{1}{2}\, (\g_{ab}\,\g_+\, \g_-)_{\a\b}\, f_{\l{\l^\prime} 0}\, X^{a\, \l}\, X^{b\, {\l^\prime} }\, , \label{s4}\\ \{ {\widehat{Q}}^+_\a, \bar{Q}^-_\b\, \} &=& \{\, Q^+_{(0)\,\a } \, , \bar{\widehat{Q}}{}^+_\b\, \} = 0 \,. \la{modesusy} \end{eqnarray} The mass operator commutes with all the supersymmetry charges, \begin{equation} [\, {\widehat{Q}}^+, {\cal M}^2\, ] =[\, Q_{(0)}^+, {\cal M}^2\, ] = [\, Q^-, {\cal M}^2\, ] = 0 , \ee{QswithM2} defining a supersymmetric quantum-mechanical model.
At this stage it would be desirable to present a matrix model regularization of the supermembrane with winding contributions, generalizing the matrix approximation to the exact subgroup of area-preserving diffeomorphisms \cite{GoldstoneHoppe,dWHN}, at least for toroidal geometries. However, this program seems to fail because the finite-$N$ approximation to the structure constants \tmath{f_{\l BC}} violates the Jacobi identity, as was already noticed in \cite{dWMN}. Let us nevertheless discuss an example of this regularization in the case of non-winding, open supermembranes in some detail. Consider a space-sheet with the topology of an annulus. Its set of basis functions is easily obtained by starting from the well-known torus functions $Y_{\bf m}(\sigma)$ \cite{dWHN}, labeled by a two-dimensional vector ${\bf m}=(m_1,m_2)$ with $m_1,m_2$ integers, \begin{equation} Y_{\bf m}(\sigma_1,\sigma_2)=e^{i\, (m_1\sigma_1+m_2\sigma_2)}. \ee{Ys} Consider now the involution ${\bf m}\rightarrow {\bf \widetilde{m}}$ where ${\bf \widetilde{m}}=(m_1, -m_2)$. One then defines the new basis functions \cite{kimrey,EMM2} \begin{equation} C^\pm_{\bf m}= Y_{\bf m} \pm Y_{\bf \widetilde{m}}, \ee{Ces} where $\s_1\in[0,2\pi]$ and $\s_2\in[0,\pi]$. It turns out that $C^+_{\bf m}$ obeys Neumann and $C^-_{\bf m}$ Dirichlet conditions on the boundaries, i.e., $\del_2C^+_{\bf m}|=C^-_{\bf m}|=0$. These basis functions possess the algebra \begin{eqnarray} \{ C^-_{\bf m},C^-_{\bf n} \}&=& i ({\bf m}\! \times\! {\bf n})\, C^-_{\bf m+n} -i({\bf m} \!\times\! {\bf \widetilde{n}})\, C^-_{\bf m+\wtd{n}}\,, \nn\\ \{ C^+_{\bf m},C^+_{\bf n} \}&=& i ({\bf m}\! \times\! {\bf n})\, C^-_{\bf m+n} +i({\bf m} \!\times\! {\bf \widetilde{n}})\, C^-_{\bf m+\wtd{n}}\,,\nn \\ \{ C^+_{\bf m},C^-_{\bf n} \}&=& i ({\bf m}\! \times\! {\bf n})\, C^+_{\bf m+n} -i({\bf m} \!\times\! {\bf \widetilde{n}})\, C^+_{\bf m+\wtd{n}} \, . \nn\\&&\la{Calg} \end{eqnarray} Note that the $C^-_{\bf m}$ form a closed subalgebra. The matrix regularization now comes about by replacing $Y_{\bf m}$ with the $(N^2-1)$ adjoint SU($N$) matrices $T_{\bf m}$ of \cite{dWHN}. The operation corresponding to ${\bf m}\rightarrow{\bf \wtd{m}}$ is matrix transposition, i.e. $T_{\bf \wtd{m}}=T_{\bf m}^{\rm T}$. Hence we find that, in the matrix picture, the antisymmetric $(N(N-1)/2)$ $C^-_{\bf m}$ matrices form the adjoint representation and the symmetric $(N(N+1)/2)$ $C^+_{\bf m}$ transform as the symmetric rank-two representation of SO($N$).
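These statements can be made concrete with the standard clock-and-shift realization of the $T_{\bf m}$. The sketch below (our own illustration; the phase and normalization conventions are one common choice and need not coincide with those of \cite{dWHN}) checks the basic commutator $[T_{\bf m},T_{\bf n}]=-2i\sin\big(\pi({\bf m}\times{\bf n})/N\big)\,T_{\bf m+n}$ and the identification of the involution with transposition:
\begin{verbatim}
import numpy as np

N = 7
w = np.exp(2j*np.pi/N)
U = np.diag(w**np.arange(N))             # clock matrix
V = np.roll(np.eye(N), -1, axis=0)       # shift matrix: V U = w U V

def T(m):
    # T_m = w^(m1 m2/2) U^m1 V^m2 (one standard convention)
    return w**(m[0]*m[1]/2)*(np.linalg.matrix_power(U, m[0] % N)
                             @ np.linalg.matrix_power(V, m[1] % N))

m, p = (1, 2), (2, -1)
cross = m[0]*p[1] - m[1]*p[0]
lhs = T(m) @ T(p) - T(p) @ T(m)
rhs = -2j*np.sin(np.pi*cross/N)*T((m[0]+p[0], m[1]+p[1]))
print(np.allclose(lhs, rhs))                  # True
print(np.allclose(T(m).T, T((m[0], -m[1]))))  # True: transposition is m -> m~
\end{verbatim}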
Finally we turn to the question of the mass spectrum for membrane states with winding. The mass spectrum of the supermembrane without winding is continuous. This was proven in the SU($N$) regularization \cite{dWLN}. Whether or not nontrivial zero-mass states exist is not known (for some discussion of these questions, we refer the reader to \cite{DWN}). Such states would coincide with the states of 11-dimensional supergravity. It is often argued that the winding may remove the continuity of the spectrum (see, for instance, \cite{Russo}). There is no question that winding may increase the energy of the membrane states. A membrane winding around more than one compact dimension gives rise to a nonzero central charge in the supersymmetry algebra. This central charge sets a lower limit on the membrane mass. However, this should not be interpreted as an indication that the spectrum becomes discrete. The possible continuity of the spectrum hinges on two features. First, the system must possess continuous valleys of classically degenerate states. Qualitatively one recognizes immediately that this feature is not directly affected by the winding. A classical membrane with winding can still have stringlike configurations of arbitrary length, without increasing its area. Hence the classical instability still persists. The second feature is supersymmetry. Generically the classical valley structure is lifted by quantum-mechanical corrections, so that the wave function cannot escape to infinity. This phenomenon can be understood on the basis of the uncertainty principle. Because, at large distances, the valleys become increasingly narrow, the wave function will be squeezed more and more, which tends to induce an increasing spread in its momentum. This results in an increase of the kinetic energy. Another way to understand this is by noting that the transverse oscillations perpendicular to the valleys give rise to a zero-point energy, which acts as an effective potential barrier that confines the wave function. When the valley configurations are supersymmetric, the contributions from the bosonic and the fermionic transverse oscillations cancel each other, so that the wave function will not be confined and can extend arbitrarily far into the valley. This phenomenon indicates that the energy spectrum must be continuous. Without winding it is clear that the valley configurations are supersymmetric, so that one concludes that the spectrum is continuous. With winding the latter aspect is somewhat more subtle. However, we note that, when the winding density is concentrated in one part of the space-sheet, valleys can emerge elsewhere, corresponding to stringlike configurations with supersymmetry. Hence, viewed as a local field theory on the space-sheet, supersymmetry can be broken in the region where the winding is concentrated and unbroken in another. In the latter region stringlike configurations can form, which, at least semiclassically, will not be suppressed by quantum corrections. We must stress that we are describing only the generic features of the spectrum. Our arguments by no means preclude the existence of mass gaps. To prove or disprove the existence of discrete states is extremely difficult. While the contribution of the bosonic part of the Hamiltonian increases by concentrating the winding density on part of the space-sheet, the matrix elements in the fermionic directions will also grow large, making it difficult to estimate the eigenvalues. At this moment the only rigorous result is the BPS bound that follows from the supersymmetry algebra. Obviously, the state of lowest mass for given winding numbers is always a BPS state, which is invariant under some residual supersymmetry.
\section*{Acknowledgments} I would like to thank Professor Pawe{\l } O. Mazur for bringing to my attention the work of Professor A. Staruszkiewicz that started my interest in nonlinear modifications of the Schr\"{o}dinger equation and Professor G. A. Goldin for his interest in this paper. A correspondence with Professor Wolfgang L\"{u}cke concerning the problem of separability in nonlinear quantum mechanics and a stimulating exchange of correspondence with Dr. Marek Czachor about many issues of nonlinear quantum mechanics are also gratefully acknowledged. This work was partially supported by the NSF grant No. 13020 F167 and the ONR grant R\&T No. 3124141. \bigskip
\section{Introduction} In this paper we examine the small but well-known class of double quasars where a quasar image has apparently been split into two by the gravitational lensing effect of a galaxy mass object along the line of sight. In most cases the lensing galaxy is not detectable, and the tight limits on the mass-to-light ratios suggest that the galaxies may be `dark' in the sense that they have failed to form stars. This could be because they do not contain baryonic material, or more plausibly because the conditions for extensive star formation are not present. In order to establish the existence of the dark galaxies, it is necessary to make the case that the double quasars are lensed and not binary systems, and that the lensing galaxy is truly dark and not concealed in some way. For any particular quasar pair it is sometimes possible to argue special circumstances which circumvent the need to invoke a dark galaxy. In this paper we use the statistical properties of the known systems to argue that there is indeed a population of dark galaxies to be explained. \section{The quasar sample} At present there are 8 double quasars known with a separation greater than 2 arcseconds that are plausible lens systems (Walsh et al. 1979; Surdej et al. 1987; Weedman et al. 1982; Djorgovski \& Spinrad 1984; Meylan \& Djorgovski 1989; Hewett et al. 1989; Wisotzki et al. 1993; Hawkins et al. 1997). The properties of these systems are summarised in Table 1, and include the redshift, the separation in arcseconds, the magnitudes of the two components in the $R$ band, and the velocity difference. The last 3 columns refer to the lens and are discussed later. It will be seen that all systems have an \begin{center} \begin{table*} \caption[]{Parameters for double quasar systems} \begin{tabular}{ccccccrrlcr} Name&z&sep&m$_A$&m$_B$&$\delta$m&\multicolumn{2}{c}{$\delta$v}&R$_g$& M$_g$&M/L\\ &&($''$)&&&&\multicolumn{2}{c}{(km sec$^{-1}$)}&&($10^{12}M_{\odot}$)&\\ &&&&&&&&&\\ Q0957$+$561& 1.41 & 6.1 & 16.6 & 17.0 & 0.4 & 3 & $ \pm$14&18.5&1.8& 26\\ Q0142$-$100& 2.72 & 2.2 & 16.9 & 19.1 & 2.2 & -24 & $\pm$109&19 &0.2& 4\\ &&&&&&&&&\\ Q2345$+$007& 2.15 & 7.0 & 18.9 & 20.4 & 1.5 & 15 & $ \pm$20&26 &2.6& 22000\\ Q1635$+$267& 1.96 & 3.8 & 19.1 & 20.7 & 1.6 & 41 & $ \pm$54&23.5&0.7& 590\\ Q1120$+$019& 1.46 & 6.5 & 16.5 & 21.2 & 4.7 & 200 & $\pm$100&23 &2.0& 1100\\ Q1429$-$008& 2.08 & 5.1 & 17.7 & 20.8 & 3.1 & 260 & $\pm$300&24 &1.3& 1700\\ Q1104$-$180& 2.30 & 3.0 & 16.7 & 18.6 & 1.9 & 0 & $ \pm$90&24 &0.4& 540\\ Q2138$-$431& 1.64 & 4.5 & 19.8 & 21.0 & 1.2 & 0 & $\pm$115&23.8&1.0& 1100\\ \end{tabular} \end{table*} \end{center} image separation less than 8 arcseconds. The surface density of quasars to a magnitude limit of $B = 21$ is about 30 per square degree (Hawkins \& V\'{e}ron 1995). This implies that the probability of a given quasar having a companion at a distance of $2 - 7$ arcseconds is in the range $10^{-4} - 10^{-5}$. Thus in a typical search of 1000 candidates there is a probability of 1\% to 10\% of finding a close pair by chance, not a particularly unlikely outcome. This will be made more likely by the effects of clustering, and less likely by the additional requirement for the redshifts to be the same in a lensed system. Various selection effects will further change the probability, but it seems unlikely that random associations can be convincingly ruled out on statistical grounds in any particular case (see for example Hawkins et al. 1997).
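The order of magnitude of this estimate is easy to verify. The sketch below (our own arithmetic, using only the figures quoted above; the additional suppression from requiring matching magnitudes and redshifts is not modelled) computes the chance of an unrelated companion in a $2-7$ arcsecond annulus:
\begin{verbatim}
import math

sigma = 30.0/3600.0**2              # quasars per square arcsec (B < 21)
area = math.pi*(7.0**2 - 2.0**2)    # 2-7 arcsec annulus, ~141 arcsec^2
print(sigma*area)                   # ~3e-4, the top of the quoted range
for p in (1e-4, 1e-5):              # quoted range after additional cuts
    print(1.0 - (1.0 - p)**1000)    # ~10% and ~1% per 1000 candidates
\end{verbatim}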
\section{Tests for gravitational lensing} The existence of the small sample of lens candidates in Table 1 makes it possible to carry out a different test. The distribution of separations is shown in Fig. 1(a). If the double quasars are chance associations, one would expect the histogram of separations to increase linearly. Selection effects will tend to increase this trend, as close pairs are typically the hardest to find. In fact the distribution falls to zero beyond 8 arcsecs, even though most surveys are aimed at detecting associations to at least 20 arcsecs (Webster et al. 1988; Reimers 1990; Sramek \& Weedman 1978). At greater distances the probability of chance coincidences is no longer negligible, although as Schneider (1993) points out it is perhaps surprising that even now very few other quasars with similar redshifts are known with separations less than two arcminutes. If there are indeed no systematic effects in the selection, then the hypothesis that the observed separations are consistent with chance associations can be rejected by a Kolmogorov-Smirnov test at a completely negligible probability level. If one adopts a correlation function (Collins et al. 1992) of the form $r^{-0.7}$ this flattens the expected relation to $r^{0.3}$, but this is still inconsistent with the data at a very high confidence level. \begin{figure} \hspace*{0.5cm} \psfig{figure=fig1.eps,height=11cm,angle=0} \caption[]{Histograms of the properties of the double quasar systems in Table 1. The top panel shows the distribution of separations of the two components and the bottom shows the distribution of redshifts.} \end{figure} If on the other hand quasars are gravitationally lensed, the separation is basically determined by the mass of the lens. To produce the largest separation of about 7 arcseconds a lensing mass of $3 \times 10^{12}M_{\odot}$ is required, similar to that of the most massive galaxies known. This statistical argument is only modified for binary quasars formed in the same protogalactic environment if there is some sort of mutual triggering mechanism for quasar activity, an idea which at present remains speculative. Fig. 1(b) shows a histogram of redshifts of the quasars in Table 1. It will be seen that the distribution is highly asymmetric. All 8 quasars have a redshift $z > 1.4$, in contrast to the expected distribution, which is almost flat (Hewett et al. 1993). Although a proper statistical test is difficult to carry out as several different search procedures were used (Webster et al. 1988; Reimers 1990; Hawkins et al. 1997; Sramek \& Weedman 1978) with possibly different redshift biases, none of them would appear to discriminate significantly against low redshift objects, and there is no lack of single low redshift quasars. On the other hand, the observed redshift distribution agrees well with that expected on the basis of gravitational lensing probabilities (Turner 1990), which drop sharply below a redshift $z = 1$. The velocity difference between the two components in all the systems in Table 1 is consistent with zero, and in all but two cases has an upper limit of about 100 km sec$^{-1}$. The observed pairwise velocity dispersion of galaxies is about 400 km sec$^{-1}$ (Ratcliffe et al. 1996). This is much larger than the observed limits, but must be reduced to allow for evolutionary effects. This test can be improved considerably when tighter limits can be placed on the velocity differences.
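The quoted lensing mass can be recovered from the Einstein radius of a point lens, $\theta_E^2 = (4GM/c^2)\,D_{ls}/(D_l D_s)$. The sketch below (our own order-of-magnitude check; it assumes a lens at $z=0.5$, a source at $z=2$, an Einstein-de Sitter geometry and $H_{0} = 50$ km sec$^{-1}$ Mpc$^{-1}$, consistent with the assumptions made later in the paper) reproduces a mass of a few $\times 10^{12}M_{\odot}$ for a 7 arcsecond image separation:
\begin{verbatim}
import math

c, G = 2.998e8, 6.674e-11            # SI units
Mpc, Msun = 3.086e22, 1.989e30
DH = c/(50e3/Mpc)                    # Hubble distance for H0 = 50

def D_A(z1, z2):
    # Einstein-de Sitter angular diameter distance from z1 to z2
    return (2*DH/(1 + z2))*(1/math.sqrt(1 + z1) - 1/math.sqrt(1 + z2))

zl, zs = 0.5, 2.0
Dl, Ds, Dls = D_A(0, zl), D_A(0, zs), D_A(zl, zs)
theta_E = 3.5*math.pi/(180*3600)     # half of a 7 arcsec separation, in rad
M = theta_E**2*c**2*Dl*Ds/(4*G*Dls)
print(M/Msun)                        # ~4e12, of order the quoted mass
\end{verbatim}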
The claims that systems in the bottom part of Table 1 are lensed rest largely on detailed comparisons of the spectra of the two components (Meylan \& Djorgovski 1989; Hewett et al. 1989; Turner et al. 1988; Steidel \& Sargent 1991; Wisotzki et al. 1995). It is generally accepted that small differences in the continuum and absorption line systems can be accommodated within the lensing picture as effects of time delay, microlensing and different light paths for the two components. Nonetheless, spectra of the two components are found to be very similar, and the emission line systems all but identical. This raises the question of how similar one might expect two arbitrary quasars at the same redshift to be. Although it is still difficult to answer this question properly (Turner et al. 1988), one can quite easily compare the colours of the two components with a sample of quasars at the same redshift. There are two systems in Table 1 which have accurate multicolour CCD photometry, and these are plotted in Fig. 2 together with all quasars with a similar redshift from the survey of Hawkins \& V\'{e}ron (1995). Although the two components of the double quasars have identical colours within the measurement errors, there is a wide spread in colour for other quasars at the same redshift. \section{Lensing by dark galaxies?} Taking together the individual analysis of each double quasar with the ensemble properties discussed here, there appears to be an overwhelming case that the systems in Table 1 are gravitationally lensed. This then raises the question of the nature of the lensing objects. For the first two systems, lensing galaxies are clearly visible at $z = 0.39$ and $z = 0.45$, near the most probable value (Turner et al. 1984). The lensing object lies close to the fainter component, as expected for lensed systems. For the remainder, in spite of intense efforts (e.g. Tyson et al. 1986) no lensing galaxies have been found. Limits of $R > 23$ to $R > 26$ have been put on the magnitude of any possible lens, implying mass-to-light ratios in excess of 500 $M_{\odot}/L_{\odot}$. The last 3 columns in Table 1 show the R-band magnitude limit placed on a possible lensing galaxy, the mass of the lens and the resulting mass-to-light ratio, assuming $H_{0} = 50$ km sec$^{-1}$ Mpc$^{-1}$. The mass of the lens is based on the separation of the two components, and where a lensing galaxy is undetected assumes a lens at $z = 0.5$, the most probable redshift (Turner et al. 1984). The M/L can only be reduced significantly by putting the lens close to the quasar. This is a very unlikely configuration (Turner et al. 1984), and requires a large increase in the mass of the lens. The mass-to-light ratios in Table 1 are at least 10 to 100 times larger than for normal galaxies (White 1990), which can effectively be ruled out as lens candidates. It has been known for some time (Narayan et al. 1984) that diffuse mass distributions such as galaxy clusters can in principle produce multiple images of quasars. This can be done either by the cluster on its own, or in combination with a galaxy. In the first case fine tuning is required to produce separations of a few arcseconds, but this may be partly offset by the effects of amplification bias. One would also expect the closer quasar pairs to be brighter, a trend which is not evident in the systems in Table 1.
The presence of a galaxy between the two images combined with an increase in surface mass density from a cluster can produce a larger separation than would be seen from the galaxy alone, suggesting a larger mass-to-light ratio. This picture has been suggested as an explanation for the wide separation system Q2345+007 where shear has been detected and a candidate cluster is visible (Pell\'{o} et al. 1996). There should however be a third fainter quasar image between the two brighter images which has not so far been detected. \begin{figure*} \psfig{figure=fig2.eps,height=14cm,angle=270} \vspace*{-2.5cm} \caption[]{Two colour diagrams for quasars. The open circles are CCD measures of the two components of double systems, and the filled circles are photographic measures of other quasars with similar redshifts ($\delta z < 0.06$).} \end{figure*} It is hard to escape the conclusion that `dark galaxies', or perhaps dark matter galactic halos, are responsible for lensing 6 out of the 8 quasars in Table 1. If we accept that the quasar lenses represent a fair sample of galaxies we must conclude that 3 in 4 galaxies are dark, in the sense that they have a mass-to-light ratio of at least several hundred $M_{\odot}/L_{\odot}$. A mechanism for formation and evolution of such galaxies has recently been described by Jimenez et al. (1997). In most of the systems there is evidence for microlensing (Wisotzki et al. 1993; Hawkins et al. 1996; Steidel \& Sargent 1991; Schild 1996). This could be caused by stars or other compact objects in the lensing galaxy, but would require an optical depth to lensing approaching unity to produce a high probability of variation (Kayser et al. 1986; Schneider \& Weiss 1987). Thus the total dark matter content of the galaxy would have to be in the form of microlensing bodies (Kayser et al. 1986), and the stellar population would not be sufficient except perhaps close to the nucleus. Even if the lensing galaxy were entirely composed of microlensing bodies it typically lies close to the fainter of the two quasar images, and is not in a position to lens the brighter image. It is perhaps more plausible that the microlensing arises from a general distribution of dark matter bodies along the line of sight (Hawkins 1996), but either way it supports the idea of dark matter in the form of compact bodies. \section{Conclusions} In this paper we have defined a sample of double quasars which are plausible gravitational lens candidates. In each individual case earlier papers have made strong but not conclusive arguments that they are indeed gravitationally lensed systems. Here we consider the ensemble properties of these candidates from a statistical point of view and conclude that there is an overwhelming case that most if not all the quasars are lensed. 6 out of 8 of the systems contain no detectable lensing galaxy, implying a minimum mass-to-light ratio of several hundred, and suggesting that dark galaxies may outnumber normal ones by a substantial amount. Most of the double quasar systems show evidence for microlensing. If this is caused by compact bodies in the lensing galaxy it would imply that the dark halo was made up almost entirely of substellar compact bodies. A more plausible picture may be one in which the microlensing is taking place all along the line of sight, and double systems are no different from normal quasars in this respect.
\section{Introduction} The line strength indices ${ Mg_{2}} $, \mbox{${ \langle Fe \rangle}$}, \mbox{${ H_{\beta}}$}, etc. and their gradients are customarily used to infer the age and metallicity and their variations across galaxies. Furthermore, in elliptical galaxies, the gradients in ${ Mg_{2}} $\ and \mbox{${ \langle Fe \rangle}$}\ have different slopes (Fisher et al. 1995,1996, Carollo \& Danziger 1994a,b; Carollo et al. 1993, Davies et al. 1993), which is taken as a clue that $Mg$ (\mbox{$\alpha$-elements}\ in general) is enhanced with respect to $Fe$ toward the center. The possibility of inferring from ${ Mg_{2}} $\ and \mbox{${ \langle Fe \rangle}$}\ an enhancement in \mbox{$\alpha$-elements}\ rests on the notion that these two indices depend on age and the abundances of $Mg$ and $Fe$, and that age and abundance effects can somehow be disentangled. If this is the case, the implications are of paramount importance. It is worth recalling that according to the current nucleosynthesis scenario $Fe$ is mainly produced by Type Ia supernovae (accreting white dwarfs in binary systems in the most popular scheme) and in smaller quantities by Type II supernovae. In contrast, only Type II supernovae contribute to oxygen and \mbox{$\alpha$-elements}. Furthermore, as the mean lifetime of a binary system (Type Ia progenitors) is $\geq 1$ Gyr, the contamination by Type Ia supernovae occurs later than that by Type II supernovae. Finally, we expect the iron abundance $[Fe/H]$ and the $[\alpha/Fe]$ ratios to increase and decrease, respectively, as the galaxy ages. In standard models of galactic chemical evolution, i.e. constant initial mass function and supernova driven galactic winds (cf. Matteucci 1997 for a comprehensive review of the subject), this means that to obtain a galaxy (or region of it) enhanced in \mbox{$\alpha$-elements}\ the time scale of star formation there must be shorter than about 1 Gyr. This is a very demanding constraint on models of galaxy formation and evolution. It is worth clarifying that such a conclusion is largely independent of the IMF and galactic wind model in use, even if some {\it ad hoc} combinations of IMF and/or galactic wind model can be found in which enhancement of \mbox{$\alpha$-elements}\ is possible irrespective of the argument about the time scale of star formation. The reader is referred to the review article by Matteucci (1997) and the recent study by Chiosi et al. (1997) on an unconventional IMF for more details. In this paper, we address the question as to what extent the indices ${ Mg_{2}} $\ and \mbox{${ \langle Fe \rangle}$}\ (and their gradients) depend on age and chemical abundances, paying particular attention to the complex environment of a galaxy in which stars of many ages and chemical compositions are present. In other words, we seek to clarify how the past history of star formation, which manifests itself in the extant relative distribution of stars per metallicity bin [hereinafter the partition function $N(Z)$], affects the correspondence between indices, ages and abundances. To this aim, we will utilize the spherical models of elliptical galaxies developed by Tantalo et al. (1997), which take into account gradients in mass density, star formation rate, and chemical abundances (Section 2).
With the aid of these models, we predict how the gradients in $Mg$ and $Fe$ (and their ratio) translate into gradients in ${ Mg_{2}} $\ and \mbox{${ \langle Fe \rangle}$}, and check whether a gradient in ${ Mg_{2}} $\ steeper than a gradient in \mbox{${ \langle Fe \rangle}$}\ implies an enhancement of $Mg$ with respect to $Fe$ toward the center of these galaxies. We anticipate here that, while these models are indeed able to match many key properties of elliptical galaxies, including the gradients in broad band colors (see below), they lead to contradictory results as far as the gradients in line strength indices are concerned (Section 3). To understand the physical cause of this odd behaviour of the models, we check the calibration in use and the response of ${ Mg_{2}} $\ and \mbox{${ \langle Fe \rangle}$}\ to chemistry (Section 4). The reason for the contradiction resides in the dependence of the indices in question on the existing $N(Z)$, i.e. the past history of star formation. It will be shown that gradients in the indices ${ Mg_{2}} $\ and \mbox{${ \langle Fe \rangle}$}\ do not automatically correspond to gradients in the $Mg$ and $Fe$ abundances, and their ratios in particular (Section 4). In order to cast more light on this topic, in a way independent of the particular model of galactic evolution, we derive the above indices for single stellar populations (SSP) of different metallicity and age, and look at the possible combinations of these two parameters leading to the same values for the indices (Section 5). Finally, some concluding remarks are drawn in Section 6. \section{The reference model } {\it Sketch of the model. } Elliptical galaxies are supposed to be made of baryonic and dark matter, both with spherical distributions but different density profiles. While dark matter is assumed to have remained constant in time, the baryonic material (initially in the form of primeval gas) is supposed to have accreted at a suitable rate onto the potential well of the former. The rate at which the density of baryonic material grows with time at any galacto-centric distance is chosen in such a way that at the galaxy age $T_G$ it matches the radial density distribution derived by Young (1976) for spherical systems. The density profile of the dark matter is taken from Bertin et al. (1992) and Saglia et al. (1992), however adapted to the Young formalism for the sake of internal consistency. The mass of dark matter is taken in fixed proportions with respect to the luminous one, $ M_D=\theta \times M_L$ (all models below are for $\theta=5$). Dark matter only affects the gravitational potential. Given the total present day luminous mass $ M_L$ of the galaxy (hereinafter in units of $ 10^{12}\times M_{\odot}$ and shortly indicated by $ M_{L,12}$) and its effective radius $ R_{L,e}$, the model is divided into a number of spherical shells, with proper spacing in radius and mass. The luminous mass of each shell is written as \begin{displaymath} \Delta M_{L,S} = \overline \rho_L(r) \times \Delta V(r), \end{displaymath} where $\Delta V(r)$ is the volume of the shell and $\overline \rho_L(r)$ is the mean density of baryonic material. The inner and outer radii of each shell are chosen in such a way that, using the Young (1976) tabulations of the mean density as a function of the fractional radius $s=r/R_{L,e}$, the mass $\Delta M_{L,S}$ amounts to about 5\% of the total mass $M_{L,12}$.
The radial variation of the gravitational potential of dark and luminous mass can be easily derived from the above description. The reader is referred to Tantalo et al. (1997) for all other details. \vskip 6pt {\it Accretion rate. } The rate of collapse of luminous material (gas) is expressed as \begin{equation} \left[\frac{d{\rho_{L}}(r,t)}{dt}\right]_{inf} = \rho_{L0}(r) \times exp({-\frac{t}{\tau(r)}}) \label{drho} \end{equation} \noindent where $\tau(r)$ is the local time scale of gas accretion to be discussed below, and $\rho_{L0}(r)$ is fixed by imposing that at the present-day age of the galaxy $T_{G}$ the density of luminous material in each shell has grown to the value given by the Young profile. \vskip 6pt {\it Star formation rate.} This follows the standard Schmidt law \begin{equation} \left[\frac{d{\rho_{g}}(r,t)}{dt}\right]_{sf} = \nu(r) {\overline \rho_{g}}(r,t) \label{esfr} \end{equation} \noindent where $\overline \rho_{g}(r,t)$ is the mean local gas density and $\nu(r)$ is the specific efficiency of star formation. \vskip 6pt {\it Accretion time scale $\tau(r)$. } A successful description of the gas accretion phase is possible by adapting to galaxies the radial velocity law describing the final collapse of the core in a massive star (Bethe 1990), i.e. free-fall [$v(r) \propto r^{-\frac{1}{2}}$] in all regions external to a certain value of the radius $r^*$ and homologous collapse inside [$v(r) \propto r$]. This picture is also confirmed by Tree-SPH dynamical models of elliptical galaxies (cf. Carraro et al. 1997 and references therein). Let us cast the problem in a general fashion by expressing the velocity $v(r)$ as \begin{displaymath} v(r) = c_1 \times r^{\alpha} ~~~~~~~~~~ {\rm for}~~ r \leq ~~ r^* \end{displaymath} \begin{displaymath} v(r) = c_2 \times r^{- \beta} ~~~~~~~~ {\rm for}~~ r > ~~ r^* \end{displaymath} \noindent (where $c_1$, $c_2$, $\alpha$ and $\beta$ are suitable constants), and the time scale of accretion as \begin{displaymath} \tau(r) \propto { r \over v(r) } \end{displaymath} \noindent In the models below we adopt $\alpha=2$ (as suggested by the Tree-SPH calculations) and $\beta=0.5$ (as indicated by the core collapse analogy). The determination of the constants $c_1$ and $c_2$ is not strictly required as long as we seek scaling relationships. The time scale of gas accretion can be written as proportional to some arbitrary time scale, modulated by a correction term arising from the scaling law for the radial velocity. For the time scale base-line we can take the free-fall time scale $t_{ff}$ referred to the whole system, \begin{equation} \tau(r) = t_{ff} \times \frac{r^{*}}{r} ~~~~~~~~~~~~ { \rm if }~~ r \leq r^{*} \label{tau_ff_1} \end{equation} \begin{equation} \tau(r) = t_{ff} \times (\frac{r}{r^{*}})^{3/2} ~~~~~ { \rm if} ~~ r > r^{*} \label{tau_ff_2} \end{equation} \noindent For the free-fall time scale $t_{ff}$ we make use of the relation by Arimoto \& Yoshii (1987) \begin{equation} t_{ff} = 0.0697 \times M_{L,12}^{0.325} ~~~~~~~~~~~{ Gyr}. \label{tff} \end{equation} \noindent Finally, we take $r^*= {1 \over 2} R_{L,e}$. Other choices are obviously possible without changing the overall results of this study.
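The accretion law is fully specified by eqs. (\ref{tau_ff_1})-(\ref{tff}). As a minimal illustration (our own sketch; radii are in units of $R_{L,e}$ and the sample values are arbitrary):
\begin{verbatim}
import numpy as np

def tau(s, M_L12):
    # s = r/R_Le; free-fall-like outside r* = R_Le/2, homologous inside
    t_ff = 0.0697*M_L12**0.325       # Gyr, eq. (tff)
    s = np.asarray(s, dtype=float)
    rstar = 0.5                      # r* = R_Le/2
    return np.where(s <= rstar, t_ff*rstar/s, t_ff*(s/rstar)**1.5)

print(tau([0.05, 0.15, 0.5, 1.0, 2.0], 3.0))
# ~[1.0, 0.33, 0.1, 0.28, 0.8] Gyr: tau is smallest near r*
\end{verbatim}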
\vskip 6pt {\it Specific efficiency of star formation.} In order to derive the specific efficiency of star formation $\nu(r)$ we utilize the simple scale relations developed by Arimoto \& Yoshii (1987), however adapted to the density formalism. At the typical galactic densities ($10^{-22}$ - $10^{-24} $ g~cm$^{-3}$) and considering hydrogen as the dominant coolant (Silk 1977) the critical Jeans length is much smaller than the galactic radius, therefore the galaxy gas can be considered as made of many cloudlets whose radius is as large as the Jeans scale. If these clouds collapse nearly isothermally without suffering mutual collisions, they proceed through subsequent fragmentation processes until small opaque subunits (stars) are eventually formed. In such a case the stars are formed on the free-fall time scale. In contrast, if mutual collisions occur, they will be supersonic, giving rise to layers of highly cooled and compressed material; the Jeans scale falls below the thickness of the compressed layer, fragmentation occurs on the free-fall time scale of the high density layers, and finally the whole star forming process is driven by the collision time scale. On the basis of these considerations, we take the quantity \begin{equation} \sqrt{ \frac{1} {t_{ff} \times t_{col} } } \label{nu_star} \end{equation} \noindent as a measure of the net efficiency of star formation. \begin{figure} \psfig{file=tantalo_enha_fig1.ps,height=9.0truecm,width=8.5truecm} \caption{Temporal evolution of four abundance ratios: $[Fe/H]$ ({\em solid line}), $[Mg/Fe]$ ({\em dotted line}), $[O/Fe]$ ({\em dashed line}), and $[Si/Fe]$ ({\em long-dashed line}). The {\em dot-dashed line} shows the ratio $Z/$\mbox{${\rm Z_{\odot}}$}\ as a function of time. The abundance ratios are in the standard notation} \label{x_age} \end{figure} Let us express $\nu(r)$ as the product of a suitable yet arbitrary specific efficiency $\nu^*$ referred to the whole galaxy and a dimensionless quantity $F(r)$ describing how the quantity of eq. (\ref{nu_star}) varies with the galacto-centric distance. An obvious expression for $F(r)$ is the quantity (\ref{nu_star}) normalized to its central value. According to Arimoto and Yoshii (1987) the mean collision time scale referred to the whole galaxy can be expressed as \begin{equation} t_{col} = 0.0072 \times M_{L,12}^{0.1} ~~~~~~~~~~~~~~Gyr \label{t_jeans} \end{equation} \vskip 6pt With the aid of this and the relation for the free-fall time scale above, we can first calculate $\nu^*$ \begin{equation} \nu^* = \left[ \sqrt{\frac{1}{t_{ff} \times t_{col}}} \right]_{gal} \label{nu_star_def} \end{equation} Extending by analogy the definition of free-fall and collision time scale to each individual region, we get \begin{equation} F(r) = \left( { r_{c} \over r } \right) ^{3 \gamma } \times \left[\frac{ \overline{\rho}_{g}(r_{c},T_{G}) } {\overline{\rho}_{g}(r,T_{G}) } \right]^{\gamma} \label{fr} \end{equation} \noindent where $\overline \rho_g(r,T_G)$ is the mean gas density within the region of mid radius $r$ and $r_c$ is the mid radius of the innermost sphere. \begin{figure*}[] \psfig{file=tantalo_enha_fig2.ps,height=9.0truecm,width=17.0truecm} \caption{Gradients in colors and line strength indices for the galaxy NGC~6407 (Carollo \& Danziger 1994a). {\it Left Panel}: comparison with the theoretical gradients in (B--R) for models of different mass and same age (15 Gyr). {\it Right Panel}: comparison with the theoretical gradients in line strength indices for the $ 3 \times M_{L,12}$ model which has nearly the same $ M/L_{B}$ ratio as NGC~6407.
Finally, the observational data along major and minor axes are indicated by full and empty circles, respectively} \label{mod_car} \end{figure*} In principle, the exponent $\gamma$ could be derived from combining the mass dependence of $t_{ff}$ and $t_{col}$, i.e. $\gamma \simeq 0.2$. However, the many models calculated by Tantalo et al. (1997) show that in order to recover the observational gradients in broad band colors (and other properties of elliptical galaxies), the efficiency of star formation (i.e. $F(r)$ in our formulation) must vary with the radial distance more strongly than predicted by $\gamma=0.2$. The following relation is found to give good results \begin{equation} \gamma = 0.98\times M_{L,12}^{0.02} \label{alfa_nu} \end{equation} \noindent Finally, the total expression for $\nu(r)$ is \begin{equation} \nu(r) = \left[ \frac{1} {t_{ff} \times t_{col} } \right]_{gal}^{0.5} \times \left( { r_{c} \over r } \right) ^{3 \gamma } \times \left[\frac{ \overline{\rho}_{g}(r_{c},T_{G}) } {\overline{\rho}_{g}(r,T_{G}) } \right]^{\gamma} Gyr^{-1} \label{nu_tot} \end{equation}
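Eq. (\ref{nu_tot}) can be transcribed directly. In the sketch below (our own illustration) the mean gas-density ratio, which in the full models follows from the Young profile and the infall history, is left as a user-supplied function; the power law used here is only a placeholder:
\begin{verbatim}
import numpy as np

def nu(s, M_L12, sc, rho_ratio):
    # s = r/R_Le, sc = r_c/R_Le; rho_ratio(s) ~ rho_g(r_c)/rho_g(r)
    t_ff  = 0.0697*M_L12**0.325          # Gyr, whole-galaxy free fall
    t_col = 0.0072*M_L12**0.1            # Gyr, mean collision time scale
    nu_star = 1.0/np.sqrt(t_ff*t_col)    # Gyr^-1, eq. (nu_star_def)
    gamma = 0.98*M_L12**0.02             # eq. (alfa_nu)
    return nu_star*(sc/s)**(3*gamma)*rho_ratio(s)**gamma

rho_ratio = lambda s: (s/0.05)**2.5      # placeholder density run only
print(nu(np.array([0.05, 0.15, 0.5, 1.0]), 3.0, 0.05, rho_ratio))
# Gyr^-1; the radial trend depends entirely on the adopted density run
\end{verbatim}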
{\it Remark.} Before proceeding further, we would like to comment briefly on the apparent complexity of the model adopted to perform the analysis. First of all, the model has to be considered as a gross tool for understanding how gradients in star formation would affect gradients in metallicity and photometric properties. Secondly, the analysis itself is almost model-independent, as what matters here is to make use of a reasonably grounded formulation able to predict gradients in chemical abundances and in narrow band indices and look at the mutual correlation between them. \vskip 6pt {\it Galactic winds.} The models allow for galactic winds triggered by the energy deposit from Type I and II supernova explosions and stellar winds from massive stars. The formalism in use here is the same as in Bressan et al. (1994) and Tantalo et al. (1996) but for two important details: first, only a fraction ($\eta=0.3$) of the kinetic energy of stellar winds is allowed to thermalize, and second, the cooling law for supernova remnants is the same as in Gibson (1994, 1996). When the total thermal energy of the gas in each region exceeds the local gravitational energy, gas is supposed to escape from the galaxy and star formation to stop there. It is worth noticing for the sake of clarity that in the presence of galactic winds the local density and total mass of baryonic material can never reach the asymptotic values $\rho_L(r,T_G)$ and $M_{L,12}$, respectively. The discussion by Tantalo et al. (1997) of this topic clarifies, however, that the final results of the models are not too severely affected by the contradiction between the initial hypothesis (models constrained to match the asymptotic mass) and the actual mass reached in the course of evolution. \vskip 6pt {\it Chemical and photometric evolution. } The chemical evolution of elemental species is governed by the same set of equations as in Tantalo et al. (1996), however adapted to the density formalism and improved, as far as the ejecta and the contributions from Type Ia and Type II supernovae are concerned, according to the revision made by Portinari et al. (1997) and Tantalo et al. (1997), to whom we refer. Finally, the line strength indices ${ Mg_{2}} $\ and \mbox{${ \langle Fe \rangle}$}\ have been calculated adopting the calibrations by Worthey (1992) and Worthey et al. (1994) as a function of [Fe/H], $\rm T_{eff}$ and gravity of the stars. \vskip 6pt {\it Mass-radius relationship.} The final step is to adopt a relationship between $R_{L,e}$ and $M_{L}$ so that, once the total baryonic mass is assigned, the effective radius and all other quantities are known in turn. For the purposes of this study and limited to the case of $H_{0}=50~Km~sec^{-1}Mpc^{-1}$, we derive from the data of Carollo et al. (1993) and Goudfrooij et al. (1994) the following relation \begin{equation} R_{L,e} = 17.13 \times M_{L,12}^{0.557} \label{reff_mass} \end{equation} \noindent where $R_{L,e}$ is in kpc. \vskip 6pt {\it Main results.} This simple modelling of the distribution of density and hence mass in a spherical system allows us to describe the gradients in star formation, chemical abundances, ages, and photometric properties. These models are indeed able to reproduce (i) the slope of the colour-magnitude relation by Bower et al. (1992a,b); (ii) the UV excess as measured by the colour (1550--V) by Burstein et al. (1988); (iii) the mass to blue luminosity ratio $\rm (M/L_{B})_{\odot}$ of elliptical galaxies. See Tantalo et al. (1997) for all other details. For the purposes of the discussion below, in Table~1 we present the basic data for the central core ($ r=0.05 R_{L,e}$) and the first shell ($0.05\times R_{L,e} \leq r \leq 0.15 \times R_{L,e}$) in a typical galaxy with total luminous mass $3M_{L,12}$. The content of Table~1 is as follows: $\Delta M_{L,S}$ is the asymptotic luminous mass of the region in units of $ 10^{12}$\mbox{${\rm M_{\odot}}$}; $\nu$ is the local efficiency of the SFR; $\tau$ is the local time scale of gas accretion in Gyr; $t_{gw}$ is the age in Gyr at which energy heating by supernova explosions and stellar winds exceeds the local gravitational energy; $ Z_{max}$ and $ \langle Z \rangle$ are the maximum and mean metallicity, respectively; $ G(t)$ and $ S(t)$ are the local fractional masses of gas and stars, respectively, both normalized to the asymptotic mass $\Delta M_{L,S}$; $ N_{enh}$ is the percentage of stars showing $\alpha$-enhancement that are present in the region (see below). Finally, in Fig.~\ref{x_age} we show the temporal evolution of the abundances of a few elements in the central core of the model. No detail for the remaining regions is given here, except for the gradients in broad band colors and narrow band indices presented in Fig.~\ref{mod_car} below. \begin{table} \begin{center} \caption{Basic features for the central core and first shell of the reference model with $3M_{L,12}$} \small \begin{tabular} {l| c | c } \hline \hline & & \\ \multicolumn{1}{l|}{Parameter} & \multicolumn{1}{c|}{ Core} & \multicolumn{1}{c}{$1^{st}$ shell} \\ & & \\ \hline & & \\ $ \Delta M_{L,S}$ & 0.146 & 0.150 \\ $\nu$ & 7.1 & 50.0 \\ $\tau$ & 0.74 & 0.29 \\ $ t_{g\omega}$ & 5.12 & 0.79 \\ $ Z_{max}$ & 0.0964 & 0.0439 \\ $ \langle Z \rangle$ & 0.0365 & 0.0286 \\ $ G(t)$ & 0.004 & 0.010 \\ $ S(t)$ & 0.845 & 0.874 \\ & & \\ \hline & & \\ $ N_{enh}$ & 45.7\% & 53.9\% \\ & & \\ \hline \hline \end{tabular} \end{center} \label{tab1} \end{table} \section{Casting the gradient contradiction} In Fig.~\ref{mod_car} we compare the theoretical and observational gradients in broad-band colors (left panel) and line strength indices (right panel) for our proto-type model. The data are from Carollo \& Danziger (1994a,b). Remarkably, while the model matches the gradient in broad band colors, it fails as far as the line strength indices ${ Mg_{2}} $\ and \mbox{${ \langle Fe \rangle}$}\ and their gradients are concerned.
Other cases can be found in the Carollo \& Danziger (1994a,b) list (they are not shown here for the sake of brevity), in which either the broad band colors or the line strength indices are matched, but the simultaneous fit of the two sets of data is not possible. The obvious attitude toward these matters would be to ascribe the above failure to the inadequacy of the models (which may certainly be the case) and thus drop the subject. This is a point of embarrassment because there is no obvious explanation as to why models that successfully reproduce many of the observed properties of elliptical galaxies (cf. Bressan et al. 1994, 1996; Tantalo et al. 1996; 1997) fail to match the line strength indices. In addition to this, and even more relevant to the aims of the present study, a point of contradiction between chemical structure and line strength indices is soon evident. The problem is cast as follows. The theoretical gradients $[\Delta ln Mg_2/ \Delta R] \simeq -0.13$ and $[ \Delta ln \langle Fe \rangle / \Delta R] \simeq -0.11$ (with R the galactocentric distance in kpc) are nearly identical. \begin{figure*}[] \psfig{file=tantalo_enha_fig3.ps,height=9.0truecm,width=17.0truecm} \caption{{\it Panels (a)} and {\it (b)}: the number of living stars and abundance ratio distribution per metallicity bin in the central core and the $1^{st}$ shells, respectively, of the galaxy with mass $ 3M_{L,12}$. The abundance ratios are in the standard notation. The {\em solid line} is the distribution of living stars in units of $10^{11}M_{\odot}$. The {\em dotted}, {\em dashed}, {\em long-dashed}, and {\em dot-dashed} lines give the distribution per metallicity bin of $[O/Fe]$, $[Fe/H]$, $[Mg/Fe]$, and $[C/Fe]$, respectively. The top scale gives the birth-time $t=T_{G} - T_{SSP}$ in Gyr of a SSP of age $T_{SSP}$ in a galaxy of age $T_{G}$} \label{x_nz} \end{figure*} This is a surprising result because, as the data of Table~1 show, the duration of star formation was much longer in the central core than in the outer shell (the same trend holds for all remaining shells not considered here), which implies that the stars in the core are on the average less $\alpha$-enhanced than in the more external regions (cf. the temporal evolution of the elemental abundances shown in Fig.~\ref{x_age}), whereas the gradients we have obtained seem to indicate a nearly constant ratio $[Mg/Fe]$. To single out the reason for the contradiction we look at the variation of the abundance ratios $[Fe/H]$, $[C/Fe]$, $[O/Fe]$, and $[Mg/Fe]$ (with respect to the solar value) as a function of the metallicity and time, and the present-day partition function $N(Z)$. This allows us to evaluate the fraction of living stars with metallicity above any particular value and with abundance ratios above or below the solar value. The relationships in question are shown in the two panels of Fig.~\ref{x_nz} (the left panel is for the central core; the right panel is for the $1^{st}$ more external shell). In addition to this, we also look at the current age of the stellar population stored in every metallicity bin (we remind the reader that the metallicity in this model is a monotonically increasing function of the age, cf. Fig.~\ref{x_age}). The top axis of Fig.~\ref{x_nz} shows the correspondence between metallicity and birth-time of the stellar content in each metallicity bin, briefly referred to as a single stellar population (SSP). The SSP birth-time is $t = T_{G} - T_{SSP}$, where $T_{G}$ is the present-day galaxy age and $T_{SSP}$ is the current age of the SSP.
From this diagram we learn that the external shell is truly richer in $\alpha$-enhanced stars ($\sim 53.9\%$ of the total) than the central core ($\sim 45.7\%$ of the total). The percentages $ N_{enh}$ are given in Table~1. This confirms our expectation that the models should predict gradients in line strength indices consistent with the gradients in abundances. {\it What is the reason for such an unexpected contradiction?} One may argue that the above disagreement results either from the adoption of calibrations, such as those by Worthey (1992) and Worthey et al. (1994), which include the dependence on $[Fe/H]$, $ T_{eff}$, and gravity but neglect the effect of enhancing the $\alpha$-elements, or from the particular model in use. \begin{figure} \psfig{file=tantalo_enha_fig4.ps,height=9.0truecm,width=8.5truecm} \caption{{\it Model-C}: the star formation rate as a function of the time for the central region of the galaxy model with $3M_{L,12}$. At the age of 3 Gyr the efficiency of the SFR is allowed to increase from $\nu = 0.1$ up to $\nu = 50$ over a time scale of $10^{8}$ yr. The parameters of this model are given in Table~2} \label{sfr} \end{figure} \begin{figure} \psfig{file=tantalo_enha_fig5.ps,height=9.0truecm,width=8.5truecm} \caption{{\it Model-C}: Panel (a) shows the maximum and mean metallicity ({\em dotted} and {\em solid lines}, respectively). Panel (b) shows the fractional masses of gas $ G(t)$ and living stars $ S(t)$ as a function of time ({\em dotted} and {\em solid lines}, respectively)} \label{chem} \end{figure} \begin{table} \begin{center} \caption{Basic properties of the central region of the test models with $3M_{L,12}$. Model-A: late galactic wind and no enhancement of $\alpha$-elements. Model-B: early galactic wind and enhancement of $\alpha$-elements. Model-C: recent burst of star formation, galactic wind, and strong enhancement of $\alpha$-elements} \small \begin{tabular}{l| c | c | c} \hline \hline & & \\ \multicolumn{1}{l|}{Parameter} & \multicolumn{1}{c|}{Model-A} & \multicolumn{1}{c|}{Model-B} & \multicolumn{1}{c}{Model-C} \\ & & & \\ \hline & & & \\ $ \Delta M_{L,S}$ & 0.146 & 0.146 & 0.146 \\ $\nu$ & 7.1 & 100.0 & 0.1 $\div$ 50 \\ $\tau$ & 0.74 & 0.05 & 0.10 \\ $t_{g\omega}$ & 5.12 & 0.39 & 3.58 \\ $Z_{max}$ & 0.0964 & 0.0713 & 0.0878 \\ $\langle Z \rangle$ & 0.0365 & 0.0279 & 0.0294 \\ $G(t)$ & 0.004 & 0.002 & 0.003 \\ $S(t)$ & 0.845 & 0.942 & 0.994 \\ & & & \\ \hline & & & \\ $N_{enh}$ & 45.7\% & 85.2\% & 74.8\% \\ & & & \\ \hline \hline \end{tabular} \end{center} \label{tab2} \end{table} \section{Changing calibrations and chemistry } To answer the question posed in the previous section, first we adopt a different calibration in which the effect of $[Mg/Fe]$ is explicitly taken into account, and second we discuss different, {\it ad hoc} designed, galactic models in which different levels of enhancement in \mbox{$\alpha$-elements}\ are allowed to occur by artificially changing the history of star formation. \subsection{Calibrations containing [Mg/Fe]} Many studies have emphasized that line strength indices depend not only on the stellar parameters $T_{eff}$ and gravity, but also on the chemical abundances (Barbuy 1994, Idiart et al. 1995, Weiss et al. 1995, Borges et al. 1995). We start by pointing out that in the presence of a certain degree of enhancement in $\alpha$-elements one has to suitably modify the relationship between the total metallicity $Z$ and the iron content $[Fe/H]$.
Using the pattern of abundances by Anders \& Grevesse (1989), Grevesse (1991) and Grevesse \& Noels (1993), we find the general relation \begin{equation} \left[\frac{Fe}{H} \right] = \log{\left(\frac{Z}{Z_{\odot}}\right)} - \log{\left(\frac{X}{X_{\odot}}\right)} - 0.8\left[\frac{\alpha}{Fe}\right] - 0.05\left[\frac{\alpha}{Fe} \right]^{2} \label{feh} \end{equation} \noindent where the term $[\alpha/Fe]$ stands for all $\alpha$-elements lumped together. The recent empirical calibration by Borges et al. (1995) for the ${ Mg_{2}} $\ index includes the effect of different $[Mg/Fe]$ ratios \begin{eqnarray} {\ln}{Mg_{2}} &=& -9.037 + 5.795\, \frac{5040}{T_{eff}} + 0.398 \log{g} + 0.389 \left[ \frac{Fe}{H} \right] \nonumber \\ && - 0.16 \left[ \frac{Fe}{H} \right]^{2} + 0.981 \left[ \frac{Mg}{Fe} \right] \label{mg2} \end{eqnarray} \noindent which holds for effective temperatures and gravities in the ranges $ 3800 < T_{eff} < 6500$ K and $ 0.7 < \log{g} < 4.5$. To our knowledge, no corresponding calibration for the \mbox{${ \langle Fe \rangle}$}\ index is yet available, so that one is forced to adopt the one with no dependence on $[Mg/Fe]$. Nevertheless, a zero-order evaluation of the effect of $[Mg/Fe]$ on the \mbox{${ \langle Fe \rangle}$}\ index is possible via the different relation between $Z$ and $[Fe/H]$ of the $[\alpha/Fe] \neq 0$ case. The above relations for $[Fe/H]$, ${ Mg_{2}} $\ and implicitly \mbox{${ \langle Fe \rangle}$}\ are used to generate new SSPs and galactic models in which not only the chemical abundances are enhanced with respect to the solar value but also the effect of this on the line strength indices is taken into account in a self-consistent manner. \begin{figure*}[] \psfig{file=tantalo_enha_fig6.ps,height=9.0truecm,width=17.0truecm} \caption{The partition function $N(Z)$ and abundance ratios distribution per metallicity bin, for the Model-A (left panel), Model-B (middle panel), and Model-C (right panel). The {\em solid line} is $N(Z)$ in units of $10^{11}$ at the age of 15 Gyr. The {\em dotted}, {\em dashed}, {\em long-dashed}, and {\em dot-dashed} lines give the distribution per metallicity bin for $[O/Fe]$, $[Fe/H]$, $[Mg/Fe]$ and $[C/Fe]$, respectively. The abundance ratios are in the standard notation. The top scale gives the birth-time $t=T_{G}-T_{SSP}$ of a SSP with age $T_{SSP}$ in a galaxy with age $T_G$} \label{x_nz_abc} \end{figure*}
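Equations (\ref{feh}) and (\ref{mg2}) are straightforward to evaluate numerically. The sketch below (our own transcription; the solar values $Z_{\odot}=0.02$ and $X_{\odot}=0.70$ are assumptions of the sketch, not taken from the text) illustrates the competing effects: enhancement lowers $[Fe/H]$ at fixed $Z$, while the $0.981\,[Mg/Fe]$ term pushes ${ Mg_{2}} $\ up:
\begin{verbatim}
import math

Zsun, Xsun = 0.02, 0.70              # assumed solar values

def feh(Z, X, a_fe):
    # eq. (feh): [Fe/H] for total metallicity Z and enhancement [a/Fe]
    return (math.log10(Z/Zsun) - math.log10(X/Xsun)
            - 0.8*a_fe - 0.05*a_fe**2)

def mg2(teff, logg, fe_h, mg_fe):
    # eq. (mg2): valid for 3800 < Teff < 6500 K and 0.7 < log g < 4.5
    return math.exp(-9.037 + 5.795*(5040.0/teff) + 0.398*logg
                    + 0.389*fe_h - 0.16*fe_h**2 + 0.981*mg_fe)

fh = feh(0.02, 0.70, 0.4)            # Model-A-like enhancement at Z = 0.02
print(fh)                            # ~ -0.33, cf. Table 3
print(mg2(4500.0, 2.0, fh, 0.4))     # ~0.22 for a cool giant
print(mg2(4500.0, 2.0, feh(0.02, 0.70, 0.0), 0.0))   # ~0.17 unenhanced
\end{verbatim}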
\subsection{Three different chemical structures } We present here three galactic models that, in virtue of their particular histories of star formation, have different chemical structures and degrees of enhancement in \mbox{$\alpha$-elements}. The discussion is limited to the central region of the $3M_{L,12}$ galaxy. \vskip 6pt {\it Model-A: late galactic wind}. This case has a late galactic wind ($\sim 5.12$ Gyr), which means that Type Ia supernovae dominate the enrichment in $Fe$ of the gas, and the ratio [$\alpha/Fe$] is solar or below solar for most of the time. This model is actually the central region of the $3M_{L,12}$ galaxy presented above. The percentage ($ N_{enh}$) of $\alpha$-enhanced stars that are still alive at the age of 15 Gyr amounts to 45.7\%. \vskip 6pt {\it Model-B: early galactic wind}. In order to enhance the relative abundance of elements from Type II supernovae we arbitrarily shortened the duration of the star forming period. To this aim, in the central region of the same galaxy, the efficiency of star formation has been increased ($\nu=100$) and the infall time scale decreased ($\tau=0.05$ Gyr), so that the galactic wind occurs much earlier than in the previous case (at 0.39 Gyr). The material (gas and stars) of Model-B is therefore strongly enhanced in \mbox{$\alpha$-elements}. The percentage ($ N_{enh}$) of $\alpha$-enhanced stars that are still alive at the age of 15 Gyr amounts to 87.2\%. \vskip 6pt {\it Model-C: recent burst of star formation}. A third possibility is considered, in which a burst of star formation can occur within a galaxy that already underwent significant stellar activity and metal enrichment during its previous history. This model (always limited to the central region of the galaxy) has a nearly constant star formation rate from the beginning, but at the age (arbitrarily chosen) of 3 Gyr it is supposed to undergo a sudden increase in the star formation rate. To this aim, the specific efficiency of star formation $\nu$ is allowed to increase from $\nu=0.1$ to $\nu=50$ over a time scale of $10^{8}$~yr. The initial nearly constant stellar activity is secured by adopting a long time scale of gas accretion in the infall scheme ($\tau=10$ Gyr). The rate of star formation (in units of \mbox{${\rm M_{\odot}}$}$\rm yr^{-1}$) as a function of time is shown in Fig.~\ref{sfr}. Soon after the intense period of star formation, the galactic wind occurs, thus halting star formation and chemical enrichment. The basic chemical properties of the model as a function of the age are shown in Fig.~\ref{chem}. This displays the maximum ({\em dotted line}) and mean metallicity ({\em solid line}), and the fractional mass of gas $ G(t)$ ({\em dotted line}) and stars $ S(t)$ ({\em solid line}), both normalized to $\Delta M_{L,S}$. The percentage ($ N_{enh}$) of $\alpha$-enhanced stars that are still alive at the age of 15 Gyr amounts to 74.5\%. The basic data for the three models in question are summarized in Table~2, whereas the evolution of their chemical abundances and present-day partition function $N(Z)$ are shown in Fig.~\ref{x_nz_abc}, where the left panel is for Model-A, the middle panel is for Model-B, and the right panel is for Model-C. This figure is the analog of Fig.~\ref{x_nz}. \subsection{${ Mg_{2}} $\ and \mbox{${ \langle Fe \rangle}$}\ in SSPs with $[\alpha/Fe]\neq 0$ } To secure full consistency between chemical abundances and line strength indices for the galactic models and make use of the SSP technique (cf. Bressan et al. 1996) one should adopt SSPs with the same pattern of abundances indicated by the chemical models. To this aim, the distributions of chemical abundances (and their ratios) as a function of the total metallicity and/or time provided by the model galaxies are used to calculate the ${ Mg_{2}} $\ and \mbox{${ \langle Fe \rangle}$}\ indices of SSPs with different $Z$, $[Fe/H]$, $[O/Fe]$, and $[Mg/Fe]$. \begin{figure} \psfig{file=tantalo_enha_fig7.ps,height=10.5truecm,width=9.0truecm} \caption{{\it Panels (a)}, {\it (c)} and {\it (e)} show the ${ Mg_{2}} $\ index evolution for SSPs with different metallicity (Z=0.0004, Z=0.004, Z=0.008, Z=0.02, Z=0.05 and Z=0.1; {\it solid}, {\it dotted}, {\it dashed}, {\it long-dashed}, {\it dot-dashed} and {\it dot-long-dashed} lines, respectively) under the assumption of enhancement in $\alpha$-elements.
\begin{figure} \psfig{file=tantalo_enha_fig7.ps,height=10.5truecm,width=9.0truecm} \caption{{\it Panels (a)}, {\it (c)} and {\it (e)} show the ${ Mg_{2}} $\ index evolution for SSPs with different metallicities (Z=0.0004, Z=0.004, Z=0.008, Z=0.02, Z=0.05 and Z=0.1: {\it solid}, {\it dotted}, {\it dashed}, {\it long-dashed}, {\it dot-dashed} and {\it dot-long-dashed} lines, respectively) under the assumption of enhancement in $\alpha$-elements. {\it Panels (b)}, {\it (d)} and {\it (f)} show the same but without enhancement of $\alpha$-elements} \label{mg2_ssp} \end{figure} \begin{figure} \psfig{file=tantalo_enha_fig8.ps,height=10.5truecm,width=9.0truecm} \caption{{\it Panels (a)}, {\it (c)} and {\it (e)} show the \mbox{${ \langle Fe \rangle}$}\ index evolution for SSPs with different metallicities (Z=0.0004, Z=0.004, Z=0.008, Z=0.02, Z=0.05 and Z=0.1: {\it solid}, {\it dotted}, {\it dashed}, {\it long-dashed}, {\it dot-dashed} and {\it dot-long-dashed} lines, respectively) under the assumption of enhancement in $\alpha$-elements. {\it Panels (b)}, {\it (d)} and {\it (f)} show the same but without enhancement of $\alpha$-elements} \label{fe_ssp} \end{figure} To quantify the abundance of $\alpha$-elements we prefer to use $[O/Fe]$ instead of $[Mg/Fe]$, because the stellar yields by Portinari et al. (1997) that are at the base of our chemical models somewhat overestimate the production of $Fe$ by Type II supernovae, and in turn underestimate the ratio $[Mg/Fe]$ as compared to the observational value. According to Portinari et al. (1997), the theoretical $[Mg/Fe]$ is about $0.2$--$0.3$ dex lower than indicated by the observational data. Applying this correction, the ratio $[Mg/Fe]$ gets close to the ratio $[O/Fe]$ (see Fig.~\ref{x_nz_abc}), which somehow justifies our use of $[O/Fe]$ instead of $[Mg/Fe]$ in the procedure below. This marginal drawback of the chemical model does not, however, affect the conclusions of this analysis. For any value of $Z$ we read from Fig.~\ref{x_nz_abc} the ratios $[C/Fe]$, $[O/Fe]$, and $[Mg/Fe]$, derive $[Fe/H]$ from equation (\ref{feh}), and insert $[Mg/Fe]$ (i.e.\ $=[O/Fe]$) and $[Fe/H]$ into equation (\ref{mg2}). It goes without saying that the values of $[Fe/H]$ derived from Fig.~\ref{x_nz_abc} and equation (\ref{feh}) are mutually consistent by definition. Table~3 shows the values of $[Fe/H]$ and $[O/Fe]$ assigned to the SSPs according to the chemical structure of Model-A, Model-B, and Model-C and, for purposes of comparison, to reference SSPs with no enhancement at all. The temporal evolution of the ${ Mg_{2}} $\ and \mbox{${ \langle Fe \rangle}$}\ indices for the SSPs listed in Table~3 is shown in the various panels of Figs.~\ref{mg2_ssp} and~\ref{fe_ssp}, respectively. \begin{figure*}[t] \psfig{file=tantalo_enha_fig9.ps,height=9.0truecm,width=17.0truecm} \caption{Evolution of the ${ Mg_{2}} $\ index as a function of time. {\it Panels (a)}, {\it (c)} and {\it (e)} show the evolution of the ${ Mg_{2}} $\ index calculated including the effect of the chemical abundances, while {\it Panels (b)}, {\it (d)} and {\it (f)} show the same but without the effect of $\alpha$-enhancement. The {\em solid line} corresponds to Model-A, the {\em dotted line} to Model-B, and the {\em dashed line} to Model-C} \label{mg2_abc} \end{figure*} \begin{figure*}[t] \psfig{file=tantalo_enha_fig10.ps,height=10.5truecm,width=17.0truecm} \caption{Evolution of the \mbox{${ \langle Fe \rangle}$}\ index as a function of time. {\it Panels (a)}, {\it (c)} and {\it (e)} show the evolution of the \mbox{${ \langle Fe \rangle}$}\ index calculated including the effect of the chemical abundances, while {\it Panels (b)}, {\it (d)} and {\it (f)} show the same but without the effect of $\alpha$-enhancement.
The {\em solid line} corresponds to Model-A, the {\em dotted line} to Model-B, and the {\em dashed line} to Model-C} \label{fe_abc} \end{figure*} \begin{table*} \begin{center} \caption{$[O/Fe]$ and $[Fe/H]$ ratios for SSPs with enhancement of \mbox{$\alpha$-elements}\ according to Model-A, Model-B and Model-C. The same ratios for the reference SSPs with no enhancement are also shown} \scriptsize \begin{tabular*}{113mm} {l| c c| c c| c c| c c} \hline \hline & & & & & & & & \\ \multicolumn{1}{l|}{Z} & \multicolumn{2}{c|}{Model-A} & \multicolumn{2}{c|}{Model-B} & \multicolumn{2}{c|}{Model-C} & \multicolumn{2}{c}{Reference SSPs} \\ \hline & & & & & & & & \\ & $[O/Fe]$ & $[Fe/H]$ & $[O/Fe]$ & $[Fe/H]$ & $[O/Fe]$ & $[Fe/H]$ & $[O/Fe]$ & $[Fe/H]$ \\ & & & & & & & & \\ \hline & & & & & & & & \\ 0.0004 & +0.8 & --2.38 & +3.28 & --4.87 & +0.51 & --2.13 & 0.0 & --1.71 \\ 0.004 & +0.6 & --1.22 & +1.72 & --2.23 & +0.14 & --0.82 & 0.0 & --0.71 \\ 0.008 & +0.5 & --0.80 & +1.22 & --1.44 & +0.03 & --0.42 & 0.0 & --0.39 \\ 0.02 & +0.4 & --0.30 & +0.72 & --0.57 & +0.30 & --0.22 & 0.0 & +0.03 \\ 0.05 & --0.50 & +0.88 & --0.25 & +0.69 & +0.31 & +0.24 & 0.0 & +0.50 \\ 0.1 & --0.80 & +1.55 & --0.59 & +1.40 & --0.87 & +1.60 & 0.0 & +0.95 \\ & & & & & & & & \\ \hline \hline \end{tabular*} \end{center} \label{tab3} \end{table*} It is readily evident that with $[\alpha/Fe]=0$ (right panels of Fig.~\ref{mg2_ssp}), ${ Mg_{2}} $\ monotonically increases with the metallicity, except for the extreme SSP with Z=0.1, for which the trend is reversed at ages older than 5 Gyr. When $[\alpha/Fe]\neq 0$, the trend is more complicated as it depends on the degree of enhancement. For cases A and B, the strongest ${ Mg_{2}} $\ happens to occur for $Z=0.02$, whereas for case C it occurs for $Z=0.05$ (left panels of Fig.~\ref{mg2_ssp}). As far as \mbox{${ \langle Fe \rangle}$}\ is concerned, with $[\alpha/Fe]=0$ the index gets stronger at increasing metallicity, except for the extreme case of $Z=0.1$, in which \mbox{${ \langle Fe \rangle}$}\ gets lower than or comparable to the values for the case with $Z=0.05$ at ages older than about 5 Gyr (right panels of Fig.~\ref{fe_ssp}). In the presence of enhancement in \mbox{$\alpha$-elements}, there is a significant dependence of \mbox{${ \langle Fe \rangle}$}\ on this parameter even at our zero-order evaluation (see the left panels of Fig.~\ref{fe_ssp}). Although the exact evaluation of the effect of $[Mg/Fe]$ on \mbox{${ \langle Fe \rangle}$}\ is hampered by the lack of the proper calibration, the above experiments still clarify that it cannot be neglected. \subsection{The ${ Mg_{2}} $\ and \mbox{${ \langle Fe \rangle}$}\ indices of galaxies} {\it What would the results for the ${ Mg_{2}} $\ and \mbox{${ \langle Fe \rangle}$}\ indices be when applying these SSPs to model galaxies?} The situation is displayed in the various panels of Fig.~\ref{mg2_abc} for ${ Mg_{2}} $\ and Fig.~\ref{fe_abc} for \mbox{${ \langle Fe \rangle}$}. The combined analysis of the chemical structures, partition functions $N(Z)$, and temporal variations of the ${ Mg_{2}} $\ and \mbox{${ \langle Fe \rangle}$}\ indices of the model galaxies (Figs.~\ref{mg2_abc} and~\ref{fe_abc}) allows us to make the following remarks: \begin{description} \item [(i)] $[Mg/Fe]\neq 0$: ${ Mg_{2}} $\ in Model-A (late wind, no chemical enhancement) is always weaker than in Model-B (early wind, significant chemical enhancement). However, the difference is large for ages younger than about 5 Gyr, and gets very small, eventually vanishing, at older ages.
${ Mg_{2}} $\ of Model-C is always weaker than in Model-A and Model-B, which means that the higher (or comparable) enhancement in \mbox{$\alpha$-elements}\ of Model-C with respect to the previous ones does not produce a stronger ${ Mg_{2}} $\ index. The age dependence of \mbox{${ \langle Fe \rangle}$}\ is more intricate. At young ages ($< 5$ Gyr) the intensity of \mbox{${ \langle Fe \rangle}$}\ gets stronger passing from Model-C to Model-A and finally Model-B. At ages older than 5 Gyr, Model-A has the strongest \mbox{${ \langle Fe \rangle}$}, whereas Model-B and Model-C are weaker and of the same intensity. \item [(ii)] $[Mg/Fe] = 0$: ${ Mg_{2}} $\ in Model-A (late wind, no chemical enhancement) is first weaker than in Model-B (early wind, significant chemical enhancement) up to ages of about 5 Gyr, and then becomes significantly stronger at older ages. Finally, the ${ Mg_{2}} $\ index of Model-C (burst of star formation and strong enhancement) is weaker than or about equal to that of Model-A and Model-B. The index \mbox{${ \langle Fe \rangle}$}\ closely follows the trend of ${ Mg_{2}} $\ all over the age range. \end{description} It follows that both ${ Mg_{2}} $\ and \mbox{${ \langle Fe \rangle}$}\ do not simply correlate with age, $[Fe/H]$ and $[Mg/Fe]$. The striking result is that ${ Mg_{2}} $, and to some extent \mbox{${ \langle Fe \rangle}$}\ as well, of models with supposedly the highest degree of enhancement in \mbox{$\alpha$-elements}\ happen to be weaker than in those with no enhancement. {\it What causes the odd behaviour of ${ Mg_{2}} $\ and \mbox{${ \langle Fe \rangle}$}\ as a function of the age and underlying chemical structure of the model galaxy?} \vskip 6pt To answer this question, we have artificially removed from the partition function $N(Z)$ all the stars in certain metallicity bins and re-calculated the line strength indices for the three models. Panels (c) and (d) of Figs.~\ref{mg2_abc} and~\ref{fe_abc} show the results when all stars with metallicity higher than $Z$=0.05 are removed. This is motivated by the trend as a function of the metallicity shown by the SSPs that we have already pointed out. The situation remains practically unchanged. Likewise, panels (e) and (f) of Figs.~\ref{mg2_abc} and~\ref{fe_abc} show the same but when all stars with metallicity lower than $Z=0.008$ are removed. Now the results change significantly. In the case of $[Mg/Fe] \neq 0$, the ${ Mg_{2}} $\ index of Model-B is always much stronger than that of Model-A. In such a case there is correspondence between the strength of the index and the amount of enhancement in \mbox{$\alpha$-elements}. In contrast, \mbox{${ \langle Fe \rangle}$}\ of Model-A is first slightly weaker and then stronger than in Model-B, whereas that of Model-C is first weaker and then almost equal to that of Model-B. In the case of $[Mg/Fe]=0$, at ages older than about 5 Gyr the ${ Mg_{2}} $\ and \mbox{${ \langle Fe \rangle}$}\ of the three models are almost equal to each other, with a marginal increase passing from Model-C to Model-B and Model-A. At younger ages, while Model-A and Model-C have nearly the same indices, Model-B always has the strongest values.
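The bin-removal experiments described above can be summarized by the following schematic sketch, which combines SSP indices into a galactic index through the partition function $N(Z)$. The simple continuum-flux weighting and all numerical entries are assumptions adopted for illustration only; the actual models combine the full SSP spectra following Bressan et al. (1996).
\begin{verbatim}
import numpy as np

def galaxy_index(idx_ssp, flux_ssp, n_z, z_grid, z_min=None, z_max=None):
    """Schematic luminosity-weighted galactic index.  idx_ssp,
    flux_ssp and n_z hold, per metallicity bin, the SSP index, its
    continuum flux and the partition function N(Z).  z_min / z_max
    emulate the removal of metallicity bins from N(Z)."""
    keep = np.ones(len(z_grid), dtype=bool)
    if z_min is not None:
        keep &= z_grid >= z_min
    if z_max is not None:
        keep &= z_grid <= z_max
    w = n_z * flux_ssp * keep              # continuum-flux weights
    return np.sum(w * idx_ssp) / np.sum(w)

# Illustrative entries only (not the model values):
z_grid  = np.array([0.0004, 0.004, 0.008, 0.02, 0.05, 0.1])
mg2_ssp = np.array([0.05, 0.15, 0.20, 0.28, 0.30, 0.27])
flux    = np.array([1.5, 1.2, 1.0, 0.8, 0.6, 0.5])
n_z     = np.array([0.5, 1.0, 1.5, 2.0, 0.8, 0.2])
print(galaxy_index(mg2_ssp, flux, n_z, z_grid))               # full N(Z)
print(galaxy_index(mg2_ssp, flux, n_z, z_grid, z_min=0.008))  # low-Z cut
\end{verbatim}
With these illustrative numbers, removing the metal-poor bins raises the weighted index, mimicking the strong sensitivity to $N(Z)$ found above.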
The above analysis clarifies in a quantitative fashion a number of important clues: \begin{itemize} \item{In galaxies, both ${ Mg_{2}} $\ and \mbox{${ \langle Fe \rangle}$}\ depend on the age, $N(Z)$, $[Fe/H]$, and $[Mg/Fe]$ in a somewhat unpredictable fashion.} \item{Because of this, inferring from the above indices the abundance ratio $[Mg/Fe]$ and the enhancement of $\alpha$-elements is a difficult task with somewhat ambiguous results.} \item{Different slopes of the gradients in ${ Mg_{2}} $\ and \mbox{${ \langle Fe \rangle}$}\ do not automatically imply gradients in chemical abundances or enhancement ratios.} \end{itemize} \begin{figure} \psfig{file=tantalo_enha_fig11.ps,height=9.0truecm,width=8.5truecm} \caption{Curves of constant ${ Mg_{2}} $\ (left) and \mbox{${ \langle Fe \rangle}$}\ (right) at varying age (in Gyr) and metallicity (Z) for SSPs with no enhancement in \mbox{$\alpha$-elements}. The calibrations in use are from Worthey et al. (1994). The intensity of the indices is annotated along the curves} \label{level_noenh} \end{figure} \begin{figure} \psfig{file=tantalo_enha_fig12.ps,height=9.0truecm,width=8.5truecm} \caption{The same as in Fig.~11 but for SSPs with enhancement of \mbox{$\alpha$-elements}\ according to the entries of Table~3. The calibrations in use are from Borges et al. (1995) for ${ Mg_{2}} $\ and Worthey et al. (1994) for \mbox{${ \langle Fe \rangle}$}, the latter corrected for the different relation between $Z$ and $[Fe/H]$ in the presence of enhancement in \mbox{$\alpha$-elements}. The intensity of the indices is annotated along the curves} \label{level_enh} \end{figure} \section{Indices in the age-metallicity plane of SSPs} We would like to conclude this study by presenting the loci of constant ${ Mg_{2}} $\ and \mbox{${ \langle Fe \rangle}$}\ in the age-metallicity plane of SSPs. This is shown in Figs.~\ref{level_noenh} and \ref{level_enh}, displaying the above indices without and with enhancement of \mbox{$\alpha$-elements}, using the calibrations discussed in the previous sections. This plane can be used to quickly check how gradients in ages and/or metallicities across galaxies (inferred from other independent analyses) would reflect onto gradients in ${ Mg_{2}} $\ and \mbox{${ \langle Fe \rangle}$}\ and vice-versa. An interesting feature to note is the marked dip in the ${ Mg_{2}} $\ index above a threshold intensity, which occurs at a particular value of the metallicity. The threshold value for ${ Mg_{2}} $\ is about 0.26 and the metallicity is about 0.03 in the case of no enhancement, whereas they are lowered to 0.15 and 0.008, respectively, in the presence of enhancement. Starting from the observational result that in general elliptical galaxies show stronger ${ Mg_{2}} $\ and \mbox{${ \langle Fe \rangle}$}\ toward the center, and representing the local mix of stellar populations in a galaxy (center and/or external regions) with a mean SSP of suitable age and composition, we see that the observational gradient in ${ Mg_{2}} $\ and \mbox{${ \langle Fe \rangle}$}\ could be compatible with (i) either a nucleus more metal-rich and older than the external regions; (ii) or a nucleus more metal-rich and younger than the external regions. In contrast, a nucleus less metal-rich and older than the external regions would lead to gradients in ${ Mg_{2}} $\ and \mbox{${ \langle Fe \rangle}$}\ opposite to what is observed. The situation is straightforward with the Worthey (1992) calibrations, whereas it is somewhat intricate with the Borges et al.
(1995) calibration in the presence of $\alpha$-enhancement. It is worth recalling here that Bressan et al. (1996), analyzing the Gonzales (1993) \mbox{${ H_{\beta}}$}\ and \mbox{${ [MgFe]} $}\ data for elliptical galaxies and their variation across these systems, suggested that most galaxies ought to have nuclei with higher metallicities and longer durations of star forming activity than the peripheral regions. This suggestion is fully compatible with the gradients in ${ Mg_{2}} $\ and \mbox{${ \langle Fe \rangle}$}\ observed in elliptical galaxies. \section{Summary and conclusions} The aim of this study was to ascertain on a quantitative basis the effect of age, metallicity, partition function $N(Z)$, abundance ratio $[Mg/Fe]$, and calibration in use on the line strength indices ${ Mg_{2}} $\ and \mbox{${ \langle Fe \rangle}$}, from whose gradients the problem of the possible enhancement of $\alpha$-elements toward the center of elliptical galaxies originates. Although we by no means want to exclude such a possibility, attention is called to a number of indirect effects that could invalidate the one-to-one correlation between the index intensity and the abundance of the corresponding element, and indirectly the one-to-one correlation between the relative slopes of the observational gradients and the inferred spatial variation of abundance ratios ($[\alpha/Fe]$ in particular). The results of this study can be summarized as follows: \begin{enumerate} \item{The intensity of ${ Mg_{2}} $\ does not simply correlate with the abundance of $Mg$, and with the ratio $[Mg/Fe]$ in particular.} \item{The intensity of ${ Mg_{2}} $\ does not simply correlate with the age or the metallicity.} \item{The intensity of ${ Mg_{2}} $\ depends strongly on the partition function $N(Z)$.} \item{Likewise for the \mbox{${ \langle Fe \rangle}$}\ index.} \item{Inferring the abundance of $Mg$ or the enhancement ratio $[Mg/Fe]$ is a cumbersome affair whose solution is not always possible, because independent hints on $N(Z)$ are needed.} \item{The observational gradients in ${ Mg_{2}} $\ and \mbox{${ \langle Fe \rangle}$}\ do not automatically imply gradients in the abundances of $Mg$ and $Fe$ or in the enhancement ratios. Inferring from the observational ${ Mg_{2}} $\ and \mbox{${ \langle Fe \rangle}$}\ constraints on the past history of star formation (via the different time scales of $Mg$ and $Fe$ enrichment) may be risky.} \end{enumerate} Although most of these conclusions have already been around in the literature, their quantitative assessment has never been attempted before in a systematic fashion, in particular in the complex but realistic situation in which many stellar populations with different ages, metallicities, and abundance ratios are present. \vskip 6pt \acknowledgements{We are most grateful to Dr. Guy Worthey for his constructive referee report which much contributed to improve upon the original manuscript. This study has been financed by the Italian Ministry of University, Scientific Research and Technology (MURST), the Italian Space Agency (ASI), and the European Community TMR grant \#ERBFMRX-CT96-0086.}
\section{Introduction} Oxygen is the most abundant element in the Galaxy after hydrogen and helium. Consequently, it is important to establish the current-epoch O abundance accurately for studies of Galactic chemical evolution (\cite{tim95}). A considerable effort has recently gone into determining the abundances of oxygen and other elements in nearby B stars since these young stars should most closely reflect the current ISM abundance pattern (\cite{gie92,kil92,cun94,kil94}). These studies yield a median B-star oxygen abundance (per 10$^6$ H atoms) of 10$^6$ O/H $\approx$ 450, which is about 2/3 of the Grevesse \& Noels (1993) solar value (10$^6$ O/H $=$ 741 $\pm$ 130). This result is inconsistent with the traditional assumptions that the solar abundance reflects that of the ISM at the time of the Sun's formation 4.6 Gyr ago and that the interstellar O abundance should increase slowly over time (\cite{aud76}, Timmes et al.\ 1995). A simple interpretation of the conflict between the solar and B star abundances is that it is a manifestation of the abundance scatter in their respective stellar populations. For example, Cunha \& Lambert (1994) find a spread of $\pm$0.2 dex among the oxygen abundances in their sample of Orion association B stars. Also, in a sample of F and G stars of similar age and Galactocentric radius to the Sun, \cite{edv93} find an iron abundance spread of $\pm$0.25 dex, with the Sun among the most metal-rich cases. Such scatter in the stellar abundances is strongly suggestive of localized abundance inhomogeneities in the ISM\@. Yet, as discussed by \cite{roy95}, a variety of hydrodynamical processes operating in the Galactic disk should keep the gas chemically well-mixed on short time scales. In order to determine the homogeneity of the local interstellar medium at a level capable of distinguishing between a solar and a B star oxygen abundance, it is necessary to obtain very sensitive observations. Since O I (13.618 eV) and H I (13.598 eV) have nearly the same ionization potentials, O I is the dominant form of gaseous oxygen in diffuse interstellar H I clouds. Although the prominent O I $\lambda$1302 absorption line is typically saturated in diffuse sightlines, the very weak intersystem transition at 1355.598 \AA\ can yield accurate gas-phase O abundances if measured with sufficient sensitivity. Observations of this line with the {\it Copernicus} satellite indicate a mean interstellar gas-phase oxygen abundance that is 40\% to 70\% of the solar value (\cite{yor83,kee85}). However, the scatter in these data is too great to rule out a solar abundance of interstellar oxygen, especially since some of the O is tied up in dust grains. This scatter is primarily due to the uncertainties associated with measuring the weak O I $\lambda$1356 line strengths. The ability of the Goddard High Resolution Spectrograph (GHRS) onboard the {\it Hubble Space Telescope} ({\it HST}) to obtain UV spectra with higher resolution and signal-to-noise (S/N) ratios than those acquired with {\it Copernicus} has made it possible to take interstellar abundance studies another step forward with accurate observations of very weak absorption lines (\cite{sav96}). Utilizing high S/N GHRS observations of the O I $\lambda$1356 absorption in the low density sightlines toward the stars $\iota$ Ori and $\kappa$ Ori, \cite{mey94} have found a total oxygen abundance (gas plus grains) in Orion that is consistent with the stellar (\cite{cun94}) and nebular determinations (\cite{bal91,rub91,ost92,pei93}).
In this paper, we present new O I $\lambda$1356 data; our total GHRS sample includes 13 sightlines toward stars in seven distinctly different Galactic directions at distances ranging from 130 to 1500 pc, with most closer than 500 pc. These sightlines were particularly chosen for their wide range in physical conditions so as to search for evidence of density-dependent depletion variations in the gas-phase oxygen abundance as well as spatial variations. \section{Observations} The GHRS observations of the interstellar O I $\lambda$1356 absorption line toward the stars $\gamma$ Cas, $\epsilon$ Per, $\delta$ Ori, $\epsilon$ Ori, 15 Mon, $\tau$ CMa, and $\gamma$ Ara were obtained in 1995 October and November using the echelle-A grating and the 2\farcs0 large science aperture. The detailed characteristics and in-flight performance of the GHRS instrument are discussed by \cite{bra94} and \cite{hea95}. The observations of each star consisted of multiple FP-Split exposures centered near 1356 \AA. An FP-Split breaks up an exposure into four subexposures taken at slightly different grating positions so as to better characterize and minimize the impact of the GHRS Digicon detector's fixed pattern noise on the S/N ratio of the data. Each of these subexposures was sampled two or four times per Digicon diode (depending on the brightness of the star) at a velocity resolution of 3.5 km s$^{-1}$. The data reduction procedure discussed in detail by \cite{car94a} was utilized to maximize the S/N ratio of the O I spectra. Basically, this process involves four steps: (1) the subexposures comprising each FP-Split exposure are merged in diode space so as to create a template of the fixed pattern noise spectrum, (2) each subexposure is divided by this noise template, (3) all of the rectified subexposures are aligned in wavelength space using the interstellar lines as a guide, and (4) the aligned subexposures are summed to produce the net O I spectrum of each star. As illustrated in Figure 1 for five of the stars in our echelle sample, the resulting continuum-flattened spectra reveal convincing detections of the interstellar O I $\lambda$1356 line in all seven sightlines. The measured S/N ratios of these spectra are all in the 400 to 600 range. The GHRS spectra of $\lambda$ Ori and $\zeta$ Per displayed in Figure 1 were obtained in 1994 February using the G160M grating and the 0\farcs25 small science aperture. These spectra were reduced in the same manner as the echelle data and are each characterized by a velocity resolution of 16 km s$^{-1}$ and a S/N ratio of about 500. The measured equivalent widths of the interstellar O I $\lambda$1356 absorption in these spectra as well as those in the echelle sightlines are listed in Table 1. The uncertainties in these line strengths reflect the statistical and continuum placement errors summed in quadrature. Table 1 also includes the previously reported GHRS O I $\lambda$1356 measurements toward $\iota$ Ori and $\kappa$ Ori (Meyer et al.\ 1994), $\xi$ Per (\cite{car91}), and $\zeta$ Oph (\cite{sav92}). The O I column densities listed in Table 1 were calculated from the $\lambda$1356 equivalent widths using the Zeippen, Seaton, \& Morton (1977) oscillator strength of $f=1.248\times10^{-6}$. Although the quoted uncertainty in this $f$-value is 15\%, \cite{sof94} have empirically verified that it is consistent with the better-determined $f$-value appropriate for the O I $\lambda$1302 transition. 
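For an unsaturated line the conversion from equivalent width to column density follows the linear part of the curve of growth, $N({\rm cm}^{-2}) = 1.13\times10^{17}\, W_{\lambda}({\rm m\AA})/[f\,\lambda^{2}({\rm \AA})]$. A minimal sketch of this step is given below; the 8 m\AA\ input is an illustrative value, not an entry from Table 1.
\begin{verbatim}
F_OI = 1.248e-6       # O I 1356 f-value (Zeippen et al. 1977)
WAVE = 1355.598       # rest wavelength in Angstroms

def n_thin(w_mA, f=F_OI, wave=WAVE):
    """Optically thin column density (cm^-2) from an equivalent
    width in mA:  N = 1.13e17 * W / (f * lambda**2)."""
    return 1.13e17 * w_mA / (f * wave ** 2)

# An illustrative 8 mA line gives N(O I) ~ 3.9e17 cm^-2
print("%.2e" % n_thin(8.0))
\end{verbatim}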
In the cases of $\gamma$ Cas, $\epsilon$ Per, $\delta$ Ori, $\iota$ Ori, $\epsilon$ Ori, $\kappa$ Ori, 15 Mon, and $\tau$ CMa, the $\lambda$1356 absorption is weak enough for $N$(O I) to be confidently derived under the assumption that the line is optically thin. In the cases of $\zeta$ Per, $\lambda$ Ori, and $\gamma$ Ara, a slight correction for saturation was applied using a Gaussian curve-of-growth with respective $b$-values of 2.0$^{+2.0}_{-0.5}$, 5.0$^{+\infty}_{-2.5}$, and 3.0$^{+\infty}_{-1.5}$ km s$^{-1}$. These $b$-values were estimated from GHRS observations of the interstellar Mg II $\lambda\lambda$1239.9,1240.4 doublet toward $\zeta$ Per, the Mg II and N I $\lambda\lambda$1160,1161 (\cite{mey97}) doublets toward $\lambda$ Ori, and the O I $\lambda$1356 line width toward $\gamma$ Ara. The resultant O I column densities for $\zeta$ Per, $\lambda$ Ori, and $\gamma$ Ara are 24\%, 4\%, and 6\% greater than their weak line limits, respectively. The $N$(O I) values corresponding to the slightly saturated $\lambda$1356 lines toward $\xi$ Per and $\zeta$ Oph were taken from the detailed analyses of these sightlines by Cardelli et al.\ (1991) and Savage et al.\ (1992). The uncertainties in the O I column densities listed in Table 1 reflect the estimated errors in the $\lambda$1356 equivalent width measurements and the saturation corrections (where applied). These errors do not include the uncertainty in the $\lambda$1356 $f$-value because it would affect all of the column densities in the same way. \section{Results} In order to put the GHRS oxygen results in perspective, it is instructive to compare and analyze them in concert with the best {\it Copernicus} satellite observations of the interstellar O I $\lambda$1356 line. Table 2 lists the 14 sightlines toward which the equivalent width of this line has been measured at the 4$\sigma$ level or better with {\it Copernicus} (\cite{boh83}, Zeippen et al.\ 1977). Among the four sightlines in common between the GHRS and {\it Copernicus} samples, $\zeta$ Oph yields similar O I line strengths while the $\epsilon$ Per, $\lambda$ Ori, and $\kappa$ Ori lines are weaker in the more sensitive GHRS spectra. In deriving the {\it Copernicus} O I column densities listed in Table 2, $\epsilon$ Per and $\kappa$ Ori were assumed to be optically thin while the $\lambda$ Ori and $\zeta$ Oph lines were corrected for saturation in the same way as the GHRS data. Since the other sightlines in the {\it Copernicus} sample have appreciably stronger O I lines, saturation is more of a concern than in the general case of the GHRS sample. For these sightlines, saturation corrections were applied using a single-component Gaussian curve-of-growth and the $b$-value estimates listed in Table 2. The $b$-value estimates are based in part on {\it Copernicus} observations of interstellar Cl I and P II in these sightlines (\cite{jen86}). The impact of the saturation corrections on $N$(O I) ranges from 12\% over the weak-line limit for $\delta$ Sco to 68\% for $\rho$ Oph, and less than 30\% for most of the other sightlines. The quoted errors in the derived O I column densities reflect the uncertainties in these corrections as well as those in the $\lambda$1356 line strength measurements.
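A saturation correction of this kind amounts to inverting the single-component Gaussian curve of growth $W(N,b)$ for $N$. The following sketch is a minimal numerical version of that inversion; the 8 m\AA\ equivalent width and $b = 3$ km s$^{-1}$ are illustrative inputs rather than measured values.
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

F_OI, WAVE = 1.248e-6, 1355.598        # O I 1356 line parameters
C_KMS = 2.998e5                        # speed of light (km/s)

def eqw_mA(logN, b):
    """Equivalent width (mA) of a Gaussian line with column density
    10**logN (cm^-2) and Doppler parameter b (km/s)."""
    tau0 = 1.497e-15 * 10 ** logN * F_OI * WAVE / b  # line-center opacity
    x = np.linspace(-6.0, 6.0, 2001)                 # velocity in units of b
    absorbed = np.trapz(1.0 - np.exp(-tau0 * np.exp(-x ** 2)), x)
    return 1e3 * (WAVE * b / C_KMS) * absorbed       # Angstrom -> mA

def logn_from_eqw(w_mA, b):
    """Invert the curve of growth for log N (monotonic in logN)."""
    return brentq(lambda logN: eqw_mA(logN, b) - w_mA, 12.0, 20.0)

# Illustrative: an 8 mA line with b = 3 km/s
print("%.3f" % logn_from_eqw(8.0, b=3.0))
\end{verbatim}
In the weak-line limit the routine reproduces the linear relation quoted above, while for the illustrative inputs the inferred column lies somewhat above the weak-line estimate, as expected for a mildly saturated line.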
The total hydrogen column densities ($N$(H) $=$ 2$N$(H$_2$) $+$ $N$(H I)) listed in Tables 1 and 2 were determined in the same manner for each sightline in the GHRS and {\it Copernicus} samples. These values reflect the H$_2$ column densities measured by Savage et al.\ (1977) and the weighted means of the Bohlin, Savage, \& Drake (1978) and Diplas \& Savage (1994) $N$(H I) data. The uncertainties in the resulting oxygen abundances (per 10$^6$ H atoms) in Tables 1 and 2 reflect the propagated errors in both $N$(O I) and $N$(H). Taken together, the GHRS sightlines yield a weighted mean interstellar gas-phase oxygen abundance of 10$^6$ O/H $=$ 319 $\pm$ 14 while the {\it Copernicus} data yields 10$^6$ O/H $=$ 361 $\pm$ 20. Although the {\it Copernicus} mean is heavily weighted by the accurate value toward $\zeta$ Oph, both samples are indicative of an interstellar gas-phase O abundance that is appreciably below the solar value of 10$^6$ O/H $=$ 741 $\pm$ 130 (Grevesse \& Noels 1993). The key improvement of the GHRS data over the {\it Copernicus} data is the greater accuracy of the individual GHRS measurements (especially of the weaker O I lines). In the GHRS data, the largest deviation of O/H from the mean value is 18\%, compared to a range in the {\it Copernicus} data of a factor of 3. As reviewed by Jenkins (1987), the interstellar gas-phase abundances of many elements measured by {\it Copernicus} decrease as a function of the mean sightline hydrogen density, $n_H$ $=$ $N$(H)/$r$, where $r$ is the distance to the background star. These elemental depletions from the gas phase reflect both the growth of dust grains in denser clouds and grain destruction in more diffuse environments. Using the stellar distances listed in Diplas \& Savage (1994), we have calculated $n_H$ for each of the sightlines in our GHRS and {\it Copernicus} samples and plotted them versus the corresponding oxygen abundances in Figure 3. As might be expected from Figure 2, there is no significant evidence of variations in the gas-phase O abundance as a function of $n_H$ in either sample. Although the {\it Copernicus} data samples denser sightlines, the GHRS data pins down the oxygen gas abundance in the most diffuse clouds at a level that is completely consistent with the higher density cases. The absence of abundance variations as a function of $n_H$ in the GHRS data suggests that there is negligible exchange between gas and dust in these diffuse sightlines and that the total (gas plus dust) abundance of oxygen must not vary significantly in the local ISM\@. A better barometer of diffuse cloud conditions is the fractional abundance of molecular hydrogen, $f$(H$_2$) $=$ 2$N$(H$_2$)/$N$(H) (\cite{car94b}). Sightlines separate rather distinctly into groups with low and high $f$(H$_2$) values due to the difference between UV transparent and H$_2$ self-shielding environments. Even for weakly depleted elements like Ge (\cite{car94b}) and Zn (\cite{rot95,sem95}), the gas-phase abundances are higher in the low $f$(H$_2$) group than in the high group, signifying both the presence of dust grains and changes in the elemental dust abundance due to interstellar grain growth and/or destruction. Figure 4 clearly shows that the interstellar gas-phase O abundances measured with GHRS are both well-sampled as a function of $f$(H$_2$) and exhibit no dependence on this parameter. Indeed, the mean abundance in the 7 sightlines with log $f$(H$_2$) $<$ -2.0 (10$^6$ O/H $=$ 325 $\pm$ 20) is essentially the same as that in the 6 sightlines with log $f$(H$_2$) $>$ -2.0 (10$^6$ O/H $=$ 312 $\pm$ 19). Thus, any significant reservoir of interstellar oxygen in diffuse clouds other than the atomic gas must be resilient enough to survive in a variety of environments.
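For reference, the weighted means quoted in this section follow the usual inverse-variance prescription; a minimal sketch with illustrative abundance values (not the Table 1 entries):
\begin{verbatim}
import numpy as np

def weighted_mean(x, sigma):
    """Inverse-variance weighted mean and its 1-sigma uncertainty."""
    x, w = np.asarray(x), 1.0 / np.asarray(sigma) ** 2
    return np.sum(w * x) / np.sum(w), 1.0 / np.sqrt(np.sum(w))

# Illustrative 10^6 O/H values and their errors:
o_h = [319.0, 305.0, 330.0, 350.0]
err = [20.0, 24.0, 30.0, 40.0]
print(weighted_mean(o_h, err))
\end{verbatim}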
\section{Discussion} Under the traditional assumption that the {\it cosmic} elemental abundances reflect those in the solar system, the mean gas-phase abundance of interstellar oxygen measured by GHRS implies an O dust fraction of 10$^6$ O/H $\approx$ 420. However, it has been known for some time that an elemental inventory of the likely constituents of interstellar dust yields appreciably less solid-state oxygen than this inferred amount (\cite{gre74,mey89}). In particular, assuming various mixtures of oxygen-bearing grain compounds such as the silicates pyroxene [(Mg,Fe)SiO$_3$] and olivine [(Mg,Fe)$_2$SiO$_4$] and oxides like Fe$_2$O$_3$, it is difficult to increase the O dust fraction much beyond 10$^6$ O/H $\approx$ 180 (Cardelli et al.\ 1996) simply because the requisite metals are far less abundant than oxygen ([O:Si:Mg:Fe]$_{solar}$ $\approx$ [24:1:1:1]). If these metals have total (gas plus dust) underabundances similar to that derived for oxygen, the implied O dust fraction would be 10$^6$ O/H $\approx$ 120 instead of 10$^6$ O/H $\approx$ 180. It would be hard to hide a significant amount of oxygen in molecules like CO or O$_2$ in the diffuse sightlines observed by GHRS without leaving any trace of O abundance variations as a function of $f$(H$_2$). For similar reasons, the ``missing'' oxygen is unlikely to be locked up in icy grain mantles or ice grains (\cite{gre74}). Such carriers would also leave unmistakable signatures like the 3.07 $\mu$m O-H stretch ``ice'' feature that are not observed in diffuse sightlines (\cite{whi88}). Thus, unless there is some other resilient form of oxygen in the diffuse ISM, it appears clear from our GHRS observations that the {\it total} abundance of interstellar O is about 2/3 of the solar value. This result is consistent with the conclusions of the previous GHRS oxygen studies (Meyer et al.\ 1994, Cardelli et al.\ 1996) that used subsets of the complete sightline sample presented here. The possibility that the interstellar oxygen measurements are sampling an overall deficit in local ISM elemental abundances has been enhanced by recent GHRS observations of interstellar krypton. Based on measurements in ten sightlines, \cite{car97} find a mean interstellar gas-phase Kr abundance that is about 60\% of the solar system abundance. Since Kr, as a noble gas, should not be depleted much into dust grains, this gas-phase abundance reflects a true interstellar deficit similar to what we find for oxygen. Furthermore, the abundance of krypton, like oxygen, is remarkably homogeneous from sightline to sightline, independent of diffuse cloud conditions. This homogeneity is reflected in Figure 5 where we plot the interstellar gas-phase O/Kr abundance ratio as a function of $f$(H$_2$) for the GHRS sightlines in common between this study and that of \cite{car97}. The current data are consistent with a picture where the abundances of all of the elements in the local ISM are generally about 2/3 of their solar system values (Snow \& Witt 1995, 1996). The interstellar abundance deficit suggested by the GHRS observations of O and Kr fortifies the results of nearby B star measurements of the current epoch abundances of O and other elements (\cite{gie92,kil92,cun94,kil94}). As discussed earlier, these studies yield median B-star CNO abundances that are also about 2/3 of the solar values.
The implication of this result is that something unusual happened to either the Sun or the local ISM in the context of standard models of Galactic chemical evolution (\cite{aud76}, Timmes et al.\ 1995). The $\pm$0.05 dex spread in the interstellar oxygen abundances is appreciably less than the $\pm$0.2 dex oxygen spread in Orion B stars (Cunha \& Lambert 1994) and the $\pm$0.25 dex Fe abundance spread in the solar-like star sample of Edvardsson et al.\ (1993). If one believes that these stellar abundance spreads are real and not due to observational error, the question arises as to how to make stars with such large abundance variations out of a very well-mixed ISM\@. The GHRS data certainly makes it difficult now to explain the solar anomaly simply as the result of a typical ISM abundance fluctuation. There are three models to explain why the ISM has a lower oxygen abundance than does the Sun. 1. Based on isotopic anomalies involving $^{26}$Al and other elements in meteorites and cosmic rays, the idea that the early solar system was chemically enriched by a local supernova explosion has long been popular (\cite{ree78,lee79,oli82}). At first glance, this idea would seem to be a reasonable explanation for the overabundance of oxygen in the Sun. If the solar system originated in a molecular cloud with active OB star formation, a first generation of massive stars could have evolved quickly and enriched the gas in heavy elements such as oxygen which would later be incorporated in the Sun. Cunha \& Lambert (1994) have found evidence of such a process in the Orion OB association where elements such as oxygen, which is produced in abundance by Type II supernovae, exhibit larger abundance spreads in the B stars than do elements like nitrogen. In the context of the Edvardsson et al.\ (1993) study of solar-like stars, if this kind of cloud self-enrichment process were common, it could also explain the stellar Fe abundance spread as well as the Sun's position near the top. Yet, our GHRS data indicates that the ISM abundance inhomogeneities produced by any such process must be quickly damped out. In particular, our five GHRS Orion sightlines yield a spread of only $\pm$0.05 dex in their interstellar oxygen gas abundances and a mean value of 10$^6$ O/H $=$ 305 $\pm$ 24 which is completely consistent with that of the other GHRS sightlines (10$^6$ O/H $=$ 325 $\pm$ 17). Even if this mixing problem can be accommodated through Galactic hydrodynamical processes (Roy \& Kunth 1995), the supernova enrichment hypothesis still faces the challenge of creating similar overabundances for a variety of elements in the Sun. For example, in addition to O and Kr, the early GHRS results on interstellar nitrogen also yield a 2/3 solar abundance for this element (Cardelli et al.\ 1996). Given the steep relative yield of O to N in Type II supernovae (\cite{oli82}) as compared to their present-day interstellar abundances, it is difficult to understand how one or more such explosions could have produced similar overabundances of these two elements, let alone others, in the protosolar nebula. 2. As discussed by Meyer et al.\ (1994) and Roy \& Kunth (1995), another approach to understanding the underabundance of interstellar oxygen is to invoke a recent infall of metal-poor extragalactic gas in the local Milky Way. The idea of infall, whether gradual or episodic, has long been recognized as a potentially important component of Galactic chemical evolution (\cite{aud76,may81,pit89,chi97}). 
Recent observations of high-velocity gas in the Galactic halo indicate that at least some of these infalling clouds have metallicities as low as 10\% solar (\cite{kun94,lu94}). \cite{com94} have suggested that the impact of a $\approx$10$^6$ M$_{\sun}$ extragalactic cloud with the local Milky Way some 10$^8$ years ago could explain the origin and characteristics of the nearby early-type stars and molecular clouds that constitute Gould's Belt. In terms of the resultant metallicity of the mixed gas, such a collision could also have diluted the heavy element abundances in the local ISM below their solar values. Since this dilution would affect all of the elements in the same way, the similar interstellar underabundances observed for O and Kr could easily be explained through an infall model. Conversely, such a model would have serious problems if any element was found not to exhibit this underabundant pattern. Sulfur is a potential candidate to break this pattern since \cite{fit97} have measured near-solar interstellar gas-phase S abundances toward three stars with high-quality GHRS data. However, these abundances are quite uncertain due to the considerable saturation of the S II $\lambda\lambda$1251,1254,1260 absorption lines and the likely possibility that a significant fraction of the S II (which has an ionization potential of 23.3 eV) originates in H II regions. Another prediction of the local infall hypothesis would be that the abundances just beyond Gould's Belt should be closer to the solar values. In terms of the B stars within a kpc or so, the data are generally inconclusive on this point with O abundance spreads of about $\pm$0.2 dex and no systematic variations found (\cite{geh85,fit92,kil94a,kau94,sma96}). However, in a comprehensive study of B stars over a large range of galactocentric distance (6 $\leq$ $R_g$ $\leq$ 18 kpc), \cite{sma97} find an oxygen abundance gradient of -0.07 $\pm$ 0.01 dex kpc$^{-1}$ that they claim should be representative of the present-day Galactic ISM\@. This large-scale gradient is consistent with that measured for oxygen in H II regions (\cite{sha83,sim95,aff96}) and planetary nebulae (\cite{mac94}). Unfortunately, the small-scale scatter in all of these gradient measures is too large to shed much light on the local infall hypothesis. 3. \cite{wie96} suggest that the Sun actually formed in the more metal-rich ISM at a galactocentric distance of $R_g$ $=$ 6.6 $\pm$ 0.9 kpc and has migrated over the past 4.6 Gyr to its current distance of $R_g$ $=$ 8.5 kpc. This scenario is predicated on a very smooth ISM metallicity gradient and a process of radial stellar diffusion that would lead to both the Sun's enhanced Fe metallicity and the observed $\pm$0.25 dex spread in the Fe abundances of nearby solar-like stars (Edvardsson et al.\ 1993). In terms of our GHRS interstellar O abundances, this idea is attractive because it could explain both the solar oxygen overabundance and how such a well-mixed local ISM could co-exist with the much greater metallicity spreads of local stellar populations. The key challenge for the Wielen et al.\ scenario is working out the basic mechanism through which the stellar orbits can appreciably migrate radially. Furthermore, based on the $\pm$0.2 dex spread of the O abundances in the Orion B stars (Cunha \& Lambert 1994), it is clear that stellar diffusion cannot be responsible for every large stellar abundance spread. All three of these models are subject to future observational tests. 
If the abundances of all of the elements in the local ISM are 2/3 solar, then the model that the Sun was enriched by a local supernova is untenable. Accurate measurements of interstellar abundances at distances greater than 1 kpc will allow us to test models which predict spatial variations of these quantities. \acknowledgments This work was supported by NASA through grant NAG5-2178 to UCLA. \clearpage
\section{Introduction} Photodisintegration of the deuteron in the $\Delta$-resonance region is particularly interesting in order to investigate the $N \Delta$-interaction. None of the models developed so far is able to describe in a satisfactory manner the experimental data over the whole $\Delta$-resonance region (for a review see \cite{ArS91}). Among the most sophisticated approaches are the unitary three-body model of Tanabe and Ohta \cite{TaO89} and the coupled channel approach (CC) of Wilhelm and Arenh\"ovel \cite{WiA93}. In both models, all free parameters were fixed in advance by fitting $NN$- and $\pi N$-scattering, and $\pi$-photoproduction on the nucleon. Consequently, no adjustable parameters remained for deuteron photodisintegration. However, it turned out that both approaches considerably underestimated the total cross section in the $\Delta$-region by about 20-30\% \cite{{TaO89},{WiA93}}. Another failure was the wrong shape of the differential cross section and the photon asymmetry, especially at photon energies above 300 MeV \cite{{TaO89},{WiA93},{Leg95}}. In these calculations, one of the principal problems is the question of how to fix the $\gamma N \Delta$-coupling $G^{M1}_{\Delta N}(E_{\Delta})$ in the $M1 \,\, N \Delta$-current \begin{equation}\label{ndcurrent} \vec{\jmath}^{\,\,\, M1}_{\Delta N}(E_{\Delta}, \vec{k}) = \frac{G^{M1}_{\Delta N}(E_{\Delta})}{2M} \,\, \tau_{\Delta N,0} \,\, i\, \vec{ \sigma}_{\Delta N} \times \vec{k} \, \, , \end{equation} where $E_{\Delta}$ is the energy available for the internal excitation of the $\Delta$ and $\vec{k}$ the momentum of the incoming photon. Wilhelm et al.\ as well as Tanabe et al.\ have determined $G^{M1}_{\Delta N}(E_{\Delta})$ by fitting the $M_{1+}(3/2)$-multipole of pion photoproduction on the nucleon. The full pion production amplitude $t_{\pi \gamma}(E_{\Delta})$ in the $(3,3)$-channel can be written as \begin{equation}\label{deltat} t_{\pi \gamma}(E_{\Delta}) = t^{B}_{\pi \gamma}(E_{\Delta}) - \frac{v^{\dag}_{\Delta} \vec{\epsilon} \cdot \vec{\jmath}^{\,\,\, M1}_{\Delta N}(E_{\Delta},\vec{k})}{ E_{\Delta}-M^{0}_{\Delta} -\Sigma_{\Delta}(E_{\Delta})} \,\, , \end{equation} where $t^{B}_{\pi \gamma}(E_{\Delta})$ is the nonresonant Born amplitude. While in \cite{WiA93} an effective $\gamma N \Delta$-coupling $G^{M1}_{\Delta N}(E_{\Delta})$ and the model of \cite{PoS87} for the bare $\Delta$-mass $M^{0}_{\Delta}$, the $\Delta$-self energy $\Sigma_{\Delta}(E_{\Delta})$ and the $\Delta \pi N$-vertex $v^{\dag}_{\Delta}$ have been used, we follow here the work of Tanabe and Ohta (model A in \cite{TaO85}). $G^{M1}_{\Delta N}(E_{\Delta})$ contains, besides the bare $\gamma N \Delta$-coupling, the contributions from nonresonant pion rescattering (Fig.\ \ref{tmatrix}), so that it becomes complex and energy dependent. \begin{figure}[htb] \centerline{\psfig{figure=figure1.eps,width=10cm,angle=270}} \vspace{-2.2cm} \caption{(a) The $M_{1+}(\frac{3}{2})$-multipole amplitude of pion photoproduction consisting of a Born and a resonant amplitude. (b) The dressed $\gamma N \Delta$-coupling, including nonresonant pion rescattering.} \label{tmatrix} \end{figure} The Born terms contributing to the $(3,3)$-channel are the crossed $N$-pole and $\pi$-pole graphs. When embedded into the two-nucleon system, these Born terms become part of the two-body recoil and the $\pi$-meson currents, respectively (Fig.\ \ref{vergleich}).
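The structure of the resonant term in Eq.\ (\ref{deltat}), a dressed vertex divided by a complex, energy dependent resonance denominator, can be illustrated numerically. The sketch below uses a constant coupling and a toy $p$-wave self-energy $\Sigma_{\Delta}(E_{\Delta}) = -\mathrm{i}\,\Gamma(E_{\Delta})/2$; these parametrizations are illustrative stand-ins and not the actual models of \cite{TaO85} or \cite{PoS87}.
\begin{verbatim}
import numpy as np

M_DELTA0 = 1232.0   # illustrative bare Delta mass (MeV)
GAMMA0   = 115.0    # illustrative width at resonance (MeV)
Q0       = 227.0    # pion momentum at resonance (MeV)

def q_pion(E):
    """Pion momentum in the piN c.m. frame (MeV); E is the energy
    available for the internal excitation of the Delta."""
    m_pi, m_n = 139.6, 938.9
    s = E ** 2
    return np.sqrt((s - (m_n + m_pi) ** 2) * (s - (m_n - m_pi) ** 2)) / (2.0 * E)

def sigma_delta(E):
    """Toy self-energy: Sigma = -i Gamma(E)/2 with a p-wave
    energy-dependent width, Gamma ~ q**3."""
    q = q_pion(E)
    return -0.5j * GAMMA0 * (q / Q0) ** 3 * M_DELTA0 / E

def t_resonant(E, g_m1=1.0):
    """Resonant part of eq. (2): vertex / (E - M0 - Sigma(E))."""
    return -g_m1 / (E - M_DELTA0 - sigma_delta(E))

E = np.linspace(1100.0, 1400.0, 7)
for e, t in zip(E, t_resonant(E)):
    print("%.0f  |t|^2 = %.4e" % (e, abs(t) ** 2))
\end{verbatim}
The modulus of the amplitude peaks near $E_{\Delta} \approx 1232$ MeV, the energy dependent width skewing the resonance shape in the familiar way.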
\begin{figure}[htb] \centerline{\psfig{figure=figure2.eps,width=7cm,angle=270}} \vspace{-.7cm} \caption{The Born terms contributing to the $M_{1+}(\frac{3}{2})$-multipole amplitude of pion photoproduction (upper part) and their correspondence in the two-body recoil and meson currents (lower part).} \label{vergleich} \end{figure} In static calculations, however, the recoil current is not present due to its cancellation against the wave function renormalization current \cite{GaH76}. A similar, but less serious problem arises in the treatment of the pion pole diagrams compared to the meson current of static MEC. It had already been conjectured in \cite{WiA93} that this inconsistent treatment of pion exchange may lead to the observed underestimation of the total cross section in their coupled channel approach, because by incorporating the Born terms effectively into an increased $M1\,\, \Delta$-excitation strength, a satisfactory agreement with the data could be achieved. In order to avoid these shortcomings, we have included for the first time in a coupled channel approach complete retardation in the $\pi$-exchange contributions to potentials and MECs. \section{The Model} Concerning the potential models which enter our coupled channel approach, we have chosen for the retarded NN-potential an improved version of the energy dependent Bonn-OBEPT developed by Elster et al.\, \cite{MaH88}, which has to be renormalized via subtraction of an $N \Delta$-box graph \cite{PoS87}. Transitions between $NN$- and $N \Delta$-space are mediated by retarded $\pi$- and $\rho$-exchange, whose form factors are fixed by fitting the $^1D_2$ $NN$-partial wave. In order to ensure unitarity up to the $2 \pi$-threshold, we consider in addition the formation of an intermediate $NN$-state with the quantum numbers of the deuteron and a pion as spectator (denoted for simplicity by the $\pi d$-channel). Concerning the e.m.\ part of our model, the $\Delta$-excitation is the most important photoabsorption mechanism above $\pi$-threshold. It is described by the current operator in Eq.\ (\ref{ndcurrent}), neglecting small E2 contributions. Concerning gauge invariance, we are able to show that current conservation for the $\pi$-retarded MECs is fulfilled if we consider, besides the usual vertex-, meson- and contact-MECs, the recoil current as well as the recoil and additional two-body charge densities (Fig.\ \ref{mecdarstellung}). \begin{figure}[htb] \centerline{\psfig{figure=figure3.eps,width=10cm,angle=270}} \vspace{-.7cm} \caption{Graphical representation of the retarded $\pi$-MECs.} \label{mecdarstellung} \end{figure} Whereas the effect of the additional two-body charge terms is very small, the recoil contributions turn out to be quite important (see discussion below). They do not appear in static approaches due to their cancellation against the wave function renormalization contributions \cite{GaH76}, which have their origin in the renormalization of the baryonic states when eliminating the mesonic wave function components. This concept breaks down beyond the $\pi$-threshold if full $\pi$-retardation is considered, since the $\pi N N $-component can be on-shell. Therefore, we do not orthonormalize and no wave function renormalization contributions appear. Consequently, the recoil current and charge densities have to be included.
Because the pion production model of Tanabe and Ohta \cite{TaO85} effectively incorporates $\omega$-exchange, we include in addition the leading order $\rho \pi \gamma$- and $\omega \pi \gamma$-currents, which are purely transverse \cite{RiG97}. Because the $\rho$-mass is rather large, retardation in the $\rho$-MEC is expected to be unimportant and is therefore not considered in this work. \section{Results} Our results for the total photodisintegration cross section are shown in Fig.\ \ref{sigtot}. Similar to \cite{WiA93}, the static calculation considerably underestimates the data. Inclusion of retardation in the hadronic interaction lowers the cross section even further; this is more than compensated by retardation in the $\pi$-MEC, which leads to a strong enhancement that can be traced back essentially to the inclusion of the recoil contributions. The inclusion of the $\pi d$-channel and the $\rho \pi \gamma / \omega \pi \gamma$-MECs enhances the cross section further, so that our full calculation now gives quite a good agreement with the experimental data over the whole energy range. In Figs.\ \ref{wqdiff} and \ref{sigma}, we show differential cross sections and photon asymmetries for various energies. Whereas the differential cross section is in satisfactory agreement with the data, we slightly underestimate the absolute size of the asymmetry. However, in contrast to \cite{{TaO89},{WiA93}}, we are able to reproduce quite well the shape of these two observables at higher energies. \vspace{0.5cm} \centerline{\bf ACKNOWLEDGMENT} We would like to thank S.\ Wartenberg from the A2 collaboration for providing us with the preliminary results on the photon asymmetry prior to publication \cite{War97}. \begin{figure}[htp] \centerline{\psfig{figure=figure4.ps,width=7 cm,angle=90}} \vspace{-.7cm} \caption{Total cross section for $\gamma d \rightarrow p n$ as a function of the photon laboratory energy $E_{\gamma}$ in comparison with experiment {\protect \cite{{Leg95},{Cra96}}}. Dotted: static OBEPR-calculation; dash-dot: retardation switched on only in the hadronic part but with static MECs; full: calculation with complete retardation, $\pi d$-channel and $\rho \pi \gamma / \omega \pi \gamma$-MECs.} \label{sigtot} \end{figure} \begin{figure}[htb] \centerline{\psfig{figure=figure5.ps,width=15.8cm,angle=90}} \vspace{-.7cm} \caption{Differential cross section for various energies in comparison with experiment {\protect \cite{{Leg95},{Cra96}}}. Notation of the curves as in Fig.\ {\protect \ref{sigtot}}.} \label{wqdiff} \end{figure} \begin{figure}[htb] \centerline{\psfig{figure=figure6.ps,width=15.8cm,angle=90}} \vspace{-.7cm} \caption{Photon asymmetry $\Sigma$ for various energies in comparison with experiment {\protect \cite{{Leg95},{War97},{Ada91}}}. Notation of the curves as in Fig.\ {\protect \ref{sigtot}}.} \label{sigma} \end{figure}
\section{Introduction} In this paper we intend to elucidate the algebraic part of our approach to the quantum integrable models in 1+1 dimensional discrete space-time, developed during the last five years. We shall not give a complete survey of our publications (Faddeev and Volkov 1993, 1994; Faddeev 1994; Volkov 1997a,b) because it would take too much space. We believe that the algebraic side is most instructive and original; the more analytic side will be mentioned only briefly, with references to (Faddeev 1994). Discrete space-time models (DSTM) in soliton theory have acquired a prominent role from the very advent of this part of mathematical physics. The first examples of such models were proposed by Hirota (1977) as discrete analogues of the major continuous soliton models. Subsequent development was carried on mostly by the Dutch group, see (Nijhoff and Capel 1995) and references therein. The recent resurgence of interest towards DSTM is connected with several new ideas: 1. The nonlinear equations for the family of transfer matrices $ T_{S}(\lambda) $ in the framework of the Thermodynamic Bethe Ansatz can be considered as DSTM with spin $ S $ and rapidity $ \lambda $ being discrete variables (Kl\"umper and Pearce 1992; Kuniba {\em et al.} 1994; Krichever {\em et al.} 1996). Of course, the rapidity assumes continuous values, but only the discrete shift $\lambda \to q \lambda $ enters the equations. 2. A.~Bobenko and U.~Pinkal with collaborators develop the discrete analogue of classical continuous 2-dimensional differential geometry, see (Bobenko and Pinkal 199?) and references therein. 3. The quantum version of DSTM revealed a new type of symmetry, giving the discrete analogue of the current algebra and the Virasoro algebra (Faddeev and Volkov 1993; Volkov 1997c). Moreover, quantum DSTM seems to be rather universal, giving both massless (conformal) and massive models in the continuous limit. For evident methodological reasons we shall illustrate our approach on a concrete example. For that we have chosen the most prominent model of physics and geometry --- the Liouville model. The more involved Sine-Gordon model will be touched upon only briefly. Incidentally, the latter was already the subject of our earlier publications. We shall not discuss the usual paraphernalia of integrable models such as the zero-curvature representation, the Lax equation and the Bethe ansatz. We shall simply present the main dynamical object --- the evolution operator, realizing the elementary time-shift. Its natural place inside the Algebraic Bethe Ansatz is discussed in recent lectures of one of the authors (Faddeev 1996). We believe that our explicit formulas are interesting enough as they stand, so we want to present them in their clearest form, independent of the original derivation. We begin with a reminder of the classical Liouville model and its Hamiltonian interpretation. This will play the role of the starting point for the subsequent deformations: discretization of the space on which the Hamiltonian data are given, and quantization. As a result we shall get a suitable algebra of observables. Finally the time evolution will be defined in terms of a certain automorphism of this algebra. The discrete time equations of motion produced by this automorphism will be shown to be a natural analogue of the corresponding classical equations. The integrability of the model will be confirmed by presenting an explicit set of conservation laws.
\section{Classical differential equation} As the goals declared in the Introduction suggest, this time we shall consider the evergreen Liouville Equation (LE) \[ \frac{\partial^2\varphi}{\partial t^2} -\frac{\partial^2\varphi}{\partial x^2} =e^{-2\varphi} \] as a Hamiltonian 1+1-dimensional field theory, with $x$ denoting the spatial coordinate and $t$ serving as time. The Cauchy data \[ \varphi(x,t)|_{_{t=0}}=\varphi(x)\qquad\qquad \frac{\partial\varphi}{\partial t}(x,t)|_{_{t=0}} =\varpi(x) \] can be equipped with the canonical Poisson bracket \[ \{\varpi(x),\varphi(y)\}=\delta(x-y)\qquad\qquad \{\varpi(x),\varpi(y)\}=\{\varphi(x),\varphi(y)\}=0 \] so that the evolution goes the Hamiltonian way in its most familiar form \[ \dot{\varphi}=\{H,\varphi\}=\varpi\qquad\qquad \dot{\varpi}=\{H,\varpi\}=\varphi''+e^{-2\varphi}, \] the Hamiltonian being \[ H={\scriptstyle\frac{1}{2}}\int dx\;(\varpi^2+(\varphi')^2+e^{-2\varphi}). \] The periodic boundary conditions \[ \varphi(x+2\pi)=\varphi(x)\qquad\qquad \varpi(x+2\pi)=\varpi(x) \] pose no problem provided the Poisson bracket employs a 2$\pi$-periodic delta-function rather than the ordinary one. Of the equation's specific features, the Liouville formula \[ e^{-2\varphi(x,t)} =\frac{f'(\xi)g'(\tau)}{(f(\xi)-g(\tau))^2} \] \[ x=\xi-\tau\qquad\qquad t=\xi+\tau \] making a solution out of two arbitrary functions is the ultimate one. It is there, according to (Gervais and Neveu 1982; Faddeev and Takhtajan 1985), where the real Hamiltonian theory of LE begins. We shall not reach that high in this paper. \section{Classical difference equation} The best lattice approximation to LE \[ e^{\varphi(x,t-\Delta)}e^{\varphi(x,t+\Delta)} -e^{\varphi(x-\Delta,t)}e^{\varphi(x+\Delta,t)} =\Delta^2 \] is due to R. Hirota (1987), like virtually every decent difference-difference equation. In order to make its transformation into LE in the limit $\Delta\rightarrow 0$ more obvious, one may recompose it like this: \[ \sinh{\scriptstyle\frac{1}{2}}\bigg(\varphi(x,t-\Delta)+\varphi(x,t+\Delta) -\varphi(x-\Delta,t)-\varphi(x+\Delta,t)\bigg) \] \[ =\Delta^2 e^{-{\scriptstyle\frac{1}{2}}(\varphi(x,t-\Delta)+\varphi(x,t+\Delta) +\varphi(x-\Delta,t)+\varphi(x+\Delta,t))} . \] Now that the mission of the lattice spacing $\Delta$ is over, it is only natural to have everything suitably rescaled \[ (x,t)\longrightarrow(\Delta x,\Delta t) \] \[ e^\varphi\longrightarrow \Delta\, e^\varphi \] or just set $\Delta$=1. Either way, the Difference Liouville Equation (DLE) takes its final form \[ e^{\varphi_{j,k+1}}e^{\varphi_{j,k-1}} -e^{\varphi_{j+1,k}}e^{\varphi_{j-1,k}}=1 \] \[ j+k\equiv1\pmod2 \] where the change to subscripts manifests that the `space-time' is now a ${\mathbb{Z}\,}^{2}$ lattice, while the second line specifies which half of that lattice the equation will actually occupy. This half itself makes a square lattice turned by forty-five degrees with respect to the original one and twice less dense. The values of $\varphi$ on a `saw' formed by vertices with $k$ equal either 0 or 1 \[ \varphi_{j,0}=\varphi_{j}\qquad\mbox{for even $j$} \] \[ \varphi_{j,1}=\varphi_{j}\qquad\mbox{for odd $j$} \] make reasonable Cauchy data, that is, they are just sufficient to have the whole system resolved step by step. This has everything to do with the second-order nature of the original continuum equation, whose Cauchy data combine the present and a little bit of the future, represented by $\varphi(x)$ and $\varpi(x)$ respectively.
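Since DLE explicitly resolves for the top value at every vertex, the step-by-step evolution from the saw is immediate. The sketch below marches the saw upward in $k$; the periodic Cauchy data of period $L=2M$ with $M$ odd (the choice adopted in the next Section) and the random initial values are assumptions made purely for illustration.
\begin{verbatim}
import numpy as np

M = 5                                  # period L = 2M with M odd
L = 2 * M
rng = np.random.default_rng(0)
phi = rng.normal(0.0, 0.3, size=L)     # Cauchy data phi_j on the `saw'

def dle_half_step(phi, parity):
    """Advance the saw by one unit of k: for every j with
    j % 2 == parity, replace the stored phi_{j,k-1} by phi_{j,k+1}
    using DLE,
        exp(phi_{j,k+1}) exp(phi_{j,k-1})
            = 1 + exp(phi_{j+1,k}) exp(phi_{j-1,k}),
    with the neighbours j +- 1 taken mod L."""
    new = phi.copy()
    for j in range(L):
        if j % 2 == parity:
            s = phi[(j + 1) % L] + phi[(j - 1) % L]
            new[j] = np.log1p(np.exp(s)) - phi[j]
    return new

# Even sites hold phi_{j,0}, odd sites phi_{j,1}; the even sites
# are the first to be pushed up (k: 0 -> 2), then the odd ones, etc.
for step in range(8):
    phi = dle_half_step(phi, parity=step % 2)
print(phi)
\end{verbatim}
Each pass lifts one parity class by two units of $k$, so that the whole saw moves up by one unit per pass.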
Quite expectedly, there exists a Poisson bracket preserved under the evolution along the $k$-direction governed by DLE. However, it turns out to be more complicated than one might have wished a lattice deformation of the canonical one to be: \[ \{\varphi_i,\varphi_j\}=\varsigma(i,j) \] with \[ \varsigma(i,j)=\left\{\begin{array}{l} 0\qquad\qquad\mbox{if $i-j\equiv 1\pmod2$}\\\\ (-1)^{{\scriptstyle\frac{1}{2}}(i+j+1)}\mathrm{sign}(i-j)\qquad \mbox{otherwise}\end{array}\right. \] Such is the price for the ultimate simplicity of the equation. This would be too much if we had lost the option of a periodic boundary condition \[ \varphi_{j+L}=\varphi_j. \] Fortunately, we have not. If the period is chosen properly \[ L=2M\qquad\qquad M\equiv1\pmod2 \] the bracket remains intact provided the above description of $\varsigma$ applies when $|i-j|\leq L$ and extends periodically \[ \varsigma(i\pm L,j)=\varsigma(i,j) \] elsewhere. Those still insisting on an easier bracket can change variables \[ \phi_j={\scriptstyle\frac{1}{2}}(\varphi_{j+1}+\varphi_{j-1}) \] and have it \begin{eqnarray*}\\ && \{\phi_i,\phi_j\}=0\qquad\mbox{if $|i-j|\neq 1$}\\\\ && \{\phi_{j-1},\phi_j\}=\frac{(-1)^j}{2} \\\end{eqnarray*} at the expense of a busier equation \[ e^{2\phi_{j,k+1}}e^{2\phi_{j,k-1}} =(1+e^{2\phi_{j+1,k}})(1+e^{2\phi_{j-1,k}}) \] \[ j+k\equiv0\pmod2. \] Either way, the prospect of dealing with discrete Poisson maps is hardly encouraging. That is why we choose to leave the classical equations alone and go quantum. Before we do, let us round out the classical part with a beautiful, if irrelevant to our current agenda, discrete Liouville formula: \[ e^{-2\phi_{j,k}}=e^{-\varphi_{j+1,k}-\varphi_{j-1,k}} =\frac{(f_{m+1}-f_{m})(g_{n+1}-g_{n})} {(f_{m+1}-g_{n})(f_{m}-g_{n+1})} \] \[ j=m-n\qquad\qquad k=m+n+1 . \] \section{Algebra of observables} One dilemma about quantization is whether to develop it in terms of the bare $\varphi$'s or to stick to the variables actually entering the equation, that is, the exponents \[ v_j=e^{\varphi_j}\qquad\qquad \{v_i,v_j\}=\varsigma(i,j)v_i v_j \] The respective Heisenberg- and Weyl-style quantum algebras are \[ [\boldsymbol{\varphi}_i,\boldsymbol{\varphi}_j]=\mathrm{i}\hbar\gamma\varsigma(i,j) \] \[ \boldsymbol{\varphi}_{j+L}=\boldsymbol{\varphi}_j \] with the usual lot on the r.h.s. comprising the imaginary unit $\mathrm{i}$, the Planck constant $\hbar$ and the coupling constant $\gamma$; and \[ \mathfrak{v}_i\mathfrak{v}_j=q^{\varsigma(i,j)}\mathfrak{v}_j\mathfrak{v}_i \] \[ \mathfrak{v}_{j+L}=\mathfrak{v}_j \] with everything packed into a single $q$uantisation constant \[ q=e^{\mathrm{i}\hbar\gamma}. \] The Weyl-style algebra may be viewed as a subalgebra of the Heisenberg one \[ \mathfrak{v}_j=e^{\boldsymbol{\varphi}_j} \] but not the other way round. Roughly speaking, the latter also accommodates non-integer powers \[ \mathfrak{v}^\alpha_j=e^{\alpha\boldsymbol{\varphi}_j} \] not allowed in the former. Another dilemma is whether to place $q$ on the unit circle or not. The first option is obviously involution-friendly. It allows for both unitary \[ \mathfrak{v}_i^*=-\mathfrak{v}^{-1}_i \] and selfadjoint \[ \mathfrak{v}_i^*=\mathfrak{v}^{}_i \] pictures; the former offers the luxury of dealing with bounded operators, if at the expense of complications in representation theory due to the arithmetics of $q$, while the latter is actually the one relevant for the true Liouville model.
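As a side illustration of ours (not taken from the original text): when $q$ lies on the unit circle and is a root of unity, exchange relations of the Weyl type admit finite-dimensional representations by the classic clock and shift matrices; the dimension $N$ below is an arbitrary illustrative choice. This is precisely where the arithmetics of $q$ just mentioned enters the representation theory.
\begin{verbatim}
import numpy as np

N = 7
q = np.exp(2j * np.pi / N)           # q on the unit circle with q^N = 1

# `clock' and `shift' matrices: a finite-dimensional Weyl pair
u = np.diag(q ** np.arange(N))       # u e_n = q^n e_n
v = np.roll(np.eye(N), 1, axis=0)    # v e_n = e_{n+1 mod N}

print(np.allclose(u @ v, q * v @ u))                          # True: u v = q v u
print(np.allclose(np.linalg.matrix_power(v, N), np.eye(N)))   # v^N = 1
\end{verbatim}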
On the other hand, $q$ inside (or outside) the circle is favoured in q-algebra, but whether it is good for something else remains to be seen. We choose not to take sides for the time being and conclude the Section on a more practical note. Let us introduce, for future use, quantum counterparts of the $e^{2\phi}$-variables \[ \mathfrak{w}_j=\mathfrak{v}_{j+1}\mathfrak{v}_{j-1}=\mathfrak{v}_{j-1}\mathfrak{v}_{j+1} \] and compile a list of the emerging commutation relations \begin{eqnarray*}\\ && \mathfrak{v}_j\mathfrak{w}_j=q^{2(-1)^j}\mathfrak{w}_j\mathfrak{v}_j \\\\ && \mathfrak{w}_{j-1}\mathfrak{w}_j=q^{2(-1)^j}\mathfrak{w}_j\mathfrak{w}_{j-1} \\\\ && \mathfrak{v}_i\mathfrak{w}_j=\mathfrak{w}_j\mathfrak{v}_i\qquad \mbox{if $i\neq j\pmod{L}$} \\\\ && \mathfrak{w}_i\mathfrak{w}_j=\mathfrak{w}_j\mathfrak{w}_i\qquad \mbox{if $|i-j|\neq 1\pmod{L}$}. \\\end{eqnarray*} \section{Evolution operator} Given an invertible operator $\mathfrak{Q}$, one can make the algebra of observables `evolve' \[ \cdots\longmapsto\mathfrak{Q}\mathfrak{z}\mathfrak{Q}^{-1}\longmapsto\mathfrak{z}\longmapsto \mathfrak{Q}^{-1}\mathfrak{z}\mathfrak{Q}\longmapsto \mathfrak{Q}^{-2}\mathfrak{z}\mathfrak{Q}^{2}\longmapsto\cdots \] hoping that the evolution of the generators \[ \mathfrak{v}_{j,k+2}=\mathfrak{Q}^{-1}\mathfrak{v}_{j,k}\mathfrak{Q}\qquad j+k\equiv0\pmod2 \] \[ \mathfrak{v}_{2a,0}=\mathfrak{v}_{2a}\qquad\qquad\quad\mathfrak{v}_{2a-1,1}=\mathfrak{v}_{2a-1}\] manages to solve some nice and local equations, for instance, \[ \mathfrak{v}_{j,k+1}\mathfrak{v}_{j,k-1}-q^{-1}\mathfrak{v}_{j-1,k}\mathfrak{v}_{j+1,k}=1. \] As a matter of fact, this is exactly what happens if \[ \mathfrak{Q}=\prod_{a=1}^{M}\epsilon(\mathfrak{w}_{2a-1})\;\; \mathfrak{F}\;\prod_{a=1}^{M}\epsilon(\mathfrak{w}_{2a}) \] provided $\mathfrak{F}$ is the `flip' operator \[ \mathfrak{F}^{-1}\mathfrak{v}^{}_j \mathfrak{F}=\mathfrak{v}^{-1}_j \] while the function $\epsilon$ solves the following functional equation: \[ \frac{\epsilon(qz)}{\epsilon(q^{-1}z)}=\frac{1}{1+z} . \] Indeed, let us plug the definition of the $\mathfrak{v}_{j,k}$'s into the hypothetical equation \begin{eqnarray*}\\ \mathfrak{Q}^{-b-1}\mathfrak{v}_{2a}\mathfrak{Q}^{b+1}\mathfrak{Q}^{-b}\mathfrak{v}_{2a}\mathfrak{Q}^{b}-q^{-1} \mathfrak{Q}^{-b}\mathfrak{v}_{2a-1}\mathfrak{Q}^{b}\mathfrak{Q}^{-b}\mathfrak{v}_{2a+1}\mathfrak{Q}^{b}=1 \\\\ \mathfrak{Q}^{-b}\mathfrak{v}_{2a-1}\mathfrak{Q}^{b}\mathfrak{Q}^{-b+1}\mathfrak{v}_{2a-1}\mathfrak{Q}^{b-1} -q^{-1}\mathfrak{Q}^{-b}\mathfrak{v}_{2a-2}\mathfrak{Q}^{b}\mathfrak{Q}^{-b}\mathfrak{v}_{2a}\mathfrak{Q}^{b}=1\\\end{eqnarray*} and dispose of as many $\mathfrak{Q}$'s as possible: \begin{eqnarray*}\\ \mathfrak{v}_{2a}\mathfrak{Q}\mathfrak{v}_{2a} -q^{-1}\mathfrak{Q}\mathfrak{v}_{2a-1}\mathfrak{v}_{2a+1}=\mathfrak{Q}&&\\\\ \mathfrak{v}_{2a-1}\mathfrak{Q}\mathfrak{v}_{2a-1} -q^{-1}\mathfrak{v}_{2a-2}\mathfrak{v}_{2a}\mathfrak{Q}=\mathfrak{Q}&& . \\\end{eqnarray*} Then all the $\epsilon(\mathfrak{w})$ factors but one go the same way, which results in \begin{eqnarray*}\\ \mathfrak{v}_{2a}\mathfrak{F}\epsilon(\mathfrak{w}_{2a})\mathfrak{v}_{2a} -q^{-1}\mathfrak{F}\epsilon(\mathfrak{w}_{2a})\mathfrak{v}_{2a-1}\mathfrak{v}_{2a+1} &=&\mathfrak{F}\epsilon(\mathfrak{w}_{2a}) \\\\ \mathfrak{v}_{2a-1}\epsilon(\mathfrak{w}_{2a-1})\mathfrak{F}\mathfrak{v}_{2a-1} -q^{-1}\mathfrak{v}_{2a-2}\mathfrak{v}_{2a}\epsilon(\mathfrak{w}_{2a-1})\mathfrak{F} &=&\epsilon(\mathfrak{w}_{2a-1})\mathfrak{F} \quad .
\\\end{eqnarray*} Once $\mathfrak{F}$ is gone too, we are left with \begin{eqnarray*}\\ \mathfrak{v}^{-1}_{2a}\epsilon(\mathfrak{w}^{}_{2a})\mathfrak{v}^{}_{2a} -q^{-1}\epsilon(\mathfrak{w}^{}_{2a})\mathfrak{v}^{}_{2a-1}\mathfrak{v}^{}_{2a+1} &=&\epsilon(\mathfrak{w}^{}_{2a}) \\\\ \mathfrak{v}^{}_{2a-1}\epsilon(\mathfrak{w}^{}_{2a-1})\mathfrak{v}^{-1}_{2a-1} -q^{-1}\mathfrak{v}^{}_{2a-2}\mathfrak{v}^{}_{2a}\epsilon(\mathfrak{w}^{}_{2a-1}) &=&\epsilon(\mathfrak{w}^{}_{2a-1}) \\\end{eqnarray*} which is nothing but the above functional equation mated with the commutation relations which closed the last Section: \[ \mathfrak{v}^{-1}_{2a}\epsilon(\mathfrak{w}^{}_{2a})\mathfrak{v}^{}_{2a} =\epsilon(q^{-2}\mathfrak{w}^{}_{2a})\qquad\qquad \mathfrak{v}^{}_{2a-1}\epsilon(\mathfrak{w}^{}_{2a-1})\mathfrak{v}^{-1}_{2a-1} =\epsilon(q^{-2}\mathfrak{w}^{}_{2a-1}) \] \[ \epsilon(q^{-2}\mathfrak{w}_{j})-q^{-1}\epsilon(\mathfrak{w}_{j})\mathfrak{w}_{j} =\epsilon(q^{-2}\mathfrak{w}_{j}) -q^{-1}\mathfrak{w}_{j}\epsilon(\mathfrak{w}_{j})=\epsilon(\mathfrak{w}_{j})\quad. \] So, since everything eventually reduces to that functional equation, let us see if it can be solved. \section{q-exponent} Indeed, the equation in question \[ \frac{\epsilon(qz)}{\epsilon(q^{-1}z)}=\frac{1}{1+z} \] is readily fulfilled by those ubiquitous q-exponents \begin{eqnarray*}\\ &&\epsilon(z)=(-qz;q^2)^{}_\infty\qquad \mbox{- good for $|q|<1$} \\\\ && \epsilon(z)=\frac{1}{(-q^{-1}z;q^{-2})^{}_\infty}\qquad \mbox{- good for $|q|>1$} \\\end{eqnarray*} where \[ (x;y)^{}_\infty\equiv\prod_{p=0}^\infty(1-xy^p). \] There is no solution entire in $z$ if $|q|=1$. For those who opted for the Weyl-style algebra of observables (see Section 3) this is the end of the story, and not a happy one in the $|q|=1$ case, where the equations of motion survive but the evolution automorphism behind them turns outer. The Heisenberg way has more solutions at its disposal: \[ \epsilon_{\mathrm{Heisenberg}}(z)=\epsilon_{\mathrm{Weyl}}(z) \times\mbox{any function}(z^\theta) \] with $\theta$ being the first solution \[ \theta={\frac{\pi}{\hbar\gamma}} \] to the equation \[ q^{2\theta}=1 \] with a clear purpose: \[ \mathfrak{v}^\theta_i\mathfrak{w}^{}_j=\mathfrak{w}^{}_j\mathfrak{v}^\theta_i\qquad\qquad \mathfrak{v}^{}_i\mathfrak{w}^\theta_j=\mathfrak{w}^\theta_j\mathfrak{v}^{}_i . \] Among them we find the one capable of surviving the $|q|\rightarrow1$ limit (Faddeev 1994): \[ \boldsymbol{\epsilon}_q(z)=\boldsymbol{\epsilon}_{q^{\theta^2}}(z^\theta)=\left\{\begin{array}{ll} \displaystyle\frac{(-qz;q^2)^{}_\infty} {(-q^{-\theta^2}z^\theta;q^{-2\theta^2})^{}_\infty} \qquad&|q|<1\\\\ \displaystyle \frac{(-q^{\theta^2}z^\theta;q^{2\theta^2})^{}_\infty} {(-q^{-1}z;q^{-2})^{}_\infty}&|q|>1.\end{array}\right. \] It is plain to see what makes it so special. {\em Duality} is the word. We get two functional equations \[ \frac{\boldsymbol{\epsilon}_q(qz)}{\boldsymbol{\epsilon}_q(q^{-1}z)}=\frac{1}{1+z} \qquad\qquad\frac{\boldsymbol{\epsilon}_{q^{\theta^2}}(q^{\theta^2}z^\theta)} {\boldsymbol{\epsilon}_{q^{\theta^2}}(q^{-\theta^2}z^\theta)}=\frac{1}{1+z^\theta} \] satisfied at once.
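For the $|q|<1$ branch the basic functional equation can be checked directly against a truncation of the infinite product; the following sketch of ours does so numerically (the truncation depth and the sample point are arbitrary choices). The dual equation in $z^\theta$ can be checked for $\boldsymbol{\epsilon}_q$ in exactly the same way.
\begin{verbatim}
import numpy as np

def qpoch(x, y, terms=200):
    # truncated (x; y)_infinity = prod_{p>=0} (1 - x y^p), for |y| < 1
    return np.prod(1 - x * y ** np.arange(terms))

def eps(z, q):
    # epsilon(z) = (-q z; q^2)_infinity, the |q| < 1 solution above
    return qpoch(-q * z, q**2)

q, z = 0.6, 0.3 + 0.2j
lhs = eps(q * z, q) / eps(z / q, q)
rhs = 1 / (1 + z)
print(abs(lhs - rhs))    # ~1e-16: epsilon(qz)/epsilon(z/q) = 1/(1+z)
\end{verbatim}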
Consequently, the updated evolution operator \[ \mathfrak{Q}=\prod_{a=1}^{M}\boldsymbol{\epsilon}_q(\mathfrak{w}^{}_{2a-1})\;\; \mathfrak{F}\;\prod_{a=1}^{M}\boldsymbol{\epsilon}_q(\mathfrak{w}^{}_{2a}) =\prod_{a=1}^{M}\boldsymbol{\epsilon}_{q^{\theta^2}}(\mathfrak{w}^\theta_{2a-1})\;\; \mathfrak{F}\;\prod_{a=1}^{M}\boldsymbol{\epsilon}_{q^{\theta^2}}(\mathfrak{w}^\theta_{2a}) \] \[ \mathfrak{F}^{-1}\mathfrak{v}^{}_j \mathfrak{F}=\mathfrak{v}^{-1}_j \qquad\qquad \mathfrak{F}^{-1}\mathfrak{v}^{\theta}_j \mathfrak{F}=\mathfrak{v}^{-\theta}_j \] not only inherits the right evolution of the pure $\mathfrak{v}$'s \[ \mathfrak{v}_{j,k+1}\mathfrak{v}_{j,k-1}-q^{-1}\mathfrak{v}_{j-1,k}\mathfrak{v}_{j+1,k}=1 \] but also produces an equally right evolution \[ \mathfrak{v}^\theta_{j,k+1}\mathfrak{v}^\theta_{j,k-1} -q^{-\theta^2}\mathfrak{v}^\theta_{j-1,k}\mathfrak{v}^\theta_{j+1,k}=1 \] of their dual twins $\mathfrak{v}^\theta$. Since the Heisenberg setup turned out to be just a pair of decoupled Weyl ones, we will stick to the latter for the rest of the paper. \section{A different angle} Although the ingredients used in the formula for the evolution operator are all more or less familiar, they are put together in a bizarre way. A traditional R-matrix philosophy would offer a different approach, which we now start presenting. The equation remains the same \[ \mathsf{v}_{m+1,n+1}\mathsf{v}_{m,n}-q^{-1}\mathsf{v}_{m,n+1}\mathsf{v}_{m+1,n}=1 \] but the change to a kind of `light-cone' variables \[ j=m-n\qquad\qquad k=m+n \] signals that the $n$-direction is now considered temporal. So, we are going to find out what algebra of observables and what evolution operator produce the evolution \[ \mathsf{v}_{m,n+1}=\mathsf{Q}^{-1}\mathsf{v}_{m,n}\mathsf{Q} \] \[ \mathsf{v}_{m,0}=\mathsf{v}_m \] solving that `light-cone' equation. \section{Algebra of observables ii} Here is the complete list of relations defining the new algebra of observables: \begin{eqnarray*}\\ && \mathsf{v}_\ell\mathsf{v}_m=\mathsf{v}_m\mathsf{v}_\ell\qquad \mbox{if $m-\ell=0,2,\ldots,M-1$} \\\\ && \mathsf{v}_\ell\mathsf{v}_m=q\mathsf{v}_m\mathsf{v}_\ell\qquad \mbox{if $m-\ell=1,3,\ldots,M$} \\\\ && \mathsf{v}_m\mathsf{C}=q\mathsf{C}\mathsf{v}_m \\\\ && \mathsf{v}_{m+M}=q^{\scriptstyle\frac{1}{2}}\mathsf{C}\mathsf{v}_m\qquad\qquad M\equiv1\pmod2 . \\\end{eqnarray*} This is exactly what it takes to achieve the required relationship \begin{eqnarray*}\\ && \mathsf{v}_m\mathsf{w}_m=q^2\mathsf{w}_m\mathsf{v}_m \\\\ && \mathsf{v}_\ell\mathsf{w}_m=\mathsf{w}_m\mathsf{v}_\ell\qquad \mbox{if $\ell\neq m\pmod{M}$} \\\end{eqnarray*} between the $\mathsf{v}$'s and their `derivatives' \[ \mathsf{w}_m=\frac{\mathsf{v}_{m+1}}{\mathsf{v}_{m-1}} \] which themselves form the `lattice current algebra' much advertised by the authors \begin{eqnarray*}\\ &&\mathsf{w}_{m-1}\mathsf{w}_m=q^2\mathsf{w}_m\mathsf{w}_{m-1} \\\\ && \mathsf{w}_\ell\mathsf{w}_m=\mathsf{w}_m\mathsf{w}_\ell\qquad \mbox{if $|m-\ell|\neq 1\pmod{M}$} \\\\ && \mathsf{w}_{m+M}=\mathsf{w}_m . \\\end{eqnarray*} We already met similar relations at the end of Section 3. That time they did not contradict the periodicity of the $\mathfrak{v}$'s. Now they do: it is impossible to have $\mathsf{C}=1$ and good commutation relations at the same time. We shall soon see why. \section{Evolution operator ii} Let us see what the operator \[ \mathsf{Q}_{\mathrm{naive}}=\mathsf{S}\mathsf{F}\epsilon(\mathsf{w}_M)\ldots\epsilon(\mathsf{w}_2)\epsilon(\mathsf{w}_1) \] can do.
Of course, the function $\epsilon$ and the flip operator $\mathsf{F}$ are the same as before \[ \frac{\epsilon(qz)}{\epsilon(q^{-1}z)}=\frac{1}{1+z} \] \[ \mathsf{F}^{-1}\mathsf{v}^{}_m\mathsf{F}=\mathsf{v}^{-1}_{m} \] while $\mathsf{S}$ is the shift operator \[ \mathsf{S}^{-1}\mathsf{v}_m\mathsf{S}=\mathsf{v}_{m-1} . \] We plug the `naive' evolution into the hypothetical equation: \[ \mathsf{Q}_{\mathrm{naive}}^{-n-1}\mathsf{v}_{m+1}\mathsf{Q}_{\mathrm{naive}}^{n+1}\mathsf{Q}_{\mathrm{naive}}^{-n}\mathsf{v}_{m}\mathsf{Q}_{\mathrm{naive}}^{n} \!\!-q^{-1}\mathsf{Q}_{\mathrm{naive}}^{-n-1}\mathsf{v}_{m}\mathsf{Q}_{\mathrm{naive}}^{n+1} \mathsf{Q}_{\mathrm{naive}}^{-n}\mathsf{v}_{m+1}\mathsf{Q}_{\mathrm{naive}}^{n}\!\!=1 \] and dispose of as many $\mathsf{Q}_{\mathrm{naive}}$'s as possible: \[ \mathsf{v}_{m+1}\mathsf{Q}_{\mathrm{naive}}\mathsf{v}_{m}-q^{-1}\mathsf{v}_{m}\mathsf{Q}_{\mathrm{naive}}\mathsf{v}_{m+1}=\mathsf{Q}_{\mathrm{naive}} . \] Then all the $\epsilon(\mathsf{w})$ factors but one go the same way, which results in \[ \mathsf{v}_{m+1}\mathsf{S}\mathsf{F}\epsilon(\mathsf{w}_m)\mathsf{v}_{m} -q^{-1}\mathsf{v}_{m}\mathsf{S}\mathsf{F}\epsilon(\mathsf{w}_m)\mathsf{v}_{m+1} =\mathsf{S}\mathsf{F}\epsilon(\mathsf{w}_m) . \] Once $\mathsf{S}$ and $\mathsf{F}$ are gone too, we are left with \[ \mathsf{v}^{-1}_{m}\epsilon(\mathsf{w}^{}_m)\mathsf{v}^{}_{m}-q^{-1} \mathsf{v}^{-1}_{m-1}\epsilon(\mathsf{w}^{}_m)\mathsf{v}^{}_{m+1}=\epsilon(\mathsf{w}^{}_m) \] which is nothing but our functional equation mated with the commutation relations which closed the last Section: \[ \mathsf{v}^{-1}_{m}\epsilon(\mathsf{w}_m)\mathsf{v}^{}_{m}=\epsilon(q^{-2}\mathsf{w}_m) \] \[ \epsilon(q^{-2}\mathsf{w}_m)-q^{-1}\epsilon(\mathsf{w}_m)\mathsf{w}_m=\epsilon(\mathsf{w}_m) . \] This proves that the `naive' evolution satisfies the required equations of motion ... as long as $m$ is neither 1 nor $M$. We could not reasonably expect it to do any better because $\mathsf{Q}_{\mathrm{naive}}$ obviously had no respect for the translational symmetry of the algebra of observables. In order to have this eventually repaired, let us first figure out how that sad dependence on the starting point can be cured in a simpler situation, say, for a monomial $\mathsf{w}^{p_M}_M\ldots\mathsf{w}^{p_2}_2\mathsf{w}^{p_1}_1$. Pulling the factors from the very right to the very left one by one we get a clear picture of how the ordered monomials with matching powers but different starting points turn into each other: \begin{eqnarray*}\\ &&\mathsf{w}^{p_M}_M\ldots\mathsf{w}^{p_2}_2\mathsf{w}^{p_1}_1 \\\\ && =q^{2p_M p_1}q^{-2p_1 p_2}\;\mathsf{w}^{p_1}_1\mathsf{w}^{p_M}_M \ldots \mathsf{w}^{p_3}_3\mathsf{w}^{p_2}_2 \\\\ && =q^{2p_M p_1}q^{-2p_2 p_3}\;\mathsf{w}^{p_2}_2\mathsf{w}^{p_1}_1 \ldots \mathsf{w}^{p_4}_4\mathsf{w}^{p_3}_3=\ldots \\\end{eqnarray*} Now we know. The expression $q^{-2p_{m-1} p_m}\;\mathsf{w}^{p_{m-1}}_{m-1} \mathsf{w}^{p_{m-2}}_{m-2}\ldots\mathsf{w}^{p_{m+1}}_{m+1}\mathsf{w}^{p_m}_m$ does not depend on $m$ provided $p_{\ell+M}\equiv p_{\ell}$.
We award it a self-explanatory notation \[ \prod^{\circlearrowright}\mathsf{w}^{p_\ell}_\ell\equiv q^{-2p_M p_1}\; \mathsf{w}^{p_M}_M\ldots\mathsf{w}^{p_2}_2\mathsf{w}^{p_1}_1 \] and extend this definition linearly to the corresponding polynomials; in particular, \begin{eqnarray*}\\ &&\prod^{\circlearrowright}\epsilon(\mathsf{w}_\ell)\equiv\sum_{p_M,\cdots,p_2,p_1} c_{p_M}\cdots c_{p_2}c_{p_1}\prod^{\circlearrowright}\mathsf{w}^{p_\ell}_\ell \\\\ && =\sum_{p_M,\cdots,p_2,p_1}c_{p_M}\cdots c_{p_2}c_{p_1} q^{-2p_Mp_1}\;\mathsf{w}^{p_M}_M\ldots\mathsf{w}^{p_2}_2\mathsf{w}^{p_1}_1\\\\ && =\sum_{p_M,p_1}c_{p_M}c_{p_1} q^{-2p_M p_1}\;\mathsf{w}^{p_M}_M\bigg(\epsilon(\mathsf{w}^{}_{M-1})\ldots \epsilon(\mathsf{w}^{}_3)\epsilon(\mathsf{w}^{}_2)\bigg)\mathsf{w}^{p_1}_1 \\\\ && =\sum_{p_{m-1},p_m}c_{p_{m-1}}c_{p_m} q^{-2p_{m-1} p_m}\;\mathsf{w}^{p_{m-1}}_{m-1} \bigg(\epsilon(\mathsf{w}^{}_{m-2})\ldots \epsilon(\mathsf{w}^{}_{m+2})\epsilon(\mathsf{w}^{}_{m+1})\bigg)\mathsf{w}^{p_m}_m \\\end{eqnarray*} with the coefficients $c$ coming from \[ \epsilon(z)=\sum_p c_p z^p. \] The same treatment applies as well to the `selfdual' $\epsilon$'s of Section 5: \[ \boldsymbol{\epsilon}_q(z)=\sum_{p,r} c_{p,r} z^p z^{\theta r} \] \[ \prod^{\circlearrowright}\boldsymbol{\epsilon}_q(\mathsf{w}_\ell) =\sum_{p_M,p_1,r_M,r_1}c_{p_M,r_M}c_{p_1,r_1} q^{-2(p_M p_1+\theta^2 r_M r_1)} \] \[ \times \mathsf{w}^{p_M}_M\mathsf{w}^{\theta r_{M}}_M \bigg(\boldsymbol{\epsilon}_q(\mathsf{w}^{}_{M-1})\ldots \boldsymbol{\epsilon}_q(\mathsf{w}^{}_3)\boldsymbol{\epsilon}_q(\mathsf{w}^{}_2)\bigg) \mathsf{w}^{p_1}_1\mathsf{w}^{\theta r_1}_1 . \] So, we achieve the vital translational invariance of those products \[ \mathsf{S}\prod^{\circlearrowright}\epsilon(\mathsf{w}_\ell)=\prod^{\circlearrowright}\epsilon(\mathsf{w}_\ell)\;\mathsf{S} \] sacrificing none of their `orderness'. The time has come to plug the repaired evolution operator \[ \mathsf{Q}=\mathsf{S}\mathsf{F}\prod^{\circlearrowright}\epsilon(\mathsf{w}_\ell) \] into the hypothetical equation ... see the beginning of this Section. Now that we have finally established that the operator $\mathsf{Q}$ is indeed responsible for the quantized and fully discretized Liouville equation \[ \mathsf{v}_{m+1,n+1}\mathsf{v}_{m,n}-q^{-1}\mathsf{v}_{m,n+1}\mathsf{v}_{m+1,n}=1, \] we must admit that so far the commitment to this particular equation has been only a matter of personal taste. What would change if the function involved \[ \mathsf{Q}=\mathsf{S}\mathsf{F}\prod^{\circlearrowright}f(\mathsf{w}_\ell) \] were different, for instance, \[ f(z)=\frac{\epsilon(z)}{\epsilon(q^{2\lambda}z)}\qquad? \] Nothing, except the r.h.s. of the functional equation \[ \frac{f(qz)}{f(q^{-1}z)}=\frac{1+q^{2\lambda}z}{1+z} \] and the form of the eventual equations of motion \[ \mathsf{v}_{m+1,n+1}\mathsf{v}_{m,n}-q^{-1}\mathsf{v}_{m,n+1}\mathsf{v}_{m+1,n} =1-q^{\lambda+1}\mathsf{v}_{m+1,n+1}\mathsf{v}_{m,n+1} \mathsf{v}_{m+1,n}\mathsf{v}_{m,n} . \] By the way, this is another of Hirota's equations, the discrete sine-Gordon one. We shall see it again in Section 11. \section{Classical continuum limit} The equation of Section 6 certainly turns into the `light-cone' Liouville equation \[ \frac{\partial^2\psi}{\partial\xi\partial\tau} =e^{-2\psi} , \] the matching Cauchy problem being \[ \psi(\xi,\tau)|_{_{\tau=0}}=\psi(\xi) . \] The algebra of observables from Section 7 transforms into the no less familiar Poisson bracket reading \[ \{\psi(\xi),\psi(\eta)\} ={\scriptstyle\frac{1}{4}}\mathrm{sign}(\xi-\eta) .
\] The evolution operator of Section 8 bears some resemblance to the corresponding Hamiltonian \[ {\mathcal H}=\int d\xi\; e^{-2\psi} . \] What is wrong? We seem to inherit also the quasiperiodic boundary condition \[ \psi(\xi+\pi)=\psi(\xi)+\Psi \] which obviously contradicts the equation. The periodic condition \[ \psi(\xi+\pi)=\psi(\xi) \] could do, but that in turn would contradict the Poisson bracket. A more careful examination reveals that the `constant' in the boundary conditions is not a constant of motion: \[ \mathsf{Q}^{-1}\mathsf{C}\mathsf{Q}=\mathsf{C}^{-1} . \] Of course, the lattice equations of motion themselves have no problem with that. However, their solutions are not smooth enough to survive a straightforward continuum limit. Anyway, this peculiarity is not too relevant to what we are after in this paper. \section{Conservation laws} Looking at the two evolution operators we now possess \[ \mathfrak{Q}=\prod_{a=1}^{M}\epsilon(\mathfrak{w}_{2a-1})\;\; \mathfrak{F}\;\prod_{a=1}^{M}\epsilon(\mathfrak{w}_{2a})\qquad\qquad \mathsf{Q}=\mathsf{S}\mathsf{F}\prod^{\circlearrowright}\epsilon(\mathsf{w}_\ell) \] do we see something in the latter that was not there in the former? We do: the latter looks almost like a good old ordered product of `fundamental R-matrices' (Tarasov {\em et al.}~1983). According to (Volkov 1997a), the shift-n-flipless part of $\mathsf{Q}$ \[ \Omega=\prod^{\circlearrowright}\epsilon(\mathsf{w}_\ell) \] belongs, as \[ \Omega=\Omega{\scriptstyle{(\infty)}} , \] to a family \[ \Omega{\scriptstyle{(\lambda)}}=\prod^{\circlearrowright}\epsilon(\lambda|\mathsf{w}_m)\qquad\qquad \epsilon(\lambda|z)\equiv \frac{\epsilon(z)}{\epsilon(q^{2\lambda} z)} \] consolidated by the Artin-Yang-Baxter Equation (AYBE) \[ \mathsf{R}_{m+1}{\scriptstyle{(\lambda,\mu)}}\mathsf{R}_{m}{\scriptstyle{(\lambda)}}\mathsf{R}_{m+1}{\scriptstyle{(\mu)}} =\mathsf{R}_{m}{\scriptstyle{(\mu)}}\mathsf{R}_{m+1}{\scriptstyle{(\lambda)}}\mathsf{R}_{m}{\scriptstyle{(\lambda,\mu)}} . \] The choice of notation \[ \mathsf{R}_m{\scriptstyle{(\lambda)}}\equiv\epsilon(\lambda|\mathsf{w}_m)\qquad\qquad \mathsf{R}_{m}{\scriptstyle{(\lambda,\mu)}}\equiv\frac{\mathsf{R}_{m}{\scriptstyle{(\lambda)}}}{\mathsf{R}_{m}{\scriptstyle{(\mu)}}} \] is meant to emphasize the R-matrix connection. Let us recall how this AYBE can be verified. From (Faddeev and Volkov 1993) comes the multiplication rule \[ \epsilon(\lambda|\mathsf{b})\epsilon(\lambda|\mathsf{a})=\epsilon(\lambda|\mathsf{a}+\mathsf{b}+q\mathsf{b}\mathsf{a})\] applying whenever $\mathsf{a}$ and $\mathsf{b}$ satisfy the Weyl algebra \[ \mathsf{a}\mathsf{b}=q^2\mathsf{b}\mathsf{a} .
\] The two $\mathsf{w}$'s next to each other certainly do, therefore \[ \mathsf{R}_{m+1}{\scriptstyle{(\lambda)}}\mathsf{R}_{m}{\scriptstyle{(\lambda)}} =\epsilon(\lambda|\mathsf{w}_m+\mathsf{w}_{m+1}+q\mathsf{w}_{m+1}\mathsf{w}_m) , \] therefore \begin{eqnarray*}\\ &&\mathsf{R}_{m+1}{\scriptstyle{(\lambda)}}\mathsf{R}_{m}{\scriptstyle{(\lambda)}}\mathsf{R}_{m+1}{\scriptstyle{(\mu)}}\mathsf{R}_{m}{\scriptstyle{(\mu)}} \\\\ &&\qquad=\epsilon(\lambda|\mathsf{w}_m+\mathsf{w}_{m+1}+q\mathsf{w}_{m+1}\mathsf{w}_m) \epsilon(\mu|\mathsf{w}_m+\mathsf{w}_{m+1}+q\mathsf{w}_{m+1}\mathsf{w}_m) \\\\ &&\qquad=\epsilon(\mu|\mathsf{w}_m+\mathsf{w}_{m+1}+q\mathsf{w}_{m+1}\mathsf{w}_m) \epsilon(\lambda|\mathsf{w}_m+\mathsf{w}_{m+1}+q\mathsf{w}_{m+1}\mathsf{w}_m) \\\\ &&=\mathsf{R}_{m+1}{\scriptstyle{(\mu)}}\mathsf{R}_{m}{\scriptstyle{(\mu)}}\mathsf{R}_{m+1}{\scriptstyle{(\lambda)}}\mathsf{R}_{m}{\scriptstyle{(\lambda)}}. \\\end{eqnarray*} This is it. AYBE and ordered products are known to make a natural match \begin{eqnarray*}\\ && \frac{\mathsf{R}_{m+1}{\scriptstyle{(\lambda)}}}{\mathsf{R}_{m+1}{\scriptstyle{(\mu)}}} \bigg(\mathsf{R}_{m}{\scriptstyle{(\lambda)}}\mathsf{R}_{m-1}{\scriptstyle{(\lambda)}}\ldots\mathsf{R}_{1}{\scriptstyle{(\lambda)}}\bigg) \bigg(\mathsf{R}_{m+1}{\scriptstyle{(\mu)}}\mathsf{R}_{m}{\scriptstyle{(\mu)}}\ldots\mathsf{R}_{2}{\scriptstyle{(\mu)}}\bigg) \\\\ &&\qquad=\frac{\mathsf{R}_{m+1}{\scriptstyle{(\lambda)}}}{\mathsf{R}_{m+1}{\scriptstyle{(\mu)}}} \bigg(\mathsf{R}_{m}{\scriptstyle{(\lambda)}}\mathsf{R}_{m+1}{\scriptstyle{(\mu)}}\bigg) \bigg(\mathsf{R}_{m-1}{\scriptstyle{(\lambda)}}\mathsf{R}_{m}{\scriptstyle{(\mu)}}\bigg)\ldots \bigg(\mathsf{R}_{1}{\scriptstyle{(\lambda)}}\mathsf{R}_{2}{\scriptstyle{(\mu)}}\bigg) \\\\ && \qquad=\bigg(\mathsf{R}_{m}{\scriptstyle{(\mu)}}\mathsf{R}_{m+1}{\scriptstyle{(\lambda)}}\bigg) \frac{\mathsf{R}_{m}{\scriptstyle{(\lambda)}}}{\mathsf{R}_{m}{\scriptstyle{(\mu)}}} \bigg(\mathsf{R}_{m-1}{\scriptstyle{(\lambda)}}\mathsf{R}_{m}{\scriptstyle{(\mu)}}\bigg)\ldots \bigg(\mathsf{R}_{1}{\scriptstyle{(\lambda)}}\mathsf{R}_{2}{\scriptstyle{(\mu)}}\bigg) \\\\ && \qquad=\bigg(\mathsf{R}_{m}{\scriptstyle{(\mu)}}\mathsf{R}_{m+1}{\scriptstyle{(\lambda)}}\bigg) \bigg(\mathsf{R}_{m-1}{\scriptstyle{(\mu)}}\mathsf{R}_{m}{\scriptstyle{(\lambda)}}\bigg) \frac{\mathsf{R}_{m-1}{\scriptstyle{(\lambda)}}}{\mathsf{R}_{m-1}{\scriptstyle{(\mu)}}}\ldots \bigg(\mathsf{R}_{1}{\scriptstyle{(\lambda)}}\mathsf{R}_{2}{\scriptstyle{(\mu)}}\bigg) \\\\ && \qquad=\bigg(\mathsf{R}_{m}{\scriptstyle{(\mu)}}\mathsf{R}_{m+1}{\scriptstyle{(\lambda)}}\bigg) \bigg(\mathsf{R}_{m-1}{\scriptstyle{(\mu)}}\mathsf{R}_{m}{\scriptstyle{(\lambda)}}\bigg)\ldots \frac{\mathsf{R}_{2}{\scriptstyle{(\lambda)}}}{\mathsf{R}_{2}{\scriptstyle{(\mu)}}} \bigg(\mathsf{R}_{1}{\scriptstyle{(\lambda)}}\mathsf{R}_{2}{\scriptstyle{(\mu)}}\bigg) \\\\ && \qquad=\bigg(\mathsf{R}_{m}{\scriptstyle{(\mu)}}\mathsf{R}_{m+1}{\scriptstyle{(\lambda)}}\bigg) \bigg(\mathsf{R}_{m-1}{\scriptstyle{(\mu)}}\mathsf{R}_{m}{\scriptstyle{(\lambda)}}\bigg)\ldots \bigg(\mathsf{R}_{1}{\scriptstyle{(\mu)}}\mathsf{R}_{2}{\scriptstyle{(\lambda)}}\bigg)\mathsf{R}_{1}{\scriptstyle{(\lambda,\mu)}} \\\\ &&=\bigg(\mathsf{R}_{m}{\scriptstyle{(\mu)}}\mathsf{R}_{m-1}{\scriptstyle{(\mu)}}\ldots\mathsf{R}_{1}{\scriptstyle{(\mu)}}\bigg) \bigg(\mathsf{R}_{m+1}{\scriptstyle{(\lambda)}}\mathsf{R}_{m}{\scriptstyle{(\lambda)}}\ldots\mathsf{R}_{2}{\scriptstyle{(\lambda)}}\bigg) \frac{\mathsf{R}_{1}{\scriptstyle{(\lambda)}}}{\mathsf{R}_{1}{\scriptstyle{(\mu)}}} , 
\\\end{eqnarray*} so one hopes that the $\circlearrowright$-ed product steps beyond $m=M-2$ and delivers \[ \Omega{\scriptstyle{(\lambda)}}\Omega{\scriptstyle{(\mu)}}=\Omega{\scriptstyle{(\mu)}}\Omega{\scriptstyle{(\lambda)}} . \] Let us see. \begin{eqnarray*}\\ &&\Omega{\scriptstyle{(\lambda)}} \Omega{\scriptstyle{(\mu)}} \\\\ &&\qquad=\sum c_{p_M}^{(\lambda)} c_{p_1}^{(\lambda)} q^{-2 p_M p_1} \mathsf{w}^{p_M}_M\mathsf{R}_{\!M\!-\!1}^{(\lambda)} \ldots \mathsf{R}_2^{(\lambda)} \mathsf{w}^{p_1}_1 \\\\ &&\qquad\times\sum c_{r_M}^{(\mu)} c_{r_1}^{(\mu)} q^{-2 r_M r_1} \mathsf{w}^{r_M}_M\mathsf{R}_{\!M\!-\!1}^{(\mu)} \ldots \mathsf{R}_2^{(\mu)} \mathsf{w}^{r_1}_1 \\\\ &&\qquad=\sum c_{p_M}^{(\lambda)} c_{r_M}^{(\mu)} c_{p_1}^{(\lambda)} c_{r_1}^{(\mu)} q^{-2(p_M p_1+r_M r_1+r_M p_1)} \\\\ &&\qquad\times\mathsf{w}^{p_M}_M \mathsf{R}_{\!M\!-\!1}^{(\lambda)} \mathsf{w}^{r_M}_M \mathsf{R}_{\!M-2}^{(\lambda)} \mathsf{R}_{\!M\!-\!1}^{(\mu)} \ldots \mathsf{R}_2^{(\lambda)} \mathsf{R}_3^{(\mu)} \mathsf{w}^{p_1}_1\mathsf{R}_2^{(\mu)} \mathsf{w}^{r_1}_1 \\\end{eqnarray*} - so far, we have only recalled the definitions and done some reshuffling not involving any nontrivial commutation relations; ${\scriptstyle{(\lambda)}}$ and ${\scriptstyle{(\mu)}}$ have moved to the superscript level in order to save some paper - \begin{eqnarray*}\\ && =\sum c_{p_M}^{(\lambda)} c_{r_M}^{(\mu)} c_{p_1}^{(\lambda)} c_{r_1}^{(\mu)} q^{-2(p_M p_1+r_M r_1+r_M p_1)} \\\\ &&\times\mathsf{w}^{p_M}_M \mathsf{R}_{\!M\!-\!1}^{(\lambda)} \mathsf{w}^{r_M}_M \frac{\mathsf{R}_{\!M\!-\!1}^{(\mu)} }{\mathsf{R}_{\!M\!-\!1}^{(\lambda)} } \frac{\mathsf{R}_{\!M\!-\!1}^{(\lambda)} }{\mathsf{R}_{\!M\!-\!1}^{(\mu)} } \mathsf{R}_{\!M-2}^{(\lambda)} \mathsf{R}_{\!M\!-\!1}^{(\mu)} \ldots \mathsf{R}_2^{(\lambda)} \mathsf{R}_3^{(\mu)} \mathsf{w}^{p_1}_1\mathsf{R}_2^{(\mu)} \mathsf{w}^{r_1}_1 \\\end{eqnarray*} - the unit operator $\frac{\mathsf{R}_{\!M\!-\!1}^{(\mu)} } {\mathsf{R}_{\!M\!-\!1}^{(\lambda)} }\frac{\mathsf{R}_{\!M\!-\!1}^{(\lambda)} } {\mathsf{R}_{\!M\!-\!1}^{(\mu)} }=\mathsf{1}$ has been inserted - \begin{eqnarray*}\\ &&=\sum c_{p_M}^{(\lambda)} c_{r_M}^{(\mu)} c_{p_1}^{(\lambda)} c_{r_1}^{(\mu)} q^{-2(p_M p_1+r_M r_1+r_M p_1)} \\\\ &&\times\mathsf{w}^{p_M}_M \mathsf{R}_{\!M\!-\!1}^{(\lambda)} \mathsf{w}^{r_M}_M \frac{\mathsf{R}_{\!M\!-\!1}^{(\mu)} }{\mathsf{R}_{\!M\!-\!1}^{(\lambda)} } \mathsf{R}_{\!M-2}^{(\mu)} \mathsf{R}_{\!M\!-\!1}^{(\lambda)} \ldots \mathsf{R}_2^{(\mu)} \mathsf{R}_3^{(\lambda)} \frac{\mathsf{R}_2^{(\lambda)} }{\mathsf{R}_2^{(\mu)} } \mathsf{w}^{p_1}_1\mathsf{R}_2^{(\mu)} \mathsf{w}^{r_1}_1 \\\end{eqnarray*} - AYBE did its habitual job - \begin{eqnarray*}\\ &&=\sum c_{p_M}^{(\lambda)} c_{r_M}^{(\mu)} c_{p_1}^{(\lambda)} c_{r_1}^{(\mu)} q^{-2(p_M p_1+r_M r_1+r_M p_1)} \\\\ &&\times\mathsf{w}^{p_M}_M \mathsf{R}_{\!M\!-\!1}^{(\lambda)} \mathsf{w}^{r_M}_M \frac{\mathsf{R}_{\!M\!-\!1}^{(\mu)} }{\mathsf{R}_{\!M\!-\!1}^{(\lambda)} }\ldots \mathsf{R}_{m+1}^{(\mu)} \mathsf{R}_{m+2}^{(\lambda)} \mathsf{R}_{m}^{(\mu)} \mathsf{R}_{m+1}^{(\lambda)} \mathsf{R}_{m-1}^{(\mu)} \mathsf{R}_{m}^{(\lambda)} \ldots \frac{\mathsf{R}_2^{(\lambda)} }{\mathsf{R}_2^{(\mu)} } \mathsf{w}^{p_1}_1\mathsf{R}_2^{(\mu)} \mathsf{w}^{r_1}_1 \\\end{eqnarray*} - nothing happened, we just refocused attention on the middle portion of the product - \begin{eqnarray*}\\ &&=\sum c_{p_M}^{(\lambda)} c_{r_M}^{(\mu)} c_{r_{m+1}}^{(\mu)} c_{r_{m}}^{(\mu)} c_{p_{m+1}}^{(\lambda)} c_{p_{m}}^{(\lambda)} c_{p_1}^{(\lambda)} c_{r_1}^{(\mu)} q^{-2(p_M p_1+r_M r_1+r_M p_1)}
\\\\ &&\times\mathsf{w}^{p_M}_M \mathsf{R}_{\!M\!-\!1}^{(\lambda)} \mathsf{w}^{r_M}_M \frac{\mathsf{R}_{\!M\!-\!1}^{(\mu)} }{\mathsf{R}_{\!M\!-\!1}^{(\lambda)} }\ldots \mathsf{w}^{r_{m+1}}_{m+1} \mathsf{R}_{m+2}^{(\lambda)} \mathsf{w}^{r_{m}}_{m}\mathsf{w}^{p_{m+1}}_{m+1} \mathsf{R}_{m-1}^{(\mu)} \mathsf{w}^{p_{m}}_{m}\ldots \frac{\mathsf{R}_2^{(\lambda)} }{\mathsf{R}_2^{(\mu)} } \mathsf{w}^{p_1}_1\mathsf{R}_2^{(\mu)} \mathsf{w}^{r_1}_1 \\\end{eqnarray*} - we disassembled some of the $\mathsf{R}$'s - \begin{eqnarray*}\\ &&=\sum c_{p_M}^{(\lambda)} c_{r_M}^{(\mu)} c_{r_{m+1}}^{(\mu)} c_{p_{m+1}}^{(\lambda)} c_{r_{m}}^{(\mu)} c_{p_{m}}^{(\lambda)} c_{p_1}^{(\lambda)} c_{r_1}^{(\mu)} q^{-2(p_M p_1+r_M r_1+r_M p_1-p_{m+1}r_m)} \\\\ &&\times\bigg(\mathsf{w}^{p_M}_M \mathsf{R}_{\!M\!-\!1}^{(\lambda)} \mathsf{w}^{r_M}_M \frac{\mathsf{R}_{\!M\!-\!1}^{(\mu)} }{\mathsf{R}_{\!M\!-\!1}^{(\lambda)} }\ldots \mathsf{w}^{r_{m+1}}_{m+1}\mathsf{R}_{m+2}^{(\lambda)} \mathsf{w}^{p_{m+1}}_{m+1}\bigg) \\\\ &&\times\bigg(\mathsf{w}^{r_{m}}_{m} \mathsf{R}_{m-1}^{(\mu)} \mathsf{w}^{p_{m}}_{m}\ldots \frac{\mathsf{R}_2^{(\lambda)} }{\mathsf{R}_2^{(\mu)} } \mathsf{w}^{p_1}_1\mathsf{R}_2^{(\mu)} \mathsf{w}^{r_1}_1\bigg) \\\end{eqnarray*} - the two in the middle traded places - \begin{eqnarray*}\\ &&=\sum c_{r_{m}}^{(\mu)} c_{p_{m}}^{(\lambda)} c_{p_1}^{(\lambda)} c_{r_1}^{(\mu)} c_{p_M}^{(\lambda)} c_{r_M}^{(\mu)} c_{r_{m+1}}^{(\mu)} c_{p_{m+1}}^{(\lambda)} q^{-2(r_m r_{m+1}+p_m p_{m+1}+p_m r_{m+1}-r_1p_M)} \\\\ &&\times\bigg(\mathsf{w}^{r_{m}}_{m} \mathsf{R}_{m-1}^{(\mu)} \mathsf{w}^{p_{m}}_{m}\ldots \frac{\mathsf{R}_2^{(\lambda)} }{\mathsf{R}_2^{(\mu)} } \mathsf{w}^{p_1}_1\mathsf{R}_2^{(\mu)} \mathsf{w}^{r_1}_1\bigg) \\\\ &&\times\bigg(\mathsf{w}^{p_M}_M \mathsf{R}_{\!M\!-\!1}^{(\lambda)} \mathsf{w}^{r_M}_M \frac{\mathsf{R}_{\!M\!-\!1}^{(\mu)} }{\mathsf{R}_{\!M\!-\!1}^{(\lambda)} }\ldots \mathsf{w}^{r_{m+1}}_{m+1}\mathsf{R}_{m+2}^{(\lambda)} \mathsf{w}^{p_{m+1}}_{m+1}\bigg) \\\end{eqnarray*} - the two halves in big brackets passed through each other - \begin{eqnarray*}\\ &&=\sum c_{r_{m}}^{(\mu)} c_{p_{m}}^{(\lambda)} c_{p_1}^{(\lambda)} c_{p_M}^{(\lambda)} c_{r_1}^{(\mu)} c_{r_M}^{(\mu)} c_{r_{m+1}}^{(\mu)} c_{p_{m+1}}^{(\lambda)} q^{-2(r_m r_{m+1}+p_m p_{m+1}+p_m r_{m+1})} \\\\ &&\times\mathsf{w}^{r_{m}}_{m} \mathsf{R}_{m-1}^{(\mu)} \mathsf{w}^{p_{m}}_{m}\ldots \frac{\mathsf{R}_2^{(\lambda)} }{\mathsf{R}_2^{(\mu)} } \mathsf{w}^{p_1}_1\mathsf{R}_2^{(\mu)} \mathsf{w}^{p_M}_M\mathsf{w}^{r_1}_1 \mathsf{R}_{\!M\!-\!1}^{(\lambda)} \mathsf{w}^{r_M}_M \frac{\mathsf{R}_{\!M\!-\!1}^{(\mu)} }{\mathsf{R}_{\!M\!-\!1}^{(\lambda)} }\ldots \mathsf{w}^{r_{m+1}}_{m+1}\mathsf{R}_{m+2}^{(\lambda)} \mathsf{w}^{p_{m+1}}_{m+1} \\\end{eqnarray*} - the two in the middle traded places - \begin{eqnarray*}\\ &&=\sum c_{r_{m}}^{(\mu)} c_{p_{m}}^{(\lambda)} c_{r_{m+1}}^{(\mu)} c_{p_{m+1}}^{(\lambda)} q^{-2(r_m r_{m+1}+p_m p_{m+1}+p_m r_{m+1})} \\\\ &&\times\mathsf{w}^{r_{m}}_{m} \mathsf{R}_{m-1}^{(\mu)} \mathsf{w}^{p_{m}}_{m}\!\ldots\! 
\frac{\mathsf{R}_2^{(\lambda)} }{\mathsf{R}_2^{(\mu)} } \mathsf{R}_1^{(\lambda)} \mathsf{R}_2^{(\mu)} \mathsf{R}_M^{(\lambda)} \mathsf{R}_1^{(\mu)} \mathsf{R}_{\!M\!-\!1}^{(\lambda)} \mathsf{R}_M^{(\mu)} \frac{\mathsf{R}_{\!M\!-\!1}^{(\mu)} }{\mathsf{R}_{\!M\!-\!1}^{(\lambda)} } \!\ldots\!\mathsf{w}^{r_{m+1}}_{m+1}\mathsf{R}_{m+2}^{(\lambda)} \mathsf{w}^{p_{m+1}}_{m+1} \\\end{eqnarray*} - we assembled some $\mathsf{R}$'s; now it only remains to apply AYBE three more times~- \begin{eqnarray*}\\ &&\qquad=\sum c_{r_{m}}^{(\mu)} c_{p_{m}}^{(\lambda)} c_{r_{m+1}}^{(\mu)} c_{p_{m+1}}^{(\lambda)} q^{-2(r_m r_{m+1}+p_m p_{m+1}+p_m r_{m+1})} \\\\ &&\qquad\times\mathsf{w}^{r_{m}}_{m} \mathsf{R}_{m-1}^{(\mu)} \mathsf{w}^{p_{m}}_{m}\ldots \mathsf{R}_1^{(\mu)} \mathsf{R}_2^{(\lambda)} \mathsf{R}_M^{(\mu)} \mathsf{R}_1^{(\lambda)} \mathsf{R}_{\!M\!-\!1}^{(\mu)} \mathsf{R}_M^{(\mu)} \ldots \mathsf{w}^{r_{m+1}}_{m+1}\mathsf{R}_{m+2}^{(\lambda)} \mathsf{w}^{p_{m+1}}_{m+1} \\\\ &&\qquad=\sum c_{r_m}^{(\mu)} c_{r_{m+1}}^{(\mu)} q^{-2 r_m r_{m+1}} \mathsf{w}^{r_m}_m\mathsf{R}_{m-1}^{(\mu)} \ldots \mathsf{R}_{m+2}^{(\mu)} \mathsf{w}^{r_{m+1}}_{m+1} \\\\ &&\qquad\times\sum c_{p_m}^{(\lambda)} c_{p_{m+1}}^{(\lambda)} q^{-2 p_m p_{m+1}} \mathsf{w}^{p_m}_m\mathsf{R}_{m-1}^{(\lambda)} \ldots \mathsf{R}_{m+2}^{(\lambda)} \mathsf{w}^{p_{m+1}}_{m+1} \\\\ &&=\Omega{\scriptstyle{(\mu)}} \Omega{\scriptstyle{(\lambda)}} . \\\end{eqnarray*} Done, at least for $M>5$. In fact, even $M=3$ is possible, but this would take three more pages to verify. Anyway, a more civilized edition of the above proof is presented in (Volkov 1997b). The commutativity of the $\Omega$'s may be good news, but there is bad news too. The flip operator $\mathsf{F}$ does not commute with the $\Omega$'s, which means there is another family \[ \mho{\scriptstyle{(\lambda)}}=\mathsf{F}^{-1}\Omega{\scriptstyle{(\lambda)}}\mathsf{F}=\mathsf{F}\Omega{\scriptstyle{(\lambda)}}\mathsf{F}^{-1} \] not coinciding with the original one. Of course, the $\mho$'s commute with each other \[ \mho{\scriptstyle{(\lambda)}}\mho{\scriptstyle{(\mu)}}=\mho{\scriptstyle{(\mu)}}\mho{\scriptstyle{(\lambda)}} \] but it is not immediately clear whether \[ \Omega{\scriptstyle{(\lambda)}}\mho{\scriptstyle{(\mu)}}=\mho{\scriptstyle{(\mu)}}\Omega{\scriptstyle{(\lambda)}} \] should also be true. Fortunately, there is some hidden agenda making it happen.
Technically, the proof is the same as the one above, except that the AYBE in use is somewhat different: \begin{eqnarray*}\\ && \bigg(\mathsf{w}_{m+1}^{\mu} \epsilon(\lambda\!-\!\mu|\mathsf{w}_{m+1}^{})\bigg) \epsilon(\lambda|\mathsf{w}_{m}^{})\epsilon(\mu|\mathsf{w}_{m+1}^{-1}) \\\\ && \qquad=\epsilon(\lambda\!-\!\mu|\mathsf{w}_{m+1}^{}) \epsilon(\lambda|q^{-2\mu}\mathsf{w}_{m}^{}) \mathsf{w}_{m+1}^{\mu}\epsilon(\mu|\mathsf{w}_{m+1}^{-1}) \\\\ && \qquad=\frac{\epsilon(\lambda|q^{-2\mu}\mathsf{w}_{m+1}^{})} {\epsilon(\mu|q^{-2\mu}\mathsf{w}_{m+1}^{})} \epsilon(\lambda|q^{-2\mu}\mathsf{w}_{m}^{}) q^{\mu^2}\epsilon(\mu|q^{-2\mu}\mathsf{w}_{m+1}^{}) \\\\ && \qquad=q^{\mu^2}\epsilon(\mu|q^{-2\mu}\mathsf{w}_{m}^{}) \epsilon(\lambda|q^{-2\mu}\mathsf{w}_{m+1}^{}) \frac{\epsilon(\lambda|q^{-2\mu}\mathsf{w}_{m}^{})} {\epsilon(\mu|q^{-2\mu}\mathsf{w}_{m}^{})} \\\\ && \qquad=\mathsf{w}_{m}^{\mu}\epsilon(\mu|\mathsf{w}_{m}^{-1}) \epsilon(\lambda|q^{-2\mu}\mathsf{w}_{m+1}^{}) \epsilon(\lambda\!-\!\mu|\mathsf{w}_{m}^{}) \\\\ && =\epsilon(\mu|\mathsf{w}_{m}^{-1})\epsilon(\lambda|\mathsf{w}_{m+1}^{}) \bigg(\epsilon(\lambda\!-\!\mu|\mathsf{w}_{m}^{}) \mathsf{w}_{m}^{\mu}\bigg) . \\\end{eqnarray*} Once the flip-n-shift join in \[ \mathsf{Q}{\scriptstyle{(\lambda)}}=\mathsf{S}\mathsf{F}\Omega{\scriptstyle{(\lambda)}}=\mho{\scriptstyle{(\lambda)}}\mathsf{S}\mathsf{F} \] one realizes that full commutativity is not there \[ \mathsf{Q}{\scriptstyle{(\lambda)}}\mathsf{Q}{\scriptstyle{(\mu)}}\neq\mathsf{Q}{\scriptstyle{(\mu)}}\mathsf{Q}{\scriptstyle{(\lambda)}} , \] only `squares' will do: \begin{eqnarray*}\\ && \bigg(\mathsf{Q}{\scriptstyle{(\kappa)}}\mathsf{Q}{\scriptstyle{(\lambda)}}\bigg)\bigg(\mathsf{Q}{\scriptstyle{(\mu)}}\mathsf{Q}{\scriptstyle{(\nu)}}\bigg) =\mathsf{F}\Omega{\scriptstyle{(\kappa)}}\mathsf{F}\Omega{\scriptstyle{(\lambda)}}\mathsf{F}\Omega{\scriptstyle{(\mu)}}\mathsf{F}\Omega{\scriptstyle{(\nu)}}\\\\ && =\mathsf{F}^4\mho{\scriptstyle{(\kappa)}}\Omega{\scriptstyle{(\lambda)}}\mho{\scriptstyle{(\mu)}}\Omega{\scriptstyle{(\nu)}} =\mathsf{F}^4\mho{\scriptstyle{(\mu)}}\Omega{\scriptstyle{(\nu)}}\mho{\scriptstyle{(\kappa)}}\Omega{\scriptstyle{(\lambda)}} \\\\ && =\mathsf{F}\Omega{\scriptstyle{(\mu)}}\mathsf{F}\Omega{\scriptstyle{(\nu)}}\mathsf{F}\Omega{\scriptstyle{(\kappa)}}\mathsf{F}\Omega{\scriptstyle{(\lambda)}}= \bigg(\mathsf{Q}{\scriptstyle{(\mu)}}\mathsf{Q}{\scriptstyle{(\nu)}}\bigg)\bigg(\mathsf{Q}{\scriptstyle{(\kappa)}}\mathsf{Q}{\scriptstyle{(\lambda)}}\bigg).\\\end{eqnarray*} In particular, \[ \mathsf{Q}^{-2}\mathsf{Q}^2{\scriptstyle{(\lambda)}}\mathsf{Q}^2=\mathsf{Q}^2{\scriptstyle{(\lambda)}} . \] On these grounds, let us call the $\mathsf{Q}{\scriptstyle{(\lambda)}}$'s `conservation laws' even though what actually happens is that their squares are only recovered on every other step in time. This peculiarity has everything to do with the one discussed in Section 9. Apparently, not one but two time steps should make a `physical' time unit. \section{Conclusion} We have developed here a scheme allowing one to describe some quantum dynamical systems in discrete 1+1-dimensional space-time. The discretized Liouville model was taken as the main example and treated both in laboratory coordinates and in light-like ones. All considerations were purely algebraic; no representation and/or Hilbert space was used. We confined ourselves to the pure Heisenberg picture of quantum theory.
The main outcome is the construction of the evolution operator realizing the automorphism of the algebra of observables that leads to Heisenberg equations of motion representing a lattice and quantum deformation of the corresponding classical equations. In this construction the famous q-exponent (q-dilogarithm) played the most prominent part. We also discussed the integrability of the model, presenting a set of conservation laws. Their construction and the verification of their commutativity were based on a new solution of the Artin-Yang-Baxter relation, a close relative of the q-exponent. We hope that the scheme of this paper is general enough to include many related models of quantum theory. Our papers mentioned in the Introduction give some illustration of this.
\section{Introduction} The properties of the Orion nebula are the starting point for many of our ideas on high mass stars and their interactions with the environment. A symposium held in 1981 (Glassgold et al.~\cite{Gea82}) summarises much of what was known at that time. More recent work has been reviewed by Genzel \& Stutzki~(\cite{GS89}). In general, the aim has been to understand the ionization structure and dynamical evolution of the nebula. In recent years, much attention has been paid to the hot neutral gas adjacent to the ionization front, known as a PDR or Photon Dominated Region. Work on the ionized nebula has tended to concentrate upon determinations of the ionization structure and elemental abundances (see Peimbert~\cite{P82}, Simpson et al.~\cite{Sea86}). More recent optical and infrared studies have been carried out by Osterbrock et al.~(\cite{OSV90}, \cite{OTV92}), Baldwin et al.~(\cite{Bea91}), Peimbert et al.~(\cite{PTR92}), Pogge et al.~(\cite{POA92}), De Poy \& Pogge~(\cite{DP94}), Bautista et al.~(\cite{BPD95}), Rubin et al.~(\cite{RDW93}), and Rodriguez (1996). A review of the results has been made by Peimbert~(\cite{P93}), and discussions of the methods employed are given by Mathis~(\cite{M95}) and by Peimbert~(\cite{P95}). These studies in general indicate that the major fraction of elements such as C, N, O, and S is in the gas phase within the ionized nebula, whereas species such as Si and Fe appear to be depleted by roughly an order of magnitude relative to abundances either in the Sun or in nearby B stars. Radio work on Orion has revealed an immense variety of structures in the emission of the ionized gas (e.g.\ Felli et al.~\cite{Fea93}, Yusef-Zadeh~\cite{YZ90}). Particularly striking is the bar-like structure situated roughly 2 arc minutes (0.25 parsec) to the south-east of the Trapezium stars, which is the subject of this article. ``The Bar'' is also observed in molecular line emission (see below) and clearly marks an ionization front where Lyman continuum photons from the O6 star \THEC \ are absorbed. The dynamical behavior of the ionized gas can be studied in radio recombination lines (Pankonin et al.~\cite{PWH79}, Wilson \& J\"{a}ger~\cite{WJ87}, Wilson \& Filges~\cite{WF90}, Wilson et al.~\cite{WFCRR97}), from which one concludes that much of the ionized material in Orion is streaming towards the observer. Infrared studies of HII regions sample not only the ionized gas but also the adjacent neutral material or PDR. A recent review of the properties of these regions is that of Hollenbach \& Tielens~(\cite{HT97}) (see also the discussions of Genzel~\cite{G92} and Walmsley~\cite{W97}). Modelling studies have been carried out by Tielens \& Hollenbach~(\cite{TH85}), Hollenbach et al.~(\cite{HTT91}), Sternberg \& Dalgarno~(\cite{SD89}; \cite{SD95}), Fuente et al.~(\cite{FUea93}), Jansen et al.~(\cite{Jea95a},b), Bertoldi \& Draine~(\cite{BD96}) and Draine \& Bertoldi~(\cite{DB96}). Much of this activity has centred on attempts to understand the properties of ``The Bar'' mentioned above. Recent observational studies using a variety of molecular tracers have been carried out by Tielens et al.~(\cite{Tea93}), by Tauber et al.~(\cite{Tea94}; \cite{tauber95}), by Hogerheijde et al.~(\cite{HJD95}), and by van der Werf et al.~(\cite{VdW96}). These show a stratification along the direction perpendicular to the bar in the plane of the sky.
This is in the sense expected for gas heated by the Trapezium stars and is consistent, according to the models, with attenuation by a gas of density $5\, 10^4$ \percc . However, the data also seem to show that the gas in the bar is far from homogeneous and that clumps of density as high as $10^6$ \percc \ are embedded in the filament. Such high density condensations presumably either have been or will soon be overrun by the ionization front and will give rise to dense ionized globules within the HII region (see e.g.\ Lizano et al.~\cite{LCGH96}, Dyson et al.~\cite{DWR95}). Understanding the characteristics of such high density clumps may thus be of critical importance for the evolution of the HII region. Among the most useful tracers of PDR's are the near infrared lines of molecular hydrogen. For example, van der Werf et al. used the FAST camera on the ESO/MPI 2.2 m telescope to image the \MOLH \ $v=1\rightarrow 0$ S(1) (2.122 $\mu\hbox{m}$ ) and $v=2\rightarrow 1$ S(1) (2.248 $\mu\hbox{m}$ ) lines towards the bar with 1.5\hbox{$^{\prime\prime}$} \ resolution. These show that the transition from atomic to molecular hydrogen in the bar occurs 15 arc seconds (0.03 pc at a distance of 450 pc) to the SE of the ionization front (i.e.\ away from the ionizing stars). Van der Werf et al. also find that the ratio $R_{12}$ of the intensities of the $1\rightarrow 0$ and $2\rightarrow 1$ lines varies from a value of $8.1\pm 0.7$ at the peak of the \MOLH \ emission to a value of $3.4\pm 1.9$ 30\hbox{$^{\prime\prime}$} \ from the ionization front on the side shielded from the radiation of the Trapezium stars. The latter value is characteristic of UV-pumped fluorescent emission in a low density gas (Sternberg and Dalgarno~\cite{SD89}). The aim of the present study was to obtain near-IR spectra of the gas in the vicinity of the bar in order to verify and extend our understanding of the physical conditions on both sides of the ionization front. We were partly motivated by the idea that there is a link between the ionized and PDR components, in that the former is mainly sensitive to the radiation just shortward of 912 \AA \ while the latter is basically a measure of the radiation longward of this limit (see Bertoldi \& Draine 1996 for a discussion). It is thus of considerable interest to compare the two using the same instrument. We therefore used the TIRGO telescope on the Gornergrat (Switzerland) to obtain slit spectra in the J, H, and K bands at 3 positions in the vicinity of the bar, shown in Fig.~\ref{fima}. As supplementary information, we also made use of unpublished observations carried out using the IRSPEC spectrometer on the ESO NTT telescope by Dr A.~Moorwood. We summarize in the next section the techniques used for the observations and data reduction. The results are presented in Sect.~3 and discussed in Sect.~4. We summarize our conclusions in Sect.~\ref{sconcl}. \begin{table} \begin{center} \caption{Spectra in the J, H and K bands} \vskip 0.2cm \vbox{\hskip -8mm \begin{tabular}{lccccc} \hline\hline Line & $\lambda$ & $A$ & $B$ & $C$ & $CS$ \\ & ($\mu$m) & & & & \\ \hline &&&&&\\ \,[OI]\,$2p^33D-2p^33P$ & 1.129 & 3.6 & 4.7 & 6.8 & 12 \\ \,[PII]\,$3p^2D_2-3p^2P_2$ & 1.189 & 2.4 & 3.5 & 3.2 & 6.6 \\ \,HeI\,$5^3D-3^3P^\circ$ & 1.197 & 5.1 & 5.4 & 6.0 & 5.4 \\ \,HeI\,$4^3P^\circ-3^3S$ & 1.253 & 6.9 & 5.9 & 5.3 & 2.3 \\ \,[FeII]\,$4sa^4\!D_{7/2}\!-\!4sa^6\!D_{9/2}$& 1.257 & 4.8 & 8.3 & 6.7 & 18 \\ \,??
& 1.268 & 2.3 & 1.5 & $<$1.8 & 5.7 \\ \,HeI\,$5^3F^\circ-3^3D$ & 1.278 & 18 & 16 & 18 & 26 \\ \,HI\,5-3 & 1.282 & 370 & 400 & 450 & 430 \\ \,HeI\,$5^3P^\circ-3^3D$ & 1.298 & 2.2 & 2.3 & 1.6 & $<$2.6 \\ \,+[FeII]\,$4sa^4\!D_{3/2}\!-\!4sa^6\!D_{1/2}$& & & & & \\ \,[OI]\,$2p^34S-2p^33P$ & 1.317 & 3.3 & 6.8 & 7.4 & 8.4 \\ \,[FeII]\,$4sa^4\!D_{7/2}\!-\!4sa^6\!D_{7/2}$ & 1.321 & 1.1 & 2.3 & $<$1.6 & 4.7 \\ \,HeI\,$5^1S-3^1P^\circ$ & 1.341 & 1.5 & 1.3 & 1.6 & 2.3 \\ &&&&&\\ \,HI\,21-4 & 1.514 & 3.3 & 2.7 & 2.7 & 3.0 \\ \,HI\,20-4 & 1.520 & 3.6 & 3.6 & 3.6 & 3.9 \\ \,HI\,19-4 & 1.526 & 4.4 & 4.3 & 4.5 & 4.4 \\ \,HI\,18-4 & 1.534 & 6.4 & 6.2 & 6.6 & 8.7 \\ \,+[FeII]\,$4sa^4\!D_{5/2}\!-\!3d^7\!a^4\!F_{9/2}$ & & & & & \\ \,HI\,17-4 & 1.544 & 6.5 & 5.9 & 6.3 & 6.2 \\ \,HI\,16-4 & 1.556 & 8.2 & 7.3 & 7.2 & 7.2 \\ \,HI\,15-4 & 1.570 & 9.4 & 9.0 & 8.2 & 9.4 \\ \,HI\,14-4 & 1.588 & 11 & 10 & 10 & 13 \\ \,[FeII]\,$4sa^4\!D_{3/2}\!-\!3d^7\!a^4\!F_{7/2}$& 1.600 & 0.5 & 0.6 & 0.9 & 3.0 \\ \,HI\,13-4 & 1.611 & 14 & 13 & 13 & 16 \\ \,HI\,12-4 & 1.641 & 18 & 17 & 17 & 19 \\ \,[FeII]\,$4sa^4\!D_{7/2}\!-\!3d^7\!a^4\!F_{9/2}$& 1.644 & 5.9 & 11 & 8.7 & 24 \\ \,[FeII]\,$4sa^4\!D_{5/2}\!-\!3d^7\!a^4\!F_{7/2}$& 1.677 & 0.8 & 1.1 & $<$1.1 & 2.3 \\ \,HI\,11-4 & 1.681 & 21 & 21 & 23 & 24 \\ \,HeI\,$4\,^3D-3\,^3P^\circ$ & 1.701 & 9.1 & 8.5 & 8.6 & 6.4 \\ \,HeI\,$10\,^1P^\circ-4\,^1D$ & 1.732 & 0.5 & 1.0 & 2.1 & 1.2 \\ \,HI\,10-4 & 1.737 & 26 & 28 & 29 & 30 \\ \,HeI\,$7^3P^\circ-4^3S$ & 1.746 & 0.9 & 1.8 & 3.0 & 3.9 \\ $\>\>$ +H$_2$(1,0)S(7) & & & & & \\ \,H$_2$(1,0)S(6) & 1.788 & 1.5 & 1.1 & 1.6 & 2.2 \\ &&&&&\\ \,H$_2$(1,0)S(2) & 2.034 & 0.6 & .... & .... & 4.9 \\ \,HeI\,$2^1P^\circ-2^1S$ & 2.058 & 99 & 110 & 74 & 95 \\ \,H$_2$(2,1)S(3) & 2.073 & $<$0.5 & 1.2 & 3.6 & 4.2 \\ \,HeI\,$4\,^3S-3\,^3P^\circ$ & 2.113 & 4.2 & 3.7 & 2.8 & 2.1 \\ \,H$_2$(1,0)S(1) & 2.122 & 1.5 & 5.1 & 17 & 15 \\ \,H$_2$(2,1)S(2) & 2.154 & $<$0.1 & 0.2 & 1.3 & 1.3 \\ \,HeI\,$7F^\circ-4^1D$ & 2.162 & 3.0 & 2.5 & 2.4 & 1.9 \\ \,HI\,7-4 & 2.166 & 100 & 100 & 100 & 100 \\ \,??+H$_2$(3,2)S(3) & 2.199 & 0.5 & 0.9 & 1.4 & 3.5 \\ \,[FeIII]$3d^6G_5-3d^6H_6$ & 2.219 & 1.4 & 2.0 & 1.3 & 2.3 \\ \,H$_2$(1,0)S(0) & 2.223 & 0.5 & 1.6 & 6.6 & 6.5 \\ \,[FeIII]$3d^6G_4-3d^6H_4$ & 2.242 & 0.7 & 0.6 & 0.6 & 0.4 \\ \,H$_2$(2,1)S(1) & 2.248 & 0.4 & 1.2 & 4.1 & 3.8 \\ \,??+H$_2$(3,2)S(2) & 2.286 & 0.7 & 0.6 & 1.0 & 1.4 \\ \hline\hline \end{tabular}} \end{center} Note: \hskip 0.1cm Intensity 100 corresponds to: \\ \hskip 1cm A~) $2.65\times 10^{-3}$ erg cm$^{-2}$s$^{-1}$ sr$^{-1}$ \\ \hskip 1cm B~) $2.08\times 10^{-3}$ erg cm$^{-2}$s$^{-1}$ sr$^{-1}$ \\ \hskip 1cm C~) $0.61\times 10^{-3}$ erg cm$^{-2}$s$^{-1}$ sr$^{-1}$ \\ \hskip 1cm CS) $0.50\times 10^{-3}$ erg cm$^{-2}$s$^{-1}$ sr$^{-1}$ \\ \end{table} \begin{figure*} \centerline{\psfig{figure=ps.fig1,width=18cm,angle=-90}} \caption {On the left: composite J, H, and K image of the Orion Bar region showing the slit positions used for the LONGSP and IRSPEC observations. Coordinates are offsets in right ascension and declination relative to the position of the star \THEA \ (R.A.(1950)= 5$^{h}$ 32$^{m}$ 55.$^{s}$5, Dec(1950)= -5$^{\circ}$ 26\hbox{$^{\prime}$} \ 51\hbox{$^{\prime\prime}$} ). We show our three slit positions (1, 2, and 3) as well as the four positions for which we tabulate line intensities (A, B, C, CS).
On the right: Br$\gamma$\ image of the Orion Bar region.} \label{fima} \end{figure*} \section{Observations} \subsection{ARNICA Observations} The Orion Bar was observed during two observing runs in January 1996 and February 1997 using ARNICA (ARcetri Near Infrared CAmera) mounted on the 1.5m TIRGO\footnote{The TIRGO telescope is operated by the C.A.I.S.M.I.-C.N.R Firenze, Italy} telescope. ARNICA is equipped with a 256$\times$256 NICMOS3 array; the pixel size with the optics used at TIRGO is $0.96^{\prime\prime}$. For a complete description of the instrument and of its performance, see Lisi et al.~(\cite{Lea96}) and Hunt et al.~(\cite{Hea96}). The Bar was imaged in the three J, H, and K broad band filters (centered at 1.25, 1.65, and 2.2~$\mu$m, respectively) and in the Br$\gamma$ narrow band filter ($\lambda=2.166\,\,\mu$m, $\Delta\lambda/\lambda\sim 1\%$, Vanzi et al.~\cite{VGCT97}). The seeing was approximately 2-3\hbox{$^{\prime\prime}$}\ and the observed field was $\sim$4\farcm5$\times$4\farcm5, covering the whole Bar region. Data reduction was carried out using the IRAF \footnote{IRAF is made available to the astronomical community by the National Optical Astronomy Observatories, which are operated by AURA, Inc., under contract with the U.S. National Science Foundation} and ARNICA \footnote{A description of ARNICA can be obtained from the Arcetri Observatory at {\tt ftp://150.217.20.1/pub/arnica/}} (Hunt et al.~\cite{HTBMM94}) software packages. Photometric calibration in the J, H and K bands was performed by observing photometric standard stars from the list of Hunt et al.~(\cite{Hea97}); the calibration accuracy is estimated to be $\sim 5\%$. The Br$\gamma$ image was continuum subtracted and calibrated using the K band image. We show in Fig.~\ref{fima} (left panel) an image obtained by combining the J, H, and K images and (right panel) the continuum subtracted Br$\gamma$ image. \subsection{LONGSP Observations} J (1.25 $\mu\hbox{m}$), H (1.65 $\mu\hbox{m}$), and K (2.2 $\mu\hbox{m}$) band spectra of the Orion Bar were obtained using LonGSp (the Longslit Gornergrat Spectrometer) mounted at the Cassegrain focus of the TIRGO telescope. The spectrometer is equipped with cooled reflective optics and a grating in Littrow configuration. The detector is a 256$\times$256 engineering grade NICMOS3 array (for detector performance see Vanzi et al.~\cite{VMG95}). The pixel sizes are 11.5 \AA\ (first order) and 1\farcs73 in the dispersion and slit directions, respectively. LonGSp operates in the range 0.9-2.5 $\mu\hbox{m}$\ achieving a spectral resolution at first order of $R\simeq550$ in J, 700 in H and 950 in K. For a more comprehensive description of the instrument, refer to Vanzi et al.~(\cite{Vea97}). Observations were conducted in two runs in January and March 1996 under non-photometric conditions. The slit used had dimensions 3\farcs5$\times$70\hbox{$^{\prime\prime}$}\ and was oriented N-S. The seeing during the observations was in the range 2\hbox{$^{\prime\prime}$}-4\hbox{$^{\prime\prime}$}. The Orion Bar was observed at three slit positions, labeled 1, 2, 3 and shown in Fig.~\ref{fima} superimposed on a NIR image obtained by co-adding the J, H and K ARNICA observations discussed in the previous section. Positions 1 and 2 were chosen in order to study the variation of line intensities along a cut encompassing the whole bar. Position 3 was subsequently chosen to be coincident with the CS peak discovered by van der Werf et al.
at R.A.= 5$^h$ 32$^m$ 58.5$^s$ and Dec.= -5$^\circ$ 26$^\prime$ 25\hbox{$^{\prime\prime}$}\ (B1950.0). This high density ($10^6$ cm$^{-3}$) clump appears to be illuminated directly by the Trapezium and we thought it useful to examine the relative variations in line intensities across the clump. The centers of the slits were offset by -35\hbox{$^{\prime\prime}$}, -23\hbox{$^{\prime\prime}$}\ (Pos. 1), -35\hbox{$^{\prime\prime}$}, 23\hbox{$^{\prime\prime}$}\ (Pos. 2) and 42\hbox{$^{\prime\prime}$}, 26\hbox{$^{\prime\prime}$}\ (Pos. 3) in R.A. and Dec. with respect to the star $\theta^2$ A Ori. At each grating position we performed 5 ABBA cycles (A=on source, B=on sky) with an on-chip integration time of 60 sec, for a total of 10 min integration on source. At the beginning or at the end of the five cycles on the object we performed 1 ABBA cycle on the O6 star BS 1895 ($\Theta^1$C Ori). Data reduction was performed with the ESO package MIDAS, within the context IRSPEC, modified to take into account the LonGSp instrumental characteristics. The frames were corrected for bad pixels, flat-fielded, sky subtracted and wavelength calibrated using the OH sky lines present in all the frames (Oliva \& Origlia~\cite{OO92}). After a direct subtraction, sky removal was optimized by minimizing the standard deviation in selected areas where the OH sky lines were poorly subtracted but no object emission was present. The wavelength calibration was performed to better than 1/5 of a pixel ($\simeq$2\AA). The spectra were then corrected for telluric absorption by dividing by the featureless spectrum of \THEC . For more details on LonGSp data reduction, see Vanzi et al. \cite{Vea97}. Flux calibration of the spectra in the J, H, and K bands was achieved by rescaling the observed flux distribution along the slit to match that obtained from the ARNICA images at the positions of the slits. We consider such a calibration accurate to $\simeq$20\% when comparing the fluxes of lines measured in two different bands. Indeed, the comparison between the flux distributions of H$_2$(1,0)S(1) $\lambda$2.12$\mu\hbox{m}$\ in our observations and in those by van der Werf et al. shows only a 10\% discrepancy in the absolute flux level. \subsection{IRSPEC Observations} IRSPEC (Moorwood et al. \cite{Mea91}) observations of the bar using the ESO NTT telescope were carried out in 1991 by Dr A. Moorwood. The detector was a SBRC 62$\times$58 InSb array with pixels of $\simeq$5\AA\ (H band) along the dispersion and 2\farcs2 along the slit direction. The slit, 4\farcs4$\times$120\hbox{$^{\prime\prime}$}\ in size, was oriented NE-SW as shown in Fig.~\ref{fima}. The data were uncalibrated and in this paper we merely make use of the profiles of line intensity along the slit. \begin{figure*} \psfig{figure=ps.fig2a,width=11cm} \caption { J band spectrum obtained towards the three slit sections {\it A}, {\it B} and {\it C} and in the {\it CS} position shown in Fig.~1. The upper spectrum in each panel has been multiplied by 15 to emphasize weak features.} \label{fspecj} \end{figure*} \begin{figure*} \psfig{figure=ps.fig2b,width=12cm} \caption { H band spectrum.} \label{fspech} \end{figure*} \begin{figure*} \psfig{figure=ps.fig2c,width=11cm} \caption { K band spectrum.} \label{fspeck} \end{figure*} \section{Results} In Figs.~\ref{fspecj}, \ref{fspech}, and \ref{fspeck}, we show sample spectra averaged over three portions of slit positions 1 and 2 (see Fig.~\ref{fima}).
Based on the profiles of line intensity along the N-S direction, we decided to divide the combined slit into three sections which we named {\it A} (a 28 pixel section to the north), {\it B} (14 pixels in a central region), and {\it C} (28 pixels to the south). These sections are displayed in the left panel of Fig.~\ref{fima}. The hydrogen and helium recombination lines peak in position {\it A} and become weaker to the south, whereas the molecular hydrogen lines become stronger and reach their maximum intensity in position {\it C}. However, some ionized gas is clearly present towards region {\it C} and, vice-versa, some molecular emission towards {\it A}. In Figs.~\ref{fspecj}, \ref{fspech}, and \ref{fspeck}, we show also the spectra summed over 40 pixels in slit position 3 (the ``CS peak''). In Table 1, we give intensities corresponding to the spectra shown in Figs.~\ref{fspecj}, \ref{fspech}, and \ref{fspeck}. They have been averaged over the respective apertures. Line intensities, in each region, are given relative to Br$\gamma$ (set equal to 100). Typical uncertainties vary from $\simeq10\%$ for strong lines ($I\simgreat 10$) to $\simeq20\%$ for lines with $1\simless I\simless 10$ and about 50\% for the others. When comparing lines in two different bands, a 20\% error due to spectrophotometric calibration must also be taken into account. The line intensities can be corrected for reddening using the Cardelli et al. (1989) prescription A$_\lambda$/A$_V=0.48 \lambda^{-1.61}_\mu$, where we have adopted $R$=5.5, as appropriate for the Orion region. From the ratio Pa$\beta$/Br$\gamma$ we derive A$_V\sim$ 2 mag (Sect.~3.2). Note that this value applies only to lines forming in the ionized gas. The variation in intensity along the amalgamation of slit positions 1 and 2 of a variety of interesting line tracers is shown in Figs.~\ref{fcuthi} to~\ref{fcutfeoi}. Figure~\ref{fcutirspec} shows intensity variations in a number of lines from the IRSPEC data. Figure~\ref{fcutcs} plots the intensity profile of selected lines at slit position 3. From Figs.~\ref{fcuthi}-\ref{fcutfeoi}, we can see that region {\it A} coincides roughly with a peak in the lines of HI, which have a second peak in {\it B}, while the emission of molecular hydrogen has a strong peak in {\it C} and a weaker one in {\it B}. Lines of FeII and OI have a sharp peak in {\it B} and a secondary one in {\it A}, but are absent in region {\it C}. In the following, we will use the labels {\it A}, {\it B} and {\it C} (see caption to Fig.~\ref{fcuthi}) both to identify the peaks in the intensity profiles and with reference to Table 1. We now summarize our results, starting with the IRSPEC data, which have the advantage that the slit is oriented perpendicularly to the Bar. We then discuss in turn the hydrogen and helium recombination lines (which we presume form in the ionized gas), the two oxygen transitions, the collisionally excited iron lines which may form close to the ionization front, and the molecular hydrogen lines which are thought to form in hot neutral gas close to the ionization front. \begin{figure} {\psfig{figure=ps.cutHI,height=10cm,width=10cm}} \caption { Variation with declination offset (relative to declination (1950) =$-05^{\circ}$ 24\hbox{$^{\prime}$} \ 51\hbox{$^{\prime\prime}$} ) of line intensities measured in the amalgamation of slit positions 1 and 2. The lines are H (7-4) (Br$\gamma $; solid), H (13-4) (dotted), and the He line at 1.701 $\mu\hbox{m}$ ~(dashed). The vertical scale is arbitrary.
We note that regions {\it A}, {\it B}, and {\it C} discussed in the text are defined as follows: {\it A}, $\Delta \delta > 16\hbox{$^{\prime\prime}$}$; {\it B}, $-10\hbox{$^{\prime\prime}$} < \Delta \delta < 16\hbox{$^{\prime\prime}$}$; {\it C}, $\Delta \delta < -10\hbox{$^{\prime\prime}$}$.} \label{fcuthi} \end{figure} \begin{figure} {\psfig{figure=ps.cutH2,height=10cm,width=10cm}} \caption { Variation with declination offset of H (7-4) (dashed line) and \MOLH\ (1-0)S(1) (solid line), measured in the amalgamation of slit positions 1 and 2. The regions defined as {\it A}, {\it B}, and {\it C} are shown.} \label{fcuth2} \end{figure} \begin{figure} {\psfig{figure=ps.cutOI,height=10cm,width=10cm}} \caption { Top panel: variation with declination offset of H (7-4) (dotted line), \MOLH\ 1-0 S(1) (dashed line) and FeII 1.644 $\mu\hbox{m}$ (solid line). Bottom panel: variation with declination offset of H (7-4) (dotted line), \MOLH\ 1-0 S(1) (dashed line) and OI 1.317 $\mu\hbox{m}$ (solid line). In both cases, the variations are measured in the amalgamation of slit positions 1 and 2. } \label{fcutfeoi} \end{figure} \begin{figure} {\psfig{figure=ps.cutT,height=10cm,width=10cm}} \caption { IRSPEC cuts: HI (12-4) (dashed line), \MOLH\ 1-0 S(1) (dotted line) and FeII 1.644 $\mu\hbox{m}$ (solid line).} \label{fcutirspec} \end{figure} \begin{figure} {\psfig{figure=ps.cutCS,height=10cm,width=10cm}} \caption {Variation with declination offset along the slit centered on the CS peak of selected lines: HI (7-4) (dashed line, top panel), \MOLH\ 1-0 S(1) (solid line, top panel and dotted line, bottom panel), and FeII 1.257 $\mu\hbox{m}$ (solid line, bottom panel).} \label{fcutcs} \end{figure} \subsection{IRSPEC cut} The observations made using IRSPEC provide a useful introduction to the TIRGO results, which have a wider spectral coverage. Figure~\ref{fcutirspec} compares profiles in the Br12 line from the ionized gas, in the FeII 1.644$\mu\hbox{m}$ \ ($4sa^{4}$D$_{7/2}$-$3d^7a^{4}$F$_{9/2}$) line which traces gas close to the ionization front (see below), and in the molecular hydrogen v=1-0 S(1) line from hot (T$>$ 1000K) molecular gas. Figure~\ref{fcutirspec} demonstrates that the molecular hydrogen peak is offset $\sim$16\hbox{$^{\prime\prime}$} (0.035 pc) from the ionization front as marked by FeII (or by the fall-off in Br12). The simplest explanation of this (see Tielens et al. 1993, van der Werf et al. 1996) is that one is observing an edge-on PDR and that the observed offset corresponds to the difference between the ionization front, where Lyman continuum photons are absorbed, and the \MOLH \ dissociation front, where photons capable of dissociating molecular hydrogen are absorbed. We note also that an offset of 16\hbox{$^{\prime\prime}$} \ in the IRSPEC data corresponds (given the orientation NE-SW of the bar) to 23\hbox{$^{\prime\prime}$}\ in the TIRGO slit oriented N-S. \subsection{HI lines} In the LonGSp data, we detect many recombination lines of atomic hydrogen: 13 lines of the Brackett series (Br$\gamma$ in the K band and 12 lines, from (10-4) to (21-4), in the H band), and Pa$\beta$ and Pa$\gamma$ in the J band. We use the 13 lines of the Brackett series to check the accuracy of our data. Figure~\ref{fhi} plots the ratio (n-4)/(13-4) as a function of the quantum number n and compares our results with the prediction of recombination theory (Storey \& Hummer 1995). The agreement is quite good, well within our estimate of 10-20\% for the observational errors.
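As a cross-check of the extinction estimate discussed next, the short Python sketch below inverts an observed Pa$\beta$/Br$\gamma$ ratio for A$_V$ using the Cardelli et al. (1989) prescription quoted above; the intrinsic Case B ratio of $\simeq$5.9 (for T$_e\simeq10^4$ K and n$_e\simeq10^4$ \percc ) is a standard recombination-theory value adopted here as an assumption, not a quantity taken from our data. \begin{verbatim}
import numpy as np

def a_lambda_over_av(lam_um):
    # Cardelli et al. (1989) NIR law as quoted in the text (R = 5.5)
    return 0.48 * lam_um**(-1.61)

def av_from_pab_brg(ratio_obs, ratio_case_b=5.9):
    # ratio_case_b ~ 5.9 is an assumed Case B value, not from our data
    d_a = a_lambda_over_av(1.282) - a_lambda_over_av(2.166)
    return 2.5 / d_a * np.log10(ratio_case_b / ratio_obs)

# An observed ratio ~30% below Case B yields A_V ~ 2 mag:
print(av_from_pab_brg(4.2))
\end{verbatim}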
The ratio of Pa$\beta$/Br$\gamma$ provides a value of the extinction A$_V \sim 2$ mag, assuming the reddening curve of Cardelli et al. (1989) and $R$=5.5. We detect a slight variation along slit 1+2, from 2.3 mag in {\it A} to 1.4 mag in {\it C}. This variation, however, is within our estimated 25\% error in this line ratio. The value in the CS position is A$_V$=1.6 mag. Figure~\ref{fhi} shows as a dashed line the theoretical line ratios corrected for A$_V$= 2 mag. \begin{figure} {\psfig{figure=ps.HI,height=10cm,width=10cm}} \caption{Ratio of the H-band hydrogen recombination lines (n-4)/(13-4) as a function of the quantum number n. Filled circles refer to position {\it A}, triangles to {\it B}, squares to {\it C}, and diamonds to position CS (the CS peak). The line (18-4) is blended with an [FeII] line. The solid line shows the predictions of recombination theory (Case B). The dashed line shows the Case B ratios corrected for a reddening of A$_V$=2 mag.} \label{fhi} \end{figure} \subsection{He lines} One of the aims of our observations was to examine the extent to which helium is neutral within the zone of ionized gas. Estimates of the helium abundance based upon measurements of either radio or optical recombination lines often assume that the helium and hydrogen Str\"{o}mgren spheres are coincident with one another (see e.g. Mezger 1980). Our profiles along the slit allow us to make a direct comparison of the HeI and HI line intensities, which can then in principle be transformed into the abundance ratio $[He^{+}]/[H^{+}]$ in the immediate vicinity of the ionization front of the Bar. The chief obstacle in doing this is the uncertainty in helium line intensities which results from collisional excitation from the metastable 2$^{3}$S and 2$^{1}$S states. Smits (1996) has computed helium line intensities in an approximation where collisions (and self-absorption) out of the metastable levels into n=3 and 4 are neglected, although collisions between the n=2 levels are considered. We have compared our observed intensities with Smits' predictions for electron density $10^4$ \percc \ and temperature $10^4$ K. We normalise for this purpose to the 1.701$\mu\hbox{m}$\ 4$^{3}$D-3$^{3}$P$^\circ$ transition, which has the same upper level as the 4471 \AA \ line often used in optical analyses. It is expected that this transition (see Osterbrock et al. 1992) is only affected at the 1-2 percent level by the collisional effects mentioned above, and we neglect such effects in the following. One finds then that, relative to 4$^{3}$D-3$^{3}$P$^\circ$, lines such as 5$^{3}$D-3$^{3}$P$^\circ$ are in good agreement with Smits' predictions, but 4$^{1}$S-3$^{1}$P$^\circ$ and 5$^{1}$S-3$^{1}$P$^\circ$ at 2.113 and 1.341$\mu\hbox{m}$ \ respectively are factors of roughly 3 stronger. The reason for this may be the neglect of the collisional effects and trapping discussed above (see e.g. Robbins \& Bernat 1973; Peimbert \& Torres-Peimbert 1977). We have in any case assumed that the 1.701 $\mu\hbox{m}$ \ line behaves essentially as predicted by Smits' models and can be used to estimate the He$^{+}$ abundance. It is natural to compare the He 1.701$\mu\hbox{m}$ \ intensity with that of the adjacent Br10 line.
With the above assumptions, we find that: \begin{equation} F(He,1.7\mu )/F(H,10-4) \, = \, 3.61 \, \frac{[He^{+}]}{[H^{+}]} \end{equation} In Fig.~\ref{fhehi}, we show the profiles along the slit of the abundance ratio $[He^{+}]/[H^{+}]$ deduced from Eq.(1) as well as the ratio of the 2.06 (2$^{1}$P$^\circ$-2$^{1}$S) to the 1.7$\mu\hbox{m}$ \ lines. The Br$\gamma \ $ profile along the slit is shown for comparison. We derive an abundance ratio of $0.093\pm 0.005$ over region {\it A}, consistent with He abundance estimates from other authors (see e.g. Baldwin et al. 1991, who find 0.088 based on optical measurements, and the review of Mezger 1980, whose Fig.~5 shows that radio estimates at positions less than 100\hbox{$^{\prime\prime}$} \ from \THEC \ are in the range 0.083-0.09). It appears that at the position of our slit, the helium and hydrogen Str\"{o}mgren spheres are close to being coincident. We note with interest the decrease of $[He^{+}]/[H^{+}]$ to values of $\sim 0.075$ in the southern part of the slit, effectively in zone {\it C}, where we seem to observe different ionization conditions in this (presumably) foreground ionized material. One notes also that the degree of ``enhancement'' of the 2.06 micron line seems to increase slightly (by a factor of 1.3) in zone {\it C}. \begin{figure} {\psfig{figure=ps.Heratio,height=10cm,width=10cm}} \caption{The figure shows the He abundance [He$^+$]/[H$^+$] and the ratio of the two HeI lines at 2.058 and 1.701 $\mu\hbox{m}$\ as a function of the declination along the amalgamation of slits 1+2, compared with the Br$\gamma$ profile. The data have been smoothed over three pixels. } \label{fhehi} \end{figure} \subsection{OI lines} We detect two OI lines in the J band, the 2p$^3$3D-2p$^3$3P\ at 1.129$\mu\hbox{m}$\ and the 2p$^3$4S-2p$^3$3P\ at 1.317$\mu\hbox{m}$, with comparable intensity. These lines are produced in the neutral gas by excitation of OI to the upper level of the transitions by UV photons, at 1027 and 1040 \AA\ respectively, followed by radiative decay. The fact that the two lines have similar intensity (though not at the CS peak, see Fig.~2) suggests that the contribution of Ly$\beta$ photons to the excitation of the 2p$^3$3D level (the upper level of the 1027 \AA\ line) is negligible. That the excitation mechanism is fluorescence is confirmed by the spatial variation of the line intensities, which peak at the edge of the ionized region as marked by the HI recombination line emission (cf. Fig.~\ref{fcutfeoi}). In other words, the oxygen lines are an excellent marker of the ionization front. More precisely, one can say that, due to the rapid charge exchange of OI with H$^{+}$ and the fact that the hydrogen and oxygen ionization potentials are close to being identical, the oxygen lines trace neutral gas close to the ionization front. It is possible to use the OI lines to derive a measure of the UV radiation field G$_0$ at the edge of the Bar. If fluorescence is the dominant excitation mechanism, the number of photons emitted in the IR line equals the number of photons absorbed by the UV line. Since the optical depth of the UV lines is always very large ($\tau_0\sim 5 \,N/10^{18}$ for the 1040 \AA\ line and $\sim 10\,N/10^{18}$ for the 1027 \AA\ line, where $N$ is the atomic H column density and we have assumed a velocity dispersion $\Delta v$=3 km s$^{-1}$ and O/H=6$\times 10^{-4}$), the number of absorbed UV photons is proportional to the line equivalent width in the ``flat'' portion of the curve of growth.
This is given by $W_\lambda/\lambda_{UV}\sim 1.2\,\Delta v/c \>F(\tau_0) \sim 3.6\times10^{-5}$ (Spitzer 1978, p.~53), where $\tau _0$ is the UV line center optical depth and F($\tau _0) \sim 3$ for optical depths of order 1000. Since both UV lines are in fact triplets with separation larger than $W_\lambda$, the total number of UV photons absorbed and re-emitted in each IR line is given by: \begin{equation} I_\nu^{UV}= {{4\pi \sin \theta_t}\over{3}}{{\lambda_{IR}\lambda_{UV}}\over{c W_\lambda}}\>\> I(IR)\>\>\>\> {\rm (erg\, cm^{-2}\, s^{-1}\, Hz^{-1})} \end{equation} where I(IR) is the observed intensity of the IR line in erg cm$^{-2}$ s$^{-1}$ sr$^{-1}$ and $\theta_t$ is the angle between the PDR and the line of sight ($\theta_t=90\deg$ in a face-on PDR; see Appendix). Note that the three components of the IR lines are not resolved in our spectra. Equation 2 gives $I_\nu^{UV} \sim 1.2\times 10^{-13}\sin{\theta_t}$ for $I(IR)=2.6\times 10^{-4}$ erg cm$^{-2}$ s$^{-1}$ sr$^{-1}$, as observed in the main peak of the 1.317 $\mu\hbox{m}$\ line. Here, we have corrected for 2 magnitudes of visual extinction (see Sect.~3.2). The inferred UV intensity can be compared to the flux from \THEC \ at the projected distance of the bar ($I_{p}\, \sim 4\times 10^{-14}$ erg cm$^{-2}$ s$^{-1}$ Hz$^{-1}$ for T$_\star$=40000K, L$_\star$=2.5$\times 10^5$ $\hbox{L}_\odot$). The main uncertainty is the appropriate value for $\theta_t$, but the Orion Bar is known to be close to edge on (see e.g. Hogerheijde et al. 1995). A plausible value is $\sin\theta_t\sim 0.2$ ($\theta_t\sim10-15 \deg$; see Sect.~3.6), and one finds then that the physical distance of the bar from \THEC\ is about $(I_{p}/I_\nu^{UV})^{0.5}$ or 1.3 times the projected distance. The UV intensity can be expressed relative to the interstellar diffuse field, taken here to be $3\times 10^7$ photons cm$^{-2}$ s$^{-1}$. The normalized UV intensity is then $1.3\times 10^{5}\, \sin {\theta_t}$, or G$_0\sim 2.6\times 10^4$ (for $\sin\theta_t$=0.2), similar to the value used by Tielens et al. (1993). The second, weaker peak of emission {\it A} at $\Delta\delta\sim$23 arcsec has an OI intensity about two times lower than the main peak. This may be due to a different orientation of the front with respect to the line of sight. The OI intensity on the CS peak is about 1/4 that of the main peak, which again may be due to an orientation effect, although it could also imply dust extinction between the CS peak and the Trapezium. In general, our OI results imply G$_{0}$ values at the ionization front in the range $6000 - 3\times 10^4$. \subsection{Iron lines} Model calculations suggest (see Baldwin et al. 1991, Rubin et al. 1991, Osterbrock et al. 1992) that iron is mainly in the form FeIV in the Orion nebula. It follows that one expects to see FeII and FeIII emission predominantly at the edge of the Str\"{o}mgren sphere close to the ionization front. In Fig.~\ref{fcutfeoi}, we show profiles along our amalgamated slit of the 1.644 $\mu\hbox{m}$ \ [FeII] transition compared with the 1.317$\mu\hbox{m}$ \ OI line discussed in the previous section. One sees that the two lines show rather similar behavior, with a peak slightly offset from one of the maxima observed in the H recombination lines. It seems plausible that this approximate coincidence denotes the presence of an ionization front, and the OI data which we discussed above confirm this idea.
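For readers who wish to reproduce the numbers above, the following Python sketch evaluates Eq.~(2) with the quantities quoted in the text (it returns $\simeq1.3\times10^{-13}\sin\theta_t$, matching the quoted value to within rounding), together with the resulting ratio of physical to projected distance. \begin{verbatim}
import numpy as np

c = 2.998e10          # cm s^-1
lam_ir = 1.317e-4     # cm, OI IR line
lam_uv = 1.040e-5     # cm, pumping UV line
w_over_lam = 3.6e-5   # W_lambda/lambda_UV, flat part of curve of growth
i_ir = 2.6e-4         # erg cm^-2 s^-1 sr^-1, dereddened 1.317 um peak

w_lam = w_over_lam * lam_uv
# Eq. (2), with the geometric factor sin(theta_t) kept symbolic:
i_uv_per_sin = (4.0 * np.pi / 3.0) * lam_ir * lam_uv / (c * w_lam) * i_ir
print(i_uv_per_sin)   # ~1.3e-13 erg cm^-2 s^-1 Hz^-1 per unit sin(theta_t)

# Physical/projected distance of the Bar from Theta-1 C Ori:
i_p, sin_theta = 4e-14, 0.2
print(np.sqrt(i_p / (i_uv_per_sin * sin_theta)))   # ~1.2-1.3
\end{verbatim}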
It is notable also that the iron lines show no evidence for a coincidence with the molecular hydrogen 1-0 S(1) {\it C} peak at $\Delta \delta $= -27\hbox{$^{\prime\prime}$}, which is also shown in Fig.~\ref{fcutfeoi}. In fact, our data suggest that both the FeII and FeIII emission lines form in ionized (or partially ionized, but not neutral) gas close to the ionization front. It is worth noting here that the extinction estimates which we have made using the ratio of the 1.644$\mu\hbox{m}$\ $4sa^{4}$D$_{7/2}$-$3d^7a^{4}$F$_{9/2}$ to the 1.26$\mu\hbox{m}$ \ $4sa^{4}$D$_{7/2}$-$4sa^{6}$D$_{9/2}$ transitions are consistent with a visual extinction of $2.7\pm 0.9$ magnitudes at all positions. We can estimate the conditions required to explain the relative intensities of the FeII lines. FeII line ratios can be used as indicators of electron density (see Oliva et al. 1990, Pradhan \& Zhang 1993). Our most useful indicator appears to be the ratio of the 1.600$\mu\hbox{m}$\ $4sa^{4}$D$_{3/2}$-$3d^7a^{4}$F$_{7/2}$ line intensity to that of the 1.644$\mu\hbox{m}$\ $4sa^{4}$D$_{7/2}$-$3d^7a^{4}$F$_{9/2}$. We find that this ratio varies in the range 0.06-0.1 (see Table 1) over the regions covered by our slit. Based on the collisional rates of Pradhan \& Zhang (1993) (see also Oliva et al. 1990), we conclude that this corresponds to an electron density n$_{e}$ of 4000 \percc \ in region {\it A}, 6000 \percc \ in region {\it C}, and 3000 \percc \ in region {\it B}. At the CS peak position, the observed 1.600/1.644 ratio is 0.12, corresponding to n$_{e} = 10^4 $ \percc . Thus, we can exclude high density clumps with n$_{e}$ = $10^6$ \percc \ of the type discussed by Bautista et al. (1994). On the contrary, the FeII data seem consistent with ionized gas of electron density $\sim $ \ $10^4$ \percc \ or less in the vicinity of the ionization front. It is worth stressing here that this line ratio converges to the LTE value at electron densities of roughly $10^5$ \percc, and so the lack of high density clumps is significant. Moreover, the ionization degree in the layer where the FeII lines are formed is unlikely to be much smaller than unity, and hence the hydrogen density is also likely to be of order 10$^4$ \percc. \subsection{H$_2$ lines} We measured the intensity of 8 H$_2$ lines in the K band, covering an excitation range from 6472 to 18089 K (see Table 1). Using the data from Table 1 and transition probabilities from Turner et al. (1977), we have determined upper level column densities at positions {\it A, B}, {\it C} and {\it CS}. In Fig.~\ref{fh2}, we plot the column densities per sub--level derived in this manner against excitation energy. One sees that there are clear departures from an LTE distribution, suggesting either that fluorescence is playing a role in determining level populations or that there is a sharp gradient of temperature along the lines of sight sampled. The ``best fit temperatures'' derived from fitting a Boltzmann population distribution to the data in Fig.~\ref{fh2} are, moreover, rather high, with values ranging from $\sim$2500 K in region {\it C} to 3000 K in region {\it A}. \begin{figure} {\psfig{figure=ps.h2new,height=14cm,width=10cm}} \caption {Column density per sub--level against excitation energy of the level at positions {\it A, B}, {\it C} and on the {\it CS} Peak. The dotted lines show the best fit to a single temperature population. The two v=3 transitions are doubtful and have not been considered in the fit.
Filled squares represent ortho transitions and circles para transitions.} \label{fh2} \end{figure} \begin{figure} {\psfig{figure=ps.H2ratio,height=10cm,width=10cm}} \caption {Ratio of the 1-0 S(1) to 2-1 S(1) H$_2$ lines as a function of the declination offset along our amalgamation of slits 1+2 (solid line). To compute the ratio, the profiles have been smoothed over three pixels. The horizontal bar shows the value averaged over declination offset 0-50 arcsec. We show for comparison the profile of the 1-0 S(1) line (dotted line).} \label{fhr} \end{figure} \begin{figure} {\psfig{figure=ps.H2ratio.CS,height=10cm,width=10cm}} \caption { Same as Fig.~\ref{fhr} for the CS position.} \label{fhrcs} \end{figure} The high excitation temperatures, as well as the (probable) detection of lines from levels as high as v=3, suggest that we are detecting extended fluorescent emission (seen on larger scales by Luhman \& Jaffe 1996) in addition to a ``thermal'' layer. The evidence that some fluorescent emission is present is strengthened by the behaviour of the intensity ratio of the v=1-0 S(1) (2.12$\mu\hbox{m}$ ) and v=2-1 S(1) (2.25 $\mu\hbox{m}$ ) \MOLH \ lines along our amalgamated slit ({\it A,B,C}), shown in Fig.~\ref{fhr}. In a pure fluorescent model, this ratio is predicted to be $\sim$2, whereas an admixture of collisional excitation leads to higher values (12 for pure thermal emission at 2000 K). Our results are consistent with those of van der Werf et al. (1996) in that at the main molecular hydrogen intensity peak {\it C}, the ratio is $\sim$ 6-7. A similar value is also found at the secondary peak {\it B}, which is approximately coincident with the position of the ionization front as traced by the OI lines. Over the rest of the slit the ratio is $\simless$4. Thus the fraction of fluorescent emission is smaller at the peaks of v=1-0 S(1) emission. These measurements, as well as the observed intensity of the v=1-0 S(1) line, can be compared to the PDR model calculations of Hollenbach \& Natta (1995) (at steady state). Figure~\ref{f4w} shows in Panel (1) the predicted intensity of the 1-0 S(1) line in a face-on PDR as a function of the density for different values of G$_0$, and in Panel (2) the ratio of the 1-0 S(1) to the 2-1 S(1) line. In the main molecular peak {\it C}, we observe a peak intensity of the 1-0 S(1) line of $\sim 2.3\times 10^{-4}$ erg cm$^{-2}$ s$^{-1}$ sr$^{-1}$, and a ratio of $\sim$7. If we assume that the H$_2$ lines are produced in a PDR with G$_0\sim 3\times 10^4$ (see Sect.~3.4), this corresponds to a density of $6\times 10^4$ cm$^{-3}$. These results assume a face--on PDR, and there is good reason to suppose that the Orion Bar is seen edge on (see Jansen et al. ~\cite{Jea95b} for a discussion of the geometry). The effect of a slant of the Bar on the \MOLH \ lines is complicated by the effects of dust extinction, and a brief discussion is given in the Appendix. The results are different for the vibrationally excited H$_2$ lines at 2$\mu\hbox{m}$\ and for lines at longer wavelengths, for which the extinction is negligible. Parmar et al. (1991) have observed the J=3-1 (17$\mu\hbox{m}$ ) and 4-2 (12.3$\mu\hbox{m}$ ) v=0 \MOLH \ lines toward the bar and find that their observations are compatible with a hydrogen column density of $3\times 10^{21}$ \cmsq \ at a temperature of order 500 K. Figure~\ref{f4w} shows in Panel (3) the model-predicted ratio for the two lines observed by Parmar et al. and in Panel (4) the ratio of the v=2-1 S(1) to the J=3-1 (17$\mu\hbox{m}$) lines.
These ratios can be well reproduced by a model with G$_0\sim 3\times 10^4$ and $n\sim 6\times 10^4$ \percc\ having a moderate enhancement of the intensity of the 12 and 17 $\mu\hbox{m}$\ lines due to an inclination of the Bar with respect to the line of sight $\theta_t\sim$10$\deg$. These values are also consistent with the inclination required to explain the intensity of the OI lines, discussed in Sect.~3.4. \begin{figure*} {\psfig{figure=ps.4w,width=13cm}} \caption { Panel (1): model predictions for the intensity of the H$_2$ 1-0 S(1) line as a function of hydrogen number density for different values of G$_0$. Panel (2): ratio of the v=1-0 S(1) line at 2.12 $\mu\hbox{m}$\ to the v=2-1 S(1) line at 2.24 $\mu\hbox{m}$\ as a function of density. Panel (3): ratio of the v=0,J=4-2 line at 12 $\mu\hbox{m}$\ to the v=0,J=3-1 at 17 $\mu\hbox{m}$. Panel (4): ratio of the v=1-0 S(1) line to the v=0,J=4-2. Each curve is labelled by the logarithm of G$_0$. Triangles correspond to G$_0=10^5$, squares to G$_0=10^{4.5}$, circles to G$_0=10^4$, and crosses to G$_0=10^3$. The observed value at peak {\it C} is shown by the arrow in each panel.} \label{f4w} \end{figure*} The density in the other regions where molecular lines are measured can be estimated in a similar way from Fig.~\ref{f4w}. In the secondary molecular peak {\it B}, assuming the same inclination $\theta_t\sim 10\deg$, we obtain $n\sim 4\times 10^4$ \percc. These densities are a factor of $\sim$5-10 larger than that of the ionized gas (see Sect.~3.5) and roughly consistent with the density required to explain the stratification of the bar (i.e. the offsets between cold molecular gas and ionization front, see Tielens et al. 1993), which is of order $5\times 10^4$ \percc . Similar densities can be derived from our data at slit position 3, the ``CS peak'' ({\it CS}). We show the 1-0 S(1) intensity in Fig.~\ref{fcutcs} and the v=1-0/2-1 line ratio in Fig.~\ref{fhrcs}. We see a peak of H$_2$ emission at $\Delta\delta\sim 49$\hbox{$^{\prime\prime}$} with an intensity of $\sim 1.6\times 10^{-4}$ erg cm$^{-2}$ s$^{-1}$ sr$^{-1}$, and a broader emission between 0\hbox{$^{\prime\prime}$} and 30\hbox{$^{\prime\prime}$}, with a peak intensity of $\sim 8\times 10^{-5}$ erg cm$^{-2}$ s$^{-1}$ sr$^{-1}$. In both cases the ratio of the two H$_2$ lines is about 5, implying densities $\simless 10^5$ \percc \ for G$_0\sim 10^4$. Neither of the peaks coincides with the ``ionization front'' as defined by FeII (Fig.~\ref{fcutcs}). One would expect such a coincidence if the densities were considerably above $10^5$ \percc , as implied by the CS data (see van der Werf et al. 1996). Thus the general conclusion is that the PDR as seen in molecular hydrogen appears to be at densities below $10^5$ \percc , in contrast to the millimeter results, which suggest the existence of clumps at densities well above $10^5$ \percc . A caveat to much of the above discussion is that comparisons which we have made between the predictions of the Hollenbach \& Natta code used by us and other results (in particular, the results of St\"{o}rzer \& Hollenbach 1997) show that there can be substantial differences in both the \MOLH \ line intensities and in some line ratios. The main reason for this is the extreme sensitivity of \MOLH \ line intensities to the temperature structure, although there are more minor effects which also play a role (note for example that the Hollenbach \& Natta models neglect the effects of the carbon-oxygen chemistry upon \MOLH ).
This has the consequence that rather minor errors in the treatment of thermal equilibrium can affect our conclusions. In this respect, mid-infrared intensity ratios such as the 12/17 micron ratio measured by Parmar et al. are important because they afford a direct measure of temperature. Thus if the temperature structure can be adjusted to fit the mid-infrared observations, one may have some confidence in the predictions for other lines. For the moment, we conclude from the fact that we have been able to fit the 12/17 ratio that the model which we have used is reliable. \section{Discussion} Our analysis of molecular hydrogen in the previous section has assumed implicitly that the H$_2$ emission comes at all positions from material heated by the stellar radiation field (PDR). The two peaks {\it C} and {\it B} we observe must then come from two different structures. In fact, there are indications in the molecular hydrogen images of van der Werf et al. that in \MOLH \ lines there are {\it two} bars. One of these (which one might call the main \MOLH \ bar) corresponds to the feature seen in our Fig.~\ref{fcuth2} at position {\it C} ($\Delta \delta = -24$\hbox{$^{\prime\prime}$} ). The ``second \MOLH \ bar'' (less well defined in the van der Werf et al. images) is close in projection to the main ionization front and corresponds to the peak in Fig.~\ref{fcuth2} at $\Delta \delta$=-2\hbox{$^{\prime\prime}$} \ (roughly coincident with the OI peak). We propose that these two bars are separated along the line of sight and that the shift in position (15\hbox{$^{\prime\prime}$} \ accounting for our slit orientation) is due to a tilt in the bar of $\sim$10 degrees as discussed above. Thus, the bar is split along its length (essentially along the line of sight) into two sections separated by 0.2-0.3 parsec. Each half of the bar in this scenario has a length of order 0.1 parsec, and thus the total length along the line of sight is of order 0.3-0.4 parsec, somewhat smaller than proposed by Jansen et al.~(\cite{Jea95b}). There are problems with this model, however. The principal difficulty is that our data do not show evidence for ionization front indicators (OI and FeII) roughly coincident (one expects a shift of order 6 arc seconds for a density of $5\times 10^4$ \percc ) with the main ($\Delta \delta = -24$\hbox{$^{\prime\prime}$} \ on our slit) molecular hydrogen peak. One explanation for this might be that the density in the layer of gas between the ionization and \MOLH \ dissociation fronts is a factor of 2-3 lower (of order $2\times 10^4$ \percc \ ), and thus that the main \MOLH \ peak is shifted 15\hbox{$^{\prime\prime}$} \ (20\hbox{$^{\prime\prime}$} in our NS oriented slit) relative to the main ionization front (i.e., peak {\it C} in \MOLH \ corresponds to peak {\it B} in OI). Equally, the ionization front emission seen in Fig.~\ref{fcutfeoi} at $\Delta \delta =$23\hbox{$^{\prime\prime}$} \ might correspond to the \MOLH \ emission at $\Delta \delta = $ 3\hbox{$^{\prime\prime}$} . This lower density between the ionization and photodissociation fronts implies a density increase between the atomic and molecular regions by at least a factor of 4, since the general bar stratification requires an average density of at least $5\times 10^4$ \percc \ in order to account for the observed offsets of ionization front and molecular lines (see also Wyrowski et al. 1997). Thus, one possible interpretation (see also Simon et al.
~\cite{SSSW97}) of the \MOLH \ data is that there is a sharp density gradient perpendicular to the line of sight, such that the molecular gas has much higher density than the partially ionized atomic medium adjacent to it. This has the attractive feature that it helps explain one of the puzzles concerning the bar, which is the discrepancy (more than an order of magnitude) between the hydrogen column density derived by Parmar et al. (\cite{PLA91}) and that inferred by Hogerheijde et al. (\cite{HJD95}) from their \CEIO \ data. The Parmar et al. data refer essentially to the main H$_2$ bar, whereas Hogerheijde et al. preferentially sample the fully molecular gas to the SE. A density gradient may also cause the offset between the ionization front and the main molecular hydrogen peak to be larger than that estimated using a homogeneous model, whereas molecular hydrogen and carbon radio recombination lines would become closer to one another. This last aspect is particularly interesting in view of the coincidence found by Wyrowski et al. (1997) between the bar seen in C91$\alpha $ \ emission and the main \MOLH \ bar. The difficulty in explaining this result stems from the fact that one expects the \MOLH \ emission to come from gas with temperature above 2000 K, while the carbon line is thought to be formed at temperatures which are considerably lower. In fact, there is a firm upper limit of 1600 K on the temperature of the gas responsible for the C91$\alpha $ \ emission (based on the line width). The proposed density gradient discussed above may cause the offset between the photodissociation front (i.e. \MOLH \ emission) and the carbon line emission to diminish. A different interpretation of our observations is in principle possible. Our main molecular hydrogen peak {\it C} at $\Delta \delta =-24\hbox{$^{\prime\prime}$}$ could be produced by a low velocity shock preceding the ionization front. This would explain the lack of ionization front indicators coincident with molecular hydrogen. However, the kinematics of the emission seen in C91$\alpha $ \ by Wyrowski et al. are difficult to explain in this scenario. One needs a shock velocity of at least 3 \hbox{${\rm km\ts s}^{-1}$} \ to excite \MOLH \ (see Tielens et al. ~\cite{Tea93}), and then the observed C91$\alpha $ \ line widths become difficult to understand. It is intriguing that our \MOLH \ observations do not show any evidence of high density gas. This is in contrast with the fact that the molecular line data (e.g. Tauber et al. 1995, Simon et al.~\cite{SSSW97}, van der Werf et al. 1996) give evidence for a considerable fraction of the gas being in clumps with density $\gg 10^5$ \percc . Such clumps can be expected to affect our results, because most of the lines observed by us are sensitive to high emission measure and high density. High density PDRs are expected to be hotter, and hence considerably more intense in \MOLH\ v=2-1 and 1-0 emission, than lower-density PDRs (cf. Fig.~\ref{f4w}). Nevertheless, our data show no evidence for gas with density above $10^5$ \percc , even towards regions where Simon et al. (see also van der Werf et al.) estimate molecular hydrogen densities of order $2\times 10^5$ \percc . We see no reason, on the other hand, why clumps should dissipate on a short timescale when traversing the \MOLH \ dissociation front, and we suggest therefore that clumps, while possibly present, are a secondary phenomenon. More important in our opinion is the density gradient mentioned above.
It is worth noting that in the scenario which we are advocating, the thermal pressure may be constant (at a value of the order of $10^8 $ cm$^{-3}$\,K) along a line perpendicular to the bar, and we conclude that isobaric models of the Bar are worth examining. This, incidentally, would suggest relatively low values for the magnetic pressure and hence for the magnetic field (below 0.5 mG). \section{Conclusions} \label{sconcl} This study has presented NIR slit spectra of the Orion Bar region. Our main result is based on the molecular hydrogen line intensities and is that the densities derived from these tracers are of order $3-6\times 10^4$ \percc \ and thus consistent with estimates of the mean density derived from the observed stratification of the bar (e.g. Tielens et al.~\cite{Tea93}, Wyrowski et al. 1997). Comparison with the longer wavelength \MOLH \ data of Parmar et al.~(1991) implies a tilt for the bar relative to the line of sight of $\sim$10 degrees. Our data suggest also that the bar may be split into two portions along the line of sight, separated by 0.2-0.3 parsec. It seems plausible that the density is lower in the ``atomic'' region (perhaps $2\times 10^4$ \percc ) than in the molecular gas (of order $10^5$ \percc ). We conclude that models with constant thermal pressure should be examined in future studies of the bar region. We have also derived densities for the ionized gas in the vicinity of the ionization front using [FeII] line ratios and find values of order $10^4$ \percc . This, together with the molecular hydrogen data, has convinced us that high density neutral clumps play a rather minor role in determining the observed characteristics of the bar. A by-product of these observations was that we were able to use the observed OI line at 1.317$\mu\hbox{m}$ \ as an estimator of the ultraviolet radiation field incident upon the bar. We estimate the normalised FUV intensity G$_{0}$ on the bar to be $0.6-3.0\times 10^4$ using this tracer. Finally, we have used the 1.701$\mu\hbox{m}$ \ line of He to examine the degree of coincidence of the helium and hydrogen Str\"{o}mgren spheres. To within our errors, we find that He$^{+}$ and H$^{+}$ coexist, and hence that He abundance estimates using these tracers should be reliable. \begin{acknowledgements} We are indebted to D.P. Smits, who provided us with the results of his He level population calculations, and to J.H. Black for making available to us his H$_2$ transition probabilities. Paul van der Werf allowed us to use his H$_{2}$ image of the Bar and Alan Moorwood gave us his IRSPEC data. Special thanks are due to Tino Oliva for his help and comments on this project. This work was partially supported by ASI grant 94-RS-152 and GNA grant 96/00317 to the Osservatorio di Arcetri. A.M. acknowledges partial support through GO grant G005.44800 from Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555. \end{acknowledgements} \begin{appendix} \section{The effects of inclination on the \MOLH \ lines} \begin{figure} {\psfig{figure=ps.sketch,height=10cm,width=10cm}} \caption { Sketch of the geometry for molecular hydrogen emission discussed in the Appendix. The symbols used in Eq.~(A1) are defined. The observer sees the PDR at an angle $\theta _{t}$.
The layer of \MOLH \ emission has continuum optical depth $\tau _{0}$ and the foreground atomic layer optical depth $\tau _{0,n}$.} \label{sketch} \end{figure} \begin{figure} {\psfig{figure=ps.inclination,height=10cm,width=10cm}} \caption {The solid line shows, as a function of the inclination angle $\theta_t$, the intensity of the 1-0 S(1) line normalized to the value in a face-on PDR where dust extinction is neglected. The dashed line shows the ratio of the two lines 1-0 S(1) and v=0, J=3-1 at 17 $\mu\hbox{m}$, normalized in an analogous fashion.} \label{figI} \end{figure} The intensity of a given line as a function of the angle $\theta_t$ can be approximately estimated from the expression (cf. Fig.~\ref{sketch}): \begin{equation} I=\frac{I_0}{\tau_0}\left( 1-e^{-\tau }\right) e^{-\tau_n} \end{equation} where $I_0$ is the face-on intensity ($\theta _{t}=90\deg$) ignoring extinction due to dust, $\tau$ is the dust optical depth at the line frequency across the layer of the PDR which contributes to the H$_2$ emission, and $\tau_n$ is the optical depth of the HI region along the line of sight. $\tau_0$ and $\tau_{0,n}$ are the optical depths of the \MOLH \ and HI regions, respectively, in the direction perpendicular to the PDR. To first approximation, $\tau\sim \tau_0/\sin\theta_t$, $\tau_n\sim\tau_{0,n}/\sin\theta_t$. For $\theta_t\simless l/L$, the line of sight does not intercept the neutral region of the PDR ($\tau_n\sim$0) and the line intensity tends to $I/I_0\sim 1/\tau_0$. When $\tau_0=\tau_{0,n}=0$, then $I/I_0=1/\sin\theta_t$. Figure~\ref{figI} shows the ratio $I/I_0$ for the 1-0 S(1) line (solid curve), which has been derived using $\tau_0\sim 1\times A_\lambda/A_V$, $\tau_{0,n}\sim 1\times A_\lambda/A_V$, $A_\lambda/A_V$=0.145. Here we assume, based upon models, 1 magnitude of visual extinction in the H$_2$ emitting layer and 1 magnitude of extinction in the foreground HI layer. For the 1-0 S(1) line, $I/I_0\sim$ 1 with an accuracy better than 30\% for $\theta_t\simgreat 8\deg$. The figure shows also the variation with $\theta_t$ of the ratio of the 1-0 S(1) line at 2.12 $\mu\hbox{m}$\ to the v=0,J=3-1 17$\mu\hbox{m}$\ line (for which $\tau_0\sim\tau_{0,n}\sim 0$), with respect to the same ratio for a face-on PDR. \end{appendix}
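As a numerical companion to Eq.~(A1), the following Python sketch evaluates $I/I_0$ with the optical depths adopted in the Appendix and checks the dust-free limit $I/I_0\rightarrow 1/\sin\theta_t$; the function and parameter names are ours, and the printed values are only indicative of the trend plotted in Fig.~\ref{figI}. \begin{verbatim}
import numpy as np

def i_over_i0(theta_t_deg, tau0=0.145, tau0n=0.145):
    # Eq. (A1): emitting slab seen at angle theta_t behind a foreground
    # HI layer; tau0, tau0n are the perpendicular dust optical depths
    # (~1 mag of A_V times A_lambda/A_V = 0.145 for each layer)
    s = np.sin(np.radians(theta_t_deg))
    tau, tau_n = tau0 / s, tau0n / s
    return (1.0 - np.exp(-tau)) / tau0 * np.exp(-tau_n)

for th in (90.0, 30.0, 10.0):
    print(th, i_over_i0(th))

# Dust-free limit: I/I0 -> 1/sin(theta_t), so this prints ~1:
print(i_over_i0(10.0, tau0=1e-8, tau0n=0.0) * np.sin(np.radians(10.0)))
\end{verbatim}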
\section{Forward-Looking Introductions} Introductory texts tell students to specify a coordinate system whenever displacements are discussed. By asking that the clock's frame of motion be specified {\em whenever} time intervals are discussed, a habit is begun which will be needed to understand motion at high speeds. For example, if we define a ``map'' frame in which both yardsticks and clocks reside, then velocity and acceleration defined using only map distances and times retain their classical form. In terms of these ``coordinate'' quantities, the classical equations of translational motion listed in Figure 1 make well-defined predictions in space-time. In fact, all except four of these predictions are exact not only for low-speed motion, but for motion at any speed as well! There is a caveat. Since these equations only concern distances and times measured from the vantage point of the map, they say {\em nothing} about the experiences of the accelerated object or traveler. More on this later. The utility of Fig. 1 for motion at any speed thus assumes that one avoids making the assumption, hidden or otherwise, that time passes similarly for everyone's clocks. Time elapsed is clock-dependent, save to first order at low speeds. It is also worthwhile to look at {\em other} implicit assumptions of classical mechanics. Figure 2 lists many of the variables familiar from Newtonian physics, and asks the question: Which of these quantities, when used to describe motion between two sneezes of an airplane pilot (for example), did mechanics classically assume are independent of one's choice of reference (or map) frame? The answer is that distance between sneezes depends on map-frame (e.g. the pilot thinks both sneezes occur at the same place, namely the airplane's cabin), as does velocity (since a map-frame moving with the pilot would see the pilot standing still). Momentum and energy obviously depend on velocity, and hence on choice of map-frame as well. However, classical mechanics {\em often implicitly} assumes that elapsed times, observed accelerations, and applied forces are the same for all map-frames. Some of the teachers polled at two recent AAPT meetings also guessed that the rate at which energy increases is frame-independent classically. The rate of energy change of course depends on frame, since it equals force times velocity, as illustrated in the short sketch below. So let's be explicit: For studies involving motion at speeds much less than the speed of light, of the quantities listed in Fig. 2 only time-elapsed, observed acceleration, and applied force can safely be assumed to depend very little on one's choice of map-frame. For motion studies involving speeds approaching lightspeed, {\em all} of the quantities in Fig. 2 depend strongly on one's choice of frame. Forward-looking habits in the way introductory physics students think about time, as well as about how quantities will look in one frame or another, are important in at least two ways. First, they will minimize the things that students going on in physics will have to {\em unlearn}, like the use of time in a clock-independent manner; unlearning such habits is crucial to a solid understanding of both special relativity and curved space-time. Second, as described below, such habits open the door to a deeper and simpler understanding of space-time for students with only one course in physics.
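To make the frame-dependence of energy-change rates concrete, here is a minimal Python sketch (our own illustration, with invented numbers) of the classical result that the power $P=Fv$ differs between two map-frames even though the applied force is, classically, the same in both: \begin{verbatim}
# Classical power P = F*v seen from two map-frames.
F = 100.0         # newtons, same applied force in both frames (classically)
v_ground = 250.0  # m/s, pilot's velocity in the ground ("map") frame
u = 240.0         # m/s, speed of a second map-frame moving with the airplane

P_ground = F * v_ground        # 25000 W in the ground frame
P_moving = F * (v_ground - u)  # 1000 W in the nearly co-moving frame
print(P_ground, P_moving)      # same force, very different dE/dt
\end{verbatim}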
This approach and philosophy are consistent with the proposal by Edwin Taylor in his 1998 AAPT Oersted Medal talk, which describes a way to provide deeper understanding with fewer math prerequisites in a second physics course. Empowering students in a {\em first course} with deeper physical intuition also provides us with a significantly better-informed taxpaying and consuming public. \section{Equations Good at Any Speed} Thus students can be taught to be wary of assumptions about frame invariance when talking about both times and distances, even as unidirectional motion is introduced via the usual classical expressions. Not presuming to understand how traveler clocks behave at high speeds, they may then be eager to learn more about velocity and acceleration at any speed, well before they are ready for ``multi-frame'' relativity. This may be done by introducing the metric equation as a space-time extension of Pythagoras' Theorem, written so that traveler-time is the invariant. This tells students almost everything they need to know about traveler time at high speeds. However, more sophisticated use of this metric equation is needed before students are prepared to predict measurements with traveler yardsticks at high speed as well. Of course, saying ``don't go there'' could pique their interest in further studies of relativity and space-time geometry downstream. Figure 3 introduces proper (or ``traveler'') time with the metric equation, followed by three corollary variables (gamma, proper velocity, and proper or ``felt'' acceleration). It then provides a set of equations, patterned after Fig. 1, which are {\em exact} for unidirectional motion at any speed. There is a new integral of constant proper acceleration (the so-called rapidity integral for proper time), and proper force $F_o$ (defined as mass times proper acceleration) has been listed separately from frame-variant force $F$, since for non-unidirectional motion the two differ. Lastly, Figure 4 now considers frame-invariance at any speed for the new variables as well as for the variables discussed classically in Figure 2. Note that the new variables, which arose naturally from the relation for traveler-time provided by the metric equation, include three true frame-invariants: proper time, proper acceleration, and proper force. In a sense, then, the variables whose frame-independence was lost in the first part of this paper have been reborn in truly frame-invariant form, thanks to Minkowski's space-time extension of Pythagoras' theorem. \section{Discussion} For students who are pictorially-oriented (or equation-shy), a nomogram plotting all variables versus distance traveled from rest can be put together with these equations. To illustrate, Figure 5 allows graphical solution of constant acceleration problems with almost any combination of input variables in range of the plot. As shown in the caption to Fig. 6, for {\em analytical} solution of problems the kinematic equations can be compacted into two simple equality strings. Other problems with solutions, on-line solvers, a discover-it-yourself maze, and a companion derivation of the ``felt'' acceleration equation in Fig. 3, may be found on web pages linked to: http://www.umsl.edu/~fraundor/anyspeed.html. \acknowledgments This work has benefited indirectly from support by the U.S. Department of Energy, the Missouri Research Board, as well as Monsanto and MEMC Electronic Materials Companies. It has benefited most, however, from the interest and support of students at UM-St. Louis. \eject
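As a worked illustration of the single-frame approach described above, the Python sketch below evaluates the standard constant proper-acceleration relations in terms of the rapidity $\eta=\alpha\tau/c$; these textbook relations are consistent with the variables defined in Fig. 3, though the specific numerical example (one traveler-year at one gee) is our own. \begin{verbatim}
import numpy as np

c, g = 2.998e8, 9.81   # lightspeed (m/s) and a 1-gee proper acceleration

def map_quantities(tau, alpha=g):
    # Map time t, map distance x, coordinate velocity v = dx/dt, and
    # proper velocity w = dx/dtau after traveler time tau of constant
    # proper ("felt") acceleration alpha, starting from rest.
    eta = alpha * tau / c                       # rapidity
    t = (c / alpha) * np.sinh(eta)
    x = (c**2 / alpha) * (np.cosh(eta) - 1.0)
    v = c * np.tanh(eta)
    w = c * np.sinh(eta)                        # w = gamma * v
    return t, x, v, w

yr, ly = 3.156e7, 9.46e15          # seconds per year, meters per light-year
t, x, v, w = map_quantities(1.0 * yr)           # one traveler-year at 1 gee
print(t / yr, x / ly, v / c)       # ~1.19 map-years, ~0.56 ly, ~0.77c
\end{verbatim}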
\section{Introduction} As is well known, type-II superconductors in a magnetic field exhibit an intermediate mixed phase characterized by vortex condensation and partial magnetic flux penetration through them. The dynamics of the latter crucially drives the electromagnetic behavior, and has been extensively studied in recent years \cite{Blat1}. It has been shown that under appropriate assumptions, in particular for small magnetic field and vortex density, the elementary vortex excitations in thin layered samples with transverse magnetic field are flux lines crossing each layer only once, joining the pancake vortices which form in each layer and move in the $xy$ plane. A well-established and studied continuum model \cite{Blat1} exists for these string-like excitations, and it has been successfully applied to analyze both the dynamical \cite{Blat2,Ivlev,Blat3} and thermodynamical \cite{Bula1,Bula2,Blat4} behavior. In the following we study the specific heat of vortex lines within a simple model which can be treated exactly and captures all the essential features of vortex dynamics relevant in layered superconductors. Dispersive and magnetic effects will be neglected, whereas dissipation ($\eta$), inertia ($\mu$) and elasticity ($\epsilon$) will be taken into account exactly in the general case of a finite number $N$ of layers. We shall call $d$ the inter-layer distance, $N$ the number of layers, $L$ the thickness of the film and $A$ its area. The continuum limit $N \rightarrow +\infty$ will be analyzed as a particular limit in which the usual model holds. The problem has already been addressed in the latter context by Blatter and Ivlev \cite{Blat4} for small magnetic fields and by Bulaevskii and Maley \cite{Bula1} for large magnetic fields. One of the central issues of this paper is to present a peculiar and unambiguous relation between the value of the ratio $I = \frac {\sqrt{ \epsilon \mu}}{\eta d}$ and the thermodynamical properties of the vortex line. We show that there is a temperature $T_o = \frac \pi 2 \frac \hbar {k_B} \frac \epsilon {\eta L^2}$, typically of the order of a few $mK$, which separates the universal $I$-independent low-temperature behavior already discussed by Blatter and Ivlev \cite{Blat4} from an $I$-dependent behavior at higher temperatures (but still below the critical temperature). In the relevant overdamped case, in which $I \ll 1$, we find that in this regime the specific heat is proportional to the square root of the temperature. This should hold up to the critical temperature, provided that the model is still appropriate. In Section IV we summarize our results; we compare them with some previous results that have appeared in the literature and discuss their physical relevance. In Section V we discuss the effect of including the Magnus force. We find essentially the same results as before, in particular the same behavior with temperature, and even the numerical values of the coefficients are not very different. The fundamental characteristics of the vortex lines are quite simple in the approximation we are considering; in particular, the self-energy stemming from the interaction between neighboring pancake vortices essentially reduces to a local elastic term, and dissipation seems to be reasonably well described by an ohmic viscous term.
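To give a feeling for the two scales just introduced, the following Python sketch evaluates $T_o$ and $I$; the material parameters used here are placeholder values invented purely for illustration (they are not taken from the text), chosen so as to land in the overdamped, few-mK regime discussed above. \begin{verbatim}
import numpy as np

hbar, kB = 1.0546e-34, 1.3807e-23       # SI units

def T_o(eps, eta, L):
    # Crossover temperature T_o = (pi/2)(hbar/k_B) eps/(eta L^2)
    return 0.5 * np.pi * (hbar / kB) * eps / (eta * L**2)

def ratio_I(eps, mu, eta, d):
    # Dimensionless ratio I = sqrt(eps*mu)/(eta*d)
    return np.sqrt(eps * mu) / (eta * d)

# Placeholder parameters (illustrative only, not from the text):
eps = 1e-11            # J/m, line tension (elasticity)
eta = 1e-6             # kg/(m s), friction coefficient per unit length
mu = 1e-21             # kg/m, mass per unit length
d, L = 1.5e-9, 1.5e-7  # m, interlayer spacing and film thickness

print(T_o(eps, eta, L))          # ~5e-3 K, i.e. a few mK
print(ratio_I(eps, mu, eta, d))  # ~0.07 << 1, the overdamped case
\end{verbatim}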
Both the elastic modulus $\epsilon $ and the friction coefficient $\eta $ can be computed theoretically, but the mass density $\mu $ is still a matter of debate and can hardly be estimated, because of the wide class of contributions it receives depending on the particular setting; its determination remains a central issue of vortex physics, even if it is widely believed to be negligibly small with respect to dissipation \cite{Stephen}. Since each point of the string has two-dimensional dynamics, we parameterize the problem in cylindrical coordinates, calling ${\bf q}(t,z)$ the $xy$ position of the string at given $z$. The equation of motion of the vortex line is \begin{equation} \label{eqnmot} \mu {\bf \ddot q}+\eta {\bf \dot q}-\epsilon {\bf q^{\prime \prime }}=0\;. \end{equation} Because of the finite size of the system in the $z$ direction, it is necessary to impose the additional Neumann boundary conditions \begin{equation} {\bf q}^{\prime }(t,0)={\bf q^{\prime }}(t,L)=0\;. \end{equation} In this way, no energy-momentum flow is allowed across the end-points of the string. The quantum-mechanical version of this dissipative dynamics is reached by means of Caldeira and Leggett's formalism \cite{Cald}, in which dissipation is introduced by coupling the system to a thermal bath of harmonic oscillators with a continuous frequency spectrum, chosen so as to reproduce a simple velocity-dependent ohmic viscous term in the effective equation of motion. The Euclidean effective action of the open system, relevant for thermodynamics, is obtained by integrating out the degrees of freedom of the bath, and reads in the continuum limit \begin{eqnarray} \label{effact} S(\beta )=&&\int_0^Ldz \int_0^\beta d\tau \left[ \frac 12\mu {\bf \dot q}^2(\tau, z)+\frac 12\epsilon {\bf q^{\prime }}^2(\tau , z)\right] \medskip\ \\ && +\frac \eta {4\pi }\int_0^Ldz \int_0^\beta d\tau \int_0^\beta d\tau ^{\prime }\left[ \frac{{\bf q}(\tau , z)-{\bf q}(\tau ^{\prime }, z)}{ \frac \beta \pi \sin \left( \pi \frac{\tau -\tau ^{\prime }} \beta \right)} \right]^2 \nonumber \;. \end{eqnarray} It is convenient to extend ${\bf q}(t,z)$ symmetrically to negative $z$, and expand the obtained even function in Fourier modes of fundamental frequency $\nu =\frac \pi L$ along $z$ by means of ${\bf q}(t,z)={\bf q} _0(t)+\sqrt{2}\sum_{n=1}^{N-1}{\bf q}_n(t)\cos \nu nz$; in this way, the boundary conditions are automatically satisfied. The Euclidean action then splits into a sum of harmonic oscillator contributions encoding the vibrational modes of the string, plus a zero mode representing the translational motion of the center of mass. In order to handle the thermodynamical Euclidean path-integral, we introduce the Matsubara frequency $\omega =\frac{2\pi }\beta $ and expand the periodic configurations ${\bf q}_n(\tau )$ involved in the path-integral for the partition function in Fourier modes with fundamental frequency $\omega $ along imaginary time, taking ${\bf q}_n(\tau )=\frac 1{\sqrt{\beta } }\sum_{k=-\infty }^{+\infty }{\bf q}_{nk}e^{i\omega k\tau }$. The dissipative term, once Fourier-transformed, will contribute a peculiar $ \left| \omega k\right| $ term, a remnant of the derivative nature of ohmic dissipation. In Fourier space, the effective action is \begin{equation} \label{Action}S(\beta )=\frac 1 2 \sum \limits_{n=0}^{N-1}\sum \limits_{k=-\infty }^{+\infty }\left[ M\left( \omega_k^2+\Omega_n^2\right) + \Lambda \left| \omega_k\right| \right] \left| {\bf q}_{nk}\right|^2 \;.
\end{equation} $M=\mu L$ is the total mass, $\Lambda = \eta L$ the total friction coefficient and $\Omega = \sqrt{\frac \epsilon \mu} \frac \pi L$ the characteristic vibration frequency; we have used the notation $\omega_k = \omega k$ and $\Omega_n = \Omega n$. Since there is only a finite number $N$ of layers, we have cut off the possible modes at that value. A more natural way back to the finite-$N$ case is to write a discrete action in terms of the individual pancake-vortex coordinates ${\bf x} _l(t)={\bf q}(t,ld)$. One then obtains the action for $N$ pancake vortices of mass $\mu d$ and friction coefficient $\eta d$, each harmonically coupled to its nearest neighbors with elastic modulus $\frac \epsilon d$. The decoupling of these degrees of freedom can be achieved by performing a change of variable analogous to the Fourier expansion of the continuum case, defining ${\bf q}_n(t)$ from ${\bf x}_l(t)$ through $ {\bf x}_l(t)= {\bf q}_o(t) + \sqrt{2}\sum_{n=1}^{N-1}{\bf q}_n(t) \cos \left( \nu n(l+\frac 12)d\right) $. One then obtains the action (\ref{Action}), but with a modified frequency spectrum given by \begin{equation} \Omega _n=\frac{2N}\pi \sin \left( \frac \pi 2\frac nN\right) \Omega \;. \end{equation} Obviously, in the limit $N \rightarrow +\infty $, the lattice version of the theory reduces to the continuum formulation, with the same frequency spectrum $\Omega _n=\Omega n$. In the following, for the sake of clarity, we will concentrate on the basic case of no pinning of the vortex lines by impurities and defects, and focus on the effects of elasticity and dissipation (the effect of the Magnus force will instead be considered in Section V). However, this important generalization could easily be treated in exactly the same way: if a single columnar pinning center is introduced through a harmonic confining potential with elastic modulus $k$ and characteristic frequency $\Omega_p=\sqrt{\frac k m}$, the only novel feature is the modification of the spectrum of the modes to $\Omega_n^*=\sqrt{\Omega_n^2 + \Omega_p^2}$. \section{Partition function} We have seen that the dynamics of the vortex line is encoded in a zero mode describing its translational motion and $N-1$ harmonic modes with frequency spectrum $\Omega _n$ describing its vibration. Consequently, the partition function, \begin{eqnarray} Z(\beta )&=&Tre^{-\beta H} \nonumber \medskip\ \\ &=&\int_{{\bf q}(0,z)={\bf q}(\beta ,z)}{\cal D}{\bf q} (\tau ,z)e^{-S\left( {\bf q}(\tau ,z)\right) }\;, \end{eqnarray} will factorize into the product of the partition functions of the modes. Thus, all that we need to know for our aim is the dissipative thermodynamics of the zero-mode degree of freedom and of the remaining harmonic modes. The path-integrals can be evaluated quite easily in both cases since they are Gaussian, and reduce to products of ordinary integrals in Fourier space. \subsection{Zero-mode} For the zero mode, we get \begin{eqnarray} Z(\beta ,\Lambda )&=& {\cal N}(\beta )\prod\limits_{k=0}^{+\infty }\int\!\!\!\int d{\bf q}_{ok}d{\bf q} _{ok}^{*}e^{-{\bf q}_{ok}^{*}\left( M\omega_k^2+\Lambda \omega_k \right) {\bf q}_{ok}}\nonumber \medskip \\ &=&{\cal N}^{\prime}(\beta )A\prod\limits_{k=1}^{+\infty }\left( 1+\frac \gamma \omega \frac 1k\right) ^{-2}\;, \end{eqnarray} where $\gamma =\frac \Lambda M=\frac \eta \mu $ is the characteristic frequency related to dissipation and $A$ is the area.
The normalization factor is fixed by requiring that in the limit $\Lambda \rightarrow 0$ the partition function reduces to the free one, $Z_o(\beta )=\frac M{2\pi \beta }A$. It follows that \begin{equation} {\cal N}^{\prime}(\beta )=\frac M{2\pi \beta }\;. \end{equation} As anticipated above, the infinite product representing the effect of dissipation is divergent. We thus have to introduce a frequency cut-off $\omega _c$ above which the effect of dissipation is assumed to be small, or at least no longer adequately described by a simple ohmic viscous term in the equation of motion, and truncate the product at $k_c(\omega )=\frac{\omega _c}\omega $. Using the infinite product representation of the $\Gamma $-function, Eq. (\ref{gam}), we obtain for $k_c(\omega )\rightarrow +\infty $, \begin{equation} \prod\limits_{k=1}^{k_c(\omega )}\left( 1+\frac \gamma \omega \frac 1k\right) =\frac{k_c(\omega )^{\frac \gamma \omega }}{\frac \gamma \omega \Gamma \left( \frac \gamma \omega \right) }\;. \end{equation} The same result can be obtained using the Drude model as regularization \cite{Weiss1,Weiss2}, in which the frequency spectrum of the thermal bath is cut off at $\omega _c$ by dividing it by a factor $1+(\frac \omega {\omega _c})^2$; the cut-off frequency can then be related to a microscopic relaxation time $\tau _c$ by $\omega _c=\frac {2 \pi}{\tau _c}$. The final form of the partition function is conveniently written in terms of the function \begin{equation} \label{gamtil} \tilde \Gamma (z)=\frac 1{\sqrt{2\pi }}z^{\frac 12-z}e^z\Gamma (z) \;, \end{equation} which goes to one for large arguments away from the negative real axis (see Eq. (\ref{andgamtil})); one gets \begin{equation} Z(\beta ,\gamma )=\frac \Lambda {2\pi }A{\tilde \Gamma }^2\left( \frac \gamma \omega \right) e^{-\beta {\cal E}_o(\gamma )}\;, \end{equation} with a zero-point energy \begin{equation} \label{eo} {\cal E}_o(\gamma )=\frac \gamma \pi \left( 1+\ln \frac{\omega _c}\gamma \right) \;. \end{equation} Quite interestingly, we learn that, since the partition function depends only on $\frac \omega \gamma $ apart from an irrelevant multiplicative constant and the zero-point energy, the effect of dissipation on the thermodynamics of the zero-mode is simply to modify the temperature scale. This could have been expected from dimensional analysis: the only energy scale we can construct from $M$ and $\Lambda $ is $\gamma $ (we do not consider $\omega _c$, since we expect dissipation to manifest itself in a universal ohmic way in the limit of small relaxation time $\tau _c$, a situation in which the only role of $\omega _c$ should be to renormalize the energy of the open system), and the only dimensionless quantity that can enter the partition function is the reduced temperature $\frac \omega \gamma $. Notice that in the limit of strong damping, $\frac \gamma \omega \gg 1$, which is equivalent to the small mass limit $\mu \rightarrow 0$ since $\frac \omega \gamma =\frac{\mu \omega }\eta $, we are left, apart from the zero-point energy, with a trivial constant partition function $Z_l(\beta, \gamma)$ independent of $\mu $; using Eq. (\ref{andgamtil}), \begin{eqnarray} Z(\beta ,\gamma ) \longrightarrow\hspace{-20pt}\raisebox{-6pt} {$\scriptscriptstyle{\frac \gamma \omega \gg 1}$}\; &&\frac \Lambda {2\pi }A\left[ 1+\frac \omega {12\gamma }\right] ^2e^{-\beta {\cal E}_o(\gamma )} \nonumber \medskip\ \\ && \simeq \frac \Lambda {2\pi }Ae^{-\beta {\cal E}_o(\gamma )}\;.
\end{eqnarray} \subsection{Harmonic modes} For the other modes we proceed in exactly the same way, and the partition function is given by \begin{eqnarray} &&Z(\beta ,\Omega _n,\Lambda )= \nonumber \medskip\ \\ &&\quad= {\cal N}(\beta)\prod\limits_{k=0}^{+\infty }\int\!\!\!\int d{\bf q}_{nk}d {\bf q}_{nk}^{*}e^{-{\bf q}_{nk}^{*}\left[ M\left( \omega_k^2 +\Omega _n^2\right) +\Lambda \omega_k\right] {\bf q}_{nk}}\nonumber \medskip \\ &&\quad= {\cal N}^{\prime}(\beta)\frac 1{\Omega^2_n} \prod\limits_{k=1}^{+\infty }\left[ 1+\frac \gamma \omega \frac 1k+\left( \frac{\Omega _n}\omega \right) ^2\frac 1{k^2}\right] ^{-2}\;. \end{eqnarray} Requiring this to reduce to the pure harmonic oscillator result $Z_o(\beta ,\Omega _n)=(2\sinh \frac{\beta \Omega _n}2)^{-2}$ in the limit $\Lambda \rightarrow 0$ fixes, using Eq. (\ref{sinh}), the normalization coefficient to \begin{equation} {\cal N}^{\prime}(\beta)=\frac 1 {\beta^2}\;. \end{equation} As before, the dissipative term is responsible for a divergence in the correction factor accounting for the damping of the system; introducing the same frequency cut-off, the product can be carried out by factorization, obtaining \begin{eqnarray} &&\prod\limits_{k=1}^{k_c(\omega )}\left[ 1+\frac \gamma \omega \frac 1k+\left(\frac{\Omega _n}\omega \right) ^2\frac 1{k^2}\right] = \nonumber \medskip \\ &&\qquad \qquad =\frac{k_c(\omega )^{\frac \gamma \omega }}{\left( \frac{\Omega _n} \omega \right) ^2\Gamma \left( \frac{\frac \gamma 2+i\xi _n}\omega \right) \Gamma \left( \frac{\frac \gamma 2-i\xi _n}\omega \right) }\;, \end{eqnarray} where $\xi _n=\sqrt{\Omega _n^2-\frac{\gamma ^2}4}$. Again, the same result can be obtained using the Drude model as regularization \cite{Weiss1,Weiss2}, allowing a microscopic interpretation of the cut-off. The partition function can then be written, using again the $\tilde \Gamma$-function defined above, as \begin{equation} Z(\beta ,\Omega _n,\gamma )={\tilde \Gamma }^2\left( \frac{\frac \gamma 2+i\xi _n}\omega \right) {\tilde \Gamma }^2\left( \frac{\frac \gamma 2-i\xi _n}\omega \right) e^{-\beta {\cal E}_o(\Omega _n,\gamma )}\;, \end{equation} with a zero-point energy given by \begin{equation} {\cal E}_o(\Omega _n,\gamma )=\frac \gamma \pi \left( 1+\ln \frac{\omega _c }{\Omega _n}\right) -i\frac{\xi _n}\pi \ln \frac{\frac \gamma 2+i\xi _n} {\frac \gamma 2-i\xi _n}\;. \end{equation} In the strongly damped regime, $\frac \gamma \Omega \gg n$, and subsequently in the limit of temperature low with respect to damping, $\frac \gamma \omega \gg 1$, the partition function is seen, using Eqs. (\ref{shigam}) and (\ref{andgamtil}) respectively, to reduce to \begin{eqnarray} Z(\beta ,\Omega _n,\gamma ) \longrightarrow\hspace{-20pt}\raisebox{-6pt} {$\scriptscriptstyle{\frac \gamma \Omega \gg n}$}\; &&{\tilde \Gamma }^2\left( \frac \gamma \omega \right) {\tilde \Gamma }^2\left( \frac{\Omega _n^2}{\gamma \omega }\right) e^{-\beta {\cal E}_o(\gamma )}\medskip\ \nonumber \\ \longrightarrow\hspace{-20pt}\raisebox{-6pt} {$\scriptscriptstyle{\frac \gamma \omega \gg 1}$}\; &&\left[ 1+\frac \omega {12\gamma }\right] ^2{\tilde \Gamma }^2\left( \frac{\Omega _n^2}{\gamma \omega }\right) e^{-\beta {\cal E}_o(\gamma )}\medskip\ \nonumber \\ && \simeq {\tilde \Gamma }^2\left( \frac{\Omega _n^2}{\gamma \omega }\right) e^{-\beta {\cal E}_o(\gamma )}\;. \end{eqnarray} The last, extreme limit can be seen as a small mass limit $\mu \rightarrow 0$.
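The factorization identity used above is easy to check numerically; a minimal sketch (ours, with arbitrary illustrative values of $\gamma/\omega$ and $\Omega_n/\omega$; scipy's $\Gamma$-function accepts complex arguments):
\begin{verbatim}
# Minimal check (ours) of the factorized product used above:
# prod_{k<=kc} [1 + g/k + W^2/k^2] ~ kc**g / (W^2 Gamma(a+) Gamma(a-)),
# with a_pm = g/2 +- i*sqrt(W^2 - g^2/4).
import numpy as np
from scipy.special import gamma

g, W = 0.8, 2.5      # illustrative gamma/omega and Omega_n/omega (assumptions)
xi = np.sqrt(W**2 - g**2 / 4 + 0j)
a_plus, a_minus = g / 2 + 1j * xi, g / 2 - 1j * xi

kc = 10**5
k = np.arange(1, kc + 1)
lhs = np.prod(1.0 + g / k + (W / k) ** 2)
rhs = kc**g / (W**2 * gamma(a_plus) * gamma(a_minus)).real
print(lhs, rhs, lhs / rhs)   # ratio -> 1 as kc grows
\end{verbatim}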
Since the variable $\frac{\gamma \omega }{\Omega ^2}=\left( \frac N\pi \right) ^2\frac{\eta d^2}\epsilon \omega $ is independent of $\mu $, the resulting behavior $Z_l(\beta ,\Omega _n,\gamma )$ no longer depends on $\mu $, apart from the zero-point contribution ${\cal E}_o (\gamma)$, which is found to be the same as that of the zero-mode, Eq. (\ref{eo}). Again, this is obvious from dimensional analysis. The only dimensionless quantity independent of $\mu $ that we can form from the three variables $\omega $, $\gamma $ and $\Omega $ is $\frac{\gamma \omega }{\Omega ^2}$, which indeed enters the resulting behavior; instead, the $\mu$-dependent variable $\frac \omega \gamma = \frac{\mu \omega }\eta $ disappears in the limit. Having eliminated one of the scales, we fall into a regime in which, as for the zero-mode, dissipation manifests itself only through the temperature scale. \subsection{Vortex-string} In order to compute the partition function of the string, all we have to do is multiply the mode partition functions just computed, that is \begin{eqnarray} &&Z_{vor}(\beta ,\Omega ,\gamma )=Z(\beta ,\gamma )\prod\limits_{n=1}^{N-1}Z(\beta ,\Omega _n,\gamma ) \nonumber \medskip \\ &&\qquad =\frac \Lambda {2\pi }A {\tilde \Gamma }^2\left( \frac \gamma \omega \right) \prod\limits_{n=1}^{N-1}{\tilde \Gamma }^2\left( \frac{\frac \gamma 2+i\xi _n}\omega \right) {\tilde \Gamma }^2\left( \frac{\frac \gamma 2-i\xi _n} \omega \right) \medskip\ \nonumber \\ &&\qquad \quad \; e^{-\beta E_o(\Omega ,\gamma)}\;. \end{eqnarray} The zero-point energy is the sum of those of the modes, \begin{equation} E_o(\Omega ,\gamma )={\cal E}_o(\gamma )+\sum\limits_{n=1}^{N-1}{\cal E}_o(\Omega _n,\gamma )\;. \end{equation} The first observation we can make is that for fixed $\omega $, $\gamma $ and $\Omega $, the factor $Z(\beta ,\Omega _n,\gamma )$ in the product over the modes in the partition function goes to one for $n\rightarrow +\infty $; the partition function is thus a well defined function of these variables for any $N$, since the product can be shown to converge. The meaning of this observation is that the variables we have chosen represent the true physical scales of the problem, and increasing $N$ just adds higher-frequency modes that are more and more suppressed on the fixed scale we consider. Conversely, the zero-point energy increases indefinitely with $N$. Using the lattice regularization and the results derived in appendix B, the total zero-point energy can be recast in the following form \begin{equation} E_o(\Omega ,\gamma )=E_{odiv}(\Omega ,\gamma )+E_{ofin}(\Omega ,\gamma )\;, \end{equation} with \begin{eqnarray} &&E_{odiv}(\Omega ,\gamma )=\left\{ \frac N\pi \left( \frac{4N}\pi -1\right) \Omega -\frac 18\ln N\frac{\gamma ^2}\Omega \right. \medskip\ \nonumber \\ &&\qquad \qquad \qquad \; \; +\left. \left[ \frac N\pi \ln \left( \frac \pi N \frac{\omega _c}{\Omega} \right)+\frac{\ln N}{2\pi }\right] \gamma \right\} \;, \medskip\ \\ &&E_{ofin}(\Omega ,\gamma )= \nonumber \medskip\ \\ && \qquad =-\left\{ \frac 1{12}\Omega -\frac 1\pi \left[ 1-\ln \left( \pi \frac \gamma \Omega \right) \right] \gamma +\frac 18\ln G \frac{\gamma ^2}\Omega \right\} \nonumber \medskip\ \\ &&\qquad \quad -\sum\limits_{n=1}^{N-1}\left[ \Omega _n-\frac \gamma \pi -\frac 18\frac{\gamma ^2}{\Omega _n}+\ i\frac{\xi _n}\pi \ln \frac{\frac \gamma 2+i\xi _n}{\frac \gamma 2-i\xi _n}\right] \;. \end{eqnarray} The last sum remains finite even for $N \rightarrow + \infty$. Notice also that it vanishes in the free case $\gamma \rightarrow 0$.
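The convergence of the product over the modes can also be illustrated directly. A minimal Python sketch (ours, with arbitrary parameter values) implementing the $\tilde \Gamma$-function of Eq. (\ref{gamtil}) shows the $n$-th factor approaching one:
\begin{verbatim}
# Minimal sketch (ours) of the mode product for Z_vor: with
# Gammatilde(z) = z**(1/2 - z) * exp(z) * Gamma(z) / sqrt(2 pi),
# the n-th factor |Gammatilde((g/2 + i xi_n)/w)|**4 tends to 1 for large n,
# so the product over the modes converges for any N.
import numpy as np
from scipy.special import gamma

def gamma_tilde(z):
    return z ** (0.5 - z) * np.exp(z) * gamma(z) / np.sqrt(2 * np.pi)

w, g, Om = 1.0, 2.0, 0.5       # omega, gamma, Omega (illustrative values)
for n in [1, 5, 20, 100]:
    xi = np.sqrt(Om**2 * n**2 - g**2 / 4 + 0j)   # complex if Omega_n < g/2
    factor = abs(gamma_tilde((g / 2 + 1j * xi) / w) *
                 gamma_tilde((g / 2 - 1j * xi) / w)) ** 2
    print(n, factor)           # -> 1 as n grows
\end{verbatim}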
In the limit in which each of the $N$ modes is in the strong damping regime and at low temperatures with respect to friction, meaning $\frac \gamma \Omega \gg N$ and $\frac \gamma \omega \gg 1$, and corresponding to a limit of vanishing mass $\mu \rightarrow 0$, the partition function first factorizes essentially into the product of two functions depending respectively on the combinations $\frac \omega \gamma =\frac{\mu \omega }\eta $ and $\frac{\gamma \omega }{\Omega ^2}=\left( \frac N\pi \right) ^2\frac{\eta d^2}\epsilon \omega $, and then simplifies to a function $Z_{lvor}(\beta ,\Omega ,\gamma )$ which, apart from the zero-point energy, depends only on the $\mu $-independent variable $\frac{\gamma \omega }{\Omega ^2}$. In fact, \begin{eqnarray} \label{Zlim} &&Z_{vor}(\beta ,\Omega ,\gamma ) \rightarrow \nonumber \medskip\ \\ && \qquad \longrightarrow\hspace{-20pt}\raisebox{-6pt} {$\scriptscriptstyle{\frac \gamma \Omega \gg N}$}\; \frac \Lambda {2\pi }A{\tilde \Gamma }^{2N}\left( \frac \gamma \omega \right) \prod\limits_{n=1}^{N-1}{\tilde \Gamma }^2\left( \frac{\Omega _n^2}{\gamma \omega }\right) e^{-\beta E_o(\gamma )}\nonumber \medskip\ \\ && \qquad \longrightarrow\hspace{-20pt}\raisebox{-6pt} {$\scriptscriptstyle{\frac \gamma \omega \gg 1}$}\; \frac \Lambda {2\pi }A\left[ 1+\frac \omega {12\gamma }\right] ^{2N}\prod\limits_{n=1}^{N-1}{\tilde \Gamma }^2\left( \frac{\Omega _n^2}{\gamma \omega }\right) e^{-\beta E_o(\gamma )} \nonumber \medskip\ \\ && \qquad \qquad \; \simeq \frac \Lambda {2\pi }A\prod\limits_{n=1}^{N-1} {\tilde \Gamma }^2\left( \frac{\Omega _n^2}{\gamma \omega }\right) e^{-\beta E_o(\gamma)}\;, \end{eqnarray} where now the zero-point energy has simplified to \begin{equation} E_o(\gamma )=N{\cal E}_o(\gamma )=N\frac \gamma \pi \left( 1+\ln \frac{\omega _c}\gamma \right) \;. \end{equation} Again, we see that the product in the partition function is well defined for any $N$, since the $n$-th factor goes quickly to one for $n\rightarrow + \infty$; the zero-point energy grows indefinitely if one adds modes by increasing $N$, as in the general case. In the following, we shall distinguish between two regimes in the thermodynamics of the vortex line: the underdamped one, for which $\frac \gamma \Omega \ll N$, and the overdamped one, for which $\frac \gamma \Omega \gg N$. \section{Specific heat} Having computed the partition function of the vortex line, we can easily compute its specific heat. The latter can be defined in the presence of dissipation as the derivative with respect to temperature of the mean energy of the open system, considering dissipation simply as an additional mechanism for exchanging energy with the thermal bath, which modifies the response of the system under heating. In fact, within the Caldeira-Leggett formulation, the partition function of the complete system factorizes into the partition function of the bath and that of the open system with an effective action containing the influence of dissipation. Technically, this is implemented with an appropriate shift of the bath oscillator coordinates, which are just dummy variables in the path-integral \cite{Cald,Weiss1}. Thus, in this scheme, the total heat capacity of the system is the sum of the heat capacity of the bath alone plus that of the vortex lines. One of the interesting features of the specific heat is that it is independent of the poorly known parameter $\omega_c$ entering the zero-point energy. Also, from our previous discussion, we expect it to remain well behaved in the $\mu \rightarrow 0$ limit.
\subsection{General expression} The specific heat of a single vortex line is \begin{equation} C_{vor}(\beta , \Omega, \gamma )=- \frac{\partial ^2}{\partial T\partial \beta }\ln Z_{vor} (\beta, \Omega, \gamma)\;. \end{equation} From now on, we will work with the reduced specific heat, dropping a factor $k_B$; the usual one is obtained by multiplying the latter by $k_B$ and by the density of vortices in the sample. The specific heat is easily computed as a sum over the contributions of the modes and is found to be \begin{eqnarray} \label{Cal} &&C_{vor}(\beta ,\Omega ,\gamma )=C(\beta ,\gamma )+\sum\limits_{n=1}^{N-1}C(\beta ,\Omega _n,\gamma ) \nonumber \medskip\ \\ &&\qquad \quad =2\left( \frac \gamma \omega \right)^2 \tilde \Psi ^{\prime }\left( \frac \gamma \omega \right) \nonumber \medskip \\ &&\qquad \qquad +2\sum\limits_{n=1}^{N-1}\left\{ \left( \frac{\frac \gamma 2+i\xi _n}\omega \right) ^2\tilde \Psi ^{\prime }\left( \frac{\frac \gamma 2+i\xi _n}\omega \right) \right. \nonumber \medskip\ \\ &&\qquad \qquad \qquad \qquad +\left. \left( \frac{\frac \gamma 2-i\xi _n}\omega \right) ^2\tilde \Psi ^{\prime }\left( \frac{\frac \gamma 2-i\xi _n}\omega \right) \right\} \;, \end{eqnarray} where \begin{equation} \label{psitilp} \tilde \Psi^{\prime}(z)=\frac{d^2}{dz^2}\ln \tilde \Gamma (z)\;, \end{equation} a function which goes to zero for large arguments away from the negative real axis, Eq. (\ref{andpsitil}). At high temperature, the specific heat saturates at the value $2N-1$; each of the $N-1$ harmonic modes contributes $2$, whereas the zero-mode contributes only $1$, as seen from Eq. (\ref{and2psitil}). In the limit of strong damping for each of the $N$ modes, $\frac \gamma \Omega \gg N$, we have $\frac {\frac \gamma 2 \pm i \xi_n}\omega \rightarrow \frac {\Omega_n^2}{\gamma \omega}, \frac \gamma \omega$. At low temperatures with respect to friction, $\frac \gamma \omega \gg 1$, the specific heat simplifies to \begin{eqnarray} \label{Callim} &&C_{vor}(\beta ,\Omega ,\gamma ) \rightarrow \nonumber \medskip\ \\ && \qquad \longrightarrow\hspace{-20pt}\raisebox{-6pt} {$\scriptscriptstyle{\frac \gamma \Omega \gg N}$}\; 2N \left(\frac \gamma \omega \right)^2 \tilde \Psi^{\prime} \left( \frac \gamma \omega \right) + 2\sum\limits_{n=1}^{N-1}\left( \frac{\Omega _n^2}{\gamma \omega }\right) ^2\tilde \Psi ^{\prime }\left( \frac{\Omega _n^2}{\gamma \omega }\right) \nonumber \medskip\ \\ && \qquad \longrightarrow\hspace{-20pt}\raisebox{-6pt} {$\scriptscriptstyle{\frac \gamma \omega \gg 1}$}\; \frac N3 \frac \omega \gamma + 2\sum\limits_{n=1}^{N-1}\left( \frac{\Omega _n^2}{\gamma \omega }\right) ^2\tilde \Psi ^{\prime }\left( \frac{\Omega _n^2}{\gamma \omega }\right) \nonumber \medskip\ \\ && \qquad \qquad \; \simeq 2\sum\limits_{n=1}^{N-1}\left( \frac{\Omega _n^2} {\gamma \omega } \right) ^2\tilde \Psi ^{\prime }\left( \frac{\Omega _n^2} {\gamma \omega } \right) \;. \end{eqnarray} In the final, extreme limit, the specific heat reduces to a function $C_{lvor}(\beta ,\Omega ,\gamma )$ which depends only on the $\mu $-independent variable $\frac{\gamma \omega }{\Omega ^2}$. \subsection{Behavior in various regimes} As we have seen, the variables which naturally enter the thermodynamics of the string have the dimensions of an energy: the Matsubara frequency $\omega = 2 \pi k_B T$, which is proportional to the thermal energy, the characteristic vibration frequency $\Omega = \sqrt{\frac \epsilon \mu} \frac \pi L$ of the string, and $\gamma =\frac \eta \mu $, the characteristic energy of dissipative processes.
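For numerical work it is convenient to note that, differentiating Eq. (\ref{gamtil}) twice, $\tilde \Psi^{\prime}(z)=\psi^{\prime}(z)-\frac 1z-\frac 1{2z^2}$ in terms of the trigamma function $\psi^{\prime}$. A minimal implementation of Eq. (\ref{Cal}) along these lines (ours, with arbitrary parameter values; mpmath is used because its polygamma accepts complex arguments) reproduces the high temperature saturation at $2N-1$:
\begin{verbatim}
# Minimal sketch (not the authors' code): C_vor from Eq. (Cal), using
# Psitilde'(z) = psi'(z) - 1/z - 1/(2 z^2), which follows from Eq. (gamtil).
import mpmath as mp

def psi_tilde_prime(z):
    return mp.psi(1, z) - 1 / z - 1 / (2 * z**2)

def c_mode(z):                      # 2 z^2 Psitilde'(z), one chirality
    return 2 * z**2 * psi_tilde_prime(z)

def c_vor(w, Om, g, N):             # omega, Omega, gamma, number of layers
    c = c_mode(mp.mpf(g) / w)       # zero mode (contributes 1 at high T)
    for n in range(1, N):
        xi = mp.sqrt(Om**2 * n**2 - mp.mpf(g) ** 2 / 4)
        c += c_mode((g / 2 + 1j * xi) / w) + c_mode((g / 2 - 1j * xi) / w)
    return mp.re(c)

N, Om, g = 5, 1.0, 0.3
print(c_vor(w=200.0, Om=Om, g=g, N=N))   # high T: approaches 2N - 1 = 9
\end{verbatim}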
The variables $\omega $, $\Omega $ and $\gamma $ encode the dependence on temperature and damping through a physical energy scale that depends on $L=Nd$ and thus on $N$, the number of layers. It is convenient to define the reduced temperature $t$ and the dimensionless damping parameter $\alpha $ according to \begin{eqnarray} &&t=\frac \omega \Omega = 2 N \sqrt{\frac \mu \epsilon} d k_B T = \frac {\pi I}N \frac T{T_o} \;, \medskip\ \\ &&\alpha =\frac \gamma \Omega = \frac N \pi \frac {\eta d} {\sqrt{\epsilon \mu}} = \frac N{\pi I}\;. \end{eqnarray} The $\mu$-independent variable arising in the $\mu \rightarrow 0$ limit is then simply \begin{equation} \alpha t = \frac {\omega \gamma}{\Omega^2} = \frac {2 N^2}{\pi} \frac {\eta d^2}{\epsilon} k_B T = \frac T{T_o}\;. \end{equation} The meaning of a possible continuum limit $N \rightarrow +\infty$ can now be better elucidated. In order to compare situations with different values of $N$, we work with the $N$-dependent variables $\alpha$ and $t$, so that adding modes does not change the contributions of those already present, but only provides a correction. The continuum limit will then be relevant as a reliable approximation to the finite-$N$ case whenever the modes with $n>N$ give a negligible contribution to the thermodynamics. As we will see, this is the case at low temperatures. As already pointed out, the thermodynamics of the vortex line clearly exhibits two distinct and very different regimes according to the value of $\alpha$. We shall call underdamped the regime in which $\alpha \ll N$, and overdamped the regime in which $\alpha \gg N$, and analyze them separately. The global behavior of the specific heat is shown in Fig. \ref{fig1}. For $\alpha \ll N$, the underdamped case, the specific heat depends only weakly on $\alpha$, and the effect of increasing $N$ is to raise the temperature at which the saturation to the asymptotic value $2N-1$ begins. For $\alpha \gg N$, the overdamped case, the specific heat acquires a strong dependence on $\alpha $. Upon increasing the temperature, the specific heat rises very quickly to the value $N-1$, going approximately like a square root, and then continues to grow very slowly and almost linearly towards its high temperature saturation value $2N-1$. In the limit of very small mass, the specific heat stops rising at $N-1$ instead of $2N-1$. This is due to the fact that the extreme damping has suppressed almost all of the kinetic part of the dynamics, corresponding to an asymptotic specific heat equal to $0$ instead of $1$ for the center of mass particle degree of freedom (which is frozen out completely), and equal to $1$ instead of $2$ for each of the harmonic modes (for which the kinetic and potential parts of the energy are equal in the free case). Moreover, if damping is strong enough with respect to inertia, the specific heat depends only on the $\mu$-independent variable $\alpha t$. \begin{figure}[h] \centerline{\psfig{figure=fig1.eps,width=210pt}} \caption{The specific heat $C_{vor}(t,\alpha )$ of the vortex-string as a function of the reduced temperature $t$. The solid lines refer to $\alpha =$ 0, 3, $N$, $2N$, $4.5N$, $12N$. For $\alpha = 4.5N$, $12N$, the limiting strong damping scaling behavior with (dot-dashed lines) and without (dashed lines) the linear correction (the term $\frac N 3 \frac \omega \gamma$ in Eq.(\protect \ref{Callim})) is seen to be an increasingly good approximation.} \label{fig1} \end{figure} \subsubsection{Underdamped regime: $\alpha \ll N$ ($I \gg 1$)} In this regime, we use Eq.
(\ref{Cal}), which depends on both $\alpha$ and $t$. Notice first that in the high temperature limit, $t \gg N$, the specific heat tends to its maximal value $2N-1$ and is almost constant. Next consider not too high temperatures, $t \ll N$. The modes with $n>N$ would give a negligible contribution to the sum yielding the total specific heat (since the arguments of both $\tilde \Psi^{\prime}$-functions would be large), so the continuum limit is a good approximation. We can thus take the limit $N\rightarrow +\infty $ keeping $\alpha $ and $t$ fixed. In this way, the asymptotic behaviors of the specific heat at the extremes of this temperature range can be computed, and finally \begin{equation} \label{Calund} C_{vor}(t,\alpha )\simeq \left\{ \begin{array}{l} \displaystyle{\frac 13\left( \frac 1\alpha + \frac{\pi ^2}6\alpha \right) t\;,\;t\ll 1} \medskip\ \\ \displaystyle{\frac \pi 3t\;,\;1\ll t\ll N} \medskip\ \\ 2N-1\;,\;t\gg N \end{array} \right. \;. \end{equation} The behavior for $t \ll 1$ follows directly from the asymptotic behavior of the $\tilde \Psi^{\prime}$-function, Eq. (\ref{andpsitil}), whereas the one for $1 \ll t \ll N$ is obtained by approximating the sum in the specific heat with an integral. At low temperatures, $t \ll N$, the underdamped case further splits into three sub-regimes. For values of $\alpha $ below a first critical value $\alpha _{c1}\simeq 0.4$, the specific heat progressively passes from the free shape to a linearly growing one; moreover, any infinitesimal amount of dissipation, $\alpha >0$, causes the specific heat to start from zero instead of one, because of the center of mass contribution. For $\alpha $ beyond a second critical value $\alpha _{c2}\simeq 1.5$, the specific heat starts to deviate from the linear shape, getting more and more convex at low temperature. Finally, for $\alpha $ between the two critical values, there is no substantial dependence of the specific heat on friction, and the response to heating is approximately linear in the temperature in the whole continuum region $t \ll N$. The two critical values of $\alpha$ can be obtained by requiring the matching of the behaviors for $t \ll 1$ and $1 \ll t \ll N$. They thus satisfy the quadratic equation \begin{equation} \frac 13\left( \frac 1{\alpha _c}+\frac{\pi ^2}6\alpha _c\right) =\frac \pi 3\;, \end{equation} which yields \begin{equation} \alpha _{c1,2}=\frac{3\mp \sqrt{3}}\pi \;, \end{equation} in agreement with the values extracted from the plot. \subsubsection{Overdamped regime: $\alpha \gg N$ ($I \ll 1$)} In this regime, we can use Eq. (\ref{Callim}), which depends only on $\alpha t = \frac T{T_o}$. At high temperature, $\alpha t \gg N^2$, the specific heat grows linearly from the value $N-1$ with slope $\frac N{3\alpha}$ until $\alpha t \sim \alpha^2$, where it starts saturating to its asymptotic value $2N-1$. In the low temperature limit, $\alpha t \ll N^2$, the contributions of the modes with $n>N$ to the sum yielding the total specific heat are again negligible, and we can take the continuum limit $N\rightarrow +\infty$ keeping $\alpha t$ fixed.
In this case, the asymptotic behaviors of the specific heat at the extremes of the low temperature region can be found, and one obtains \begin{equation} \label{Calover} C_{vor}(t,\alpha )\simeq \left\{ \begin{array}{l} \displaystyle{\frac{\pi ^2}{18}\alpha t\;,\;\alpha t \ll 1} \medskip\ \\ \displaystyle{2C_o \sqrt{\alpha t}\;,\;1 \ll \alpha t\ll N^2} \medskip\ \\ \displaystyle{ N-1+\frac N{3\alpha} t\;,\; N^2 \ll \alpha t \ll \alpha^2=\frac{N^2} {\pi^2 I^2}} \medskip\ \\ \displaystyle{2N-1\;,\;\alpha t\gg \alpha^2 = \frac{N^2}{\pi^2 I^2}} \end{array} \right. \;, \end{equation} where $C_o=0.490$ is the constant given by the integral (\ref{int}) quoted in appendix A. As before, the behavior for $\alpha t \ll 1$ stems directly from the asymptotic behavior of the $\tilde \Psi^{\prime}$-function, Eq. (\ref{andpsitil}), whereas the one for $1 \ll \alpha t \ll N^2$ is computed by approximating the sum over the modes with an integral. \vskip 25pt The reduced temperatures that are important for the shape transitions in the specific heat are thus $t=1,N$ in the underdamped case and $t=\frac 1\alpha,\frac{N^2}\alpha, \alpha $ in the overdamped one. The important question that we shall now address is whether some of these can be relevant at the temperature scale of superconductivity. \section{Discussion} The available experimental and theoretical work on superconductors and vortex dynamics does not allow for a precise knowledge of all the parameters entering the description of the problem. In particular, whereas the damping and elasticity coefficients can be computed, the mass density remains uncertain. We will thus focus on $I = \frac {\sqrt{\epsilon \mu}}{\eta d}$ as the fundamental unknown quantity carrying the dependence on $\mu$. The value of $\eta$ can be estimated at low temperature using the Bardeen-Stephen expression \cite{Bard}, whereas $\epsilon$ is known from the microscopic theory; in Gaussian units (and recovering $\hbar$), \begin{equation} \eta = \frac{\Phi _o^2}{2\pi c^2\xi ^2\rho _N} \;,\; \epsilon =\kappa^2 \left( \frac{\Phi _o}{4\pi \lambda }\right) ^2\ln \frac \lambda {\kappa \xi} \;, \end{equation} where $\lambda $ is London's penetration depth, $\xi $ the $xy$ coherence length, $\kappa = \sqrt{\frac mM}$ the anisotropy ratio, $\rho _N$ the normal state resistivity, and $\Phi _o=\frac{hc}{2e}$ the flux quantum. For YBCO films, we can take the typical values $d \simeq 12\stackrel {\circ }{A}$, $\lambda \simeq 1400\stackrel{\circ }{A}$, $\xi \simeq 15\stackrel{\circ }{A}$, $\kappa \simeq \frac 15$ and $\rho_N \simeq 100\;\mu \Omega \cdot cm$, yielding \begin{equation} \label{etaeps} \eta \simeq 3.0\times 10^{-6}\;\frac{erg\cdot s}{cm^3} \;,\; \epsilon \simeq 3.4\times 10^{-7}\;\frac{erg}{cm}\;. \end{equation} In order to make contact with realizable temperatures in the framework of superconductivity, we define the $\mu$-independent temperature \begin{equation} T_s=N T_o = \frac \pi {2 N} \frac \hbar {k_B} \frac \epsilon {\eta d^2} \simeq \frac {10^2}{N}\;K \;. \end{equation} For reasonable $N$, ranging from $10^2$ to $10^4$ and corresponding to a thickness $L$ of the sample between $0.1\;\mu m$ and $10\; \mu m$, $T_s$ can go from $1\;K$ down to $10\;mK$, indeed representing the accessible temperature scale for the problem. In the following, we will assume $T \ll \frac 1 {2\pi} \frac \hbar{k_B} \frac \eta \mu$ in order to satisfy the condition $\frac \gamma \omega \gg 1$ used in deriving the behavior in the overdamped case.
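These orders of magnitude are easy to reproduce. The following back-of-envelope Python sketch (ours; the only subtlety is the Gaussian-unit conversion of the resistivity, $1\;\Omega\cdot cm \leftrightarrow (9\times 10^{11})^{-1}\;s$) recovers Eq. (\ref{etaeps}) and the scale $T_s$:
\begin{verbatim}
# Back-of-envelope check (ours) of eta, epsilon and T_s in Gaussian units.
import math

c = 3.0e10                # speed of light [cm/s]
hbar = 1.054e-27          # [erg s]
kB = 1.381e-16            # [erg/K]
Phi0 = 2.07e-7            # flux quantum hc/2e [G cm^2]

d, lam, xi = 12e-8, 1400e-8, 15e-8        # layer spacing, lambda, xi [cm]
kappa = 1.0 / 5.0
rhoN = 100e-6 / 9e11      # 100 micro-Ohm cm -> Gaussian units [s]

eta = Phi0**2 / (2 * math.pi * c**2 * xi**2 * rhoN)
eps = kappa**2 * (Phi0 / (4 * math.pi * lam)) ** 2 \
      * math.log(lam / (kappa * xi))
print(eta, eps)           # ~3.0e-6 erg s / cm^3 and ~3.4e-7 erg / cm

N = 1000                  # illustrative number of layers
Ts = (math.pi / (2 * N)) * (hbar / kB) * eps / (eta * d**2)
print(Ts)                 # ~1e2 / N  K, i.e. ~0.1 K for N = 1000
\end{verbatim}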
Using the estimate for $\mu$ given at the end of this section, the above condition corresponds to temperatures below $10^5\;K$, and thus constitutes no restriction. In terms of $T_s$, the reduced temperature is given by \begin{equation} t= \pi \frac {\sqrt{\epsilon \mu}}{\eta d} \frac T {T_s} \;. \end{equation} Observe that since $T_s$ is known but $\mu$ is not, the temperature scale entering $t$ is ambiguous. We are now able to look more closely at the order of magnitude of the transition temperatures in the two regimes we have studied, taking $T_s$ as the reasonable temperature scale ($T_o = \frac {T_s}N$ is thus small). In the underdamped case, $\alpha= \frac N\pi \frac {\eta d} {\sqrt{\epsilon \mu}} \ll N$, the important reduced temperatures were found to be $t_A=1$ and $t_B=N$, corresponding to the temperatures $T_A=\frac \alpha N T_s \ll T_s$ and $T_B = \alpha T_s \sim T_s$. Thus, we conclude that the relevant regimes for the specific heat are in this case the first and second of Eq. (\ref{Calund}), that is \begin{equation} \label{Res1} C_{vor} \simeq \left\{ \begin{array}{l} \displaystyle{\frac N3 \left(\frac 1{\alpha^2} + \frac {\pi^2}6 \right) \frac T{T_s}\;,\;T \ll \frac {T_s}N} \medskip\ \\ \displaystyle{\frac \pi 3 \frac N \alpha \frac T{T_s} \;,\;T \gg \frac {T_s}N} \end{array} \right. \;. \end{equation} In the overdamped case, $\alpha=\frac N\pi \frac{\eta d}{\sqrt{\epsilon \mu}} \gg N$, we had instead $t_A = \frac 1\alpha$, $t_B = \frac {N^2}\alpha$ and $t_C = \alpha$, corresponding to the temperatures $T_A = \frac 1N T_s \ll T_s$, $T_B = N T_s \gg T_s$ and $T_C = \frac {\alpha^2}N T_s \gg T_s$. Thus, the relevant regimes for this case are the first and second of Eq. (\ref{Calover}), that is \begin{equation} \label{Res2} C_{vor} \simeq \left\{ \begin{array}{l} \displaystyle{\frac {N \pi^2}{18} \frac T{T_s}\;,\;T \ll \frac {T_s}N} \medskip\ \\ \displaystyle{0.98 \sqrt{N \frac T{T_s}} \;,\;T \gg \frac {T_s}N} \end{array} \right. \;. \end{equation} Multiplying the results (\ref{Res1}) and (\ref{Res2}) by $k_B$ and by the density of vortices $\frac {B}{\Phi_o L}$, in order to obtain the true specific heat per unit volume, and writing out all the variables explicitly, our final result can be written as \begin{equation} \label{Res} C_{vor} \simeq \left\{ \begin{array}{l} \left\{\begin{array}{l} \displaystyle{\frac \pi 9 \frac B {\Phi_o} \frac {k_B^2}{\hbar} \left(\frac {\eta L}{\epsilon} + \frac {6 \mu}{\eta L} \right) T \;,\;T \ll T_o} \medskip\ \\ \displaystyle{\frac {2 \pi}3 \frac B {\Phi_o} \frac {k_B^2}{\hbar} \sqrt{\frac \mu \epsilon} T \;,\;T \gg T_o} \end{array} \right. ,\;I \gg 1 \medskip\ \\ \left\{\begin{array}{l} \displaystyle{\frac \pi 9 \frac B {\Phi_o} \frac {k_B^2}{\hbar} \frac {\eta L}{\epsilon} T \;,\;T \ll T_o} \medskip\ \\ \displaystyle{0.98 \sqrt{\frac 8 \pi} \frac B {\Phi_o} \frac {k_B^{\frac 32}}{\hbar^{\frac 12}} \sqrt{\frac \eta \epsilon} \sqrt{T}\;,\;T \gg T_o} \end{array} \right. ,\;I \ll 1 \end{array} \right. \end{equation} with \begin{equation} T_o = \frac \pi 2 \frac \hbar {k_B} \frac \epsilon {\eta L^2} \;,\; I= \frac {\sqrt{\epsilon \mu}}{\eta d} \;. \end{equation} In a previous work by Blatter and Ivlev \cite{Blat4}, the low temperature regime for small magnetic fields was considered using the usual continuum description; this corresponds to the limit $N \rightarrow \infty$ in our description and to temperatures $T \ll T_o$. In the anisotropic and elastic limit, Eq.
(33) of that work for the specific heat reduces to \begin{equation} C_{vor} \simeq \frac 23 \frac B {\Phi_o} \frac {k_B^2}\hbar \frac \eta \epsilon \int_{k_{zmin}}^{\infty} \frac {dk_z}{k_z^2} \;. \end{equation} Our finite-$N$ case can be recovered with the identifications $k_z \rightarrow \nu n $, $dk_z \rightarrow \nu$ ($\nu = \frac \pi L$), meaning $k_{zmin}=\frac \pi L$ and \begin{equation} \int_{k_{zmin}}^{\infty} \frac {dk_z}{k_z^2} \rightarrow \frac 1 \nu \sum_{n=1}^{N-1} \frac 1{n^2} \simeq \frac \pi 6 L \;. \end{equation} In this way, one matches the linear and $\mu$-independent results of the first and third rows of (\ref{Res}), which are relevant for $T \ll T_o$ and coincide in the continuum limit on which we are focusing. Note that our result Eq. (\ref{Res2}), case $T \gg \frac {T_s}N$, would be modified by the introduction of a lower cut-off $k_{zmin}$ in the following way: \begin{equation} C_{vor} \simeq 2 \sqrt{N \frac {T}{T_s}} \int_{\sqrt{N \frac T{T_s}} \frac d \pi k_{zmin}}^{\sqrt{N \frac {T}{T_s}}} dz z^4 \tilde \Psi^\prime (z^2) \;. \end{equation} We took $k_{zmin}=\frac \pi L$, but the behavior $C_{vor} \sim \sqrt{N \frac T{T_s}}$ would hold also for other possible values of $k_{zmin}$, provided $k_{zmin} \ll \frac \pi d \sqrt{\frac {T_s}{N T}}$. The behavior for large magnetic fields has been studied by Bulaevskii and Maley \cite{Bula1}. In this case, the specific heat is still linear in the temperature, but has a different dependence on the magnetic field and on the microscopic parameters of the vortices, and the result cannot be expressed as a function of only $\mu$, $\eta$ and $\epsilon$; this signals that in this regime the vortices do not retain their line structure \cite{Blat4}, and other kinds of configurations become important. Summarizing, for $T \ll T_o$, and in the continuum limit $N \rightarrow \infty$, we find the universal behavior already studied in the literature \cite{Blat4}. For $T \gg T_o$, and in the whole range of superconductivity temperatures, we find instead a different behavior depending on the value of the parameter $I$. In the underdamped case $I \gg 1$ the specific heat is linear in the temperature and depends only on inertia and not on friction, whereas in the overdamped case $I \ll 1$ it goes like the square root of the temperature and depends only on friction and not on inertia. It is important to observe that for $N$ ranging from $10^2$ to $10^4$, the temperature $T_o$ is very small, varying from $10\;mK$ down to $1\;\mu K$. This suggests that an interesting experimentally observable regime should be $T > T_o$. As an example of a theoretical estimate, one can approximate the mass density $\mu$ by its electronic and electromagnetic contributions. These are found to be \cite{Blat2,Suhl} \begin{equation} \mu_{el} = \frac 2 {\pi^3} m_e k_F \;,\; \mu_{em} = \left(\frac {\Phi_o}{4 \pi c \xi}\right)^2 \;. \end{equation} For YBCO films, the Fermi momentum is $k_F=0.5 \stackrel{\circ}{A}^{-1}$, and one obtains $\mu_{el} \simeq 2.9\times 10^{-21} \; \frac {g}{cm}$, $\mu_{em} \simeq 1.2\times 10^{-22} \; \frac {g}{cm}$. This would mean $I \simeq 0.1$, which does not correspond clearly to either of the two damping regimes. Thus, if one uses this theoretical estimate, the thermodynamics should lie somewhere in between the underdamped and overdamped regimes that we have considered in more detail. However, trusting other conventional arguments \cite{Stephen}, the overdamped case should be the relevant one.
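The order of magnitude of this estimate is easily checked; a rough sketch (ours), keeping only the dominant electronic contribution to $\mu$ and the values of $\eta$ and $\epsilon$ estimated above:
\begin{verbatim}
# Rough check (ours) of mu_el and of I = sqrt(eps*mu)/(eta*d); the
# electromagnetic contribution mu_em is an order of magnitude smaller
# and is neglected here.
import math

me, kF = 9.11e-28, 0.5e8          # electron mass [g], Fermi momentum [1/cm]
mu_el = (2 / math.pi**3) * me * kF
print(mu_el)                      # ~2.9e-21 g/cm

eta, eps, d = 3.0e-6, 3.4e-7, 12e-8
I = math.sqrt(eps * mu_el) / (eta * d)
print(I)                          # ~0.1: neither clearly under- nor overdamped
\end{verbatim}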
An experimental measurement of the specific heat for $T \gg T_o$ could thus provide specific information on this important issue. \section{Effect of the Magnus force} In this section we will discuss the influence of a possible Magnus force on the thermodynamics and show how our results for the specific heat generalize to the case in which both the Magnus effect and friction are important. The Magnus force corresponds to a term \begin{equation} \delta \,^*\!{\bf \dot q} \end{equation} in the equation of motion (\ref{eqnmot}) of the vortex line, where $\,^*\!{\bf q} = {\bf q} \times {\bf z}$ is the dual of the $xy$ position. This term can be accounted for in the thermodynamics by adding to the Euclidean effective action (\ref{effact}) the term \begin{equation} \int_0^Ldz \int_0^\beta d\tau \frac i2 \delta {\bf q} \,^*\!{\bf \dot q} \;. \end{equation} Actually, the Magnus force is rather controversial \cite{Thou}. In the low magnetic field limit, by estimating $\delta$ from the number density of the superconducting fluid, one would get a value of the same order as the $\eta$ of Eq. (\ref{etaeps}). Accordingly, the partition function for each of the modes of the vortex line undergoes the following modification: \begin{eqnarray} &&\prod_{k=1}^{+ \infty}\left[1 + \frac \gamma \omega \frac 1k + \left(\frac {\Omega_n}\omega \right)^2 \frac 1{k^2} \right]^{-2} \rightarrow \nonumber \\ && \rightarrow \prod_{k=1}^{+ \infty}\left\{\left[1 + \frac \eta {\mu \omega} \frac 1k + \left(\frac {\Omega_n}\omega \right)^2 \frac 1{k^2} \right]^2 +\left(\frac {\delta}{\mu \omega}\right)^2 \frac 1{k^2}\right\}^{-1} \nonumber \\ && \quad \;\; = \prod_{k=1}^{+ \infty}\left|1 + \frac {\hat \gamma} \omega \frac 1k + \left(\frac {\Omega_n}\omega \right)^2 \frac 1{k^2} \right|^{-2}\;. \end{eqnarray} We have introduced the complex damping frequency \begin{equation} \hat \gamma = \frac {\eta + i \delta}{\mu} \;. \end{equation} This leads to the following modifications in the expressions for the partition functions: \begin{eqnarray} &&\tilde \Gamma^2 \left(\frac \gamma \omega \right) \rightarrow \tilde \Gamma \left(\frac {\hat \gamma} \omega \right) \tilde \Gamma \left(\frac {\hat \gamma^*} \omega \right) \nonumber \\ &&\tilde \Gamma^2 \left(\frac {\frac \gamma 2 \pm i \xi_n} \omega \right) \rightarrow \tilde \Gamma \left(\frac {\frac {\hat \gamma}2 \pm i \hat \xi_n} \omega \right) \tilde \Gamma \left(\frac {\frac {\hat \gamma^*}2 \pm i \hat \xi^*_n} \omega \right) \end{eqnarray} where \begin{equation} \hat \xi_n = \sqrt{\Omega_n^2 - \frac {\hat \gamma^2}4} \;\;,\;\; \hat \xi^*_n = \sqrt{\Omega_n^2 - \frac {\hat \gamma^{*2}}4} \;. \end{equation} All the analysis done for the dissipative case goes through without any major difficulty. The underdamped and overdamped cases are now defined with respect to the modulus $\eta_e$ of the complex friction coefficient $\hat \eta = \eta_e e^{i\phi}$, with \begin{equation} \eta_e = \sqrt{\eta^2 + \delta^2} \;\;,\;\; \phi = \arctan \frac \delta \eta \;. \end{equation} The result (\ref{Res}) generalizes to \begin{equation} \label{Resmod} C_{vor} \simeq \left\{ \begin{array}{l} \left\{\begin{array}{l} \displaystyle{\frac \pi 9 \frac B {\Phi_o} \frac {k_B^2}{\hbar} \left(\frac {\eta L}{\epsilon} + \frac {6 \mu \eta}{\eta^2_{e} L} \right) T \;,\;T \ll \hat T_o} \medskip\ \\ \displaystyle{\frac {2 \pi}3 \frac B {\Phi_o} \frac {k_B^2}{\hbar} \sqrt{\frac \mu \epsilon} T \;,\;T \gg \hat T_o} \end{array} \right.
\!\!,\;\hat I \gg 1 \medskip\ \\ \left\{\begin{array}{l} \displaystyle{\frac \pi 9 \frac B {\Phi_o} \frac {k_B^2}{\hbar} \frac {\eta L}{\epsilon} T \;,\;T \ll \hat T_o} \medskip\ \\ \displaystyle{2C_{\phi} \sqrt{\frac 8 \pi} \frac B {\Phi_o} \frac {k_B^{\frac 32}}{\hbar^{\frac 12}} \sqrt{\frac {\eta_{e}} \epsilon} \sqrt{T}\;,\;T \gg \hat T_o} \end{array} \right. ,\;\hat I \ll 1 \end{array} \right. \!\!\!\! \end{equation} with \begin{equation} \hat T_o = \frac \pi 2 \frac \hbar {k_B} \frac \epsilon {\eta_{e} L^2} \;,\; \hat I= \frac {\sqrt{\epsilon \mu}}{\eta_{e} d}\;. \end{equation} The constant $C_{\phi}$ appearing in the overdamped case now depends on the phase $\phi$ and is again given by an integral: \begin{equation} C_{\phi} = \Re \left\{e^{-2i\phi}\int_{0}^{+\infty} dz z^4 \tilde \Psi^{\prime} (e^{-i\phi}z^2)\right\}\; . \end{equation} Since the relevant asymptotic behavior of the integrand for $z \rightarrow 0$ is independent of $\phi$, we expect $C_{\phi}$ to depend only weakly on it. In fact, the two extreme values, corresponding to $\eta \neq 0$, $\delta = 0$ ($\phi = 0$) and $\eta = 0$, $\delta \neq 0$ ($\phi = \frac \pi 2$), are given by the integrals (\ref{int}) and (\ref{int2}) of appendix A, \begin{equation} C_o = 0.490 \;\;,\;\; C_{\frac \pi 2} = 0.346 \;. \end{equation} For arbitrary $\phi$, $C_{\phi}$ is of the same order of magnitude and can be computed numerically. Observe finally that in the overdamped case, in the sense explained above (with respect to $\eta_e$), the specific heat depends on the square root of the temperature and on the effective damping coefficient $\eta_e = \sqrt{\eta^2 + \delta^2}$. In the ultra clean limit ($\eta_e = \delta$) this was already noticed by Blatter and Ivlev \cite{Blat4}, who also found $C_v \sim \sqrt{T}$ in this case. {\bf Acknowledgements}. This work has been partially supported by EEC contract ERBFMRXCT 96-0045.
\section{Introduction} The spin content of the proton has received increasing attention since the observation of the Ellis-Jaffe sum rule violation in the experiments of polarized deep inelastic scattering (DIS) of leptons on the nucleon at CERN, DESY and SLAC (for recent reviews on the data see \cite{Refs}). From the naive quark model we know that the three valence quarks provide the quantum numbers of the proton; thus the sum of the quark spins should be equal to the proton spin. However, it was found from the observed value of the Ellis-Jaffe integral that the sum of quark helicities is much smaller than 1/2. This gave rise to the proton ``spin puzzle'' or ``spin crisis'', since one usually identifies the ``quark helicity'' observed in polarized DIS with the ``quark spin''. However, it has been pointed out in Ref.~\cite{Ma91b,Bro94} that the quark helicity ($\Delta q$) observed in polarized DIS is actually the quark spin defined in the light-cone formalism, and it is different from the quark spin ($\Delta q_{RF}$) as defined in the quark model (or in the rest frame of the nucleon). Thus the small quark helicity sum observed in polarized DIS is not necessarily in contradiction with the quark model, in which the proton spin is provided by the valence quarks \cite{Ma96}. From another point of view, the sea quarks of the nucleon seem to have non-trivial non-perturbative properties \cite{Bro96} that may be related to a number of empirical anomalies, such as the Gottfried sum rule violation \cite{NMC91}, the strange quark content of the nucleon \cite{CTEQ93,CCFR}, the large charm quark content at high $x$ \cite{Bro81}, as well as the Ellis-Jaffe sum rule violation. There are also indications that the gluons play an important role in the spin content of the proton \cite{Gluon}. Therefore the situation concerning the spin content of the proton might be more complicated than the naive quark model picture in which the spin of the proton is carried by the three valence quarks. In order to clarify this situation, it would be helpful if one could find a way to measure $\Delta q_{RF}$, the quark spin in the rest frame of the nucleon (or in the quark model). It is the purpose of this paper to point out an approximate relation that can be used to measure $\Delta q_{RF}$: \begin{equation} \Delta q_{RF}(x)+\Delta q(x) = 2 \delta q(x), \label{eq1} \end{equation} where $\Delta q(x)$ and $\delta q(x)$ are the corresponding quark helicity and transversity distributions, related to the axial quark current $\bar q \gamma^{\mu} \gamma^5 q$ and the tensor quark current $\bar q \sigma^{\mu\nu} i \gamma^5 q$ \cite{h1} respectively. We recall that the quark helicity distributions $\Delta q(x)$ are extracted from the spin-dependent structure functions $g_1^N(x)$, defined as $g_1^N (x)=\frac 12\sum_{q}e_q^2\Delta q(x)$, obtained in several polarized deep inelastic scattering experiments \cite{NSMCN}. The transversity distribution $\delta q(x)$ measures the difference of the number of quarks with transverse polarization parallel and antiparallel to the proton transverse polarization. It can be obtained, in principle, by measuring a Drell-Yan process in a $pp$ collision where both protons are transversely polarized \cite{h1,Bou95,Bar97}, but this seems rather difficult, and a different method has been proposed \cite{Jaf97}. Assuming that $\Delta q(x)$ and $\delta q(x)$ have been measured, we can then obtain the quark spin distributions $\Delta q_{RF}(x)$ by using Eq.~(\ref{eq1}).
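In practice, once $\Delta q(x)$ and $\delta q(x)$ are extracted at a common scale, Eq.~(\ref{eq1}) is a pointwise relation between data arrays; a trivial sketch (ours, with hypothetical placeholder distributions standing in for measured ones):
\begin{verbatim}
# Minimal sketch (ours): Eq. (eq1) applied pointwise. Delta_q and delta_q
# below are hypothetical placeholders for measured distributions at Q0^2.
import numpy as np

x = np.linspace(0.01, 0.99, 50)
Delta_q = 0.5 * x**0.5 * (1 - x) ** 3      # hypothetical helicity distribution
delta_q = 0.6 * x**0.5 * (1 - x) ** 3      # hypothetical transversity

Delta_q_RF = 2 * delta_q - Delta_q         # quark spin in the nucleon rest frame
\end{verbatim}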
We will show how Eq.~(\ref{eq1}) can be derived by making use of the Melosh-Wigner rotation connecting the ordinary quark spin and the light-cone quark spin. We will also make numerical predictions of the $x$-dependent distributions $\delta q(x)$ and $\Delta q_{RF}(x)$ in a light-cone SU(6) quark-spectator model and present some relevant discussions on the effect from the sea quark-antiquark pairs. \section{The Melosh-Wigner rotation} It is proper to describe deep inelastic scattering as the sum of incoherent scatterings of the incident lepton on the partons in the infinite momentum frame or in the light-cone formalism. We will follow the developments in Refs.~\cite{Ma91b,Bro94,Sch97}, taking into account the effect due to the Melosh-Wigner rotation \cite{MW,MW2}, which is an important ingredient of the light-cone formalism \cite{Bro97}. The axial charge $\Delta q=\int {\mathrm d} x \Delta q(x)$ measured in polarized deep inelastic scattering is defined by the axial current matrix element \begin{equation} \Delta q=\langle p,\uparrow|\overline{q} \gamma^{+} \gamma_{5} q|p,\uparrow\rangle. \end{equation} In the light-cone or quark-parton descriptions, $\Delta q (x)=q^{\uparrow}(x)-q^{\downarrow}(x)$, where $q^{\uparrow}(x)$ and $q^{\downarrow}(x)$ are the probabilities of finding a quark or antiquark with longitudinal momentum fraction $x$ and polarization parallel or antiparallel to the proton helicity in the infinite momentum frame. However, in the nucleon rest frame one finds \cite{Ma91b,Bro94}, \begin{equation} \Delta q (x) =\int [{\mathrm d}^2{\mathbf k}_{\perp}] M_q(x,{\mathbf k}_{\perp}) \Delta q_{RF} (x,{\mathbf k}_{\perp}), \label{Melosh1} \end{equation} with \begin{equation} M_q(x,{\mathbf k}_{\perp})=\frac{(k^+ +m)^2-{\mathbf k}^2_{\perp}} {(k^+ +m)^2+{\mathbf k}^2_{\perp}}, \label{eqM1} \end{equation} where $M_q(x,{\mathbf k}_{\perp})$ is the contribution from the relativistic effect due to the quark transverse motions (the Melosh-Wigner rotation effect), and $q_{s_z=\frac{1}{2}}(x,{\mathbf k}_{\perp})$ and $q_{s_z=-\frac{1}{2}}(x,{\mathbf k}_{\perp})$ are the probabilities of finding a quark or antiquark with rest mass $m$ and transverse momentum ${\mathbf k}_{\perp}$ and with spin parallel or anti-parallel to the proton spin in the rest frame; one then has $\Delta q_{RF} (x,{\mathbf k}_{\perp})= q_{s_z=\frac{1}{2}}(x,{\mathbf k}_{\perp})- q_{s_z=-\frac{1}{2}}(x,{\mathbf k}_{\perp})$ and $k^+=x {\cal M}$, where ${\cal M}^2=\sum_{i}(m^2_i+{\mathbf k}^2_{i \perp}) / {x_i}$. The Melosh-Wigner rotation factor $M_q(x,{\mathbf k}_{\perp})$ ranges from 0 to 1; thus $\Delta q$ measured in polarized deep inelastic scattering cannot be identified with $\Delta q_{RF}$, the spin carried by each quark flavor in the proton rest frame, i.e., the quark spin in the quark model.
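For orientation, the size of this suppression is easy to explore numerically; a minimal sketch (ours) of the factor of Eq.~(\ref{eqM1}), with assumed illustrative values for the quark mass $m$ and the invariant mass ${\cal M}$:
\begin{verbatim}
# Minimal sketch (ours) of the Melosh-Wigner factor, Eq. (eqM1):
# M_q = ((k+ + m)^2 - kT^2) / ((k+ + m)^2 + kT^2), with k+ = x * Mcal.
# m and Mcal are illustrative assumed values (in GeV), not fitted inputs.
def melosh_factor(x, kT, m=0.33, Mcal=1.0):
    kplus = x * Mcal
    a2 = (kplus + m) ** 2
    return (a2 - kT**2) / (a2 + kT**2)

for kT in [0.1, 0.3, 0.5]:                 # transverse momentum in GeV
    print(kT, melosh_factor(x=0.3, kT=kT)) # decreases from near 1 as kT grows
\end{verbatim}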
The same technique, making use of the Melosh-Wigner rotation effect, has been applied to the quark tensor charge \cite{Sch97}, which is calculated from \begin{equation} 2\delta q=\langle p,\uparrow|\bar{q}_{\lambda}\gamma^{\perp} \gamma^{+} q_{-\lambda}|p,\downarrow\rangle, \end{equation} with $\lambda=+$ and $\gamma^{\perp}=\gamma^1+i \gamma^2$, and it is found that the quark transversity distribution is given by \begin{equation} \delta q (x) =\int [{\mathrm d}^2{\mathbf k}_{\perp}] {\widetilde M}_q(x,{\mathbf k}_{\perp}) \Delta q_{RF} (x,{\mathbf k}_{\perp}), \label{Melosh2} \end{equation} with \begin{equation} {\widetilde M}_q(x,{\mathbf k}_{\perp})=\frac{(k^+ +m)^2} {(k^+ +m)^2+{\mathbf k}^2_{\perp}} \label{eqM2} \end{equation} being the correction factor from the Melosh-Wigner rotation \footnote{In Eq.~(\ref{eqM2}), $\widetilde M_q(x,{\mathbf k}_{\perp})$ has additional terms like $k_{1}^2 - k_{2}^2 $ in the numerator, where ${\mathbf k}_{\perp}=(k_1,k_2)$ is the transverse momentum of the struck quark. These terms vanish upon integration over the azimuth of ${\mathbf k}_{\perp}$.}. From Eqs.~(\ref{eqM1}) and (\ref{eqM2}) one easily finds the relation \cite{Sch97} \begin{equation} 1 + M_q = 2\widetilde{M}_q. \label{eq1b} \end{equation} Combining Eqs.~(\ref{Melosh1}), (\ref{Melosh2}), and (\ref{eq1b}), one obtains Eq.~(\ref{eq1}). Eq.~(\ref{eq1b}) is valid in a quite general framework of the light-cone quark model \cite{Bro94,MW2}, and is in fact non-perturbative. We point out that correction factors similar to $M_q$ and ${\widetilde M}_q$ have also been found in other papers \cite{Bar97,Mul97} on the quark distribution functions $\Delta q(x)$ and $\delta q(x)$. Although the explicit expressions for $M_q$ and ${\widetilde M}_q$, as well as their physical significance, are different, Eq.~(\ref{eq1b}) also holds in these approaches. Recently there has also been a proof of the above suggested relation Eq.~(\ref{eq1}) in a QCD Lagrangian based formalism \cite{Qing98}. Thus Eq.~(\ref{eq1b}), and consequently its extension to Eq.~(\ref{eq1}), might be considered as a relation with general physical implications. Since $\Delta q(x)$ and $\delta q(x)$ have different evolution behaviors, the relation Eq.~(\ref{eq1}) should be considered as valid at some model energy scale $Q^2_0$ \cite{Bar97}, such as $Q^2_0 \approx 1 \to 5$ GeV$^2$ in our case. Although it has a similar appearance, Eq.~(\ref{eq1}) is not a saturation of the inequality \cite{Sof95}: \begin{equation} q(x) + \Delta q(x) \ge 2\left|\delta q(x)\right|, \label{Sie} \end{equation} since $\Delta q_{RF}(x)$ is clearly not the same as $q(x)$. \section{The light-cone SU(6) quark-spectator model} We now discuss the $x$-dependent quark distributions $\Delta q_{RF}(x)$ and $\delta q(x)$ in a light-cone SU(6) quark-spectator model \cite{Ma96}, which can be considered as a revised version of the quark-spectator model developed in \cite{Car75}. The unpolarized valence quark distributions $u_v(x)$ and $d_v(x)$ are given in this model by \begin{eqnarray} &&u_{v}(x)=\frac{1}{2}a_S(x)+\frac{1}{6}a_V(x);\nonumber\\ &&d_{v}(x)=\frac{1}{3}a_V(x), \label{eq:ud} \end{eqnarray} where $a_D(x)$ ($D=S$ for scalar spectator or $V$ for axial vector spectator) is normalized such that $\int_0^1 {\mathrm d} x a_D(x)=3$, and it denotes the amplitude for quark $q$ to be scattered while the spectator is in the diquark state $D$. Exact SU(6) symmetry provides the relation $a_S(x)=a_V(x)$, which implies the valence flavor symmetry $u_{v}(x)=2 d_{v}(x)$.
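As an aside, we note that the key relation Eq.~(\ref{eq1b}) used above is a one-line algebraic consequence of Eqs.~(\ref{eqM1}) and (\ref{eqM2}), as a quick symbolic check confirms (our sketch):
\begin{verbatim}
# Symbolic check (ours) that 1 + M_q = 2*Mtilde_q follows from
# Eqs. (eqM1) and (eqM2); A stands for k+ + m.
import sympy as sp

A, kT = sp.symbols("A kT", positive=True)
M = (A**2 - kT**2) / (A**2 + kT**2)
Mtilde = A**2 / (A**2 + kT**2)
print(sp.simplify(1 + M - 2 * Mtilde))     # prints 0
\end{verbatim}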
This flavor symmetry gives the prediction $F^n_2(x)/F^p_2(x)\geq 2/3$ for all $x$, which is ruled out by the experimental observation $F^n_2(x)/F^p_2(x) < 1/2$ for $x \to 1$. The mass difference between the scalar and vector spectators can reproduce the $u$ and $d$ valence quark asymmetry that accounts for the observed ratio $F_2^{n}(x)/F_2^{p}(x)$ at large $x$ \cite{Ma96}. This supports the quark-spectator picture of deep inelastic scattering, in which the difference between the masses of the scalar and vector spectators is important in order to reproduce the explicit SU(6) symmetry breaking, while the bulk SU(6) symmetry of the quark model still holds. From the above discussion of the Melosh-Wigner rotation effect, we can write the quark helicity distributions for the $u$ and $d$ quarks as \cite{Ma96} \begin{eqnarray} &&\Delta u_{v}(x)=u_{v}^{\uparrow}(x)-u_{v}^{\downarrow}(x)= -\frac{1}{18}a_V(x)M_V(x) \nonumber \\ &&\phantom{..............................................} +\frac{1}{2}a_S(x)M_S(x);\nonumber\\ &&\Delta d_{v}(x)=d_{v}^{\uparrow}(x)-d_{v}^{\downarrow}(x) =-\frac{1}{9}a_V(x)M_V(x), \label{eq:sfdud} \end{eqnarray} in which $M_S(x)$ and $M_V(x)$ are the Melosh-Wigner correction factors for the scalar and axial vector spectator-diquark cases. They are obtained by averaging Eq.~(\ref{eqM1}) over ${\mathbf k}_{\perp}$ with ${\cal M}^2=(m^2_q+{\mathbf k}^2_{\perp}) /{x} +(m^2_D+{\mathbf k}^2_{\perp}) /{(1-x)}$, where $m_D$ is the mass of the diquark spectator, and they are unequal because the unequal spectator masses lead to unequal ${\mathbf k}_{\perp}$ distributions. From Eq.~(\ref{eq:ud}) one gets \begin{eqnarray} &&a_S(x)=2u_v(x)-d_v(x);\nonumber\\ &&a_V(x)=3d_v(x). \label{eq:qVS} \end{eqnarray} Combining Eqs.~(\ref{eq:sfdud}) and (\ref{eq:qVS}) we have \begin{eqnarray} &&\Delta u_{v}(x) =[u_v(x)-\frac{1}{2}d_v(x)]M_S(x)-\frac{1}{6}d_v(x)M_V(x); \nonumber \\ &&\Delta d_{v}(x)=-\frac{1}{3}d_v(x)M_V(x). \label{eq:dud} \end{eqnarray} Thus we arrive at simple relations \cite{Ma96} between the polarized and unpolarized quark distributions for the valence $u$ and $d$ quarks. The relations (\ref{eq:dud}) can be considered as the results of the conventional SU(6) quark model, explicitly taking into account the Melosh-Wigner rotation effect \cite{Ma91b,Bro94} and the flavor asymmetry introduced by the mass difference between the scalar and vector spectators \cite{Ma96}. The extension of the relations Eq.~(\ref{eq:dud}) to the quark spin distributions $\Delta q_{RF}(x)$ and the transversity $\delta q(x)$ is straightforward: we simply replace $M_S(x)$ and $M_V(x)$ by $1$ for $\Delta q_{RF}(x)$, and by ${\widetilde M}_S(x)$ and ${\widetilde M}_V(x)$ for $\delta q(x)$, \begin{eqnarray} &&\Delta u^{RF}_{v}(x) =u_v(x)-\frac{2}{3}d_v(x); \nonumber\\ &&\Delta d^{RF}_{v}(x)=-\frac{1}{3}d_v(x); \label{eq:dudRF} \end{eqnarray} \begin{eqnarray} &&\delta u_{v}(x) =[u_v(x)-\frac{1}{2}d_v(x)]{\widetilde M}_S(x) -\frac{1}{6}d_v(x){\widetilde M}_V(x); \nonumber \\ &&\delta d_{v}(x)=-\frac{1}{3}d_v(x){\widetilde M}_V(x). \label{eq:dudT} \end{eqnarray} We notice that the quark spin distributions $\Delta q_{RF}(x)$, i.e., Eq.~(\ref{eq:dudRF}), are connected with the unpolarized quark distributions without any model parameter. Thus any evidence for the invalidity of Eq.~(\ref{eq:dudRF}), obtained by combining the measured $\Delta q_v(x)$ and $\delta q_v(x)$, will provide a clean signature for new physics beyond the SU(6) quark model.
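Given any input for the unpolarized valence distributions, Eqs.~(\ref{eq:dudRF}) and (\ref{eq:dudT}) then follow at once; a schematic sketch (ours, with toy stand-ins for the GRV input and constant placeholder Melosh factors, which are $x$-dependent in the actual model):
\begin{verbatim}
# Schematic sketch (ours) of Eqs. (eq:dudRF) and (eq:dudT) with toy inputs.
import numpy as np

x = np.linspace(0.01, 0.99, 50)
u_v = 2.0 * x**-0.5 * (1 - x) ** 3        # toy stand-ins for GRV-type
d_v = 1.0 * x**-0.5 * (1 - x) ** 4        # unpolarized valence distributions
MtS = np.full_like(x, 0.8)                # placeholder Melosh factors
MtV = np.full_like(x, 0.7)

du_RF = u_v - (2.0 / 3.0) * d_v           # Eq. (eq:dudRF)
dd_RF = -(1.0 / 3.0) * d_v
du_T = (u_v - 0.5 * d_v) * MtS - (1.0 / 6.0) * d_v * MtV   # Eq. (eq:dudT)
dd_T = -(1.0 / 3.0) * d_v * MtV
\end{verbatim}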
The $x$-dependent Melosh-Wigner rotation factors $M_S(x)$ and $M_V(x)$ have been calculated \cite{Ma96}, and an asymmetry between $M_S(x)$ and $M_V(x)$ was found. The calculated polarization asymmetries $A_1^N=2 x g_1^N(x)/F_2^N(x)$ including the Melosh-Wigner rotation have been found \cite{Ma96} to be in reasonable agreement with the experimental data, at least for $x \geq 0.1$. A large asymmetry between $M_S(x)$ and $M_V(x)$ leads to a better fit to the data than a small asymmetry does. Therefore it is reasonable to expect that the calculated $\delta q(x)$ and $\Delta q_{RF}(x)$ may lead to predictions close to the real situation. In Fig.~(\ref{mssf1}) we present the calculated $\Delta q(x)$, $\delta q(x)$ and $\Delta q_{RF}(x)$ for the $u$ and $d$ valence quarks. From Eqs.~(\ref{Melosh1}), (\ref{Melosh2}) and Fig.~(\ref{mssf1}) we observe the inequalities, \begin{equation} |\Delta q_{RF}(x)| \ge |\delta q(x)| \ge |\Delta q(x)|. \label{IE} \end{equation} However, the different evolution behaviors of $\delta q(x)$ and $\Delta q(x)$ may break the inequality $|\delta q(x)| \ge |\Delta q(x)|$ at large $Q^2$ \cite{Bar97}. This interesting hierarchy is specific to this model and is not necessarily satisfied in general. \vspace{0.5cm} \begin{figure}[htb] \begin{center} \leavevmode {\epsfysize=10cm \epsffile{ma02.ps}} \end{center} \caption[*]{\baselineskip 13pt The $x$-dependent quark spin distributions $x \Delta q_{RF} (x)$ (solid curves), transversity distributions $x \delta q(x)$ (dashed curves), and helicity distributions $x \Delta q(x)$ (dotted curves) in the light-cone SU(6) quark-spectator model by using Eqs.~(\ref{eq:dud}-\ref{eq:dudT}), with the Gl\"uck-Reya-Vogt parameterization \cite{GRV95} of unpolarized quark distributions as input: (a) for $u$ quarks; (b) for $d$ quarks. } \label{mssf1} \end{figure} As we have pointed out, one should not confuse Eq.~(\ref{eq1}) with the saturation of the inequality (\ref{Sie}), which is valid for each flavor, and likewise for antiquarks. Eq.~(\ref{eq1}) coincides with the saturated form of (\ref{Sie}) only for the scalar spectator case, not for the vector spectator case, due to the fact that $q(x) \neq \Delta q_{RF}(x)$. Since $|\Delta q_{RF} (x)| \leq q(x)$, we may derive from Eq.~(\ref{eq1}) another inequality \begin{equation} q(x) \geq |2 \delta q(x) - \Delta q(x) |\ , \label{Sieb} \end{equation} which is similar to, but different from, the inequality (\ref{Sie}). Actually (\ref{Sie}) is a stronger constraint than (\ref{Sieb}). Nevertheless, we point out, without detailed argument here, that the inequality (\ref{Sie}) is valid in the light-cone SU(6) quark model, even when the meson-baryon fluctuations, which will be considered in the next section, are also taken into account. Since the Melosh-Wigner rotation factor $M$ is less than 1, we expect to find that $\sum_{q} \Delta q_{RF}$, where $\Delta q_{RF}$ is the first moment of $\Delta q_{RF}(x)$, will be much closer to 1 than the usual helicity sum $\Delta\Sigma = \sum_{q} \Delta q$, which experimentally is about $0.2$, and whose departure from the quark model value of 1 originated the ``spin crisis''. In this context it is interesting to notice that lattice QCD calculations gave an axial charge $\Delta \Sigma =0.18 \pm 0.10$ \cite{lQCD1} and a tensor charge $\delta \Sigma =0.562 \pm 0.088$ \cite{lQCD2}.
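Combining these numbers through Eq.~(\ref{eq1}) is simple arithmetic (our check; the quoted uncertainties are added linearly, as a conservative estimate):
\begin{verbatim}
# Arithmetic check (ours): Sum_q Delta q_RF = 2*deltaSigma - DeltaSigma
# from the lattice values, with uncertainties combined linearly.
dSig, dSig_err = 0.562, 0.088      # tensor charge
DSig, DSig_err = 0.18, 0.10        # axial charge
print(2 * dSig - DSig, 2 * dSig_err + DSig_err)   # ~0.94 +/- 0.28
\end{verbatim}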
Thus the spin carried by quarks from lattice QCD should be $\sum_{q} \Delta q_{RF}= 0.94 \pm 0.28$ from Eq.~(\ref{eq1}), and this supports the naive quark picture that the spin of the proton is mostly carried by quarks. In a quark model that does not contain antiquarks, $\sum_{q} \Delta q_{RF}$ will be strictly 1, but in general it will receive contributions beyond those of the usual valence quarks. Thus it will be of great interest to develop a more refined quark model which can explain or predict its actual experimental value.
\section{The sea quark-antiquark pairs}
We still need to consider the higher Fock states for a better understanding of a number of empirical anomalies related to the nucleon sea quarks probed in deep inelastic scattering. The Ellis-Jaffe sum rule violation is closely related to the Gottfried sum rule violation, which implies an excess of $d \bar d$ pairs over $u \bar u$ pairs in the proton sea \cite{NMC91,Pi}. This can be explained by the meson-baryon fluctuation picture of the nucleon sea \cite{Bro96,Pi}: the lowest nonneutral $u \bar u$ fluctuation in the proton is $\pi^{-}(d \bar u)\Delta^{++}(uuu)$, and its probability is small compared to that of the less massive nonneutral $d \bar d$ fluctuation $\pi^{+}(u \bar{d})n(udd)$. Therefore the dominant nonneutral light-flavor $q \bar q$ fluctuation in the proton sea is $d \bar d$ through the meson-baryon configuration $\pi^{+}(u \bar{d})n(udd)$. For the spin structure of the $q \bar q$ pairs from the meson-baryon fluctuation model, it is observed \cite{Bro96} that the net $d$ quark spin of the intrinsic $q \bar q$ fluctuation is negative, whereas the net $\bar d$ antiquark spin is zero. The quark helicity distributions $\Delta q(x)$ and transversity distributions $\delta q(x)$ should be measured for quarks and antiquarks separately in order to apply Eq.~(\ref{eq1}). Thus we need techniques that allow the measurement of $\Delta q (x)$ and $\delta q (x)$ for quarks and antiquarks. The antiquark contributions to $\Delta q$ and $\delta q$ are predicted to be zero in the meson-baryon fluctuation model \cite{Bro96} and in a broken-U(3) version of the chiral quark model \cite{Che95}. There have been explicit measurements of the helicity distributions for the individual $u$ and $d$ valence and sea quarks by the Spin Muon Collaboration (SMC) \cite{NSMCN}. The measured helicity distributions for the $u$ and $d$ antiquarks are consistent with zero, in agreement with the above predictions \cite{Bro96,Che95}. The SMC data for the quark helicity distributions $\Delta u_{v}(x)$ and $\Delta d_{v}(x)$, which are actually $\Delta u(x)-\Delta \bar{u}(x)$ and $\Delta d(x)-\Delta \bar{d}(x)$, are still not precise enough for a detailed comparison, but the agreement of the SMC data with the calculated $\Delta u_{v}(x)$ turns out to be reasonable \cite{Ma96}. The refined SMC results \cite{NSMCN} seem to provide some evidence for an additional source of negative helicity for the valence $d$ quark beyond the conventional quark model. This supports the prediction \cite{Bro96} that the measured $\Delta d(x) -\Delta \bar d(x)$ should receive an additional negative contribution from the intrinsic $d$ sea quarks in comparison with the valence-dominant result presented in Fig.~\ref{mssf1}. In the case of symmetric quark-antiquark sea pairs, Eq.~(\ref{eq1}) may be regarded as a relation that applies to the valence quarks.
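Since Eq.~(\ref{eq1}) holds for quarks and antiquarks separately and is linear in the distributions, the valence version follows simply by subtraction:
\[
\Delta q^{RF}_{v}(x) + \Delta q_{v}(x) \,=\, 2\,\delta q_{v}(x)\,,
\qquad q_{v}(x)\,\equiv\,q(x)-\bar q(x)\,,
\]
where each valence quantity denotes the difference of the corresponding quark and antiquark distributions.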
The tensor charge, defined as $\delta Q=\int_0^1 {\mathrm d} x [ \delta q(x) -\delta \bar q(x)]$, receives contributions only from the valence quarks, since those from the sea quarks and antiquarks cancel each other due to the charge conjugation properties of the tensor current $\bar q \sigma^{\mu\nu} i \gamma^5 q$. The helicity distributions for quarks and antiquarks can be measured separately in semi-inclusive deep inelastic processes \cite{NSMCN}; thus the valence quark helicity distributions defined by $\Delta q_v(x)=\Delta q(x)-\Delta \bar q(x)$ can be measured experimentally. We also notice that there is no unambiguous way to strictly distinguish between valence quarks and sea quarks for the $u$ and $d$ flavors: owing to the excess of net $u$ and $d$ quarks in the nucleon, one can always define the valence quark distribution by $q_v=q-\bar q$, which leaves a symmetric quark-antiquark sea. Eq.~(\ref{eq1}) remains valid for the valence quarks defined in this way (i.e., for $q-\bar q$) even in the case of a non-zero spin contribution from the antiquarks. One interesting feature of the meson-baryon fluctuations is the strange quark-antiquark asymmetry from the virtual $K^+ \Lambda$ pair of the proton \cite{Bro96}. The intrinsic strangeness fluctuations in the proton wavefunction are mainly due to the intermediate $K^+ \Lambda$ configuration, since this state has the lowest off-shell light-cone energy and invariant mass. The intrinsic strange quark, normalized to the probability $P_{K^+\Lambda}$ of the $K^+\Lambda$ configuration, yields a fractional contribution $\Delta S_{s}=2 S_z(\Lambda)=-\frac{1}{3}P_{K^+\Lambda}$ to the proton spin, whereas the intrinsic antistrange quark gives a zero contribution: $\Delta S_{\bar s}=0$ \cite{Bro96}. For symmetric strange quark-antiquark pairs, one would predict a zero strange tensor charge. However, a non-zero strange tensor charge will arise from the strange quark-antiquark spin asymmetry due to the meson-baryon fluctuations, and we predict a strange tensor charge $\delta s \approx -0.02 \to -0.03$ (similar to $\Delta s$ \cite{Bro96}, i.e., $\delta s \simeq -\frac{1}{3}P_{K^+\Lambda}$) corresponding to the probability $P_{K^+\Lambda}=5 \to 10 \%$.
\section{Discussion and Summary}
In this paper we have proposed an approximate relation that can be used to measure the quark spin distribution $\Delta q_{RF}(x)$, as implied in the quark model or in the rest frame of the nucleon. It would be very meaningful if a rigorous definition of this spin distribution, or any other way of measuring this quantity, could be found. It has been noticed recently \cite{MS2} that the quark spin distribution defined in this paper is actually equivalent to $\Delta q(x)+2 L_q(x)$, where $\Delta q(x)$ is the quark helicity distribution and $L_q(x)$ is the quark orbital angular momentum obtained by calculating the matrix element of the operator ${\mathbf L}_q =-i \gamma^+ {\mathbf k} \times \nabla _{\mathbf k}$. Thus $\Delta q_{RF}(x)$ is a quantity that can be calculated in an exact theoretical framework, such as lattice QCD, and might be measurable in the future. This means that Eq.~(\ref{eq1}) might be a practical relation that can be tested by other means. In summary, we showed in this paper that the quark spin distributions $\Delta q_{RF}(x)$, in the rest frame of the nucleon, are connected with the quark helicity distributions $\Delta q(x)$ and the quark transversity distributions $\delta q(x)$ by an approximate but simple relation: $\Delta q_{RF}(x) + \Delta q(x)=2 \delta q(x)$.
This relation will make it possible to extract the quark spin distributions of the nucleon once the quark helicity distributions and quark transversity distributions are measured. It will also be very useful for checking various models, and it will provide further information concerning the spin structure of the nucleon.
\bigskip

{\bf Acknowledgments: } We would like to thank V.~Barone, S.J.~Brodsky, R.~Jakob, K.-F.~Liu, and P.J.~Mulders for helpful discussions. This work is partially supported by the National Natural Science Foundation of China under Grant No.~19605006, by Fondecyt (Chile) under grant 1960536, by the cooperation programme ECOS-CONICYT between France and Chile under contract No.~C94E04, by a C\'atedra Presidencial (Chile), and by Fundaci\'on Andes (Chile).
\newpage
\section{Introduction}
The understanding of diamond growth via the CVD process has proved difficult for theorists and experimentalists alike. This is due to the large number of experimental parameters contributing to the problem and an uncertainty about the growth species. Progress has been made on the latter by the work of D'Evelyn et al.\cite{develyn} who, using isotope labelling techniques, claim to have unequivocally identified the principal growth species to be CH$_{3}$. With this in mind, Harris\cite{harris} has proposed a complex mechanism for diamond growth, whose initial steps lead to the deposition of a CH$_{2}$ group at a bridge site above a surface reconstruction bond. \\ Recently, the effect of B and N doping on the CVD growth process has produced a series of intriguing results. In the case of B, various workers have found that B improves the crystalline quality of (100) CVD surfaces and enhances the p-type conductivity of the films\cite{nemanich,roth,hiraki}. Interest in the role of N in CVD diamond has been heightened by experimental observations that N preferentially catalyses growth in the (100) direction\cite{koidl,giling,moustakas}. To the authors' knowledge, no serious attempts have been made to explain these phenomena theoretically. Indeed, it is unclear whether these somewhat puzzling results are compatible with the Harris mechanism or whether, in the doped cases, a different growth process is at work. In this paper, we address this question by investigating the effect of subsurface B and N on the energetics of the Harris mechanism. We find that the energies of the various growth steps are greatly altered, casting doubt on the applicability of the Harris mechanism in these cases. We therefore discuss a possible alternative to the initial steps of the process.\\ The paper is arranged as follows: in section II the theoretical tools used in this study are described, whilst section III explains the first few steps of the Harris mechanism. Section IV contains theoretical results for N and B doping of (100):H 2 $\times$ 1 surfaces, whilst section V includes a discussion of these results. Section VI proposes a new model for CVD diamond growth, and conclusions are given in section VII.
\section{Theoretical Method and the Model System}
The density functional tight--binding method (DF--TB) derives its name from its use of self--consistent density functional calculations for pseudo--atoms in order to construct transferable tight--binding (TB) potentials for a non--self-consistent solution of the Kohn--Sham equations for the many body case. It differs from conventional tight--binding techniques in that there is a systematic way of deriving these potentials, independent of the atom type involved. This is thus not a ``parametrisation'' as is usually meant when one talks about TB approaches. For an in-depth description, the reader is referred to Ref.~\cite{dirk}. The method has been successfully applied to carbon systems on a variety of scales, ranging from small clusters to buckminsterfullerenes and the bulk phase\cite{dirk}, the electronic and vibrational properties of (100) and (111) surfaces\cite{koe,stern}, amorphous carbon systems of all densities\cite{uwe}, as well as boron nitride\cite{jurg} and boron and nitrogen doping of diamond and amorphous systems\cite{sitch}. We have furthermore used the ab--initio cluster programs of Pederson and Jackson\cite{pederson} and Jones and Briddon\cite{jones} to check selected results.
These programs are highly accurate but computationally very expensive, hence we are limited in these cases to very small clusters which can only represent highly idealized surfaces. Nevertheless, these calculations are useful insofar as they serve to verify the essential physics underpinning the results of our DF--TB work.\\ The 144 atom (100):H supercell with the 2 $\times$ 1 reconstructed surface used in this investigation is shown in Fig.~\ref{fig1}. It is made up of eight reconstructed surface bonds and six layers of carbon atoms. The dangling bonds on the lower surface are terminated with pseudo--hydrogen atoms. Unless otherwise stated, we have performed conjugate gradient relaxations, keeping the pseudo--hydrogen atoms and the lowest two layers of C atoms fixed. In the diffusion barrier study we have applied a constrained conjugate gradient technique (see Fig.~\ref{fig2}).\\ We have observed that, owing to the relatively small size of our supercell, $\Gamma$ point sampling produces unphysical results. This stems from the fact that at the $\Gamma$ point, the electronic states on the surface are lower in energy compared to the bulk states, a result which is not generally reproduced at other k-points. When no further k-point sampling is made, this leads, in the worst cases, to extra surface charges of order half an elementary charge per atom at some of the surface atoms. This does not occur when an average over several representative k-points is made. The calculations have therefore been performed using the (2 $\times$ 2 $\times$ 1) k-point grid recommended by Cunningham \cite{cunningham}. \\ \iflayout \begin{figure} \epsfig{file=./dimer100_H.finstr.001.ps,width=7.5cm} \caption{The model of the reconstructed diamond (100):H 2 $\times$ 1 surface.\\ } \label{fig1} \end{figure} \fi The diffusing atom is moved stepwise from the starting to the final position and is allowed to relax in the plane perpendicular to the direction of the vector connecting its starting and final positions. No constraints are applied to other atoms (except the fixed lowest two layers of C atoms). \iflayout \begin{figure} \epsfig{file=./const_cg.eps,width=7.5cm} \caption{The constrained conjugate gradient relaxation.\\ } \label{fig2} \end{figure} \fi
\section{The Harris Mechanism}
The initial stages of the Harris mechanism can be divided into four steps: (i) removal of an H atom from an otherwise fully H-terminated surface, (ii) adsorption of a CH$_{3}$ radical at the newly formed dangling bond site, (iii) loss of H from the adsorbed CH$_{3}$ species and simultaneous formation of a C=C double bond with a surface C, which breaks its surface reconstruction bond whilst leaving the adjacent surface atom 3--fold coordinated. Steps (i) to (iii) inclusive can be regarded as a complex mechanism by which a CH$_{2}$ group is deposited in a position where it can ``attack'' the weakened surface reconstruction bond. This is achieved in (iv), where the CH$_{2}$ species rotates into the bridging position above the two surface C atoms. Steps (i)-(iv) are illustrated in Fig.~\ref{fig3}. \iflayout \begin{figure} \epsfig{file=./harris_mechanism.eps,width=7.5cm} \caption{The initial steps in diamond growth\\ on the dimerised diamond (100):H surface according to Harris.
} \label{fig3} \end{figure} \fi We cannot accurately calculate barriers for processes of ad/desorption to/from a surface, such as those in (i), (ii) and (iii), since charge transfer effects within DF-TB mean that the detaching radical--surface complex cannot be properly represented. However, if ad/desorption is not accompanied by any significant electronic or structural relaxation, as indeed is the case in steps (i) \& (ii) for the impurity free surface, we can safely assume that there are no significant additional contributions to the energy barriers for such processes other than the difference in formation energy between the initial and final structures. As we shall describe in the next section, this is not so for the impurity case, where structural reorganization around the N atom and an accompanying subsurface impurity--surface charge transfer occur. We therefore cannot speak with any confidence about the energy barriers here. In the light of this, we must limit our discussion for steps (i) to (iii), where the particle number at the surface is not conserved, to comparing formation energies for the resultant structures and making inferences where possible as to the nature of the energy barrier between them. In the case of process (iv), where surface particle number is conserved, calculation of an energy barrier is possible within our method.
\section{Results}
We discuss here the energetics of each of the steps of the Harris mechanism described in the previous section, for the impurity free case and for the subsurface N and B calculations. We show in Tab.~\ref{harris} the calculated differences in formation energies for steps (i) to (iv) inclusive, and also the energy barrier for step (iv). The relative energies after each step are depicted in Fig.~\ref{fig3b}.\\ \iflayout \begin{figure} \epsfig{file=./form_energies.eps,width=7.5cm} \caption{The relative total energies after each step (i) - (iv).\\ The zero of the energy is the energy of the three differently doped initial structures. The energy barrier of step (iv) is also shown. } \label{fig3b} \end{figure} \fi {\em Step (i): Removal of H from the surface}.\\ We obtain 6.1 eV for the binding energy of an H atom to the undoped surface. This high value is in agreement with other theoretical calculations \cite{anderson,garrison,latham} and reflects the strong nature of the C--H bond. The binding energy in the presence of N, at 2.8 eV, is much lower. This is due to the occurrence of a structural relaxation after removal of the surface H atom, which lowers the energy of the final structure: the N atom moves from an offsite to an onsite position, and an electron migrates from the impurity atom to the surface. Such a process has been described in detail in an earlier paper\cite{sitch2}, where it was shown that the position of the N atom in the lattice is governed by the Fermi level. Namely, when E$_{f}$ lies at or above the singly occupied A$_{1}$ level associated with the defect, the N atom lowers its energy by moving offsite along one of the bonding ${<}$111${>}$ directions. Conversely, if E$_{f}$ is pinned below A$_{1}$, onsite N is stabilized by a charge transfer to deeper lying states. The latter is the case here: the removal of an H atom from the surface leaves a deep lying dangling bond state, to which an electron migrates from the neighbourhood of the N atom. We observed in Ref.~\cite{sitch2} that this spontaneous onsite motion is accompanied by an energy gain of 1.4 eV as measured by DF--TB.
The transference of charge to the surface is confirmed in our case by a Mulliken study, which shows that a lone pair now resides on the 3--fold coordinated surface C atom. Thus the formation energy of the resulting structure is reduced. For B doping, the binding energy is lowered to 4.3 eV. Mulliken studies show clearly that a similar charge transfer effect is also responsible here: the surface dangling bond electron is pulled into a deep-lying subsurface acceptor state associated with the B atom.\\ {\em Step (ii): Methyl adsorption}.\\ The methyl radical has the largest binding energy, 5.84 eV, when attaching to the non-doped surface, indicating the strength of the ${\sigma}$ C-C bond. Adsorption of CH$_{3}$ in the presence of a subsurface N atom is not favored; instead of a binding energy, we find that this step costs ${\approx}$ 1 eV. This stems from the inherent stability of the initial structure. We also suggest that a large barrier will exist for this process, since the site to which the radical should attach is no longer a dangling bond, as is the case for the impurity free supercell, but a fully saturated lone pair. The electrostatic repulsion between the lone pair and the CH$_{3}$ radical must first be overcome in order for a bond to be formed. In the B doped case the CH$_3$ binding energy is lowered to 4.04 eV, which again can be attributed to the charge transfer induced stability of the starting structure.\\ {\em Step (iii): H abstraction and surface rearrangement}.\\ The cost of extraction of an H atom from the CH$_3$ species is again relatively high for the impurity free case, at 6.2 eV. A C-C sp$^{2}$ bond is spontaneously formed, with the C and H atoms in CH$_2$ and the C atom on the surface all lying roughly in the same plane. The dimer-dimer bonding close to CH$_2$ lengthens by 13\%. This weakening is crucial for the final step in the growth process, in which the CH$_{2}$ group rotates into a bridging position above this bond, breaking it in the process. On the N doped surface, the CH$_2$ fragment maintains the sp$^3$-like configuration, with charge transfer from the subsurface N to the CH$_{2}$ adspecies, thus saturating the newly created dangling bond in the form of a lone pair (i.e., a charge transfer mechanism identical to that of step (i)). In contrast to the undoped case, the surface reconstruction bond is not lengthened. As we shall explain in the discussion of step (iv), this actually hinders growth. For B doping, the surface spontaneously rearranges: the CH$_2$ group occupies the bridging position and the Harris cycle is completed. The energy gain in this process is 3.59 eV. \\ {\em Step (iv): Migration of CH$_{2}$ to the bridging position}.\\ We obtain an energy barrier of 1.75 eV for the CH$_2$ diffusion to the bridge position with the undoped sample, in reasonable agreement with Anderson, who has found this barrier to be less than 1.92 eV \cite{anderson}. The N doped sample gives an energy barrier of 3.03 eV for the CH$_2$ diffusion, which is understandable since in this case the surface reconstruction bond must be broken, which is energetically costly. For B, as previously stated, the incorporation of the CH$_2$ fragment into the bridging position takes place with no energy barrier.
\section{Discussion}
\subsection{Nitrogen Doping}
It is clear from these results that the Harris mechanism cannot explain N catalysis of (100) diamond growth.
Without doping, the hydrogen abstraction reactions (i) and (iii), as well as the energy barrier for the motion of the CH$_{2}$ adspecies to the bridge position (iv), are the most prohibitive steps. Our results suggest that step (ii), where in the impurity free case a CH$_{3}$ group attaches to a surface dangling bond site, is severely hindered by the presence of subsurface N. Here, charge transfer from N to the surface means the CH$_{3}$ radical must attack a fully saturated site, where the C surface atom has an associated lone pair of electrons. The probably high energy barrier required to overcome such an electrostatic repulsion suggests that CH$_3$ bonding to the surface in step (ii) is unlikely. Further, the subsurface-surface charge transfer severely disrupts step (iii). In the undoped case, the extraction of an H atom leads to the formation of a C=C adatom-surface sp$^{2}$ bond, together with a weakening of the adjacent surface reconstruction bond. In the doped case, charge transfer from the N atom to the C adatom saturates the dangling bond, thus leaving the C-C adatom-surface bond sp$^{3}$ like and the surface reconstruction bond unperturbed. A critical analysis of the Harris mechanism would suggest step (iii) to be the most crucial in the whole process, since it at once places a CH$_{2}$ group in a position where it can ``attack'' a weakened surface reconstruction bond, subsequently forming a bridge site, which acts as a seed for further growth on the plane. This is manifestly not the case when subsurface N is present, where a full strength C-C reconstruction bond must be broken by an essentially ``saturated'' CH$_{2}$ group (the C atom having one C-C bond, two C-H bonds and an associated lone pair) rotating into the bridge site. Thus one is led to question the suitability of such a complex model in this case. In section VI we describe a possible alternative.
\subsection{Boron Doping}
Although the energetics of the Harris mechanism are perturbed by the presence of subsurface B atoms, this does not suggest that the mechanism should cease to be valid in this case. Just as for N dopants, a charge transfer is responsible for the discrepancy in the formation energies of the initial and final structures in steps (i) and (ii) between the B doped and impurity free cases. However, this does not lead to the problems encountered with N, since charge is now transferred {\em from} the surface to a subsurface B acceptor level. The structure after H abstraction (step (i)) is stabilized by charge transfer, with the 3-fold coordinated surface C atom now having one completely empty level. Hence although adsorption of a CH$_{3}$ radical is now not as attractive as when a dangling bond is present (impurity free surface), there is not, as is the case for N, an electrostatic repulsion preventing such an occurrence. Once the CH$_{3}$ group is adsorbed onto the surface (step (ii)), the rest of the Harris mechanism is energetically favorable. Although we cannot say exactly how big the energy barrier for H abstraction from the CH$_{3}$ group is, we can reason that it has as its upper bound the energy for abstraction from the undoped surface. This is due to charge transfer during abstraction: removal of H from the undoped surface requires the breaking of a full strength C-H bond, whereas when a B subsurface dopant is present, the energy barrier for the process may be lowered by charge transfer to the subsurface B atom.
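Stated compactly (this is merely a restatement of the bound just argued, with the 6.2 eV undoped abstraction energy taken from the previous section):
\[
E_{\rm barrier}^{\,{\rm B~doped}}(\mbox{step iii}) \;\leq\; \Delta E^{\,\rm undoped}(\mbox{step iii}) \;=\; 6.2~{\rm eV}\,.
\]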
After H abstraction, the relatively electropositive CH$_{2}$ group is pulled spontaneously to the electron rich bridge site. The overall energy gain in H abstraction + CH$_{2}$ diffusion to the bridge site is 3.6 eV.
\section{An Alternative Model for Growth with N Doping: the ``Zipper'' Mechanism}
We consider that a far simpler mechanism would be more appropriate to describe N-catalysed (100) growth, since it would remove the unnecessary and costly initial steps of the Harris mechanism. We suggest here one such model. We have found in our studies that, although the 3--fold coordinated N atom is the most stable configuration for a fully hydrogenated (100) surface, a structure where the excess ``doping'' charge is transferred to a surface reconstruction ${\sigma^{\ast}}$ state is metastable. This has been confirmed by an {\em ab-initio} all-electron cluster calculation, using the code developed by Pederson and Jackson\cite{pederson}, where a difference in energy of 2.40 eV between the two structures is found. The ${\sigma^{\ast}}$ state is strongly localized on one reconstruction C-C bond, which as a consequence lengthens from 1.62 {\AA} to 2.30 {\AA}. We have found this electron rich site to be an ideal adhesion point for a CH$_{2}$ species. Indeed, using the {\em ab-initio} cluster code of Jones and Briddon\cite{bob2}, we observe no energy barrier for the adhesion process and a binding energy of ${\approx}$ 8 eV. Once the CH$_{2}$ species adheres to the surface, the bridging and bridged C atoms are electronically saturated, thus allowing the ``doping'' charge to migrate to the next adhesion site and so on. Growth of a whole layer may thus be catalysed by the presence of one N electron. We therefore visualize the growth process in the following way: the growing crystal is a non-equilibrium thermodynamic system, in which atoms on the surface are vibrating in a variety of different phonon modes. It is perfectly plausible that the two carbons of a reconstruction bond describe a ``breathing mode'', in which their separation periodically becomes much larger than the equilibrium length of the already weakened C-C reconstruction bond. This therefore represents an ideal target for an adhering CH$_{2}$ species. The energy barrier to overcome the breaking of the residual C-C reconstruction bond is further lowered by the simultaneous transfer of charge from the subsurface N to the surface. Once the CH$_{2}$ adhesion at the bridging site is completed, the excess electron is free to mediate a similar reaction at the adjacent site, and so on along the row.\\ Due to the geometry of the diamond structure, smooth growth in the (100) direction requires the dimer row on the upper terrace to be perpendicular to the dimer row on the lower terrace. This can be achieved by the dimer opening reaction if two adjacent CH$_{2}$ adspecies (see Fig.~\ref{fig4}) both eject one of their H atoms and bond together to form an isolated dimer. This isolated dimer can thereafter transform to a C=CH$_{2}$ adspecies and migrate towards an existing dimer row as proposed by Skokov\cite{skokov}. The suggested new model is depicted in Fig.~\ref{fig4}. Instead of CH$_{2}$, the CH$_{3}$ radical may also be a good candidate for attaching to the open dimer. In this case two H$_{2}$ abstractions are required. \iflayout \begin{figure} \epsfig{file=./zipper.eps,width=7.5cm} \caption{The novel growth model with N doping of CVD (100):H diamond: the Zipper mechanism.
i) The extra electron from N migrates to the surface and opens a dimer bond. ii) A CH$_{2}$ adsorbs to the open dimer, and the neighboring dimer is opened. iii) Another CH$_{2}$ adsorbs to the open dimer, and the next dimer is opened. iv) H$_{2}$ is abstracted and a new isolated dimer is formed on the upper terrace. \\ } \label{fig4} \end{figure} \fi In our argument thus far we have neglected two important questions: (1) how big is the energy barrier for the dimer opening? (2) why is this method only valid for (100) orientations? We estimate (1) by noting that the essential difference between the stable 3-fold coordinated N + closed dimer structure and that of the metastable 4-fold coordinated N + open dimer consists of the energy cost of breaking the C-C reconstruction bond and the energy gain of the onsite motion of N on losing an electron. We have calculated the former to be 2.4 eV and argue in section IV above that the latter is 1.4 eV. Hence we arrive at an energy barrier in the range 1.0 - 2.4 eV (the 2.4 eV bond-breaking cost, offset by up to the 1.4 eV relaxation gain), a plausible figure given the energies discussed in connection with the Harris mechanism.\\ Point (2) is answered by noting that the (100) surface differs from the (111) and (110) surfaces in that the clean surface possesses two dangling bonds per atom. Reconstruction and hydrogenation result in a structure where the surface C atoms have two bulk C-C bonds and one surface C-C bond, plus a saturating C-H bond. Hydrogenated (110) and (111) surfaces possess three bulk C-C bonds plus one C-H bond. The (100) surface reconstruction C-C bond, at 1.62 {\AA}, is longer, and consequently weaker and more vulnerable to attack, than a bulk ${\sigma}$ bond. In the case of, for example, the hydrogenated (111) surface, no such reconstruction bonds exist. To activate a surface bond would therefore require the breaking of a far stronger bulk-like ${\sigma}$ bond, which is correspondingly energetically more expensive and hence less probable.
\section{Conclusions}
In this paper we have employed a density functional method to investigate the effect of N and B doping on the growth of CVD diamond (100):H 2 $\times$ 1 surfaces. Consistent with recent CVD experiments which have shown that boron improves the crystalline quality of (100) CVD diamond surfaces, we have found the Harris mechanism to be an energetically favorable pathway in the CVD growth of B doped samples. In the N doping case, we argue that the increased diamond growth rate in the (100) direction cannot be accounted for by the Harris mechanism; rather, we suggest an alternative model in which the (100) surface is charged by N-donor electrons and the CH$_{2}$ group is inserted directly into the bridging position.
\section{Introduction}
A variety of mature observational techniques are now in use for studying galaxy clusters. Through optical studies of cluster galaxies, analyses of weak gravitational lensing distortions of the background galaxy field, observations of radio sources within and behind clusters, and X--ray images and spectra of the intracluster medium (ICM), we now have a wealth of data to compare to models of clusters drawn from analytic treatment and numerical simulation. The paradigm for cluster formation and evolution that has emerged from such modeling is one in which clusters form through gravitational collapse of an overdense region (Gunn \& Gott 1972; Bertschinger 1985). While analytical descriptions typically assume spherical symmetry, cluster observations and N--body simulations of hierarchical clustering from initially Gaussian, random density fields show that the collapse process is generally irregular, involving mergers of protoclusters flowing along large--scale filaments, along with accretion of smaller satellite systems and weakly clustered material. It is commonly held that rich clusters formed at recent epochs. Nevertheless, since the relaxation timescales for clusters are significantly less than a Hubble time, the standard model for describing the distribution of matter within clusters is one based on hydrostatic equilibrium. Early one--dimensional collapse simulations by Perrenod (1978) supported this assumption, later confirmed in three dimensions by Evrard (1990a,b). The isothermal $\beta$--model (Cavaliere \& Fusco--Femiano 1976, 1978; Sarazin \& Bahcall 1977) makes further simplifying assumptions of an isothermal ICM temperature and spherical symmetry of an assumed, dominant collisionless potential, now taken to be generated by dark matter. Each component follows a density profile of the form
\begin{equation}
\label{eq:betaden}
\rho(r) \,=\, \rho_{0}\left[1\,+\,\left(\frac{r}{r_c}\right)^2\right]^{-3\alpha/2}
\end{equation}
where $r_c$ is the core radius within which the density profile relaxes to a constant, central value $\rho_0$. In this model, the outer profile slopes of the gas and dark matter, measured by their respective values of $\alpha$, provide information on the relative temperatures of the two components. The parameter
\begin{equation}
\label{eq:betadef}
\beta\,\equiv\,\frac{\sigma^2}{\left(\frac{kT}{\mu m_p}\right)},
\end{equation}
from which the model takes its name, is the ratio of specific energy in dark matter, measured by the one-dimensional velocity dispersion $\sigma$, to that in gas, measured by its temperature $T$ and mean molecular weight $\mu$, with $k$ Boltzmann's constant and $m_p$ the proton mass. Since the ICM mass dominates the galaxy mass in rich clusters such as Coma (Briel, Henry \& Bohringer 1992; White {\it et al.\ } 1993), it is reasonable to assume that the ICM plasma originates in primordial gas left over from galaxy formation. In this case, the gas and galaxies cluster hierarchically within the same potential wells, so it is similarly reasonable to expect that the specific energies of the two components will be nearly equal, $\beta \! \simeq \! 1$. (A refined discussion of this point is provided in the Appendix.) However, there is evidence that the history of the intracluster medium is more complicated. In particular, the presence in the ICM of iron and other elements produced by stars, at abundances near solar, necessitates significant interaction between galaxies and the hot intracluster plasma.
Mechanisms for this metal enrichment process include feedback from a very early stellar population such as Population III stars (Carr, Bond \& Arnett 1984), ram pressure stripping by the ICM of the interstellar medium from galaxies (Gunn \& Gott 1972; Biermann 1978; Takeda, Nulsen, \& Fabian 1984; Gaetz, Salpeter, \& Shaviv 1987), and ejection of hot enriched gas from galaxies via winds (Yahil \& Ostriker 1973; Larson \& Dinerstein 1975). How might we discriminate between these? First of all, a key question with respect to the dynamics of the ICM plasma is whether significant energy deposition accompanied the enrichment process. ``Passive'' mechanisms, such as primordial enrichment or ram pressure stripping, do not add considerable energy to the ICM. Galactic winds, on the other hand, represent an ``active'' mechanism which deposits both energy and metal enriched material into the ICM. Meanwhile, there is some evidence implying that cluster gas has a greater specific energy than cluster galaxies, or $\beta\,<\,1$ (\hbox{\it cf.}\/ Edge \& Stewart 1991), a result consistent with additional, non--gravitational energy input into the ICM. Also, several studies of the relation between the galaxy velocity dispersion and ICM X--ray temperatures in clusters suggest that $\beta$ varies with the depth of the potential well (Edge \& Stewart 1991; Lubin \& Bahcall 1993; Bird, Mushotzky, \& Metzler 1995; Girardi {\it et al.\ } 1995). To be fair, cluster velocity dispersions and X--ray temperatures are difficult to compare in an unbiased manner, since the quantities are prone to different types of systematic errors and are typically not measured within the same region of a cluster (Metzler 1997). However, if robust, such a result may be expected from wind models. Since the specific energy of an individual galactic wind should not depend upon the host cluster, whereas the specific thermal energy supplied by gravitational collapse does depend on cluster mass, winds should more strongly affect the ICM of clusters with small velocity dispersions. This may introduce a dependence of the ratio of specific energies on temperature in the manner described above. Another possible discriminant between enrichment mechanisms lies in the distribution of metals in the intracluster medium. However, it is difficult to infer analytically the type of abundance gradient expected from each of these three mechanisms. Simulations of cluster evolution incorporating enrichment can clarify this, and provide an expectation to compare to observations of abundance gradients now becoming available (\hbox{\it e.g.}\/ Tamura {\it et al.\ } 1996; Xu {\it et al.\ } 1997; Ikebe {\it et al.\ } 1997). We present here results from an ensemble of simulations which include the effects of galactic winds in a self--consistent, three--dimensional fashion. A unique feature of these is the ability to trace the structure of galaxies and metal--enriched gas in the ICM. This work expands the examination of a single, Coma--like cluster presented in an earlier paper (Metzler \& Evrard 1994, hereafter Paper I). Since galactic wind models themselves are uncertain, we take a heuristic approach and employ a simple, and in some ways extreme, model for galactic winds in an attempt to explore the upper envelope within which realistic models should lie. We examine an ensemble of eighteen cluster realizations, spanning a factor of 50 in cluster mass, drawn from a standard cold dark matter cosmogony.
Each initial realization is evolved twice, with one run incorporating and the other ignoring galaxies and their ejecta. This paper focuses on the three--dimensional structure of the present epoch population; a subsequent paper will examine the effect of feedback on X--ray observations. In Section 2, we elaborate on the numerical techniques used in this work, as well as the general properties of the two cluster ensembles used. Section 3 provides a look at the structure of the collisionless components (dark matter and galaxies) in these simulations. The structure and metal distribution of the intracluster medium are examined in Section 4. A revised model of the ICM, based on the halo model of Navarro, Frenk \& White (1996, hereafter NFW2), is considered in Section 5. The relative structures of the various cluster components are compared in Section 6; also included there are some comments about implications for estimates of the cluster baryon fraction. Our results are summarized in Section 7.
\section{Method}
\subsection{Initial Conditions}
The simulations and their initial conditions use as their basis the standard biased cold dark matter (CDM) scenario (Blumenthal {\it et al.\ } 1984; Davis {\it et al.\ } 1985): $\Omega = 1$; baryonic fraction $\Omega_{b}\,=\,0.1$; Hubble constant $h\,=\,0.5$; and power--spectrum normalization $\sigma_{8}\,=\,0.59$. These parameters are used throughout this work when scaling to physical units. The path--integral formalism of Bertschinger (1987) is used to generate initial density fields which are constrained, when smoothed with a Gaussian filter, to have a specified value at the center of the simulated volume. For the simulations in this paper, we filter with a Gaussian of scale $R_{f}\,=\,0.2\,L$, where $L$ is the length of the periodic volume in Mpc, corresponding to a mass scale of $M_{f}\,=\,\left(2\pi\right)^{3/2}\rho_c\;R_{f}^{3} \,=\,5.6 \times 10^{14} \left(L/40{\rm\ Mpc}\right)^3{\rm M}_\odot$ (Bardeen {\it et al.\ } 1986). Here $\rho_c \!=\! 3 {\rm H}_0^2/8\pi G$ is the critical density, also the mean background density of the models. The perturbation height at the center was constrained to a value $\delta_{c} \,=\, 2.0$ when filtered on scale $M_{f}$. For all of the simulations described in this paper, $32^3\,=\,32768$ particles are initially placed for each of the dark matter and gas fluids; the mass of an individual dark matter particle is related to the mass of a gas particle by $m_{DM}\,=\,9m_{gas}$, reflecting their fractions of the total density. The primordial density field is used to generate a particle distribution at the starting redshift $z_i\,=\,9$ using the Zel'dovich approximation, as described in Efstathiou {\it et al.\ } (1985). Since, in generating the constrained initial density field, we filter on a fixed fraction of the box length, we can simulate clusters spanning a range in mass simply by varying the box size. The mass per simulation particle is proportional to $L^3$, but so is the filter mass scale. This causes the number of particles in the final collapsed object to be roughly comparable in all runs, so the fractional mass resolution in the various simulations presented here is equivalent. This avoids any systematics that might be introduced into correlations between cluster quantities (X--ray luminosity vs. mass, for example) if the resolution varied in a systematic way from low--mass to high--mass clusters.
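As a quick numerical check of the filter mass quoted above, one can evaluate $M_f = (2\pi)^{3/2}\rho_c R_f^3$ directly; the following minimal sketch (ours, not part of the simulation code) assumes $h=0.5$ and standard values of the physical constants:
\begin{verbatim}
# Filter mass M_f = (2*pi)^(3/2) * rho_c * R_f^3 with R_f = 0.2 L,
# for the Omega = 1, h = 0.5 cosmology adopted in the text.
import math

G     = 6.674e-11            # gravitational constant [m^3 kg^-1 s^-2]
MPC   = 3.0857e22            # meters per megaparsec
M_SUN = 1.989e30             # solar mass [kg]
H0    = 50.0e3 / MPC         # 50 km/s/Mpc in s^-1

rho_c = 3.0 * H0**2 / (8.0 * math.pi * G)   # critical density [kg/m^3]
rho_c_msun = rho_c * MPC**3 / M_SUN         # ~6.9e10 Msun/Mpc^3

def filter_mass(L_mpc):
    """M_f in solar masses for a box of comoving side L_mpc."""
    R_f = 0.2 * L_mpc                       # filter scale in Mpc
    return (2.0 * math.pi)**1.5 * rho_c_msun * R_f**3

for L in (20, 25, 30, 40, 60):
    print(L, "%.2e" % filter_mass(L))       # L = 40 gives ~5.6e14
\end{verbatim}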
\subsection{Including Galaxies}
The technique used for inserting galaxies in the simulation is described in detail in Paper I. We Gaussian--filter the initial conditions on the approximate scale of bright galaxies ($R_{f}\,=\,0.5{\rm\ Mpc}$, corresponding to $M_{f}\,=\,\left(2\pi\right)^{3/2}\rho_c\;R_{f}^{3} \,=\,1.4\times 10^{11}{\rm M}_\odot$) and locate peaks in the initial overdensity field on that scale above a fiducial threshold of $2.5\sigma$, chosen to reproduce the observed number density of bright galaxies. We then return to the initial particle distribution and replace the gas particles associated with each peak with a composite ``galaxy particle.'' We assume an effective collapse redshift of $z_c\,=\,4.5$, corresponding to a linearly determined mean interior overdensity at the starting redshift $z_i$ of
\begin{equation}
\delta_{gal}\,=\,1.686\frac{1\,+\,z_c}{1\,+\,z_i}\,=\,0.933.
\end{equation}
The gas particles within this mean interior overdensity are removed, and the mass of the resulting galaxy particle is set to the number of gas particles removed. The initial linear momentum of a galaxy particle is set by demanding conservation of linear momentum. A valid concern with our method and results can be raised over our use of peaks to simulate galaxies. The most natural thing to do would be to allow the gas in the simulations to cool and form galaxies, and then allow those galaxies to provide the sources for the feedback into the intracluster medium. However, such an approach suffers from limitations in our ability to accurately model star formation, in both a physical and numerical sense. As we wish to perform many simulations to ensure adequate statistics when considering issues of cluster structure and evolution, we must economize the computational resources spent on an individual run, and our approximate peak treatment of galaxy formation provides considerable numerical savings. The peak model has some physical basis in that there is known to be ``crosstalk'' from large to small scales during hierarchical clustering from Gaussian initial conditions in the non--linear regime, which enhances the rate of small--scale structure formation for the power spectrum shape considered here (White {\it et al.\ } 1987; Juszkiewicz, Bouchet, \& Colombi 1993). The model also has some phenomenological success in explaining the qualitative shape of galaxy luminosity functions (Evrard 1989) and the morphology--density relation in clusters (Evrard, Silk \& Szalay 1990). However, since the theory of Gaussian random fields (Bardeen {\it et al.\ } 1986) tells us that peaks on smaller scales are likely to be biased towards peaks on larger scales, and since our initial conditions are constrained to produce a high peak on cluster scales at the center of the simulation volume, the initial galaxy distribution will be more centrally concentrated than the overall mass distribution. The thermal history and metal distribution of the ICM are certainly sensitive to the assumed galaxy formation model. To quantify this, several runs were performed with galaxies placed randomly in the volume, rather than at the locations of overdense peaks. By removing the peak correlations induced by the presence of the cluster, random placement resulted in a substantial reduction in the number of bright galaxies within the simulated clusters, even though the number density in the entire simulated volume was held fixed.
The effect of feedback was reduced to the point that the ejection runs differed little from their non--ejection counterparts, and so we do not discuss these runs further in this paper. High resolution numerical experiments resolving galaxy formation within clusters will ultimately settle this question. The current best effort on this issue favors the peaks approach over random placement (Frenk {\it et al.\ } 1996).
\subsection{Numerical Algorithm and the Wind Model}
We use the N--body + hydrodynamical algorithm P3MSPH, which combines the well-known particle-particle--particle-mesh ($P^{3}M$) algorithm of Efstathiou \& Eastwood (1981) with the Smoothed Particle Hydrodynamics (SPH) formalism of Gingold \& Monaghan (1977). The combined algorithm is described in Evrard (1988), and some of the post--simulation analysis procedures used are described in Evrard (1990b) and Paper I. The simulation algorithm can follow collisionless dark matter and collisional baryonic gas; we have modified the algorithm to also model galaxy particles of varying mass, and to allow the included galaxies to eject energetic, metal--enriched gas. The technique used is described in detail in Paper I. The galaxy mass fraction lost through winds is described by a time--dependent rate curve; specific energy and iron ejection rate curves are also assumed as input. For each galaxy, the wind rate curve is integrated until the amount of mass ejected equals the mass of a simulated gas particle. Energy and iron mass fraction are then assigned to that particle by integrating those curves over the same period. The process is then repeated for as long as the ejection rate curve is non--zero. Thermal energy, momentum, and iron mass are mixed approximately over the scale of one SPH smoothing length. The smoothing process, described in detail in Paper I, is based on conservation of mass, momentum and energy, and on a scenario in which wind ejecta are rapidly mixed into the surrounding ICM. For these simulations, we have assumed a wind model in which galaxies eject half their mass at a flat rate from a redshift of four to the present, with a wind luminosity for a galaxy with $10^{10} {\rm M}_\odot$ in baryons of $L_{wind}\,=\,4\times 10^{42} \hbox{$\,$ erg s$^{-1}$} $, and a total energy release of $1.5 \times 10^{60}$ erg.
\subsection{The Cluster Ensemble}
To study systematic trends, it is necessary to examine an ensemble. To this end, we assemble 18 sets of initial conditions and evolve them with and without galaxies and winds, for a total of 36 simulations. Five comoving box lengths are used. For comoving box lengths of 20, 25, and 30 Mpc, four sets of initial conditions each are used, while three each are run at 40 and 60 Mpc. A summary of the general properties of the runs is shown in Table 1. As in Paper I, we refer to the ensemble of runs with galaxies and winds as the EJ, or ejection, ensemble, and the runs without galaxies as the 2F, or two--fluid, ensemble. When referring to individual runs in this paper, all run names begin with the comoving box length in megaparsecs and end with a suffix to differentiate between runs. We will indicate whether ejection is included as appropriate.
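The two wind-model numbers quoted in the previous subsection are mutually consistent; a minimal numerical sketch (ours), which assumes the Einstein--de Sitter age--redshift relation $t(z)=\frac{2}{3}H_0^{-1}(1+z)^{-3/2}$ appropriate to this $\Omega=1$, $h=0.5$ cosmology:
\begin{verbatim}
# Total wind energy = L_wind * (time elapsed from z = 4 to z = 0),
# assuming an Einstein-de Sitter age t(z) = (2/3) H0^-1 (1+z)^(-3/2).
MPC = 3.0857e22                  # meters per megaparsec
H0  = 50.0e3 / MPC               # 50 km/s/Mpc in s^-1

def age(z):
    """Cosmic age in seconds at redshift z (Einstein-de Sitter)."""
    return (2.0 / 3.0) / H0 * (1.0 + z)**-1.5

L_wind  = 4.0e42                 # erg/s, for 1e10 Msun of baryons
E_total = L_wind * (age(0.0) - age(4.0))
print("%.2e erg" % E_total)      # ~1.5e60 erg, matching the text
\end{verbatim}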
\section{The Collisionless Components}
\subsection{Cluster Sizes and Characteristic Scales}
In formation via gravitational instability, one expects a characteristic length to emerge which divides the region within which material is close to hydrostatic equilibrium from the exterior region in which matter is on its first infall or still expanding (Gunn \& Gott 1972; Rivolo \& Yahil 1984; Bertschinger 1985). Because infall occurs on a gravitational timescale $t_{grav} \! \propto \! \rho^{-1/2}$, one expects this characteristic radius to occur at a fixed value of the mean enclosed density. Figure~\ref{fig:dm_vrprof} shows the radial velocity profile at $z\,=\,0.02$ for the dark matter in four of the two--fluid simulations, using the mean interior density contrast, defined as $\delta_c \!=\! \rho(<r)/\rho_c$, as the abscissa. These four were chosen because they have qualities worth describing in more detail; the remaining clusters have similar structure. All show a velocity profile characteristic of gravitational collapse in an expanding world model. Spherical clusters would have a zero velocity surface at a density contrast of $\sim 5.5$ (Peebles 1980). As shown by the outer dashed line, this overdensity does an excellent job of marking the turnaround radius. In run 20e, the velocity magnitude in the region of infall is somewhat small, and the infall occurs over a narrow range of overdensities. This simulation forms three small clusters of approximate mass ratios 2:2:1, and the two largest objects are near each other, causing the infall region in each to be weak due to interference from the other cluster. There is not an obviously sharp transition marking the virialized region. Some simulated clusters, such as the 20b and 40a runs shown, have a reasonably quiescent region interior to a region of strong infall. For these clusters, the rough prediction of the spherical model --- the inner dashed line at an overdensity of 170 --- provides a good approximation to the outer boundary of the virialized region. Other objects, however, have a complicated velocity structure within this overdensity. In particular, the most massive clusters exhibit infall extending into much larger overdensities. Massive systems form later, and these clusters are still experiencing strong infall and are not relaxed. The three worst offenders --- runs 40c (shown), 60c, and 60d --- experience strong mergers and asymmetric accretion after a redshift of 0.5. Nonetheless, since no other characteristic virial overdensity emerges from the data, we use the radius with a mean interior overdensity of 170, hereafter called $r_{170}$, as a fiducial virial radius in the analysis below. Cluster properties such as density and temperature will then be profiled against the scaled radius $x\,=\,r/r_{170}$. For convenience, the relation between $r_{170}$ and total cluster mass $M_{170}$ is
\begin{equation}
r_{170} = 1.72 \left({ M_{170} \over 10^{15} \hbox{$\, h^{-1}$} {\rm M}_\odot} \right)^{1/3} \hbox{$\, h^{-1}$} {\rm\ Mpc} .
\end{equation}
If clusters are very nearly self--similar over the range in mass probed here, then the choice of another overdensity value for the virial scale merely amounts to relabelling the radial coordinate of our profiles. Much of the literature follows the example of Navarro, Frenk \& White (1995, hereafter NFW1), who employ an overdensity of 200.
However, Evrard, Metzler \& Navarro (1996, hereafter EMN) demonstrate that a density contrast of 500 is a more conservative choice for the hydrostatic boundary of clusters, in the sense that the mass weighted radial Mach number has smaller variance and an ensemble mean more consistent with zero within $r_{500}$ than within $r_{200}$. For power--law density profiles near $r^{-2}$, $r_{200}$ and $r_{170}$ differ by about $8\%$. The mass, mean dark matter velocity dispersion, and intracluster medium temperature within a radius $r_{170}$ for the members of the two--fluid ensemble are shown in Table 2. Although the simulations span a factor of 27 in volume, the resulting clusters span a factor of nearly 50 in mass. This difference is due to the fact that in two of the smallest volume runs, two clusters of comparable mass form and have not merged by the end of the simulation. For the analyses here, the larger of the two clusters in each simulation was chosen. In Table 3, we give information for the ejection ensemble, including the global fraction of the initial gas mass remaining in the volume after insertion of galaxies ($f_{gas}$), the number of galaxies in the simulation, the number within $r_{170}$ of the present epoch cluster, and the mean temperature of the ICM within that radius. Gas and galaxy fractions within the clusters are discussed in \S VI. The masses and dark matter velocity dispersions for the ejection ensemble are very similar to their two--fluid counterparts, so we do not quote them here.
\subsection{Dark Matter Density Profiles}
We now consider the dark matter distribution of the simulated clusters. The dark matter structure in the runs with galaxies and ejection is nearly identical to that of their two--fluid counterparts, so we present results from only the 2F set in this and the following section. Figure~\ref{fig:dm_denprof} shows the dark matter density profiles for the eighteen clusters in our ensemble, taken at $z\,=\,0.02$. These profiles were constructed by defining radial bins containing 200 particles each, then measuring the volume of each bin to arrive at the density. The shapes of the profiles look remarkably similar. In Figure~\ref{fig:dm_denprof_sc}a, the profiles have been rescaled; we plot the local density contrast $\rho /\rho_c$ versus the scaled radius $x\,=\,r/r_{170}$. There is some difference in central overdensity between models, but at larger radii (smaller overdensities), this dispersion tightens. Vertical lines in both figures denote the values of the gravitational softening parameter $\epsilon$ for each individual run at this epoch. The agreement among the density profiles of the ensemble reinforces previous findings of a characteristic density profile for halos formed via hierarchical clustering. The self--similarity displayed in this figure confirms the choice of $r_{170}$ as a scale radius, although other choices of overdensity near this value would work equally well. Motivated by the self--similar appearance in Figure~\ref{fig:dm_denprof_sc}a, we construct a mean density profile for the two--fluid runs by averaging the values of the density derived from each individual cluster in radial bins evenly spaced in $\log\left(x\right)$. The result, along with a comparison to various functional forms, is shown in Figure~\ref{fig:dm_denprof_sc}b. Each of these functions has at least two adjustable parameters --- an amplitude, and either a scale length or an exponent.
However, it is important to note that one parameter is constrained by the required mean overdensity interior to $r_{170}$. In fitting to these functions, only data within $r_{170}$ are used. We first consider a fitting function of the form introduced by NFW1 \begin{equation} \label{eq:nfwden} \frac{\rho\left(x\right)}{\rho_c} \,=\, \Delta \left(\frac{x}{\lambda}\right)^{-1} \left[1 + \left(\frac{x}{\lambda}\right)\right]^{-2} \end{equation} where, as before, $x\,=\,r / r_{170}$, the scaled radius of Figure~\ref{fig:dm_denprof_sc}. This profile approximates an $r^{-1}$ power law at small radii, and an $r^{-3}$ power law at large radii. The characteristic scaled radius $\lambda$, or physical radius $\lambda r_{170}$, is the radius at which the logarithmic slope of the density profile is $-2$; $\Delta$ is four times the local overdensity at that radius. Since we will apply this functional form to the entire density profile, our integral constraint requires that \begin{equation} \Delta\,=\, \frac {170} {3 \lambda^{3} \left[\ln\left(1+\frac{1}{\lambda}\right) - \frac{1}{1 + \lambda}\right]} . \end{equation} A single member of this class of functions with fixed values $\lambda\,=\,0.2$ and $\Delta\,=\,7500$ was introduced by NFW1, and shown to model well the inner profiles of their simulated CDM clusters. Subsequent work (NFW2; Metzler 1995; Cole \& Lacey 1996; Tormen, Bouchet \& White 1997) generalized this profile to allow $\lambda$ to be a free parameter. When applied to our mean profile, this form provides an excellent fit, with a best fit $\lambda\,\simeq\,0.154\pm 0.008$ (implying $\Delta\,\simeq\,13600$). Our normalization looks much larger than the original NFW1 result but, as explained below, the discrepancy is due to differences in the samples employed in the studies. If we apply this form to two subsets of the ensemble, one comprised of the six highest--mass runs and one comprised of the eight lowest--mass runs, we find that the mean profiles are significantly different, with the high mass ensemble requiring a higher value of $\lambda$ ($0.176\pm 0.010$) than the low mass ensemble ($0.145\pm 0.005$). A small value for $\lambda$ corresponds to a steeper inner density profile; low mass CDM halos are more centrally concentrated than high mass halos. The difference in density structure between high and low mass objects reflects the formation epochs of different objects. In hierarchical clustering cosmogonies such as CDM, lower mass objects form earlier, when the background density is higher, so their mass is expected to be more centrally concentrated. This effect is expressed clearly by NFW2, who examine halos spanning four decades in mass. It is this mass dependence which explains the difference between our best fit parameters and the original NFW1 values. Our fits are, in fact, in good agreement with the standard CDM case considered in NFW2 (their Figure~5). Contrast the seeming success of this model with the standard $\beta$--model profile, Equation~\ref{eq:betaden}, which provides a three--parameter fitting function as \begin{equation} \frac{\rho\left(x\right)}{\rho_c} \,=\, \Delta_{0} \left[1 + \left(\frac{x}{x_c}\right)^2\right]^{-3\alpha_{DM} / 2}. \end{equation} This functional form implies a central, constant density core, characterized by the core radius $r_c\,=\,x_c r_{170}$ and central density $\Delta_{0}\rho_c$. Using this expression, we find a best--fit core radius of $x_c = 0.053$, slightly under twice the mean softening scale (see Figure~\ref{fig:dm_denprof_sc}). 
At radii this close to the softening scale, the deviation of the softened force from the true Newtonian force law is significant, so we cannot claim to resolve such scales in the mean profile. Fits of this function to the density profiles of individual clusters produce resolvable core radii only in systems with recent or in--progress merger activity. We therefore cannot claim to resolve any core in our simulated clusters' density profiles, in agreement with numerous previous studies. The implied large--radius logarithmic slope for the mean profile is $-3\alpha_{DM}\,=\,-2.48$. The simplest description of the density profile is that of a power law, $\rho(x) \propto x^{-\alpha}$. The curvature in the density profiles evident in Figure~\ref{fig:dm_denprof} implies that a power law is inappropriate over the entire range of resolved structure, and formal fits verify its inadequacy. It is worth noting, however, that while the curvature is clearly present, it is not extreme. The local logarithmic slope of the density profile lies between $-1.5$ and $-2.5$ over the entire resolved range, lending support to analyses of cluster structure which assume isothermality. Considering only radii with a local overdensity in the range $100 \leq \rho/\rho_c \leq 3000$, a power law with $\alpha \!=\! 2.39 \pm 0.08$ provides an excellent fit to the mean profile. This result is consistent with the value $2.33 \pm 0.04$ found in the $\Omega\,=\,1$, $n\,=\,-1$ model of Crone, Evrard \& Richstone (1994). At smaller radii, the profile is shallower; a fit between local density contrasts of $10^{5}$ and 5000 yields $\alpha \!=\! 1.56$. The spatial and mass resolution of the experiments is not sufficient to demonstrate convergence to this, or any other, value of the logarithmic slope of the dark matter density as $x \rightarrow 0$. Parameters extracted from fits to the dark matter profiles are summarized in Table 4. There are several important points to summarize. First, the clusters have a characteristic density profile consistent with those found in previous studies. The logarithmic slope of the profile is typically shallower than $-2$ at small radii, and steeper at large radii; the division between these two regions occurs between 0.1--0.2 $r_{170}$, depending upon the mass of the cluster. For clusters with emission weighted X--ray temperatures of $7{\rm\ keV}$ or so, this should correspond to radii of about 350--700${\rm\ kpc}$ at the present. The degree of central concentration is mass--dependent, with less massive clusters being more centrally concentrated. The outer portions of cluster density profiles are well--approximated by power--laws and demonstrate less sensitivity to mass. There is no evidence that CDM clusters have or even approach constant density cores. The behavior of the density profile in the very central regions of clusters remains uncertain; recent high resolution simulations exhibit central profiles steeper than that predicted by the NFW form, Equation~(\ref{eq:nfwden}) (Moore {\it et al.\ } 1997). \subsection{Dark Matter Velocity Dispersion Profiles} The top half of Figure~\ref{fig:dm_sigprof_sc} shows the dark matter velocity dispersion profile for the eighteen members of the two--fluid ensemble. The profiles have been rescaled --- the radial coordinate by $r_{170}$ for each cluster, and the velocity dispersion by the quantity $\sigma_{170}$, defined as \begin{equation} \sigma_{170}\,=\,\left(\frac{GM_{170}}{2r_{170}}\right)^{1/2}, \end{equation} where $M_{170}$ is the mass within $r_{170}$.
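The rescaling itself is elementary; for concreteness, a minimal sketch follows, with $G$ in the (assumed) convenience units of ${\rm Mpc}\,({\rm km/s})^2\,{\rm M}_\odot^{-1}$.
\begin{verbatim}
import numpy as np

G = 4.302e-9   # Mpc (km/s)^2 / Msun (assumed unit convention)

def sigma_170(m_170, r_170):
    """sigma_170 = (G M_170 / 2 r_170)^(1/2) in km/s,
    for m_170 in Msun and r_170 in Mpc."""
    return np.sqrt(G * m_170 / (2.0 * r_170))

# e.g. sigma_170(1.0e15, 3.0) -> roughly 850 km/s
\end{verbatim}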
Most of these profiles have a common shape, rising from the center of the cluster and then falling again towards the virial radius, but recent merger activity causes deviations from this profile for some systems. As noted earlier, the typical dark matter density profile for the ensemble is shallower than $r^{-2}$ at small radii, and steeper at larger radii, corresponding to the velocity dispersion profiles seen. The radius at which the velocity dispersion is a maximum will lie somewhat beyond the break radius at which the density profile has a local logarithmic slope of $-2$. For the NFW profile, if we assume the velocity dispersion to vary weakly with radius (true for the simulated clusters), and that velocity anisotropy is unimportant, then the location of the velocity dispersion maximum can be calculated to lie at $x_{max}\,\simeq\,1.16 \lambda$; the mean profile would then predict the maximum of the velocity dispersion at $x_{max}\,\simeq\,0.18$, very near where most of the curves in Figure~\ref{fig:dm_sigprof_sc} reach their maximum. Deviations from this prediction for individual curves originate from transients associated with mergers and/or the presence of long--lived orbital anisotropy in the velocity distribution. The bottom half of the figure shows the velocity dispersion anisotropy parameter, $A\left(r\right)\,=\,1\,-\,\sigma^2_t/\sigma^2_r$, where $\sigma_r$ and $\sigma_t$ are the dispersions in the radial and transverse velocities respectively. The dark matter orbits are mostly radial over much of the profile for all of the members of the ensemble, reducing the kinetic support somewhat and steepening the dark matter density profile. At small radii, the dispersions converge towards isotropy, although one run (60d) shows evidence for the irregular state noted in its radial velocity profile. \subsection{Galaxy Number Density Profiles} Representation of galaxies as a separate, collisionless component in the ejection ensemble allows us to investigate the kinematics of this visible population. We fit the distribution of galaxies in the simulated clusters to a $\beta$--model profile. Galaxy number density profiles are determined by constructing Lagrangian radial bins for each simulated cluster, holding five galaxies each, out to $r_{170}$. The central two bins of each profile are excluded from the fit, to minimize the effect of force softening on the results. This makes determination of the central galaxy number density and core radius uncertain, but these parameters are of questionable value, since in real clusters their determination is prone to a variety of errors, particularly from the choice of cluster center. We can estimate the large--radius logarithmic slope of the galaxy number density profile, and address the question of whether the dark matter is more extended than the galaxy distribution. Finally, we consider only cluster profiles which have at least eight fitting bins after this exclusion, and thus at least five degrees of freedom. This requires at least 50 galaxies within $r_{170}$. Figure~\ref{fig:galnumdenprofs} shows the galaxy number density profiles of the six largest clusters in the ensemble --- the only six that meet the minimum criteria above. Also shown are best--fit $\beta$--model profiles. In five of the six cases, the large--radius slope of the galaxy number density profile $-3 \alpha_{GAL}$ is steeper than that of the dark matter; the dark matter is more extended than the cluster galaxies.
Although the number statistics here are poor, a comparison of cumulative masses using the entire ensemble, shown in \S VI, clearly demonstrates that the galaxy population is, in the mean, more centrally concentrated than the dark matter. \subsection{Velocity Bias} Since our initial placement of galaxies is upon peaks in the density field, and since such peaks are expected to be spatially biased towards the peak on large mass scales associated with the cluster itself, the galaxies are expected to be somewhat more centrally concentrated than the dark matter. There are, however, physical mechanisms which can contribute to such concentration. Apart from the contribution of galaxies to the overall cluster potential well, the distribution of galaxies and dark matter will be affected to some degree by interactions between the two components (Barnes 1985; Evrard 1987; West \& Richstone 1988; Carlberg 1991; Carlberg \& Dubinski 1991; Carlberg 1994). Given a CDM halo which is initially well--traced by the distribution of galaxies, dynamical friction will transfer energy from the galaxies to the dark matter, resulting in a dark matter distribution which is more extended than would be the case in the absence of galaxies, and a galaxy density profile which is more centrally concentrated than that of the halo. A simple timescale argument based on the Chandrasekhar dynamical friction formula (\hbox{\it cf.} Binney \& Tremaine 1987) suggests that, on the periphery of clusters or in the largest clusters, dynamical friction should be unimportant. However, in cluster cores and over larger regions of poor clusters, this timescale can be comparable to or less than a dynamical time. The effect of such friction on the structure of the dark matter is small. If galaxies and dark matter both have the same initial specific energy, and if each galaxy loses a fraction $k$ of its initial specific energy through dynamical friction, the specific energy of the dark matter is boosted by a factor $\left(1\,+\,k M_{gal}/M_{DM}\right)$. In rich clusters, galaxies typically account for perhaps 6\% of the total mass. If baryons make up $30\%$ of clusters --- more than suggested by analyses of their mean properties (Evrard 1997) --- then $M_{gal}/M_{DM} \simeq 0.085$. In this extreme case, even if galaxies lose as much as 25\% of their specific energy through dynamical friction, the effect on the dark matter is only 2\%. Integrating over the fits to the two--fluid and ejection ensembles' mean dark matter density profiles confirms that the total and specific energy differences between the two are less than a couple of percent. Such an energy gain by the dark matter is insignificant, and at any rate may be swamped by the energy lost to heating the ICM through the varying gravitational potential during collapse and relaxation. However, while the effect upon the dark matter should be weak even if the galaxies lose a large fraction of their kinetic energy, the actual magnitude of the effect upon the galaxies is unclear. A possible signal of dynamical friction is the presence of velocity bias in the cluster, $b_v \!<\! 1$, where $b_v\,=\,\sigma_{gal}/\sigma_{DM}$ is the ratio of galaxy to dark matter velocity dispersions. We examine the evolution of $b_{v}$ for our simulated clusters, constructing velocity dispersions by averaging over all the galaxies or dark matter within $r_{170}$, in three dimensions.
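A minimal sketch of this measurement, assuming the galaxy and dark matter velocities within $r_{170}$ have already been gathered into $(N,3)$ arrays (hypothetical names), is:
\begin{verbatim}
import numpy as np

def velocity_bias(v_gal, v_dm):
    """b_v = sigma_gal / sigma_DM from three-dimensional
    velocities within r_170."""
    def sigma(v):
        dv = v - v.mean(axis=0)        # remove the bulk motion
        return np.sqrt(np.mean(np.sum(dv ** 2, axis=1)))
    return sigma(v_gal) / sigma(v_dm)
\end{verbatim}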
For individual clusters, the instantaneous value of $b_{v}$ undergoes strong fluctuations depending upon the dynamical state of the cluster at that time. Even so, the value of the bias parameter is only slightly above unity (up to $1.05$) for brief periods, and for only a few runs. We attempt to average out the noise associated with individual clusters by showing in Figure~\ref{fig:velbias_evol2} the evolution of $b_{v}$ averaged in each time bin over the entire ensemble of ejection runs, over the six most massive runs, and the eight least massive runs. In all cases, velocity bias is clearly present. The ensemble--averaged bias parameter, time--averaged over the period from $z\,=\,0.1$ to the present, is 0.84. This value agrees well with an independent determination of $b_v$ made by Frenk {\it et al.\ } (1996) using a self--consistent treatment for galaxy formation within the cluster. The curves imply a mass--dependence to the degree of velocity bias, in the sense that more massive clusters are less strongly affected. This is consistent with dynamical friction arguments, where the braking effect of the dark matter background is more efficient in low velocity dispersion environments. There is no evidence for a continued decay in the velocity bias parameter, as would result from dynamical friction. However, close examination of the $b_v$ evolution curves for individual clusters shows that they can often be described by a moderate decay, followed by a jump in velocity dispersion. The jumps occur when additional galaxies fall into the virialized volume, boosting the velocity dispersion with their infall velocities. With such complicated evolution, it is unclear whether dynamical friction is actually taking place. The observational status of velocity bias in clusters is unclear, primarily because $\sigma_{DM}$ is, of course, not directly measurable. If we define $\beta_{DM}$ as the ratio of specific energies of the dark matter and gas, \begin{equation} \label{eq:betadmdef} \beta_{DM}\,=\,\frac{\sigma_{DM}^{2}}{\left(\frac{kT}{\mu m_{p}}\right)}, \end{equation} and if the velocity dispersion for cluster galaxies determined from observations does not suffer from anisotropies and projection effects (and these simulations suggest that it would), then $\beta_{spec}$, the spectroscopic value determined from cluster galaxies, should be related to $\beta_{DM}$ through the velocity bias parameter, \begin{equation} \beta_{spec}\,=\,\frac{\sigma_{GAL}^{2}}{\left(\frac{kT}{\mu m_{p}}\right)} \,=\,b_{v}^{2} \, \beta_{DM}. \end{equation} If the specific kinetic energy in dark matter and thermal energy in cluster gas are both faithful representations of the cluster potential well depth, then $\beta_{DM}$ should equal unity. In this case, the determination of $\beta_{spec}$ for a cluster would allow determination of its velocity bias parameter. This approach was taken by Lubin \& Bahcall (1993), who examined an ensemble of clusters and calculated the average value of $\beta_{spec}$ for the ensemble, with the intent of eliminating dependence on dynamical state through the average. They found $\left<\beta_{spec}\right>\,=\,0.97\pm0.04$, which suggests that little or no velocity bias is present. However, this result is subject to the validity of the assumptions noted above. Their sample of clusters demonstrated a correlation between velocity dispersion and temperature, $\sigma_{GAL}\propto T^{0.6\pm0.1}$. 
This result was confirmed by Bird, Mushotzky \& Metzler (1995), who found $\sigma_{GAL}\propto T^{0.61\pm0.13}$ for a sample of clusters explicitly corrected for the effects of substructure. Girardi {\it et al.\ } (1996) also obtained a similar result, using an independent analysis designed to minimize the effects of velocity anisotropies. While consistent with $\sigma_{GAL}\propto T^{0.5}$, the power law more strongly suggested by the data implies that $\beta_{spec}$ is temperature dependent. This means that any average value of $\beta_{spec}$ taken from a sample of clusters will depend on the temperature distribution of the sample, making its interpretation unclear. Furthermore, when following the evolution of an individual cluster, excursions in both $\beta_{DM}$ and $\beta_{spec}$ can occur as a result of mergers. Finally, the assumption that $\beta_{DM}\,=\,1$ implicitly assumes that upon infall, cluster gas thermalizes very efficiently, and retains little or no energy in macroscopic motions. Perfect thermalization is not seen in simulations; a small fraction of residual kinetic energy in the gas is routinely found. A comparison of 11 gas dynamic codes applied to a single cluster realization yields a mean and standard error $\beta_{DM} \!=\! 1.16 \pm 0.03$ (Frenk {\it et al.\ } 1997). Heating of cluster gas through energy input from galaxies drives $\beta_{DM}$ to lower values, but with several effects pushing values larger, a modest velocity bias could still be present. It should also be noted that the mass--dependence of velocity bias noted above pushes in the direction of a relation steeper than the virial prediction $\sigma_{GAL}\propto T^{0.5}$. In this sense, observational data on the $\sigma$--$T$ relation are {\it consistent with} the presence of velocity bias. \section{The Intracluster Medium} \subsection{Hydrostatic Equilibrium } The sound crossing time in cluster gas defines a timescale for the gas to respond to acoustic disturbances. For an isothermal, $\gamma\,=\,5/3$ gas, and with parameters on the low end of rich clusters, this timescale is \begin{equation} t_{cross}\,=\,\frac{r_{170}}{c_s}\,=\, 2.0\left(\frac{r_{170}}{1{\rm\ Mpc}}\right) \left(\frac{T}{10^7\,{\rm K}}\right)^{-0.5} {\rm\ Gyr}. \end{equation} For an $\Omega\,=\,1$, $h\,=\,0.5$ cosmogony, a lookback time of $2.0$ Gyr corresponds to a redshift of 0.12. Since X--ray clusters have been seen to much higher redshift (\hbox{\it cf.}\/ Bower {\it et al.\ } 1994; Castander {\it et al.\ } 1994), it seems reasonable to expect that much of the gas in clusters should be in hydrostatic equilibrium. Because the temperature scales with radius as $T \! \propto \! r_{170}^2$ (EMN), the above timescale is independent of cluster size. Figure~\ref{fig:gas_vrprof} shows a profile of the gas radial Mach number for the two--fluid runs. The ejection runs are very similar --- the main difference being a modest reduction in infall velocities --- so we do not show them here. The velocities are measured with respect to the velocity of the center--of--mass of the dark matter distribution, as in Figure~\ref{fig:dm_vrprof}. Again, as in that figure, the multiple--cluster run (20e) displays a weak infall region. The rest of the curves show infall Mach numbers, averaged over radial shells, that reach a magnitude of at most 1.2. Internal to $r_{170}$, radial motions of the gas are quite weak. There is, however, some modest infall of the gas occurring at $r_{170}$, where $\langle v_r/c_s \rangle \simeq 0.2$.
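For concreteness, a profile of this kind can be sketched from particle data as follows; the input arrays (gas positions and velocities relative to the dark matter center of mass, and temperatures, all in cgs units), the bin edges, and $\mu = 0.59$ are assumptions for illustration.
\begin{verbatim}
import numpy as np

K_B, M_P = 1.381e-16, 1.673e-24    # erg/K, g
GAMMA, MU = 5.0 / 3.0, 0.59        # assumed mean molecular weight

def radial_mach_profile(pos, vel, temp, edges):
    """Shell-averaged v_r / c_s for gas particles."""
    r = np.linalg.norm(pos, axis=1)
    v_r = np.sum(pos * vel, axis=1) / r            # radial component
    c_s = np.sqrt(GAMMA * K_B * temp / (MU * M_P)) # sound speed
    mach, idx = v_r / c_s, np.digitize(r, edges)
    return np.array([mach[idx == i].mean()         # bins assumed populated
                     for i in range(1, len(edges))])
\end{verbatim}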
This modest infall at $r_{170}$ is what prompted EMN to suggest $r_{500}$ as a conservative estimate of the hydrostatic boundary of clusters. Within $r_{500}$ there are no significant radial motions of gas in either ensemble. Mass weighted mean values of the radial Mach number within $r_{500}$ quoted by EMN are $-0.022 \pm 0.022$ and $0.001 \pm 0.016$ for the 2F and EJ ensembles, respectively. The gas is in hydrostatic balance within $r_{500}$, and very near to it within $r_{170}$. \subsection{ICM Density Profiles } The gas density profiles for the individual members of both ensembles are shown in scaled fashion in Figure~\ref{fig:gas_denprof_sc_new}. Like the dark matter, the density profiles of the 2F runs display remarkable similarity outside $0.2 r_{170}$. Although the mean profile (bold line in the figure) drops two orders of magnitude in this regime, variation about the mean is limited to $\mbox{$^{<}\hspace{-0.24cm}_{\sim}$}\, 20\%$. Dispersion in the central gas densities is much higher, about a factor of 3 larger than the central variation in the dark matter in Figure~\ref{fig:dm_denprof_sc}. This difference may be physical, originating from shocks and sonic disturbances in the gas which are absent in the dark matter. Care must be taken, however, since the spatial scales involved are quite close to the minimum hydrodynamic smoothing in the experiments. Higher resolution models will be able to address this issue. For now, we note that the cluster with the most diffuse central gas (60c) has a rather violent formation history, involving strong merger activity at low redshift. The impact of ejection on the gas density structure is dramatic. The gas in the EJ runs is much less centrally concentrated than that of the 2F ensemble. At $0.1 r_{170}$, the average density is depressed by over a factor of 3. Power law fits to the mean gas density profiles in the overdensity range $100\,\leq\,\rho_{gas}/\left(\Omega_b\rho_c\right)\,\leq\,3000$ (the overdensity range fitted for the dark matter) produce logarithmic slopes of $-1.75$ (EJ) and $-2.34$ (2F). The difference in the mean profile values is driven primarily by low temperature clusters with ejection. Self--similarity across the mass spectrum probed by the experiments is strongly broken in the EJ ensemble; there is a systematic change in ICM structure between low and high mass clusters. Direct evidence for this is shown in Figure~\ref{fig:alpha_T}, where we plot values of $\alpha_{GAS}$ from fits to the standard profile, Equation~\ref{eq:betaden}, against the mean, mass weighted cluster temperature $T$ within $r_{170}$ for both ensembles. Mean values of $\alpha_{GAS}$ for the two ensembles are listed in Table~5, along with means for clusters hotter and cooler than 4 keV. The 2F models show no apparent trend with temperature, whereas the EJ clusters tend to smaller values of $\alpha_{GAS}$, meaning more extended gas distributions, at lower $T$. The trend in $\alpha_{GAS}$ with $T$ exhibited by the models with galactic winds agrees well with the observed behavior of $\beta_{fit}$ with $T$ (Mohr \& Evrard 1997) and appears consistent with semi--analytic treatments of galactic wind input (Cavaliere, Menci \& Tozzi 1997). The larger extent of the gas in the EJ clusters results from the work done by the wind energy dumped into these systems. The trend with temperature results from the fact that the work done in small clusters represents a larger fraction of their overall energy budget. We next consider the energetics of the ICM.
\subsection{Energetics and Temperature Profiles } Figure~\ref{fig:T_M170} shows the mass weighted temperature within $r_{170}$ against mass $M_{170}$ within that radius for the two ensembles. In contrast to the density structure, the striking aspect of the $T-M$ relation is its relative lack of sensitivity to galactic feedback. The 2F ensemble is well fit by the solid line $T_{2f}(M) = 4.0 (M/10^{15} {\rm M}_\odot)^{2/3} {\rm\ keV}$, while the EJ ensemble has a slightly shallower slope ($0.62$) and modest ($\mbox{$^{<}\hspace{-0.24cm}_{\sim}$}\, 20\%$) upward displacement within the range of total masses explored. The dotted line in Figure~\ref{fig:T_M170} shows the expectation for the ejection run temperature $T_{ej}$ based on assuming the wind energy is thermalized and retained within $r_{170}$. In this case, energy accounting yields (White 1991) \begin{equation} T_{ej}(M) \ = \ T_{2f}(M) \ + \ f_{wind} \, T_{wind} \label{Tej_T2f} \end{equation} where $T_{2f}(M)$ is the relation from pure infall, $f_{wind}$ is the ICM gas fraction injected by winds and $T_{wind}$ is the effective wind temperature defined in \S 2. The models display no systematic trend of $f_{wind}$ with temperature, so for the purpose of illustration we use a constant value $f_{wind} \!=\! 0.22$. The expected temperatures exceed the measured values over all masses, considerably so at the low mass end. The wind energy is not retained as heat in the ICM. Rather, it is used to do work in effectively lifting the gas within the dark matter dominated potential. To substantiate this statement, we calculate an estimate of the work done on the gas in each run by comparing the final states of gas in each 2F/EJ realization pair. Since the dark matter which dominates the mass distribution is nearly identical in the two runs, we can make an ``instantaneous'' approximation of the work done by integrating the change in gravitational potential energy associated with lifting a gas element from its final radius in the 2F realization to its final radius in the EJ realization. Summing, in a Lagrangian fashion, over radially ordered gas mass shells (taking into account the small reduction in gas mass due to galaxies) produces an estimate of the total work required to perturb the 2F gas distribution into the EJ configuration for each cluster. This estimate of the work required can be compared against a similarly approximate, ``instantaneous'' estimate of the wind energy input by galaxies within $r_{170}$, $E_{inp} \!=\! \frac{3}{2}\,(M_{gal}/\mu m_p)\, kT_{wind}$, where $M_{gal}$ is the galaxy mass within $r_{170}$. Figure~\ref{fig:work_Mvir2f_paper}a shows the result of this exercise. The agreement between these two ``instantaneous'' measures is quite good for most of the clusters. There is a systematic trend apparent; the slope of the points is evidently steeper than unity. We do not fully understand the cause of this steepening, but speculate that it may be connected to the difference in formation histories discussed in \S 3. Given the approximate nature of this calculation --- assuming a static potential well when, in reality, heating of the gas occurred within the evolving potential over nearly a Hubble time --- it is perhaps surprising that the agreement for most clusters is as good as it is. In poor clusters, the estimated work done can exceed the total thermal energy of the cluster gas affected, as shown in Figure~\ref{fig:work_Mvir2f_paper}b.
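The lifting calculation can be sketched schematically as follows; the potential is that of the NFW mass profile of \S III, shells are matched between the 2F and EJ realizations by their enclosed--mass rank, and the names and unit conventions below are assumptions rather than the exact procedure used.
\begin{verbatim}
import numpy as np

G = 4.302e-9   # Mpc (km/s)^2 / Msun (assumed unit convention)

def nfw_potential(r, lam, r170, m170):
    """Phi(r) = -G M_170 ln(1 + r/(lam r170)) / (mu(1/lam) r),
    with mu(s) = ln(1+s) - s/(1+s)."""
    mu1 = np.log(1.0 + 1.0 / lam) - 1.0 / (1.0 + lam)
    return -G * m170 / mu1 * np.log(1.0 + r / (lam * r170)) / r

def work_estimate(m_shell, r_2f, r_ej, lam, r170, m170):
    """Work to lift radially ordered Lagrangian gas shells from
    their final 2F radii to their final EJ radii, in the fixed
    dark-matter potential; result in Msun (km/s)^2."""
    phi = lambda r: nfw_potential(r, lam, r170, m170)
    return np.sum(m_shell * (phi(np.sort(r_ej)) - phi(np.sort(r_2f))))
\end{verbatim}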
For systems with total mass $M_{170} \,\mbox{$^{<}\hspace{-0.24cm}_{\sim}$}\, 3 \times 10^{14} {\rm M}_\odot$ ($T \,\mbox{$^{<}\hspace{-0.24cm}_{\sim}$}\, 4 {\rm\ keV}$), the work estimate is comparable to, and in a few cases exceeds, the thermal energy of the gas. Given the magnitude of the input energy, it is remarkable how little net thermal heating occurs, as displayed in Figure~10. How much is the internal temperature structure affected by winds? Figure~\ref{fig:gas_Tprof_sc_new} shows the temperature profiles for the members of both ensembles at $z\,=\,0.02$, scaled by a fiducial virial temperature, \begin{equation} T_{170}\,=\,\frac{\mu m_p}{k}\,\frac{GM_{170}}{2r_{170}} \,=\,7.57\times 10^6 \left(\frac{r_{170}}{1{\rm\ Mpc}}\right)^2 \left(1\,+\,z\right)^3\,K. \end{equation} There are clear structural similarities in the temperature profiles of the two samples; both display approximately isothermal behavior within half the virial radius followed by a drop to about half the central value at $r_{170}$. There is a fair amount of dispersion at small radii and evidence for a modest central temperature inversion in some of the 2F systems. Such a temperature inversion may be expected from the shape of the dark matter velocity dispersion profiles; the density profile is shallower than $r^{-2}$ at small radii. There are some structural differences as well. The EJ profiles are slightly ($\sim \! 25\%$) hotter in the central regions than the 2F models. This offsets the lowered density in these models and maintains hydrostatic balance. The central pressures in the EJ runs are smaller by factors of $2$--$3$ than their 2F counterparts, but the thermal pressure gradient supporting the gas ($-\nabla P/\rho$) is similar in the models. One observable consequence of ejection is a slightly steeper temperature profile, a feature for which there appears to be some empirical support from ASCA observations of clusters (Markevitch 1997). Note that constructing temperature profiles by averaging over spherical shells can mask significant structure in the temperature distribution. For example, EMN present in their Figure~1 a highly irregular projected temperature map for a simulated cluster which, when averaged over three--dimensional shells or two--dimensional annuli, appears to have an isothermal temperature profile. Finally, we turn to the energetics of the ICM with respect to the dark matter. Figure~\ref{fig:betatruehists} shows the distribution of values of $\beta_{DM}$, defined by Equation~(\ref{eq:betadmdef}), measured within $r_{170}$ for the two ensembles. We also construct a second set of values for $\beta_{DM}$ for each ensemble, where the temperature is replaced by a total specific energy, \begin{equation} \label{eq:teff} T\,\rightarrow\,T\,+\,\frac{\mu m_p \sigma_{gas}^2}{k}, \end{equation} to take into account the bulk motions (including rotation) and residual velocity dispersion of the gas caused by mergers and infall. The figure shows average values for $\beta_{DM}$ for each data set. Because most of the wind energy goes into redistributing the gas, the shift in the mean $\beta$ values between the EJ and 2F models is only $10\%$, comparable to the effect of including gas kinetic energy. Note also that if $\beta_{DM}$ correlates with cluster temperature, as may be true for the observed $\beta_{spec}$, then our average values of $\beta_{DM}$ depend on the cluster sample used. The important point here is not the exact values of these averages, but the relationships between them.
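A sketch of this bookkeeping (cgs inputs and $\mu = 0.59$ are assumed) is:
\begin{verbatim}
import numpy as np

K_B, M_P, MU = 1.381e-16, 1.673e-24, 0.59   # cgs; assumed mu

def beta_dm(sigma_dm, temp, sigma_gas=0.0):
    """beta_DM = sigma_DM^2 / (k T_eff / mu m_p), where
    T_eff = T + mu m_p sigma_gas^2 / k credits the residual
    bulk motions of the gas.  Velocities in cm/s, T in K."""
    t_eff = temp + MU * M_P * sigma_gas ** 2 / K_B
    return sigma_dm ** 2 * MU * M_P / (K_B * t_eff)
\end{verbatim}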
\medskip \subsection{Iron Abundances and Abundance Gradients} The gas ejected by our ``galaxies'' is metal enriched. The distribution of metals in the ICM of our simulated clusters can thus be examined and compared with observations. Although predictions for the metal distribution are a feature unique to the ejection models, the predictions themselves are not unique, but depend on the choice of ejection history, as shown in Paper I. We noted above that the gas distribution is more extended than the dark matter distribution in the ejection runs, with $-1.75$ as the best--fit power law slope at overdensities $100\leq \rho_{gas}/\left(\Omega_{b}\rho_c\right)\leq 3000$. Meanwhile, in Figure~\ref{fig:galnumdenprofs}, we saw that the values of $\alpha_{GAL}$ from fits to the galaxy number density profiles for individual runs were typically significantly higher, with only a small overlap in range of values. The gas is thus considerably more extended than the galaxy distribution. In Paper I, we showed how this can lead to an abundance gradient; the ejected, enriched gas traces the galaxies, and thus has a gradient with respect to primordial gas. Figure~\ref{fig:Feprof} shows that such a gradient is generally seen in the runs from about $x\,=\,0.15$ out to the virial radius. The flat metallicity profiles at smaller radii are not induced by any gravitational or hydrodynamic force resolution effects. Since resolution limits flatten the gas density profile at a larger radius than the galaxies, we would expect this to actually steepen the gradients near the center. Instead, the flattening is due to the mixing of metals at the time of gas particle ejection, which effectively acts as a diffusive term. Many gas particle ejections take place at radii below $0.1$ $r_{170}$, and the core gas consequently undergoes many such mixing events. This was noted in Paper I (Figure 5b), where we showed that a model with no mixing exhibited a steeper abundance profile at small radius. An ejection model different from the one used here, in which the metal enrichment took place at earlier times, was also shown in Paper I to exhibit a flatter central abundance profile; this occurs because the difference in profiles between galaxies and gas was not yet large when the enrichment took place. Data on abundance gradients from real clusters are only now becoming available through observations with the ASCA satellite. These data suggest that clusters sometimes have gradients and sometimes do not, and that poorer systems are more likely to show evidence for an abundance gradient (Mushotzky 1994; Xu {\it et al.\ } 1997). Splitting our sample into high--mass and low--mass subsets reveals no substantial difference in their gradients, as shown by the dotted and dashed lines in Figure~\ref{fig:Feprof}. Recall that the abundances in Figure~\ref{fig:Feprof} are scaled by the wind abundance. If the wind abundance is constant in time, its absolute value will not affect the shape of the abundance gradient. Similarly, a change in the total quantity of mass blown out of galaxies will not change the shape of the gradients, as long as the ejection rate remains flat. What {\em will} affect the gradient shape is if the metallicity of the wind or the ejection rate from galaxies varies with time.
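For reference, a mass--weighted abundance profile of the kind plotted in Figure~\ref{fig:Feprof} can be sketched in a few lines; the array names are hypothetical and the shells are assumed populated.
\begin{verbatim}
import numpy as np

def abundance_profile(r, m_gas, z_gas, edges, z_wind=1.0):
    """Mass-weighted ICM metallicity in radial shells,
    scaled by the wind abundance z_wind."""
    idx = np.digitize(r, edges)
    prof = [np.sum(m_gas[idx == i] * z_gas[idx == i]) /
            np.sum(m_gas[idx == i])
            for i in range(1, len(edges))]
    return np.array(prof) / z_wind
\end{verbatim}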
If the observations continue to support weak abundance gradients in rich clusters, then the discrepancy between these simulations and the observations implies that an ejection model in which the enrichment and heating took place predominantly at early times would be more appropriate. By late times, the gradient between galaxies and gas that we see in our simulations should be present in both low--mass and high--mass clusters. Late--time enrichment should thus result in abundance gradients for both mass ranges. However, since low--mass systems form earlier in hierarchical structure models, early--ejection models would enrich the ICM when a galaxy/gas gradient is in place for low--mass clusters, but not for the high--mass clusters we see at low redshift. This would explain such an observational distinction. There is some evidence from observations of Abell 370 ($z\,=\,0.37$) that enrichment takes place primarily at early times (Bautz {\it et al.\ } 1994); the case for this is strengthened by the ASCA detection of significant quantities of neon, silicon, and other heavy elements in the ICM (Mushotzky {\it et al.\ } 1996) in quantities which imply injection of Type II supernova enriched gas rather than Type I. This, in turn, implies that the feedback took place early in the lifetime of cluster galaxies. We await additional ASCA and upcoming AXAF data, as well as simulations with ejection based on well motivated star formation histories, to clarify this issue. \section{A Model for ICM Structure and the Core Radius Question } The traditional formalism used in describing the cluster gas distribution is the hydrostatic isothermal $\beta$--model, reviewed in the Appendix. However, it is somewhat disconcerting that the fundamental assumptions upon which the $\beta$--model is based are probably wrong. In particular, despite the presence of cores in X--ray images, there is no evidence for the presence of a core in cluster potentials examined through gravitational lensing observations, and simulations of clusters in both CDM and scale--free cosmologies show no core in the underlying dark matter distribution. The success of the model in fitting X--ray surface brightness profiles may be due to having three free parameters in the fitting function, when there are three basic features to fit in the data --- an amplitude, a scale length to define curvature, and a large--radius slope. In addition, the choice of cluster center is often adjusted to provide the best fit, introducing additional degrees of freedom. The success of the $\beta$--model profile function in modelling the gas distribution need not require that its underlying assumptions about the form of the potential be valid. A second model can be constructed from what we learned in the previous section on the dark matter density. The one--parameter form introduced by NFW provides a good fit to the dark matter mean density profile over all resolved radii; integrating to find the dark mass within a given radius, we have \begin{equation} \label{eq:nfwmass} M_{DM}\left(< x\right)\,=\, 4\pi\rho_c r_{170}^3\Delta\lambda^3 \left[\ln\left(1\,+\,\frac{x}{\lambda}\right)\,-\, \left(\frac{x}{\lambda}\right)\left(1\,+\,\frac{x}{\lambda}\right)^{-1}\right] \end{equation} where we recall that $x\,=\,r / r_{170}$, the radius of interest as a fraction of our fiducial virial radius. This gives the mass within a scaled radius $x$, and thus within a physical radius $r\,=\,x r_{170}$.
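Both the integral constraint on $\Delta$ quoted in \S III and this enclosed--mass formula are simple to evaluate numerically; a minimal sketch:
\begin{verbatim}
import numpy as np

def mu(s):
    # NFW mass shape factor: mu(s) = ln(1 + s) - s / (1 + s)
    return np.log(1.0 + s) - s / (1.0 + s)

def delta_of_lambda(lam):
    # integral constraint: mean contrast of 170 inside x = 1
    return 170.0 / (3.0 * lam ** 3 * mu(1.0 / lam))

def m_dm(x, lam, rho_c, r170):
    # Equation (eq:nfwmass): dark mass within scaled radius x
    return (4.0 * np.pi * rho_c * r170 ** 3
            * delta_of_lambda(lam) * lam ** 3 * mu(x / lam))

# delta_of_lambda(0.154) -> ~1.35e4, the Delta ~ 13600 quoted in Sec. III
\end{verbatim}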
If we make the assumption that the dark mass essentially determines the cluster potential (approximately true in our simulations where the baryon fraction of the simulation volume is 10\%), we can rewrite this expression as \begin{eqnarray} M\left(<x\right)\,=\,M_{170} \frac {\left[\ln\left(1\,+\,\frac{x}{\lambda}\right)\,-\, \left(\frac{x}{\lambda}\right)\left(1\,+\,\frac{x}{\lambda}\right)^{-1}\right]} {\left[\ln\left(1\,+\,\frac{1}{\lambda}\right)\,-\, \left(1\,+\,\lambda\right)^{-1}\right]}, \end{eqnarray} where $M_{170}\,=\,\frac{4\pi}{3}r_{170}^3\times170\rho_c$ is the mass within $r_{170}$. For an isothermal gas in hydrostatic equilibrium in the potential defined by this mass profile, we then have \begin{equation} \frac{1}{\rho_{g}}\frac{d\rho_{g}}{d\left(\frac{x}{\lambda}\right)} \,=\,-K\left(\frac{x}{\lambda}\right)^{-2} \left[ \ln\left(1\,+\,\frac{x}{\lambda}\right) \,-\, \left(\frac{x}{\lambda}\right)\left(1\,+\,\frac{x}{\lambda}\right)^{-1}\right] \end{equation} where $\rho_g$ is the local gas density, and the constant $K$ is \begin{eqnarray} K & = & 4\pi G \rho_c r_{170}^2\left(\frac{kT}{\mu m_{p}}\right)^{-1} \Delta\lambda^2 \\ & = & \frac{G}{\lambda r_{170}}\left(\frac{kT}{\mu m_p}\right)^{-1} \frac{M_{170}}{\left[\ln\left(1\,+\,\frac{1}{\lambda}\right)\,-\, \left(1\,+\,\lambda\right)^{-1}\right]} \\ \label{eq:Kbeta} & \simeq & \frac{2 \beta_{DM}}{\lambda\left[\ln\left(1\,+\,\frac{1}{\lambda}\right)\,-\, \left(1\,+\,\lambda\right)^{-1}\right]}, \end{eqnarray} where here we have written $\sigma_{DM}^{2}\,\simeq\,GM_{170}/2r_{170}$. In the absence of any significant post--infall heating or cooling of the intracluster gas, we expect the gas temperature to reflect the potential well depth, and thus $\beta_{DM}\,=\,1$. Note that even without such additional physics, we do not expect $\beta$ to equal unity locally, because we have assumed the gas to be isothermal, while we have shown earlier that this potential generates velocity dispersion profiles which are not strictly isothermal. However, $K$ depends only on the global value; we consider the impact of such local deviations upon the global value to be small, and make this assumption in the interests of simplicity. Integrating over a range in radii from $x_{1}$ to $x$ gives \begin{equation} \ln\left(\frac{\rho_g\left(x\right)}{\rho_g\left(x_{1}\right)}\right) \,=\, K\left(\frac{x}{\lambda}\right)^{-1}\left[ \ln\left(1\,+\,\frac{x}{\lambda}\right) \,-\, \left(\frac{x}{x_1}\right)\ln\left(1\,+\,\frac{x_1}{\lambda}\right) \right]. \end{equation} Taking the reference point to the cluster center ($x_{1}\rightarrow 0$, in which limit $\left(x/x_{1}\right)\ln\left(1\,+\,x_{1}/\lambda\right)\rightarrow x/\lambda$) gives \begin{equation} \label{eq:nfwgasden} \rho_g\left(x\right)\,=\, \rho_g\left(0\right)\, \exp\left\{ K\left[ \left(\frac{x}{\lambda}\right)^{-1}\ln\left(1\,+\,\frac{x}{\lambda}\right)\,-\,1 \right] \right\}. \end{equation} Clusters in this two--component model are essentially a two--parameter family, determined by $\lambda$ and $T$. The latter sets $K$ through Equation~(\ref{eq:Kbeta}). Given $K$, the normalization $\rho_g\left(0\right)$ is determined by the total gas content of the cluster. In a cosmological setting, the shape parameters $\lambda$ and $T$ are not formally independent, as they are both related to the cluster mass. This implies that clusters are essentially a one--parameter family, with internal structure determined by their mass. However, one must be cautious not to oversimplify the picture.
First, as noted before, there is considerable scatter in values of $\lambda$ at fixed mass --- Tormen, Bouchet \& White (1997) quote a factor of 2 at the $2\sigma$ level --- which presumably arises from particular differences in clusters' dynamical histories. Also, scatter in the relation between temperature and mass of $10$--$20\%$ arises from mergers (EMN), and other heating and cooling mechanisms may increase this scatter. Finally, the gas is not exactly isothermal, as assumed in the model. Nevertheless, we present this model, if only as an alternative to the standard $\beta$--model. A clear advantage over the latter is that the potential assumed in deriving Equation~\ref{eq:nfwgasden} is motivated by direct simulation of hierarchical clustering. Like the $\beta$--model profile, Equation~\ref{eq:nfwgasden} has zero logarithmic derivative at the origin. At the $\beta$--model profile's core radius, the density has dropped to $\rho_{GAS}\left(0\right)/(2^{3\alpha_{GAS}/2})$, or slightly less than half its central value for observed values of $\beta$. If we define a core radius $x_{1/2}$ for our density profile as the radius at which the gas density drops to half its central value, we have \begin{equation} \label{eq:nfwgascore} \left(\frac{x_{1/2}}{\lambda}\right)^{-1} \ln\left(1\,+\,\frac{x_{1/2}}{\lambda}\right) \,=\,1\,-\,\frac{\ln 2}{K}. \end{equation} This is a transcendental equation for $x_{1/2}$ as a function of $\lambda$. For the value of $\lambda\,=\,0.154$ from the fit to the mean dark matter profile, and for an approximate $\beta_{DM}\,=\,1.17$ for the two--fluid ensemble, we have $K\,=\,13.2$, which in turn implies $x_{1/2}\,=\,0.11\lambda\,=\,0.017$. For a cluster with a $3{\rm\ Mpc}$ virial radius, this corresponds to a core radius for the gas of $51{\rm\ kpc}$, too low to compare favorably with observations. Alternatively, using the definition of $K$ earlier, a core radius $x_{1/2}\,\simeq\,0.1$ (corresponding to gas core radii of 100--400${\rm\ kpc}$) implies $\lambda\,\simeq\,0.6$, much larger than is seen in the simulations. Figure~\ref{fig:gasdenprofs5} shows the ICM mean density profile for the ensemble. The large dispersion in central values is reflected in the large error bars for the central points. The vertical lines denote the values of the central SPH smoothing length for the members of the ensemble. Also shown is a fit to Equation~\ref{eq:nfwgasden}. The success of the fitting function is quite striking, but the best--fit $\lambda\,=\,0.26$ is inconsistent with the best--fit $\lambda\,=\,0.154$ extracted from the mean dark matter profile. For contrast, the profile for $\lambda\,=\,0.154$ is shown as a dashed line; the normalization for this curve is set by the average baryon fraction within $x \!=\! 1$ for the ensemble. The core radius $x_{1/2}$ produced by the best--fit profile with $\lambda\,=\,0.26$ is $0.034$, still too small compared to observations, although better than the curve inferred from the dark matter distribution. In Section III, we established that the NFW functional form provided a good description of the dark matter profile. Earlier in this section, we showed that the gas in the simulations can be thought of as isothermal and in hydrostatic equilibrium. And yet here, the gas density profile that theoretically should be a simple consequence of these facts is found to conflict with the gas density profile in the simulations.
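The transcendental equation is easily solved numerically; the sketch below reproduces the numbers quoted above (the bracketing interval is an assumption that comfortably contains the root).
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

def mu(s):
    return np.log(1.0 + s) - s / (1.0 + s)

def x_half(lam, beta_dm):
    """Solve (x/lam)^-1 ln(1 + x/lam) = 1 - ln2/K for the
    half-density radius, with K = 2 beta_DM / (lam mu(1/lam))."""
    K = 2.0 * beta_dm / (lam * mu(1.0 / lam))
    f = lambda s: np.log(1.0 + s) / s - (1.0 - np.log(2.0) / K)
    return lam * brentq(f, 1.0e-6, 50.0)

# x_half(0.154, 1.17) -> ~0.017, i.e. ~51 kpc for a 3 Mpc virial radius
\end{verbatim}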
Why does the model fall short with the gas, when it succeeds with the dark matter and when the assumptions about the gas upon which it is based seem valid? To answer this, we first note that the shape of Equation~(\ref{eq:nfwgasden}) depends both on $\lambda$ and $K$, which is determined by the temperature $T$ through the hydrostatic equation. Figure~\ref{fig:betatruehists} indicates that post--infall thermalization of cluster gas is incomplete and that modest residual bulk motions exist in the gas. In this case, the gas temperature alone slightly underestimates the degree of pressure support. Incorporating these motions into an effective temperature, Equation~(\ref{eq:teff}), leads to a decrease in $K$ and a gas distribution which is more extended than that derived from the thermal temperature alone. From Figure~\ref{fig:betatruehists}, using $\beta_{DM}\,=\,1.03$ instead of $\beta_{DM}\,=\,1.17$ results in the dash--dotted curve shown in Figure~\ref{fig:gasdenprofs5}. (Again, the normalization comes from forcing the baryon fraction to the average value for the ensemble). At large radius, the agreement with data from the simulations is quite good. Since bulk motions {\it should} be included as a source of support, this curve is the one of interest. Our question has therefore changed: why does the theoretical prediction based on the potential and the complete energy budget of the ICM overestimate the central gas density when constrained to accurately describe the gas density at large radius? Why is the gas more extended in the simulations than is predicted? To address this, we fit the gas density profiles of individual two--fluid runs to the $\beta$--profile and extracted core radii. The top row of Figure~\ref{fig:gasden_corehists} shows the distribution of core radii obtained, the relation between the resulting values of $r_c$ and the corresponding clusters' virial radii, and between $r_c$ and the numerical parameters $\epsilon$ (the gravitational softening) and $h_{cen}$, the SPH smoothing length at the cluster center. Of principal importance is the latter. The cluster core radius resulting from a fit to the gas density is typically three times the central value of the SPH smoothing length. This is the approximate effective width of the smoothing kernel used in the simulations; when volume--weighted, the kernel drops to 10\% of its maximum value at 2.2$h$; it drops to 1\% at 2.8$h$ and 0.1\% at 3.2$h$. We argue therefore that the theoretical model, when the entire ICM energy budget is considered, provides a good description of the large--radius behavior of the gas density in the simulations. Its failure to compare well to the simulations at smaller radii largely reflects the fact that the gas density cores seen in the simulations are principally numerical in origin. Since the hydrodynamical resolution in these simulations is comparable to most other studies, and since no physics beyond shock heating exists in such simulations to raise the adiabat of cluster core gas, we suggest that gas cores seen in these other studies are also numerical in origin. Support for this position is found in recent numerical studies by Anninos \& Norman (1996), which were not able to converge to a well--defined gas density core as resolution improved. It remains possible that physical effects contribute to the difference between the model and simulated core profiles. 
In particular, deviations from isothermality and small amounts of core rotational support are present in the simulated clusters but not in the analytic model. On the right--hand side of Figure~\ref{fig:gasdenprofs5}, we compare the NFW gas density profile to the mean profile for the ejection ensemble. Once again, the best--fit does an excellent job of reproducing the mean profile, but with a scale length of $\lambda\,=\,0.329$, much larger than the scale in the potential. The dashed and dash--dotted lines are as before; thus, the dash--dotted line contains the bulk motions of the gas indicated in Figure~\ref{fig:betatruehists}, and is the true prediction of the theoretical model. Once again, the core in the real density profile is more pronounced than that of the model, although not as severely as for the two--fluid runs. The ICM in the ejection runs has experienced post--infall heating, slightly raising ICM gas temperatures over the pure infall runs, and resulting in values of $\beta_{DM}$ typically decreased by 8\%. This translates into a more extended gas distribution. However, as is seen in the figure, the heating provided by our ejection model is not sufficient alone to account for the ICM cores we see in the simulation runs; numerical effects must still be important. This is illustrated by the vertical lines on the figure, which mark the value of $h_{cen}$ for the various members of the ensemble; again, $3h_{cen}$ approximately marks the range of interest, where deviation from the theoretical prediction begins to be important. The bottom row of Figure~\ref{fig:gasden_corehists} shows that the cores still have a lower limit of $3 h_{cen}$, but now extend to larger radii as well. Numerics still dominate the core radii in the ejection ensemble, but the entropy introduced by winds is beginning to play a role. We conclude that the model employed here is not sufficient to generate the depressed central densities and cluster core radii necessary to compare to real systems. We could have predicted this result. Equation~\ref{eq:nfwgascore} gives the dependence of the core radius in this model on the parameter $K$. For $\lambda\,=\,0.154$, correcting $K$ by a factor of $1.03/1.17\,=\,0.88$ (the ratio of values of $\beta$, or of temperatures) merely raises the size of the core radius by 10\%. While additional sources of heating raise the temperature, thus lowering $K$ and thus increasing $x_{1/2}$, the increase we need is extremely large. To raise $x_{1/2}$ to values comparable to real clusters (\hbox{\it i.e.}\ $x_{1/2}\,=\,0.1$), the temperature must be boosted by a factor of three or four; such a model is physically implausible and observationally unjustified. We remain without an explanation of the gas cores of real clusters. The cores predicted by the analytic model, based upon the application of the NFW profile to our simulated mass distributions, appear too small to explain those observed in real clusters. One possibility is that the mass density profile for real clusters is typically shallower than indicated by the NFW profile here, in the sense that it approaches $r^{-2}$ at a larger radius than occurs in these simulations. This would be the case if the parameter $\lambda$, the scale radius of the NFW model in units of $r_{170}$, were significantly larger than is seen here, as may be the case for CDM models with lower values of $\Omega_{o}$ (Navarro, Frenk \& White 1997).
Another possibility, however, is that additional physics, such as magnetic fields, provides an important source of support at small radius. This view is bolstered by measurements of Faraday rotation in X--ray clusters (Taylor, Barton \& Ge 1994; Ge \& Owen 1994) and comparisons of X--ray and lensing mass measurements for cluster core regions (Loeb \& Mao 1994; Miralda-Escude \& Babul 1995). Finally, winds with energy output concentrated at early times may raise the central entropy to a higher level than that seen in the models used here, and this may be sufficient to generate core radii of the requisite scale. Accurate modelling of these effects awaits future simulations. \section{Relative Structure and the Baryon Fraction} We summarize in Table 6 what we have learned about the relative extents of gas and dark matter by showing the results of power law fits to the outer slope of the mean ensemble density profile for each fluid in the two ensembles. For the dark matter, we do not quote separate values for the two ensembles as their outer slopes are not significantly different. Small number statistics do not allow a comparable fit for the galaxies; the range of values quoted in Table 6 comes from Figure 5. We can show their relative extent more clearly by displaying in Figure~\ref{fig:relextents_new} the {\it cumulative} density measure --- that is, the fraction of the virial mass in each component found within a given radius. This figure illustrates that the gas, with or without feedback, is more extended than the dark matter, and that the difference is significantly enhanced by the energy input from galactic winds. At small radii, the dark matter in the ejection ensemble is typically slightly more extended than the corresponding dark matter in the two--fluid runs; however, the difference is quite small. Finally, the galaxy profile is more centrally concentrated than any of the other components. The half--mass radii for each of the fluids display this hierarchy. As a consequence of the extended nature of the gas distribution, the mean, enclosed baryon fraction is reduced relative to the global value $\Omega_b / \Omega_0$ at radii interior to the virial radius. In the EJ ensemble, the ICM mass fraction is further reduced by the baryons incorporated into galaxies. The amplitude of this reduction within radii encompassing density contrasts $\delta_c \!=\! 170$ and 500 is shown in Figure~\ref{fig:fb_T_paper}, where we plot the normalized, local baryon fraction of each component, defined by $\Upsilon_X \!=\! f_X (\Omega_b/\Omega_0)^{-1}$ with $f_X \!=\! M_X(\delta_c)/M_{tot}(\delta_c)$ the mass fraction of component $X$ (gas, galaxies, total baryons) within the radius encompassing density contrast $\delta_c$. The ensemble without ejection displays modest diminutions, $\langle \Upsilon \rangle \!=\! 0.94$ within $r_{170}$ and $0.91$ within $r_{500}$, with no apparent trend with cluster size. The slight differences in formation history between low and high mass systems, which produce the modest structural differences discussed in \S III, are not apparent in the behavior of gas mass fraction with temperature. The ensemble with ejection exhibits markedly different behavior. Although the galaxy mass fraction is insensitive to cluster size, the ICM mass fraction is noticeably reduced in lower temperature systems, particularly those with $T \,\mbox{$^{<}\hspace{-0.24cm}_{\sim}$}\, 4 {\rm\ keV}$. The magnitude of the effect depends on the region under consideration. At $\delta_c \!=\!
170$, there is a $30\%$ fractional drop (from $\Upsilon_{gas} \!=\! 0.75$ to $0.55$) between 10 and 1 keV, whereas the effect is nearly a factor of two drop at $\delta_c \!=\! 500$. The larger effect at higher densities or smaller radii reflects the difference in ICM structure between low and high $T$ objects displayed in Figure~\ref{fig:alpha_T}; lower temperature clusters in the ensemble have less centrally concentrated gas distributions. At smaller radii or higher density contrasts, the disparity in gas fractions between low and high $T$ clusters increases. For high temperature clusters ($T \,\,\mbox{$^{>}\hspace{-0.24cm}_{\sim}$} \,\, 4 {\rm\ keV}$), the total baryon fraction within $r_{170}$ is nearly unaffected by galactic winds. There are, however, modest structural differences in the gas distribution within this radius, as indicated from the reduction in $\alpha_{gas}$ from values $\sim \! 0.89$ to $\sim \! 0.79$ (Table~5). The work associated with the wind energy injected into rich clusters is thus used to redistribute gas within $r_{170}$, but causes little or no ``spillover'' at this radius. This is not the case for the low temperature clusters, where the considerable wind energy results in a more dramatic redistribution of the gas extending beyond $r_{170}$. Still, the diminution of the total baryon fraction is not catastrophic, even at low temperatures. A crude fit to the total baryon fraction $\Upsilon(T)$ with $T$ (in keV) for the EJ ensemble yields \begin{equation} \label{eq:fb_T_fit} \Upsilon(T) \ \simeq 1 - A T^{-2/3} \end{equation} where $A \! \simeq \! 0.3$ at $\delta_c \!=\! 170$ and $A \! \simeq \! 0.4$ at $\delta_c \!=\! 500$. These are shown as solid lines in Figure~\ref{fig:fb_T_paper}. We stress that the exact form and magnitude of $\Upsilon(T)$ is likely to be sensitive to the specific wind model employed. One must look for empirical and/or additional theoretical support to justify a particular model. Our implementation is successful at reproducing the slope of the observed X--ray size--temperature relation (Mohr \& Evrard 1997), and this may indicate that our wind implementation is well calibrated, in terms of its energetic or entropic effects. This provides some reason to be optimistic that the predicted form for $\Upsilon(T)$ in Equation~\ref{eq:fb_T_fit} may apply to real clusters. Future efforts employing more realistic wind ejection histories are needed to clarify this issue. \section{Summary} We investigate the structure of clusters in a CDM universe using multi--component, numerical simulations spanning a factor of 50 in cluster mass. Dark matter, intracluster gas and galaxies are included in a set of models designed to explore the role of feedback of mass and energy into the ICM. Two principal assumptions made for the galaxy population are: (i) galaxies form at the locations of peaks in the initial density field, and (ii) galaxies lose half of their initial mass via winds at a flat rate from $z=4.5$ to the present. The self--similar form for CDM cluster density profiles seen in earlier works (NFW2; Metzler 1995; Cole \& Lacey 1996; Tormen, Bouchet \& White 1997) provides an excellent description of the dark matter distribution in these simulated clusters. The degree of central concentration increases with decreasing cluster mass, as expected from these earlier works and from the formation history of objects in hierarchical clustering models.
The characteristic scale radius for this profile appears at $0.1$--$0.2$ $r_{170}$ in these simulations; however, Navarro, Frenk \& White (1997) caution that the location of the scale radius is dependent upon the assumed value of the density parameter. The two--fluid models, lacking winds, exhibit a self--similar gas distribution. At large radii, the gas density profile matches well the expectation from hydrostatic equilibrium and the self--similar profile observed for the dark matter. The agreement at smaller radii is difficult to determine because of resolution limits. The introduction of winds raises the gas entropy above levels achieved by gravitational infall and thus produces a more extended gas distribution within the dark matter dominated potential. The effect of winds is strongest on low--mass clusters, where the energy input by winds is comparable to the total thermal energy of the gas. Thus, the self--similarity seen in the gas distribution of the two--fluid models is broken with the introduction of winds, with the gas in low temperature clusters being more strongly affected than their high temperature counterparts. The energy input through winds is primarily spent in redistributing the gas within the potential; the effect on the gas temperatures is slight. Thus, the introduction of winds does not seem to affect the relation between ICM temperature and mass. This, in turn, supports the use of mass estimators that assume gas temperature to be simply related to potential well depth. However, the dependence of this result upon the wind model chosen remains to be determined. Detailed temperature information from X--ray satellites, combined with independent mass estimates from weak lensing analysis, can probe this relation in real clusters. Galaxies are more centrally concentrated than the dark matter in these simulations. Part of this effect arises simply because the initial distribution of galaxies is more concentrated than the dark matter --- an artifact of considering overdense peaks as likely sites for galaxy formation. However, in addition, a persistent velocity bias between galaxies and dark matter is present in the ejection ensemble; the degree of bias correlates with cluster mass in a manner consistent with the expectations of bias induced by dynamical friction. As the gas is more extended than the dark matter, while the galaxies are more concentrated, a gradient between galaxies and gas exists. As a result, metal abundance gradients are generic to the ejection models. However, this result is sensitive to the specific wind model used; the gradient is reduced if winds and metal enrichment occur only at high redshift. The strength of the metal abundance gradients seen in this work shows no dependence on cluster mass, which may be in contradiction with observations. The relative ordering of extents of the three fluids simulated is, however, consistent with present observations. While energy input from feedback depresses central gas densities and causes a more extended gas distribution, the effect is not significant enough to explain the cluster gas density cores observed in X--ray images. While cores of appropriate size are present in these simulations, they appear predominantly numerical in origin. Possible sources of X--ray cores include additional sources of support such as magnetic fields, or a change to the cluster density profile such as is expected for low--$\Omega$ cosmologies (Navarro, Frenk \& White 1997).
An ejection history featuring vigorous, early winds may also lead to larger cores, since the central gas entropy could be raised above that seen in the experiments presented here. Since the low--temperature clusters simulated experience the strongest effect upon the gas density profile, estimates of the baryon fraction should be taken from high--temperature clusters whenever possible. The local baryon fraction in 10 keV clusters is approximately 90\% of the global value and is insensitive to winds, while losses approaching a factor of two in the gas fraction can occur in the interior parts of low--temperature clusters. This research suggests at least two directions for future numerical experiments. One is to consider the simplified evolution of gas that is assumed to be isentropic at some high redshift but is allowed to change its adiabat through shock heating and (optionally) radiative cooling at later times. Systematically varying the initial adiabat would mimic the effect of abrupt wind input of varying strength at high redshift, and would allow structural issues, such as the generation of core radii, to be addressed. Another approach is to increase the spatial and mass resolution in the experiments and add appropriate physics to allow galaxy formation to be modeled self--consistently within forming clusters. This is the long--term goal of such cosmological simulations, but it remains a formidable task because of the uncertainties in modeling star formation on galactic scales, the inherent complexity of the dynamical system involved, and the large parameter space (physical and numerical) associated with the problem. There is much yet to be gained from simple models, but the problem cannot be considered ``solved'' until the latter approach is complete. \acknowledgments CAM would like to thank Tina Bird, Mary Crone, Richard Mushotzky, Julio Navarro, and Gordon Squires for useful discussions. This work was supported by the NASA Astrophysics Theory Program through grants NAGW--2367 and NAG5-2790. CAM also would like to acknowledge support from a Rackham Predoctoral Fellowship and a Sigma Xi Grant--in--Aid of Research at the University of Michigan, and the Department of Energy and NASA Grant NAG5-2788 at Fermilab. AEE acknowledges support from the CIES and CNRS during a sabbatical stay at the Institut d'Astrophysique in Paris.
\section{Introduction and summary} Theories with extended supersymmetry have provided a rich testing ground for the study of many phenomena in field theory and string theory \cite{SW}. This paper deals with $N=2$ supergravity in four-dimensional spacetimes, whose coupling to a number of matter multiplets has been worked out in considerable detail. The best-known matter multiplets are the vector multiplet and the hypermultiplet. Off shell, a vector multiplet comprises $8+8$ bosonic and fermionic degrees of freedom \cite{grimm}. The off-shell representation of hypermultiplets \cite{fayet} is more subtle. To remain off shell, one can either choose a formulation with (off-shell) central charges based on $8+8$ degrees of freedom and ensuing constraints, or one must accept a description based on an infinite number of degrees of freedom. For the latter description, harmonic superspace provides a natural setting \cite{harmonic}. The tensor multiplet \cite{DWVH}, which is dual to a massless hypermultiplet, also comprises $8+8$ off-shell degrees of freedom. All three multiplets describe $4+4$ physical degrees of freedom. There are two other matter multiplets describing the same number of physical states. First there is the vector-tensor multiplet \cite{sohnius,DWKLL}, which is dual to a vector multiplet. Secondly, there exists a double-tensor multiplet, which contains two tensor gauge fields and which is dual to a hypermultiplet. The off-shell representation of both these multiplets requires the presence of off-shell central charges and their superconformal formulation requires the presence of background fields. All multiplets appear naturally in the context of four-dimensional $N=2$ supersymmetric compactifications of the three ten-dimensional supergravity theories, and may therefore play a role in the effective low-energy actions associated with appropriate string compactifications. For instance, the vector-tensor multiplet can be associated with the supermultiplet of vertex operators of four-dimensional heterotic $N=2$ supersymmetric string vacua, which contains the operators of the axion-dilaton complex together with an extra vector gauge field. Similarly, the tensor and the double-tensor multiplet would appear in the context of type-IIA and type-IIB string compactifications. At the level of the four-dimensional effective actions, the latter multiplets are usually converted into vector multiplets and hypermultiplets, which, at least in string-perturbation theory, yields an equivalent description. We should stress that this conversion rests on a purely on-shell equivalence. The question of whether certain off-shell configurations are preferred by string theory has a long history (for a recent discussion, see \cite{siegel}). At any rate, not every system of vector multiplets or hypermultiplets can be converted back into vector-tensor, tensor or double-tensor multiplets, so there are certain restrictions (see, e.g., \cite{siebel}). Furthermore, recent experience with dual systems, for instance in the context of three spacetime dimensions \cite{3dmirror}, has taught us that the answer to these questions involves nonperturbative issues. While in \cite{DWKLL} the vector-tensor multiplet was introduced, motivated by heterotic string perturbation theory, it has meanwhile turned out that vector-tensor multiplets have a different role to play and emerge in heterotic compactifications at the nonperturbative level.
This phenomenon was initially described in the context of six-dimensional heterotic string compactifications, where it turned out that certain singularities in the effective action were associated with noncritical strings becoming tensionless \cite{SeiWit}. In six dimensions this is related to the presence of tensor multiplets. In four dimensions, vector-tensor multiplets play a similar role \cite{LSTY}. Couplings of the vector-tensor multiplet appear in two varieties. One type of coupling could be of six-dimensional origin \cite{FerMinSag}. The origin of the second coupling is less clear. For recent discussions on the couplings of six-dimensional tensor multiplets, we refer to \cite{BSS}. In two previous publications \cite{vt1,vt2} we have developed the coupling of the vector-tensor multiplet in the context of the superconformal multiplet calculus. So far we have restricted ourselves to rigid supersymmetry and we constructed all possible couplings to a general (background) configuration of vector multiplets that are invariant under rigid scale and chiral U(1) transformations. The requirement of scale and chiral invariance forces the scalar fields of the vector multiplet to act as compensators. In the context of rigid supersymmetry this feature does not represent a restriction: at the end one can always freeze the vector multiplets to constants, thereby causing a breaking of scale invariance. In the case of local supersymmetry it is more subtle to freeze the vector multiplets. The reason for insisting on rigid scale and chiral invariance is that, in this form, one can rather straightforwardly incorporate the coupling to supergravity by employing the superconformal multiplet calculus \cite{DWVHVP,DWLVP}. In this paper, we report on the results of the extension to local supersymmetry. We give a comprehensive treatment of matter couplings to $N=2$ supergravity. For the vector-tensor multiplet we make use of a formulation based on a finite number of off-shell degrees of freedom, which employs off-shell central charges. These degrees of freedom are described by one vector gauge field, one rank-two tensor gauge field, two real scalar fields, one of them auxiliary, and a doublet of Majorana spinors. In this formulation the gauge field associated with the central charge is known to exhibit rather peculiar couplings \cite{zachos,DWVHVP}. Recently, another specific example of such a coupling was studied in \cite{BD}. In \cite{vt2} we established the existence of two different vector-tensor multiplets. Their difference is encoded in the Chern-Simons couplings between the vector and tensor gauge fields, whose form is constrained by supersymmetry. One version, first discussed in \cite{vt1}, is characterized by the fact that the vector and tensor gauge field of the vector-tensor multiplet exhibit a direct Chern-Simons coupling. This leads to unavoidable nonlinearities (in terms of the vector-tensor multiplet fields) in the action and transformation rules. This theory is formulated with at least one abelian vector multiplet, which provides the gauge field for the central-charge transformations. When freezing this vector multiplet to a constant, we obtain a vector-tensor multiplet with a self-interaction.
It takes the form (after a suitable rescaling of the fields) \begin{eqnarray}\label{vt-selfinteraction} {\cal L}&\propto& \ft12 \phi (\partial_\mu\phi)^2 + \ft14 \phi (\partial_\mu V_\nu-\partial_\nu V_\mu)^2 +\ft34 \phi^{-1} \Big(\partial_{[\mu}B_{\nu\rho]}- V_{[\mu}\partial_\nu V_{\rho]}\Big)^2 \nonumber \\ && +\ft12 \phi\, \bar \l^i {\stackrel{\leftrightarrow}{\hbox{\ooalign{$\displaystyle\partial$\cr$/$}}}} \l_i -2\phi\, (\phi^{({\rm z})})^2- \ft14 i\Big(\varepsilon^{ij} \bar \l_i\sigma^{\mu\nu} \l_j - {\rm h.c.}\Big) \, (\partial_\mu V_\nu - \partial_\nu V_\mu) \nonumber\\ && -\ft1{24} \phi^{-1} \bar \l_i \gamma^\mu \gamma^\nu\gamma^\rho \l^i \Big(\partial_{[\mu} B_{\nu\rho]} - V_{[\mu} \partial_\nu V_{\rho]} \Big) + \ft3{16} \phi^{-1} \Big(\bar\l^i\gamma_\mu\l_i\Big)^2 \nonumber\\ && + \ft1{32} \phi^{-1}\Big((\varepsilon^{ij} \bar\l_i\sigma_{\mu\nu}\l_j)^2 + {\rm h.c.} \Big) \,, \end{eqnarray} where we have included an auxiliary field, $\phi^{({\rm z})}$. We will comment on its role in due course. The second version of the vector-tensor multiplet, which requires more than one abelian vector multiplet, avoids the direct Chern-Simons coupling between the vector and tensor field of the vector-tensor multiplet, but there are nonvanishing Chern-Simons couplings with the additional vector multiplets. In this case the action remains quadratic in terms of the vector-tensor multiplet fields. Recently a number of papers have appeared dealing with the superspace formulation of vector-tensor multiplets \cite{GHH,HOW,DK,BHO,DK2}. Most of this work concerns the linear version of the vector-tensor multiplet with its corresponding Chern-Simons couplings, which can be obtained by dimensional reduction from six dimensions \cite{BSS}. Unfortunately, even in the framework of harmonic superspace, it turns out that it is not possible to avoid an explicit central charge with corresponding constraints \cite{DK}. On the other hand, the complexity of our results clearly demonstrates the need for a suitable superspace formulation. For rigid supersymmetry, the self-interaction \eqn{vt-selfinteraction} has been derived recently in harmonic superspace \cite{DK2,IS}. This paper is organized as follows. In section~2 we present a survey of the superconformal multiplet calculus and establish our notation. In section~3 we introduce the vector-tensor multiplet and discuss its superconformal transformation rules. Section~4 contains the derivation of the locally supersymmetric actions for vector-tensor multiplets. In section~5 we discuss their dual version in terms of vector multiplets. A number of useful formulae have been collected in an appendix. \setcounter{equation}{0} \section{Superconformal Multiplet Calculus} Off-shell formulations of supergravity theories can be described in a form that is gauge equivalent to a superconformal theory. In four spacetime dimensions this enables a relatively concise organization of the field content. It also allows the systematic construction of supersymmetric Lagrangians via techniques known collectively as multiplet calculus. In this section we review these concepts for the case of $N=2$ theories to make this paper self-contained and to establish our notation. Most of the material presented here is known (see, e.g., \cite{DWVHVP,DWLVP}). One is interested, firstly, in identifying irreducible representations of the superconformal algebra.
The relevant algebra contains general-coordinate, local Lorentz ($M$), dilatation ($D$), special conformal ($K$), chiral SU(2) and U(1), supersymmetry ($Q$) and special supersymmetry ($S$) transformations. The most conspicuous aspects of this algebra are that it involves field-dependent structure `constants' and that only a subset of the gauge fields are realized as independent fields. To be specific, the gauge fields associated with general-coordinate transformations ($e_\mu^a$), dilatations ($b_\mu$), chiral symmetry (${\cal V}^{\,i}_{\mu\, j}, A_\mu$) and $Q$-supersymmetry ($\psi_\mu^i$), are realized by independent fields. The remaining gauge fields of Lorentz ($\o^{ab}_\mu$), special conformal ($f^a_\mu$) and $S$-supersymmetry transformations ($\phi_\mu^i$) are dependent fields. Their form is determined by a set of covariant constraints. The identification of the appropriate constraints and the precise commutator relations is nontrivial \cite{DWVHVP}. Of primary interest is the Weyl multiplet, which is the representation consisting of $24+24$ off-shell degrees of freedom, corresponding to the independent gauge fields associated with the superconformal algebra and three auxiliary fields: a Majorana spinor doublet $\chi^i$, a scalar $D$ and a selfdual Lorentz tensor $T_{abij}$ (where $i, j,\ldots$ are chiral SU(2) spinor indices)\footnote{% $T_{abij}$ is antisymmetric in both Lorentz indices $a,b$ and chiral SU(2) indices $i,j$. It is a selfdual Lorentz tensor and therefore complex. Its complex conjugate is the anti-selfdual field $T^{ij}_{ab}$. Obviously the tensor field transforms as a singlet under SU(2), but it transforms nontrivially under chiral U(1). Our conventions are such that SU(2) indices are raised and lowered by complex conjugation. The SU(2) gauge field ${\cal V}_\mu^{\;i}{}_j$ is antihermitean and traceless, i.e., ${\cal V}_\mu^{\;i}{}_j+{\cal V}_{\mu j}{}^{i}= {\cal V}_\mu^{\;i}{}_i=0$. We refer to the appendix for further details. }. When additional supermultiplets, such as vector multiplets or vector-tensor multiplets, are added to the superconformal theory, additional gauge symmetries may arise, which must be included in the algebra. We denote these extra symmetry transformations by $\d_{{\rm gauge}}$, which may incorporate central-charge transformations. The Weyl multiplet itself is invariant under $\d_{{\rm gauge}}$. The most important of the commutator relations that specify the algebra is the one between a pair of $Q$-supersymmetry transformations, given by \begin{equation} [\d_Q(\epsilon_1),\d_Q(\epsilon_2)]= \d^{(cov)}(\xi) +\d_M(\varepsilon)+\d_K(\Lambda_K)+\d_S(\eta)+\d_{{\rm gauge}} \,, \label{qqcomb}\end{equation} where $\d^{(cov)}, \d_M, \d_K$ and $\d_S$ denote a covariant general-coordinate transformation, a Lorentz transformation, a special conformal transformation, and an $S$-supersymmetry transformation. The associated parameters are given by the following expressions, \begin{eqnarray} \xi^\mu &=& 2\,\bar{\epsilon}_2^i\gamma^\mu\epsilon_{1i}+{\rm h.c.}\,, \nonumber\\ \varepsilon^{ab} &=& \bar{\epsilon}^i_1\epsilon^j_2\,T^{ab}_{ij}+{\rm h.c.}\,, \nonumber\\ \Lambda_K^a &=& \bar{\epsilon}^i_1\epsilon^j_2\, D_bT^{ba}_{ij} -\ft32\bar{\epsilon}_2^i\gamma^a\epsilon_{1i}\,D+{\rm h.c.}\,, \nonumber\\ \eta^i &=& 3\,\bar{\epsilon}^i_{[1}\epsilon^j_{2]}\,\chi_j \,, \label{qqparamsb}\end{eqnarray} where $D_b$ denotes the derivative that is covariant with respect to all the superconformal symmetries.
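For later use we record how the first term on the right-hand side acts: on a field $\Phi$ that transforms covariantly under all superconformal symmetries, the covariant general-coordinate transformation reduces (in the conventions of \cite{DWVHVP}) to \begin{equation} \d^{(cov)}(\xi)\,\Phi = \xi^\mu D_\mu\Phi\,, \end{equation} with $D_\mu$ the fully superconformally covariant derivative; on the gauge fields themselves its action is more involved.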
In the sequel we will occasionally need the commutator of an $S$- and a $Q$-supersymmetry variation\footnote{% To clarify our notation, for instance, $\bar{\eta}^i\epsilon_j-({\rm h.c.}\, ;\,{\rm traceless}) =\bar{\eta}^i\epsilon_j-\bar{\eta}_j\epsilon^i -\frac{1}{2}\d^i{}_{\!j}(\bar{\eta}^k\epsilon_k-\bar{\eta}_k\epsilon^k)$.} :% \begin{eqnarray} [\d_S(\eta), \d_Q(\epsilon)] &=& \d_M \Big( 2 \bar{\eta}^i\sigma^{ab}\epsilon_i + {\rm h.c.} \Big) + \d_D \Big( \bar{\eta}_i \epsilon^i + {\rm h.c.} \Big) + \d_{\rm U(1)} \Big( i\bar{\eta}_i \epsilon^i + {\rm h.c.} \Big) \nonumber\\ && + \d_{\rm SU(2)} \Big( -2 \bar{\eta}^i \epsilon_j -({\rm h.c.}\,;\, {\rm traceless}) \Big) \,. \label{SQcomm} \end{eqnarray} Given the $S$-supersymmetry variations one may compute the special conformal boosts from the commutator \begin{equation} [\d_S(\eta_1),\d_S(\eta_2)] = \d_K (\Lambda^a_K) \,,\quad \mbox{ with } \L^a_K = \bar \eta_{2i}\gamma^a\eta^i_1 + {\rm h.c.}\,. \end{equation} Poincar\'e supergravity theories are obtained by coupling the Weyl multiplet to additional superconformal multiplets containing Yang-Mills and matter fields. The resulting superconformal theory then becomes gauge equivalent to a theory of Poincar\'e supergravity. This is conveniently exploited by imposing gauge conditions on certain components of the extra superconformal multiplets. Subsequently one can eliminate the auxiliary superconformal fields. The additional multiplets are necessary to provide compensating fields and to overcome a deficit in degrees of freedom between the Weyl multiplet and the minimal field representation of Poincar\'e supergravity. For instance, the graviphoton, represented by an abelian vector field in the Poincar\'e supergravity multiplet, is provided by an $N=2$ superconformal vector multiplet. In the following subsections we briefly describe the Weyl multiplet, vector multiplets, hypermultiplets and linear multiplets. \vspace{.1in} \subsection{The Weyl multiplet} We already specified the fields belonging to the Weyl multiplet. The Weyl and chiral weights and the fermion chiralities of the Weyl-multiplet fields, the composite connections, and also those of the supersymmetry transformation parameters, are shown in table 2.1. The Weyl and chiral weights, $w$ and $c$, govern the transformation of a generic field under dilatations and U(1) transformations according to \begin{equation} \phi(x) \longrightarrow \exp[ w \,\L_{\rm D} (x) + i c\,\L_{\rm U(1)}(x) ]\,\phi(x)\,. 
\end{equation} Here we summarize the transformation rules for the independent fields under $Q$- and $S$-super\-symmetry and under $K$-transformations, \begin{eqnarray} \d e_\mu{}^a &=& \bar{\epsilon}^i\gamma^a\psi_{\mu i}+{\rm h.c.}\,, \nonumber\\ \d\psi_\mu^i &=& 2{\cal D}_\mu\epsilon^i -\ft14 \sigma\cdot T^{ij}\gamma_\mu\epsilon_j -\gamma_\mu\eta^i \,,\nonumber\\ \d b_\mu &=& \ft12\bar{\epsilon}^i\phi_{\mu i} -\ft34\bar{\epsilon}^i\gamma_\mu\chi_i -\ft12\bar{\eta}^i\psi_{\mu i}+{\rm h.c.} +\L_K^a\,e_\mu^a \,,\nonumber\\ \d A_\mu &=& \ft{1}{2}i \bar{\epsilon}^i\phi_{\mu i} +\ft{3}{4}i\bar{\epsilon}^i\gamma_\mu\chi_i +\ft{1}{2}i \bar{\eta}^i\psi_{\mu i}+{\rm h.c.}\,, \nonumber\\ \d {\cal V}_{\mu\,j}^i &=& 2\bar{\epsilon}_j\phi_\mu^i-3\bar{\epsilon}_j\gamma_\mu\chi^i+2\bar{\eta}_j\psi_\mu^i -({\rm h.c.} \, ; \, {\rm traceless})\,, \nonumber\\ \d T_{ab}^{ ij} &=& 8 \bar{\epsilon}^{[ i} \hat{R}_{ab}(Q)^{j]}\,, \nonumber\\ \d\chi^i &=& -\ft16\sigma\cdot\hbox{\ooalign{$\displaystyle D$\cr$\hspace{.03in}/$}} T^{ij}\epsilon_j +\ft{1}{3}\hat{R}({\rm SU(2)})^i{}_{j}\cdot\sigma\epsilon^j -\ft{2}{3}i \hat{R}({\rm U(1)})\cdot\sigma\epsilon^i \nonumber\\ && +D\,\epsilon^i +\ft16\sigma\cdot T^{ij}\eta_j \,,\nonumber\\ \d D &=& \bar\epsilon^i\hbox{\ooalign{$\displaystyle D$\cr$\hspace{.03in}/$}}\chi_i+{\rm h.c.}\,, \label{transfo4}\end{eqnarray} where ${\cal D}_\mu$ are derivatives covariant with respect to Lorentz, dilatational, U(1) and SU(2) transformations, and $D_\mu$ are derivatives covariant with respect to {\it all} superconformal transformations. Both ${\cal D}_\mu$ and $D_\mu$ are covariant with respect to the additional gauge transformations associated with possible gauge fields of the matter multiplets. The quantities $\hat{R}_{ab}(Q), \hat{R}_{ab}({\rm U(1)})$ and $\hat{R}_{ab}({\rm SU(2)})^i_{\,j}$ are supercovariant curvatures related to $Q$-supersymmetry, U(1) and SU(2) transformations. Their precise definitions are given in the appendix. The gauge fields for Lorentz, special conformal, and $S$-supersymmetry transformations are denoted $\omega_\mu^{ab}$, $f_\mu^a$, and $\phi_\mu^i$, respectively. These are composite objects, which depend in a complicated way on the independent fields (see the appendix). Under supersymmetry and special conformal boosts they transform as follows, \begin{eqnarray} \d\omega_\mu^{ab} &=& -\bar{\epsilon}^i\sigma^{ab}\phi_{\mu i} -\ft12\bar{\epsilon}^iT^{ab}_{ij}\psi_\mu^j +\ft32\bar{\epsilon}^i\gamma_\mu\sigma^{ab}\chi_i \nonumber\\ && +\bar{\epsilon}^i\gamma_\mu\hat{R}^{ab}(Q)_i -\bar{\eta}^i\sigma^{ab}\psi_{\mu i} + {\rm h.c.} + 2\L_K^{[a}\,e_\mu^{b]} \,, \nonumber\\ \d\phi_\mu^i &=& -2f_\mu^a\gamma_a\epsilon^i -\ft14\hbox{\ooalign{$\displaystyle D$\cr$\hspace{.03in}/$}} T^{ij}\cdot\sigma\gamma_\mu\epsilon_j +\ft32\big[(\bar{\chi}_j\gamma^a\epsilon^j)\gamma_a\psi_\mu^i -(\bar{\chi}_j\gamma^a\psi_\mu^j)\gamma_a\epsilon^i\big] \nonumber\\ & & +\ft{1}{2}\hat{R}({\rm SU(2)})^i{}_{j} \cdot\sigma\gamma_\mu\epsilon^j +i\hat{R}({\rm U(1)})\cdot\sigma\gamma_\mu\epsilon^i +2{\cal D}_\mu\eta^i +\L^a_K \gamma_a\psi_\mu^i \,,\nonumber\\ \d f_\mu^a &=& -\ft12\bar{\epsilon}^i\psi_\mu^j\, D_b T^{ba}_{ij} -\ft34e_\mu{}^a\bar{\epsilon}^i\hbox{\ooalign{$\displaystyle D$\cr$\hspace{.03in}/$}}\chi_i -\ft34\bar{\epsilon}^i\gamma^a\psi_{\mu i}\,D \nonumber\\ & & +\bar{\epsilon}^i\gamma_\mu D_b\hat{R}^{ba}(Q)_i +\ft12\bar{\eta}^i\gamma^a\phi_{\mu i}+ {\rm h.c.} +{\cal D}_\mu\L^a_K \,.
\end{eqnarray} \begin{figure} \begin{center} \begin{tabular}{|c||cccccccc|ccc||cc|} \hline & \multicolumn{11}{c||}{Weyl multiplet} & \multicolumn{2}{c|}{parameters} \\ \hline \hline field & $e_\mu{}^a$ & $\psi_\mu^i$ & $b_\mu$ & $A_\mu$ & ${{\cal V}_\mu}^i{}_j$ & $T_{ab}^{ij}$ & $\chi^i$ & $D$ & $\omega_\mu^{ab}$ & $f_\mu{}^a$ & $\phi_\mu^i$ & $\epsilon^i$ & $\eta^i$ \\[.5mm] \hline \hline $w$ & $-1$ & $-\ft12$ & $0$ & $0$ & $0$ & $1$ & $\ft32$ & $2$ & $0$ & $1$ & $\ft12$ & $-\ft12$ & $\ft12$ \\[.5mm] \hline $c$ & $0$ & $-\ft12$ & $0$ & $0$ & $0$ & $-1$ & $-\ft12$ & $0$ & $0$ & $0$ & $-\ft12$ & $-\ft12$ & $-\ft12$ \\[.5mm] \hline $\gamma_5$& & $+$ & & & & & $+$ & & & & $-$ & $+$ & $-$ \\[.5mm] \hline \end{tabular}\\[.13in] \parbox{5.7in}{Table 2.1: Weyl and chiral weights ($w$ and $c$, respectively) and fermion chirality ($\gamma_5$) of the Weyl multiplet component fields and of the supersymmetry transformation parameters.} \end{center} \end{figure} \subsection{The vector multiplet} The $N=2$ vector multiplet transforms in the adjoint representation of a given gauge group. For each value of the group index $I$, there are $8+8$ component degrees of freedom off-shell, including a complex scalar $X^{I}$, a doublet of chiral fermions $\Omega_i^{\,I}$, a vector gauge field $W_\mu^{\,I}$, and a real $SU(2)$ triplet of scalars\footnote{% The real triplet $Y_{ij}^{\,I}$ satisfies $Y_{ij}^{\, I}=Y_{ji}^{\,I}$ and $Y_{ij}^{\, I}=\varepsilon_{ik}\varepsilon_{jl}Y^{kl\,I}$.} % $Y_{ij}^{\,I}$. The Weyl and chiral weights and the fermion chirality of the vector-multiplet component fields are listed in table 2.2. Under $Q$- and $S$-supersymmetry these transform as follows, \begin{eqnarray} \d X^{I} &=& \bar{\epsilon}^i\Omega_i^{\,I} \,,\nonumber\\ \d\Omega_i^{\,I} &=& 2\hbox{\ooalign{$\displaystyle D$\cr$\hspace{.03in}/$}} X^{I}\epsilon_i +\varepsilon_{ij}\sigma\cdot {\cal F}^{I-}\epsilon^j +Y_{ij}^{\,I}\epsilon^j -2gf_{JK}{}^IX^{J}\bar{X}^{K}\varepsilon_{ij}\epsilon^j +2X^{I}\eta_i\,, \nonumber\\ \d W_\mu^I &=& \varepsilon^{ij}\bar{\epsilon}_i\gamma_\mu\Omega_j^{\,I} +2\varepsilon_{ij}\bar{\epsilon}^i\bar{X}^{I}\psi_\mu^j+ {\rm h.c.}\,, \nonumber\\ \d Y_{ij}^{\,I} &=& 2\bar{\epsilon}_{(i}\hbox{\ooalign{$\displaystyle D$\cr$\hspace{.03in}/$}}\Omega_{j)}^I +2\varepsilon_{ik}\varepsilon_{jl}\bar{\epsilon}^{(k}\hbox{\ooalign{$\displaystyle D$\cr$\hspace{.03in}/$}}\Omega^{l)\,I} -4gf_{JK}{}^I\, \varepsilon_{k(i}\Big(\bar{\epsilon}_{j)}X^J\Omega^{k\,K} -\bar{\epsilon}^k\bar{X}^J\Omega_{j)}^{\,K}\Big) \,, \label{vrules}\end{eqnarray} where $f_{JK}{}^I$ are the structure constants of the group, $[t_I,t_J]=f_{IJ}{}^K\,t_K$, and $g$ is a coupling constant. The field strengths ${\cal F}_{\mu\nu}^I$ are defined by \begin{equation} {\cal F}_{\mu\nu}^I= 2\partial_{[\mu}W_{\nu]}^I -g f_{JK}{}^I\, W_\mu^J W_\nu^K -\Big(\varepsilon_{ij}\bar{\psi}_{[\mu}^i\gamma_{\nu]}\Omega^{j\,I} +\varepsilon_{ij}\bar{X}^{I}\bar{\psi}_\mu^i\psi_\nu^j +\ft14\varepsilon_{ij}\bar{X}^{I}T^{ij}_{\mu\nu}+{\rm h.c.}\Big) \,. \label{calF} \end{equation} They satisfy the Bianchi identity \begin{equation} D^b\Big({\cal F}^{+I}_{ab} - {\cal F}^{-I}_{ab} +\ft14 X^I T_{ab\,ij} \varepsilon^{ij} -\ft14 \bar X^I T^{ij}_{ab} \varepsilon_{ij} \Big) = \ft34 \Big(\bar \chi^i\gamma_a \Omega^{Ij}\varepsilon_{ij} -\bar \chi_i\gamma_a \Omega_j^I\varepsilon^{ij} \Big)\,. 
\end{equation} Under supersymmetry they transform as follows, \begin{equation} \d{\cal F}^{I}_{ab}= -2 \varepsilon^{ij}\bar{\epsilon}_i\gamma_{[a}D_{b]}\Omega_j^{\,I} -2\varepsilon^{ij}\bar{\eta}_i\sigma_{ab}\Omega_j^{\,I} + {\rm h.c.}\,. \end{equation} The transformation rules (\ref{vrules}) satisfy the commutator relation (\ref{qqcomb}), including a field-dependent gauge transformation on the right-hand side, which acts with the following parameter \begin{equation} \theta^I=4\varepsilon^{ij}\bar{\epsilon}_{2i}\epsilon_{1j}\,X^I+ {\rm h.c.} \,. \label{vgauge} \end{equation} The covariant quantities of the vector multiplet constitute a so-called reduced chiral multiplet. A general chiral multiplet contains $16+16$ off-shell degrees of freedom and an arbitrary Weyl weight $w$ (corresponding to the Weyl weight of its lowest component). The covariant quantities of the vector multiplet may be obtained from a chiral multiplet with $w=1$ by the application of a set of reducibility conditions, one of which is the Bianchi identity. \vspace{.1in} \subsection{The hypermultiplet} A finite field configuration describing off-shell hypermultiplets must have a nontrivial central charge. This charge acts on a basic unit underlying $r$ hypermultiplets, which consists of $r$ quaternions $A_i^{\;\a}$ and $2r$ chiral fermions $\zeta^\a$. The Weyl and chiral weights and fermion chirality of these fields are listed in table 2.2.\footnote{% Our notation is such that the $2\times 2r$ matrix $A_i^{\;\a}$, with complex conjugate $A^i_{\;\a}$, satisfies the constraint $A^i_{\;\a} = \varepsilon^{ij}\rho_{\a\b}A_j^{\;\b}$ where, under certain conditions \cite{DWLVP}, $\rho_{\a\b}$ can be brought into block-diagonal form, $\rho ={\rm diag}(i\sigma_2, i\sigma_2,...)$. Solving this constraint reduces $A^i_{\;\a}$ to a sequence of $r$ quaternions $(q_1,...q_r)$, where each quaternion is represented by the $2\times 2$ matrix $q_a=q_a^{(0)}+iq_a^{(1)}\sigma_1+iq_a^{(2)}\sigma_2+iq_a^{(3)}\sigma_3$.} The index $\a$ runs from $1$ to $2r$. As the basic unit contains twice as many fermionic as bosonic components, it is necessary to assume the presence of an infinite number of such units. These multiple ``copies'', which will be distinguished by appending successive ``z'' indices to the fields, will be organized in a linear chain, such that the central charge maps each one of them into the next one. For instance, on $A_i{}^\a$ it acts as $\d A_i{}^\a = z A_i^{\;\a({\rm z})}$, where $z$ is the transformation parameter. Successive applications of the central charge thus generate an infinite sequence, \begin{equation} A_i{}^\a\longrightarrow A_i{}^{\a({\rm z})}\longrightarrow A_i{}^{\a({\rm zz})}\longrightarrow {\rm etcetera}\,, \label{hhier} \end{equation} and similarly for the fermionic fields\footnote{% A hierarchy such as (\ref{hhier}) arises naturally when starting from a five-dimensional supersymmetric theory with one compactified coordinate, but this interpretation is not essential.}.
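To make the five-dimensional picture alluded to in the preceding footnote slightly more explicit (a heuristic identification only, which plays no role in the construction below), one may think of the fields as depending on an additional compactified coordinate $x^5$ and of the central charge as differentiation with respect to it, \begin{equation} \d_z A_i{}^\a = z\,\partial_5 A_i{}^\a\,, \qquad\mbox{so that}\quad A_i{}^{\a({\rm z})}\sim\partial_5 A_i{}^\a\,,\quad A_i{}^{\a({\rm zz})}\sim\partial_5^{\,2} A_i{}^\a\,,\quad {\rm etcetera}\,. \end{equation}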
The supersymmetry transformation rules for the basic fields are summarized as follows, \begin{eqnarray} \d A_i{}^\a &=& 2\bar \epsilon_i{\zeta}^\a +2\rho^{\a\b}\varepsilon_{ij}\bar\epsilon^j {\zeta}_\b \,,\nonumber\\ \d\zeta^\a &=& \hbox{\ooalign{$\displaystyle D$\cr$\hspace{.03in}/$}} A_i{}^\a\epsilon^i +2X^0 A_i{}^{\a ({\rm z})} \varepsilon^{ij}\epsilon_j +2gX^\a{}_\b \, A_i{}^\b\varepsilon^{ij}\epsilon_j +A_i{}^\a\eta^i \,, \label{hrules}\end{eqnarray} where $X^0$ is the scalar component of a background vector multiplet which supplies the gauge field for the central charge, and $X^\a{}_{\b}$ is the scalar component of a Lie-algebra valued vector multiplet\footnote{ Our conventions are such that $X^\a_{\;\b}=X^I\,(t_I)^\a{}_{\;\b}$ and $\bar{X}^\a_{\;\b}=\bar{X}^I\,(t_I)^\a_{\;\b}$, where $t_I$ are the generators of the Lie algebra. Consistency requires that $(t_I)_\a^{\;\b} =-\rho_{\a\gamma}(t_I)^\gamma_{\;\eta}\rho^{\eta\b}$. A nontrivial action requires the existence of an hermitean tensor (which is not necessarily positive-definite, as one of the hypermultiplets may act as a compensator), which restricts the gauge group to a subgroup of (a noncompact version of) USp($2r$). }. % Central charge transformations commute with the supersymmetry transformations when acting on the hypermultiplet fields\footnote{% Later when we discuss the vector-tensor multiplet we will see that $[\d_z(z), \d_Q (\epsilon)]$ closes into the tensor and vector gauge transformations that are associated with the vector-tensor multiplet. The hypermultiplet is inert under the latter transformations. }. % It follows that the supersymmetry transformation for, e.g., $A_i^{\;\a({\rm z})}$ is obtained from (\ref{hrules}) by placing a ``z'' index onto the rule for $A_i^{\;\a}$. Similarly, the transformation rules for all fields higher in the hierarchy can be obtained from those corresponding to lower-lying fields by appending successive ``z'' indices. In order to close the supersymmetry algebra, an infinite number of constraints must be imposed. The fields $(A_i^{\;\a},\zeta^\a,A_i^{\;\a({\rm z})})$ are not affected by the constraints. As a result they constitute the fundamental $8r+8r$ degrees of freedom contained in the $r$ hypermultiplets. The constraints, which relate higher-$z$ elements of the central charge hierarchy to the fundamental degrees of freedom, are described by the following two relationships, \begin{eqnarray} \zeta^{\a({\rm z})} &=& -\frac{1}{2\bar{X}^{0}}\Big[ \rho^{\a\b}\hbox{\ooalign{$\displaystyle D$\cr$\hspace{.03in}/$}}\zeta_\b +\Omega^{0\,i}A_i^{\;\a({\rm z})} +g\Omega^{i\,\a}{}_{\!\b} A_i^{\;\b} +2g\bar{X}^\a{}_\b\zeta^\b +\ft18\sigma\cdot T_{ij}\varepsilon^{ij}\zeta^\a -\ft32\varepsilon^{ij}\chi_iA_j^{\;\a} \Big] \,,\nonumber\\ A_i^{\a({\rm zz})} &=& -\frac{1}{4|X^{0}|^2}\Big[ (D^aD_a+\ft32D)A_i^{\;\a} +\varepsilon_{ik}Y^{0\,jk}A_j^{\;\a({\rm z})} +2\Big(\rho^{\a\b}\bar{\Omega}_i^{0}\zeta_\b^{({\rm z})} -\varepsilon_{ij}\bar{\Omega}^{0\,j}\zeta^{\a({\rm z})}\Big) \nonumber\\ & & \hspace{.8in} +2g(\bar{X}^{0}X^\a{}_\b+X^{0}\bar{X}^\a{}_\b)A_i^{\;\b({\rm z})} +2g\Big(\rho^{\a\b}\bar{\Omega}_{i\b}{}^\gamma\,\zeta_\gamma -\varepsilon_{ij}\bar{\Omega}^{j\,\a}{}_{\!\b}\zeta^\b \Big) \nonumber\\ & & \hspace{.8in} +g\varepsilon_{ik}Y^{jk\,\a}{}_\b\, A_j^{\;\b} +2g^2\{\bar{X},X\}^\a{}_\b\, A_i^{\;\b} \Big] \,. \label{hconstraints}\end{eqnarray} All other constraints are obtained from these by application of the central charge.
An important observation is that the constraints (\ref{hconstraints}) are algebraic relationships. For instance, the equation for $\zeta^{\a({\rm z})}$ involves $\zeta_\b^{({\rm z})}$ on the right-hand side, through the covariant derivative $D_\mu\zeta_\b=\partial_\mu\zeta_\b-W_\mu^0\zeta_\b^{({\rm z})}+\cdots$. Taking the complex conjugate of the equation for $\zeta^{\a({\rm z})}$, we obtain the analogous equation for $\zeta_\a^{({\rm z})}$ which we may then substitute back. Similar manipulations may be applied to the equation for $A_i^{\a({\rm zz})}$. In this manner we can restructure the constraints into the following form, \begin{eqnarray} \zeta^{\a({\rm z})} &=& -\ft12 X^0 \Big(\vert X^0\vert^2+\ft{1}{4}W_\mu^{0}W^{\mu {0}}\Big)^{-1}\Big( \rho^{\a\b}\,\hbox{\ooalign{$\displaystyle\partial$\cr$/$}}\zeta_\b+\cdots\Big)\,, \nonumber\\ A_i^{\;\a({\rm zz})} &=& -\ft14 \Big(|X^{0}|^2 +\ft{1}{4} W_\mu^{0}W^{\mu {0}}\Big)^{-1}\Big( \partial^2 A_i^{\;\a} +\cdots\Big) \,. \end{eqnarray} The above infinite-dimensional hierarchical structure of basic units $(A_i{}^\a, \zeta^\a)$, endowed with an infinite sequence of constraints that leaves precisely $8+8$ degrees of freedom per hypermultiplet, was worked out in \cite{DWLVP}. However, we should recall that this approach does not enable one to derive the most general couplings of hypermultiplets. These can be obtained in the harmonic-superspace formulation, which avoids the presence of an off-shell central charge at the expense of an infinite number of unconstrained fields. For the vector-tensor multiplet there seems to be no way to avoid the central charge \cite{DK}. Therefore we use the same method as outlined above for the construction of Lagrangians for interacting vector-tensor multiplets. This is described in the next section. \vspace{.1in} \subsection{The linear multiplet} A linear multiplet contains three scalar fields transforming as an SU(2) triplet. The defining condition is that, under supersymmetry, these scalars transform into a doublet spinor. Furthermore it contains a Lorentz vector, subject to a constraint. The linear multiplet can transform in a real representation of some gauge group, as well as under a central charge. For this reason the supersymmetry transformations contain the Lie-algebra valued components of a vector multiplet associated with this gauge group. This is exactly the same as for the hypermultiplets, but here we do not introduce extra indices to indicate the matrix-valued character. The terms associated with the gauge group carry a coupling constant $g$. The central-charge transformations are simply incorporated into the generic gauge group and will not be indicated explicitly. The Weyl and chiral weights and the fermion chirality of the component fields of the linear multiplet are listed in table 2.2.
The transformation rules for the component fields of the linear multiplet are as follows, \begin{eqnarray} \d L_{ij} &=& 2\bar{\epsilon}_{(i}\varphi_{j)} +2\varepsilon_{ik}\varepsilon_{jl}\bar{\epsilon}^{(k}\varphi^{l)}\,, \nonumber\\ \d\varphi^i &=& \hbox{\ooalign{$\displaystyle D$\cr$\hspace{.03in}/$}} L^{ij}\epsilon_j+\hbox{\ooalign{$\displaystyle E$\cr$\hspace{.03in}/$}}\varepsilon^{ij}\epsilon_j-G\epsilon^i +2g\bar{X}L^{ij}\varepsilon_{jk}\epsilon^k + 2 L^{ij}\eta_j\,,\nonumber\\ \d G &=& -2\bar{\epsilon}_i\hbox{\ooalign{$\displaystyle D$\cr$\hspace{.03in}/$}}\varphi^i - \bar{\epsilon}_i \Big( 6 \chi_j L^{ij} + \ft12 \varepsilon^{ij} \varepsilon^{kl}\sigma \cdot T_{jk} \varphi_l \Big) \nonumber\\ & &+2g\bar{X}\Big(\varepsilon^{ij}\bar{\epsilon}_i\varphi_j - {\rm h.c.} \Big) -2g\bar{\epsilon}_i\Omega^j L^{ik}\varepsilon_{jk} + 2 \bar \eta_i\varphi^i\,, \nonumber\\ \d E_a &=& 2\varepsilon_{ij}\bar{\epsilon}^i\sigma_{ab}D^b\varphi^j + \ft14 \bar {\epsilon}^i \gamma_a \Big( 6 \varepsilon_{ij} \chi_k L^{jk} - \ft12 \sigma\cdot T_{ij} \varepsilon^{jk} \varphi_k\Big) \nonumber\\ & &+2g\bar{X}\bar{\epsilon}^i\gamma_a\varphi_i +g \bar{\epsilon}^i\gamma_a\Omega^j L_{ij} + \ft32 \bar \eta^i\gamma_a\varphi^j\varepsilon_{ij}+ {\rm h.c.}\,, \label{linear} \end{eqnarray} where $L_{ij}=\varepsilon_{ik}\varepsilon_{jl}L^{kl}$ and \begin{equation} 2D_a E^a= g \Big(\ft12 Y^{ij}L_{ij} -2 X G-2\bar{\Omega}^i\varphi_i\Big) - 3 \bar {\varphi}^i \chi^j \varepsilon_{ij} + {\rm h.c.} \,. \end{equation} For $g=0$ the above constraint can be solved and $E^a$ can be written as the (supercovariant) field strength of a rank-two tensor gauge field $E_{\mu\nu}$. The solution takes the form \begin{equation} E^a =\ft12 i e^{-1} e^a_\mu \varepsilon^{\mu\nu\rho\sigma} D_\nu E_{\rho\sigma}\,. \end{equation} The resulting multiplet is known as the $N=2$ tensor multiplet. \begin{figure} \begin{center} \begin{tabular}{|c||cccc||ccc||cccc|} \hline & \multicolumn{4}{c||}{vector multiplet} & \multicolumn{3}{c||}{hypermultiplet} & \multicolumn{4}{c|}{linear multiplet} \\ \hline \hline field & $X^I$ & $\Omega_i^{\,I}$ & $W_\mu^{\,I}$ & $Y_{ij}^{\,I}$ & $A_i^\a$ & $\zeta^\a$ & $A_i^{\a({\rm z})}$ & $L_{ij}$ & $\varphi^i$ & $G$ & $E_a$ \\[.5mm] \hline \hline $w$ & $1$ & $\ft32$ & $0$ & $2$ & $1$ & $\ft32$ & $1$ & $2$ & $\ft52$ & $3$ & $3$ \\[.5mm] \hline $c$ & $-1$& $-\ft12$ & $0$ & $0$ & $0$ & $-\ft12$ & $0$ & $0$ & $\ft12$ & $1$ & $0$ \\[.5mm] \hline $\gamma_5$& & $+$ & & & & $-$ & & & $+$ & & \\[.5mm] \hline \end{tabular}\\[.13in] \parbox{4.6in}{Table 2.2: Weyl and chiral weights ($w$ and $c$, respectively) and fermion chirality ($\gamma_5$) of the vector, hyper and linear multiplet component fields.} \end{center} \end{figure} \subsection{Multiplet calculus} The identification of the various rules for multiplying multiplets is a central aspect of the multiplet calculus. This has been explicitly described in previous papers \cite{DWVHVP,DWLVP}. There are product rules that define how to construct multiplets from products of certain other multiplets. For some of the multiplets one can find density formulae, which yield a superconformally invariant action upon integration over spacetime. In the context of the present work, the most relevant density formula is the one involving an abelian vector and a linear multiplet. The linear multiplet can transform under a central charge, in which case the vector multiplet must be the one that supplies the gauge field for the central charge transformations.
Apart from this, the linear multiplet must be neutral under the gauge group. The density formula reads, \begin{eqnarray} e^{-1}{\cal L}&=& X^0 G - \Big( \ft14 Y^{0\,ij} + \ft12 \bar \psi^{i}_\mu \gamma^\mu \Omega^{0\,j} + \bar X^0 \bar \psi^i_{\mu} \sigma^{\mu\nu} \psi_\nu^j\Big) L_{ij} + \bar\varphi^i \Big( \Omega_i^0 + X^0 \gamma^\mu \psi_{\mu i} \Big)\nonumber\\ & &- \ft12 W_a^0 \Big( E^a + 2 \bar \varphi^i \sigma^{ab} \psi_b^j \varepsilon_{ij} - \ft12 \varepsilon^{abcd}\, \bar \psi_{b\, k}\gamma_c\psi_d^i L_{ij} \varepsilon^{jk} \Big) + {\rm h.c. }\,, \label{linaction} \end{eqnarray} where the vector-multiplet fields carry a superscript ``0'' to indicate that they belong to an abelian vector multiplet, possibly associated with central-charge transformations. \setcounter{equation}{0} \section{The vector-tensor multiplets} \subsection{Central charges and Chern-Simons terms} {}From off-shell counting it follows immediately that the vector-tensor multiplet must be subject to a central charge when it is based on a finite number of off-shell components. As in \cite{vt1,vt2}, we use the same strategy as presented for hypermultiplets in the previous section. The basic unit of the vector-tensor multiplet consists of a scalar field $\phi$, a vector gauge field $V_\mu$, a tensor gauge field $B_{\mu\nu}$ and a doublet of spinors $\l_i$. This unit comprises seven bosonic and eight fermionic components. To close the supersymmetry algebra off shell, we must assume the existence of an infinite hierarchy of these units, again distinguished by appending successive indices ``z''. The central charge then raises the number of ``z'' indices, as, for instance, in $\d_{z}\,\phi=z\,\phi^{({\rm z})}$. Successive applications thus generate a sequence of terms, \begin{equation} \phi\longrightarrow\phi^{({\rm z})}\longrightarrow \phi^{({\rm zz})} \longrightarrow {\rm etcetera}\,, \label{hierarchy} \end{equation} and similarly for all other fields. It will turn out that $\phi^{({\rm z})}$ corresponds to an auxiliary field. All other objects in the hierarchy, $\phi^{({\rm zz})}$, $V_{\mu}^{({\rm z})}$, $V_\mu^{({\rm zz})}$, etcetera, are dependent, and will be given by particular combinations of the independent fields. Hence we end up with precisely $8+8$ degrees of freedom. In order to couple the vector-tensor multiplet to supergravity we employ the superconformal multiplet calculus. When supersymmetry is local, the central-charge transformations must be local as well. Therefore we must couple the vector-tensor multiplets to at least one vector multiplet, whose gauge field couples to the central charge. However, for reasons that have been described in \cite{vt2}, it is advisable to couple the vector-tensor multiplet to a more general background of vector multiplets, so we consider $n$ vector multiplets. One of these provides the gauge field for the central charge, which we denote by $W_\mu^0$. This must be an abelian gauge field. The remaining $n-1$ vector multiplets supply additional background gauge fields $W_\mu^A$, which need not be abelian. The index $A$ is taken to run from $2$ to $n$, for reasons we explain shortly. Also, since $W_\mu^0$ is the gauge field for the central charge, the associated transformation parameter $\theta^0$ is identified with the central charge parameter $z$ introduced above, i.e., $z\equiv\theta^0$.
The vector gauge transformations act as follows on the background gauge fields, \begin{equation} \d W_\mu^0=\partial_\mu z \,,\qquad \d W_\mu^A=\partial_\mu\theta^A+f^A{}_{\!BC}\theta^BW_\mu^C\,. \end{equation} In addition to the central charge, the vector-tensor multiplet has its own gauge transformations associated with the tensor $B_{\mu\nu}$ and the vector $V_\mu$. We have reserved the index $1$ for the vector field $V_\mu$ of the vector-tensor multiplet. (The reason for this choice is based on the dual description of our theory, where the vector-tensor multiplet is replaced with a vector multiplet, so that the dual theory involves $n+1$ vector multiplets.) In the interacting theory, the tensor field $B_{\mu\nu}$ necessarily couples to Chern-Simons forms. This coupling is evidenced by the transformation behavior of the tensor. To illustrate this, suppose we ignore the central charge (other than its contribution to $W^0_\mu$); the vector field of the vector-tensor multiplet would then transform as \begin{equation} \d V_\mu=\partial_\mu\theta^1 \,, \label{simv} \end{equation} and the tensor field would transform as \begin{equation} \d B_{\mu\nu}=2\partial_{[\mu}\Lambda_{\nu]} +\eta_{IJ}\,\theta^I\partial_{[\mu}W_{\nu]}^J, \label{simb} \end{equation} where $\theta^I$ and $\Lambda_\mu$ are the parameters of the transformations gauged by $W_\mu^I$ and $B_{\mu\nu}$, respectively, and the index $I$ is summed from $0$ to $n$. As mentioned above, in this context $W_\mu^1$ is identified with $V_\mu$. Closure of the combined vector and tensor gauge transformations requires that $\eta_{IJ}$ be a constant tensor invariant under the gauge group. There is an ambiguity in the structure of $\eta_{IJ}$, which derives from the possibility of performing field redefinitions: $\eta_{IJ}$ can be modified by absorbing a term proportional to $W_\mu^IW_\nu^J$ times some group-invariant antisymmetric tensor into the definition of the tensor field $B_{\mu\nu}$. Without loss of generality, we can thus remove all components of $\eta_{IJ}$ except for $\eta_{11}, \eta_{1A}$ and $\eta_{AB}$, and also render $\eta_{AB}$ symmetric. Also note that, since $\eta_{1A}$ is invariant under the gauge group, it follows that $\eta_{1A}W_\mu^A$ is an abelian gauge field. The situation is actually more complicated, since $V_\mu$ and $B_{\mu\nu}$ are also subject to the central-charge transformation. As described above, under this transformation these fields transform into complicated expressions, denoted $V_\mu^{({\rm z})}$ and $B_{\mu\nu}^{({\rm z})}$, respectively, which involve other fields of the theory. Accordingly, we deform the transformation rule (\ref{simv}) to \begin{equation} \d V_\mu = \partial_\mu \theta^1+z V_\mu^{({\rm z})} \, , \end{equation} and, at the same time, (\ref{simb}) to \begin{equation} \d B_{\mu\nu} = 2\partial_{[\mu}\Lambda_{\nu]} +\eta_{11}\,\theta^1\partial_{[\mu}V_{\nu]} +\eta_{1A}\,\theta^1\partial_{[\mu}W_{\nu]}^A +\eta_{AB}\,\theta^A\partial_{[\mu}W_{\nu]}^B +z B_{\mu\nu}^{({\rm z})} \,. \end{equation} All $\theta^0$-dependent terms, including any such Chern-Simons contributions, are now contained in $V_\mu^{({\rm z})}$ and $B_{\mu\nu}^{({\rm z})}$, which are determined by closure of the full algebra, including supersymmetry. The deformed transformation rules must still lead to a closed gauge algebra. In particular one finds that \begin{equation} [\d_z ( z), \d_{{\rm vector}} (\theta^1)] = \d_{{\rm tensor}} (\ft12z\, \eta_{11}\,\theta^1\, V_\mu^{({\rm z})}) \, .
\end{equation} This implies that $V_\mu^{({\rm z})}$ and the combination $B_{\mu\nu}^{({\rm z})}+\eta_{11}V_{[\mu}V_{\nu]}^{({\rm z})}$ both transform covariantly under the central charge, but are invariant under all other gauge symmetries. However, under local supersymmetry, they do not transform covariantly, as we will see below (cf.\ (\ref{VzBz})). The resulting gauge algebra now consists of the standard gauge algebra for the vector fields augmented by a tensor gauge transformation. Observe that we have specified neither $V^{({\rm z})}_\mu$ nor $B_{\mu\nu}^{({\rm z})}$; they are determined by supersymmetry and will be discussed in the next section. As it turns out, these terms give rise to additional Chern-Simons terms involving $W_\mu^0$ that depend on the scalar fields. The presence of these terms is a direct result of the deformation of the standard algebra of tensor and vector gauge transformations. Before giving specific results on the local supersymmetry transformations, we discuss a crucial feature of the construction. It turns out \cite{vt1,vt2} that the coefficients $\eta_{IJ}$ that encode the Chern-Simons terms cannot all be zero, as otherwise the supersymmetry variations become singular and supersymmetric completions in the action vanish. In fact, one can show that there are just two inequivalent representations of the vector-tensor multiplet. One is the case where $\eta_{11}=0$. In this case there is no Chern-Simons coupling between the tensor and the vector fields of the vector-tensor multiplet. The choice $\eta_{11}=0$ removes the conspicuous self-interaction between the vector-tensor multiplet fields; in fact, the supersymmetry transformations become linear in these fields (though not in the background fields) and the action becomes quadratic. However, in this case not all the $\eta_{1A}$ Chern-Simons coefficients can vanish simultaneously. Therefore we are dealing with at least three abelian gauge fields, namely, $W_\mu^0$, $\eta_{1A}W^A_\mu$ and $V_\mu$. In the case of rigid supersymmetry, one can freeze some or all of the vector multiplets to constants, but this will not alter the structure of the couplings. This first class seems to coincide with the theories one obtains by reducing six-dimensional (1,0) tensor multiplets to four dimensions. The tensor multiplet comprises a scalar, a self-dual tensor gauge field and a symplectic Majorana spinor. The self-dual tensor field decomposes in four dimensions into the vector and tensor gauge fields of the vector-tensor multiplet. Obtaining, in addition, a vector field that couples to the central charge presumably requires the dimensional reduction of a theory of tensor multiplets coupled to supergravity. A recent study of various Chern-Simons terms in six dimensions was carried out in \cite{BSS}. The second, inequivalent class of couplings is characterized by the fact that $\eta_{11}\not=0$. In that case, it turns out that one can absorb certain terms of the background multiplets into the definition of the vector-tensor fields such that all the coefficients $\eta_{1A}$ vanish \cite{vt2}. In this case we have at least two abelian vector fields, namely $W^0_\mu$ and $V_\mu$. Hence in practical situations the Chern-Simons coefficients can be restricted to satisfy either $\eta_{11}=0$ or $\eta_{1A}=0$. In the following we will not pay much attention to this fact, but simply evaluate the transformation rules and the action for general values of the coefficients $\eta_{11}$, $\eta_{1A}$, $\eta_{AB}$.
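To exhibit the origin of the Chern-Simons couplings in a simple setting, it is instructive to consider the abelian bosonic truncation in which the central charge is switched off, so that (\ref{simv}) and (\ref{simb}) apply. The variations $\d W_\mu^I=\partial_\mu\theta^I$ and $\d B_{\mu\nu}=2\partial_{[\mu}\Lambda_{\nu]}+\eta_{IJ}\,\theta^I\partial_{[\mu}W_{\nu]}^J$ then leave invariant the field strength \begin{equation} H_{\mu\nu\rho}=\partial_{[\mu}B_{\nu\rho]}-\eta_{IJ}\,W_{[\mu}^I\partial_\nu W_{\rho]}^J\,, \end{equation} because the variation $\eta_{IJ}\,\partial_{[\mu}\theta^I\partial_\nu W_{\rho]}^J$ of the first term is cancelled by that of the Chern-Simons term, while all remaining contributions vanish upon antisymmetrization. The same structure, with $W^1_\mu$ identified with $V_\mu$ and with the covariantizations and central-charge modifications restored, can be recognized in the supercovariant field strength $H^\mu$ given in (\ref{fs}) below.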
\subsection{The vector-tensor transformation rules\label{s:vttrans}} In \cite{vt1,vt2} the transformation rules for the vector-tensor multiplet have been determined by imposing the supersymmetry algebra iteratively on the multiplet component fields. In this procedure, the supersymmetry transformation rules for vector multiplets remain unchanged. Therefore, the algebra represented by the vector-tensor multiplet in the presence of a vector multiplet background is fixed up to gauge transformations which pertain exclusively to the vector-tensor multiplet. The most relevant commutator in this algebra involves two supersymmetry transformations and was given in (\ref{qqcomb}). In the present situation we have that \begin{eqnarray} \d_{\rm gauge} &=& \d_z\Big( 4\varepsilon^{ij}\bar{\epsilon}_{2i}\epsilon_{1j}X^0+ {\rm h.c.} \Big) +\d_{{\theta^A}} \Big( 4\varepsilon^{ij}\bar{\epsilon}_{2i}\epsilon_{1j}X^A+ {\rm h.c.} \Big) \nonumber\\ &&+\d_{\rm vector}\Big( \theta^1(\epsilon_1,\epsilon_2)\Big) +\d_{\rm tensor}\Big( \Lambda_\mu(\epsilon_1,\epsilon_2)\Big) \,. \label{algebra} \end{eqnarray} The field $X^0$ is the complex scalar of the vector multiplet associated with the central charge\footnote{% Henceforth we will suppress the superscript on $X^0$ and define $X\equiv X^0$ to simplify the formulae.}. The field-dependent parameters $\theta^1(\epsilon_1,\epsilon_2)$ and $\Lambda_\mu(\epsilon_1,\epsilon_2)$ are found by imposing the $Q$-super\-symmetry commutator on the vector-tensor multiplet. They will be specified in due course. In this paper we repeat the derivation of the transformation rules, but now in the context of local supersymmetry. This means that we follow the same procedure, this time in a background of conformal supergravity combined with vector multiplets. Because the transformation rules for the superconformal fields are also completely known, the supersymmetry algebra is determined up to the gauge and central-charge transformations associated with the vector-tensor multiplet itself. The procedure followed in \cite{vt1,vt2} is tailor-made for an extension to local supersymmetry. First of all, we already insisted on rigid scale and chiral invariance. Because of that, the scalar fields of the vector multiplets will play the role of compensating fields to balance possible differences in scaling weights of the various terms. Secondly, one of the vector multiplets was required to realize the central charge in a local fashion. In the context of the superconformal multiplet calculus, local dilatations, chiral and central-charge transformations are necessary prerequisites for the coupling to supergravity. As was already discussed in \cite{vt1}, there remains some flexibility in the assignment of the scaling and chiral weights for the vector-tensor multiplet. By exploiting the scalar fields of the vector multiplets we may arbitrarily adjust the weights for each of the vector-tensor components by suitably absorbing functions of $X$ and $X^A$. In this way we choose the weights for the vector-tensor components to be as shown in table 3.1.
\begin{figure} \begin{center} \begin{tabular}{|c||ccccc|} \hline & \multicolumn{5}{c|}{vector-tensor multiplet} \\ \hline \hline field & $\phi$ & $V_\mu$ & $B_{\mu\nu}$ & $\l_i$ & $\phi^{({\rm z})}$ \\[.5mm] \hline \hline $w$ & $0$ & $0$ & $0$ & $\ft12$ & $0$ \\[.5mm] \hline $c$ & $0$ & $0$ & $0$ & $\ft12$ & $0$ \\[.5mm] \hline $\gamma_5$& & & & $+$ & \\[.5mm] \hline \end{tabular}\\[.13in] \parbox{5in}{Table 3.1: Scaling and chiral weights ($w$ and $c$, respectively) and fermion chirality ($\gamma_5$) of the vector-tensor component fields.} \end{center} \end{figure} The bosonic vector-tensor fields must all have chiral weight $c=0$ since they are all real. To avoid a conflict between scale transformations and vector-tensor gauge transformations, we have adjusted $V_\mu$ and $B_{\mu\nu}$ to be neutral under scale transformations as well. Note that there remains a freedom to absorb additional combinations of the background fields into the definition of $\phi$ and $\l_i$. Furthermore, the fields $V_\mu$ and $B_{\mu\nu}$ can be redefined by appropriate additive terms. Needless to say, it is important to separate relevant terms in the transformation rules from those that can be absorbed into such field redefinitions. In deriving our results this aspect has received proper attention. In order to define the vector-tensor multiplet as a superconformal multiplet, we must also choose the assignments under the special $S$-supersymmetry transformations (which in turn determine the behaviour under special conformal boosts $K$). We have assumed that the scalar $\phi$ is $S$- and $K$-invariant, which leads to consistent results. While this is a natural assignment for the lowest-dimensional component of a supermultiplet, we found no rigorous arguments to rule out other assignments. The choice we made is the simplest one and, as it turns out, implies that all the vector-tensor fields remain $S$- and $K$-invariant. The latter follows from the commutator of $Q$- with $S$-supersymmetry and, subsequently, from the $[S,S]$ commutation relation, which yields a $K$-transformation. The transformation rules coincide with the ones found in \cite{vt1,vt2} apart from the presence of certain covariantizations. As before we suppress nonabelian terms for the sake of clarity; they are not important for the rest of this paper. We are not aware of arguments that would prevent us from switching on the nonabelian interactions. Furthermore we introduce the following notation for homogeneous, holomorphic functions of zero degree that occur frequently in our equations, \begin{equation} g=i\eta_{1A}\frac{X^A}{X}\,, \qquad b=-\ft14 i\eta_{AB}\frac{X^AX^B}{X^2} \,.
\label{bgdef} \end{equation} For arbitrary Chern-Simons coefficients $\eta_{IJ}$, the transformation rules under $Q$-supersymmetry are (we emphasize that in the remainder of this section and in section~4, the index $I$ does not take the value $I=1$), \begin{eqnarray} \d\phi &=& \bar{\epsilon}^i\l_i +\bar{\epsilon}_i\l^i \,,\nonumber\\ \d V_\mu &=& i\varepsilon^{ij}\bar{\epsilon}_i\gamma_\mu \Big( 2X\l_j+\phi\Omega_j^0\Big) -iW_\mu^0\,\bar{\epsilon}^i\l_i + 2i\phi X \varepsilon^{ij} \bar{\epsilon}_i \psi_{\mu j} + {\rm h.c.} \,, \nonumber\\ \d B_{\mu\nu} &=& -2\bar{\epsilon}^i\sigma_{\mu\nu}|X|^2\Big( 4\eta_{11}\phi - 2 {\rm Re}\, g \Big)\l_i \nonumber\\ & & -2\bar{\epsilon}^i\sigma_{\mu\nu}\bar{X}\Big( 2\eta_{11}\phi^2\Omega_i^0 +\phi\bar{X}\partial_{\bar{I}}\bar{g}\,\Omega_i^I -4i{\rm Re}[\partial_I(Xb)]\Omega_i^I \Big) \nonumber\\ & & -2 \bar\epsilon^i\gamma_{[\mu}\psi_{\nu]i} \bar X\Big( 2\eta_{11}\phi^2 X +\phi\bar X \partial_{\bar I}\bar g X^I -4i{\rm Re}[\partial_I(Xb)] X^I\Big) \nonumber\\ & & +i\varepsilon^{ij}\bar{\epsilon}_i\gamma_{[ \mu}V_{\nu ]} \Big( \eta_{11} ( 2X\l_j +\phi\Omega_j^0 ) -i \eta_{1A}\Omega_j^A\Big) \nonumber\\ & & +2i \varepsilon^{ij}\bar{\epsilon}_i\psi_{j[\mu }V_{\nu ]} X \Big( \eta_{11} \phi - g \Big) \nonumber\\ & & +\varepsilon^{ij}\bar{\epsilon}_i\gamma_{[\mu}W_{\nu]}^0\Big( 2X (2\eta_{11}\phi-g)\l_j +\eta_{11}\phi^2\Omega_j^0 -i\eta_{1A}\phi\Omega_j^A -4i\partial_I(Xb)\Omega_j^I\Big) \nonumber\\ & & + 2 \varepsilon^{ij}\bar{\epsilon}_i\psi_{j[\mu}W_{\nu]}^0 X \Big(\eta_{11}\phi^2 -\phi g -4i b \Big) \nonumber\\ & & + \varepsilon^{ij}\bar{\epsilon}_i \gamma_{[ \mu}W_{\nu ]}^A \eta_{AB} \Omega_j^B + 2\varepsilon^{ij}\bar{\epsilon}_i \psi_{j[\mu } W_{\nu ]}^A \eta_{AB} X^B \nonumber\\ & & -i\eta_{11}W_{[\mu}^0V_{\nu]}\bar{\epsilon}^i\l_i + {\rm h.c.} \,, \nonumber\\ \d\l_i &=& \Big(\hbox{\ooalign{$\displaystyle D$\cr$\hspace{.03in}/$}}\phi-i \hatVslash^{({\rm z})}\Big)\epsilon_i -\frac{i}{2X}\varepsilon_{ij}\sigma\cdot\Big( {\cal F}^-(V) -i\phi{\cal F}^{- 0}\Big)\epsilon^j +2\varepsilon_{ij}\bar{X}\phi^{({\rm z})}\epsilon^j \nonumber\\ & & -\frac{1}{X}(\bar{\epsilon}^j\l_j)\Omega_i^0 -\frac{1}{X}(\bar{\epsilon}^j\Omega_j^0)\l_i \nonumber\\ & & -\frac{1}{2X (2\eta_{11}\phi - {\rm Re}\, g )} \epsilon^j \Big[ 2\eta_{11}\phi^2Y_{ij}^0 +\phi\bar{X}\partial_{\bar{I}}\bar{g}\,Y_{ij}^I -4i{\rm Re}\,\partial_I(Xb)Y_{ij}^I\nonumber\\ & & \hspace{1.6in} -2\eta_{11} \Big( X\bar{\l}_i\l_j -\bar{X}\varepsilon_{ik}\varepsilon_{jl}\bar{\l}^k\l^l\Big) \nonumber\\ & & \hspace{1.6in} +X \Big( X\partial_I g \,\bar{\Omega}_{(i}^I\l_{j)} -\bar{X}\varepsilon_{ik}\varepsilon_{jl}\partial_{\bar{I}}\bar{g}\, \bar{\Omega}^{I(k}\l^{l)}\Big) \nonumber\\ & & \hspace{1.6in} + i \Big( \partial_I\partial_J(Xb)\,\bar{\Omega}_i^I\Omega_j^J + \varepsilon_{ik}\varepsilon_{jl}\,\partial_{\bar{I}}\partial_{\bar{J}}(\bar{X}\bar{b}) \bar{\Omega}^{Ik}\Omega^{Jl}\Big)\Big]\,.\; \label{momrules} \end{eqnarray} Except for the explicit gravitino fields in the variations of $V_\mu$ and $B_{\mu\nu}$, all extra covariantizations are implicitly contained in covariant derivatives and field strengths. Let us now first define a number of quantities that appear in (\ref{momrules}) or are related to them.
The supercovariant field strengths for the vector-tensor multiplet gauge fields are equal to \begin{eqnarray} \label{fs} {\cal F}_{\mu\nu} (V) &=& 2\partial_{[\mu}V_{\nu]} -2W^0_{[\mu}V_{\nu]}^{({\rm z})} + \ft14i\phi\Big[\bar{X} T_{\mu\nu}^{ij} \varepsilon_{ij}-{\rm h.c.}\Big] \nonumber\\ && - i\Big[ \varepsilon^{ij} \bar{\psi}_{i[\mu} \gamma_{\nu ]}\Big( 2X\l_j +\phi\Omega_j^0\Big) + \phi X \varepsilon^{ij} \bar{\psi}_{\mu i} \psi_{\nu j} - {\rm h.c.}\Big] \,, \nonumber\\ H^\mu &=& \ft12 {i} e^{-1} \varepsilon^{\mu\nu\l\sigma} \Big[\partial_\nu B_{\l\sigma} - \eta_{11} V_\nu \,\partial_\l V_\sigma -\eta_{1A} V_\nu \,\partial_\l W_\sigma^A \nonumber\\ && \hspace{1.8cm} - \eta_{AB} W_\nu^A \partial_\l W_\sigma^B - W_\nu^0 \Big( B_{\l\sigma}^{({\rm z})} + \eta_{11} V_\l V_\sigma^{({\rm z})} \Big) \Big] \\ &&-\Big[ i {\bar\psi}^i_\nu \sigma^{\mu\nu}\Big( 2 |X|^2 \Big( 2\eta_{11}\phi - {\rm Re}\, g \Big)\l_i \nonumber\\ &&\hspace{2cm} + \bar{X}\Big( 2\eta_{11}\phi^2\Omega_i^0 +\phi\bar{X}\partial_{\bar{I}}\bar{g}\,\Omega_i^I -4i{\rm Re}[\partial_I(Xb)]\Omega_i^I \Big)\Big) + {\rm h.c.}\Big] \nonumber\\ && + \ft14 {i} e^{-1} \varepsilon^{\mu\nu\l\sigma}{\bar\psi}^i_\nu \gamma_\l \psi_{\sigma i} \Big[ \bar X\Big( 2\eta_{11}\phi^2 X +\phi\bar X X^I\,\partial_{\bar I}\bar g -4iX^I\,{\rm Re}[\partial_I(Xb)] \Big) +{\rm h.c.} \Big] \, .\nonumber \end{eqnarray} The Bianchi identities corresponding to the field strengths (\ref{fs}) are straightforward to determine and read, \begin{eqnarray} && D_\mu\Big(\tilde{{\cal F}}^{\mu\nu}(V) + \ft14i\phi (\bar XT^{\mu\nu\, ij}\varepsilon_{ij} + X T_{ij}^{\mu\nu}\varepsilon^{ij}) \Big) \nonumber\\ &&\hspace{1.15cm} =-V_\mu^{({\rm z})}\Big[\tilde{{\cal F}}^{0\mu\nu} -\ft14 (\bar XT^{ij\, \mu\nu}\varepsilon_{ij} - X T_{ij}^{\mu\nu}\varepsilon^{ij}) \Big] - \ft34 i\Big[\varepsilon_{ij} \bar \chi^i\gamma^\nu (2\bar X \l^j+ \phi\Omega^{j0})+ {\rm h.c.} \Big] \,,\nonumber \\ && D_\mu H^\mu = - \ft14 i\Big[\eta_{11}\, \tilde{\cal F}_{\mu\nu}(V)\, {\cal F}^{\mu\nu}(V) +\eta_{1A}\,\tilde {\cal F}_{\mu\nu}(V)\, {\cal F}^{\mu\nu A} +\eta_{AB}\, \tilde{\cal F}_{\mu\nu}^{A}\, {\cal F}^{\mu\nu B} + 2 \tilde {\cal F}^{0}_{\mu\nu}\hat B^{\mu\nu\,({\rm z})}\Big] \nonumber\\ &&\hspace{1.6cm} -\ft1{16}i\Big[T_{ij}^{\mu\nu}\Big( 2\eta_{11}\, \phi X\,{\cal F}_{\mu\nu}(V) +\eta_{1A}(X^A \, {\cal F}_{\mu\nu}(V) +i \phi X {\cal F}_{\mu\nu}^A) +2\eta_{AB}\, X^A\,{\cal F}_{\mu\nu}^B \nonumber \\ && \hspace{3.4cm} + 2X \hat B^{({\rm z})}_{\mu\nu}+X \, {\cal F}^{0}_{\mu\nu} (\eta_{11} \phi^2 -\phi\,g -4ib)\Big) - {\rm h.c.}\Big] \nonumber \\ &&\hspace{1.6cm} + 3 i( \bar \l_i\chi^i- \bar\l^i\chi_i) \, \vert X\vert^2 (2\eta_{11} \phi - {\rm Re}(g)) \nonumber \\ &&\hspace{1.6cm} - \ft32 i\Big[X \,\bar \chi_i (2\eta_{11} \phi^2 \Omega^{i0} + \phi X \partial_{ I} g \Omega^{Ii} + 4i {\rm Re}[\partial_I(Xb)] \Omega^{Ii}) - {\rm h.c.} \Big] \,. \label{bianchis} \end{eqnarray} Observe that the Bianchi identity for $H_\mu$ is not linear in the vector-tensor fields. On the right-hand side there are nonlinear terms that are either of second-order (the term proportional to $\eta_{11}$) or of zeroth-order (the term proportional to $\eta_{AB}$) in the vector-tensor fields. Furthermore the quantity $\hat B_{\mu\nu}^{({\rm z})}$ does not depend homogeneously on the vector-tensor fields either as will become clear soon. Hence, generically the vector-tensor multiplet is realized in a nonlinear fashion, as we have already pointed out in the previous subsection. 
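The manipulations above make tacit use of the homogeneity of the background functions: $g$ and $b$ of (\ref{bgdef}) are of degree zero, so Euler contractions such as $X^I\partial_I g$, $X^I\partial_I b$ and $X^I\partial_I\partial_J(Xb)$ vanish identically. As a quick sanity check (ours, not part of the original derivation), the following sketch verifies these identities for an illustrative background with a single extra vector multiplet, labelled $A=2$, writing $X$ for the central-charge scalar $X^0$:
\begin{verbatim}
# Symbolic verification (ours) that g and b of (bgdef) are homogeneous of
# degree zero, so the Euler contractions used in the text vanish.
import sympy as sp

X0, X2 = sp.symbols('X0 X2')                     # X^0 (= X) and one X^A, A = 2
e12, e22 = sp.symbols('eta12 eta22', real=True)  # sample eta_{1A}, eta_{AB}

g = sp.I * e12 * X2 / X0                             # g = i eta_{1A} X^A / X
b = -sp.Rational(1, 4) * sp.I * e22 * X2**2 / X0**2  # b = -(i/4) eta_{AB} X^A X^B / X^2
Xs = [X0, X2]

def euler(F):
    """Euler operator X^I d_I F; for homogeneous F it returns (degree)*F."""
    return sp.simplify(sum(XI * sp.diff(F, XI) for XI in Xs))

print(euler(g), euler(b))                        # -> 0 0 (both of degree zero)
# X*b has degree one, so each d_J(X*b) has degree zero:
print([euler(sp.diff(X0 * b, XJ)) for XJ in Xs]) # -> [0, 0]
\end{verbatim}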
Furthermore, the following quantities appear in the above formulae, which are the supercovariant part of the $z$-transformed vector and tensor fields, \begin{eqnarray} {\hat V}_a^{({\rm z})} &=& \frac{-1}{2|X|^2(2\eta_{11}\phi- {\rm Re}\,g) } \Big\{ H_a - \Big[ i X D_a \bar{X}^I \Big( 2\eta_{11}\phi^2 \d_I{}^0 +\phi\bar{X}\partial_{\bar{I}}\bar{g} -4i{\rm Re}[\partial_I(Xb)] \Big) + {\rm h.c.} \Big]\Big\} \nonumber\\ & & + {\rm fermion\ terms}\,, \phantom{\Big[ } \nonumber\\ {\hat B}_{ab}^{({\rm z})} &=& -\ft12 {\rm Im}\,g \, {\cal F}_{ab}(V) +\ft12{i} (2\eta_{11}\phi- {\rm Re\,}g)\tilde{{\cal F}}_{ab}(V) -\ft12\phi(\eta_{11}\phi-{\rm Re}\,g){\cal F}_{ab}^0 \nonumber\\ & & +\ft12i \phi{\rm Im}(X\partial_I g)\tilde{{\cal F}}_{ab}^I +4{\rm Im}\Big[\partial_I(Xb){\cal F}_{ab}^{I-} \Big] + {\rm fermion\ terms} \,. \label{vzbz} \end{eqnarray} The caret indicates that these expressions are fully covariant with respect to all local symmetries; they do not coincide with the image of $V_\mu$ and $B_{\mu\nu}$ under the central charge, $V_\mu^{({\rm z})}$ and $B_{\mu\nu}^{({\rm z})}$. The latter are given by \begin{eqnarray} \label{VzBz} V_\mu^{({\rm z})} &=& {e_\mu}^a {\hat V}_a^{({\rm z})} + \ft12\Big( i \bar{\psi}^i_\mu \l_i + {\rm h.c.} \Big) \,, \nonumber\\ B_{\mu\nu}^{({\rm z})}&=& {e_\mu}^{[a}{e_\nu}^{b]} {\hat B}^{({\rm z})}_{ab} - \eta_{11} V_{[\mu} V^{({\rm z})}_{\nu ]} \nonumber\\ &&+\ft12 \Big[X \varepsilon^{ij} ( \bar{\psi}_{\mu i} \psi_{\nu j} +\ft{1}{4}T_{\mu\nu\, ij})(\eta_{11}\phi^2-\phi g-4ib ) + 2 X\varepsilon^{ij} \bar{\psi}_{i[\mu} \gamma_{\nu ]}\l_j \,(2\eta_{11}\phi-g) \nonumber\\ &&\hspace{8mm} + \varepsilon^{ij} \bar{\psi}_{i[\mu} \gamma_{\nu ]} \Big( \eta_{11}\phi^2\Omega_j^0 -i\eta_{1A}\phi\Omega_j^A -4i\partial_I(Xb)\Omega_j^I\Big) + {\rm h.c.}\Big] \,. \end{eqnarray} There are of course similar expressions for $\l_i^{({\rm z})}$ and $\phi^{({\rm zz})}$, which are of less direct relevance. Because the fields $\phi$ and $\l_i$ are themselves covariant, the action of the central charge will yield covariant expressions. The results for the central charge transformations are determined from the commutator, \begin{equation} [\d_Q(\epsilon), \d_z( z)] = \d_{\rm vector}\Big( i z \bar{\epsilon}^i \l_i + {\rm h.c.} \Big) + \d_{\rm tensor}\Big(\Lambda_\mu (\epsilon, z) \Big)\,, \end{equation} where \begin{eqnarray} \Lambda_\mu (\epsilon , z) &=& \ft{1}{2} z \varepsilon^{ij} \bar{\epsilon}_i \gamma_\mu \Big( 2X(2\eta_{11}\phi-g)\l_j +\eta_{11}\phi^2\Omega_j^0 -i\eta_{1A}\phi\Omega_j^A -4i\partial_I(Xb)\Omega_j^I\Big) \nonumber\\ &&+z \varepsilon^{ij} \bar{\epsilon}_i \psi_{\mu j} X \Big(\eta_{11}\phi^2 -\phi g -4ib \Big) +\ft12{i} z \eta_{11} V_\mu \bar{\epsilon}^i \l_i + {\rm h.c.}\,, \end{eqnarray} which implies that the supersymmetry transformations of $\phi^{({\rm z})}$, $\l_i^{({\rm z})}$ are just the $z$-transformed versions of $\d_Q \phi, \d_Q \l_i$ as given in (\ref{momrules}). Hence, with the exception of $\phi^{({\rm z})}$ all the $z$-transformed fields are subject to constraints. By acting on these constraints with central-charge transformations, one recovers an infinite hierarchy of constraints. These relate the components of the higher multiplets $(V_\mu^{({\rm z})}, B_{\mu\nu}^{({\rm z})}, \l_i^{({\rm z})}, \phi^{({\rm zz})})$, etcetera to the lower ones, in such a way as to retain precisely $8+8$ independent degrees of freedom. 
At this point we specify the expressions for the vector and tensor gauge transformations in the commutator \eqn{algebra}, \begin{eqnarray} \theta^1 (\epsilon_1, \epsilon_2) &=& 4i\phi X \,\varepsilon^{ij} \bar{\epsilon}_{i2} \epsilon_{j1} + {\rm h.c.} \,, \nonumber\\ \Lambda_\mu (\epsilon_1, \epsilon_2) &=& 2 \bar{\epsilon}^i_2 \gamma_\mu \epsilon_{i1}\, \bar X\Big( 2\eta_{11}\phi^2 X +\phi\bar X X^I\partial_{\bar I}\bar g -4iX^I{\rm Re}[\partial_I(Xb)]\Big) \nonumber\\ && + 2i \varepsilon^{ij} \bar{\epsilon}_{i2} \epsilon_{j1} \,X \Big( V_\mu (\eta_{11}\phi - g) -i W_\mu^0 (\eta_{11}\phi^2 -\phi g -4ib) \Big) \nonumber\\ && + 2 \varepsilon^{ij} \bar{\epsilon}_{i2} \epsilon_{j1} \,W_\mu^A \eta_{AB} X^B + {\rm h.c.}\,. \end{eqnarray} We close this section with a number of supersymmetry variations of various quantities defined above. The supercovariant field strengths transform as follows: \begin{eqnarray} \d {\cal F}_{ab} (V) &=& -2i\varepsilon^{ij} \bar{\epsilon}_i\gamma_{[a}D_{b]} \Big( 2X\l_j + \phi \Omega^0_j\Big) -2 \varepsilon^{ij} \bar{\epsilon}_i\gamma_{[a}\Omega^0_j\, {\hat V}_{b]}^{({\rm z})} -i\bar{\epsilon}^i\l_i {\cal F}_{ab}^0 \nonumber\\ &&-2 i \varepsilon^{ij} \bar{\eta}_i \sigma_{ab} \Big( 2X \l_j + \phi \Omega^0_j \Big) + {\rm h.c.} \,,\nonumber\\ \d H^a &=& 4i \bar{\epsilon}^i\sigma^{ab}D_b \Big[ |X|^2 \Big( 2\eta_{11}\phi - {\rm Re}\, g\Big)\l_i\Big] \nonumber\\ && + 2i \bar{\epsilon}^i\sigma^{ab}D_b \Big[ \bar{X}\Big( 2\eta_{11}\phi^2\Omega_i^0 +\phi\bar{X}\partial_{\bar{I}}\bar{g}\,\Omega_i^I -4i{\rm Re}[\partial_I(Xb)]\Omega_i^I \Big) \Big] \nonumber\\ &&+\ft{3}{2} i \bar{\epsilon}^i\gamma^a\chi_i \,\bar X\Big( 2\eta_{11}\phi^2 X +\phi\bar X \partial_{\bar I}\bar g X^I -4i{\rm Re}[\partial_I(Xb)] X^I\Big) \nonumber\\ &&-\ft{1}{2} \varepsilon^{ij} \bar{\epsilon}_i\gamma_b \,{\tilde{\cal F}}^{ba}(V) \Big( 2\eta_{11} ( 2X\l_j +\phi\Omega_j^0 ) -i \eta_{1A}\Omega_j^A\Big) \nonumber\\ &&+\ft12{i} \varepsilon^{ij} \bar{\epsilon}_i\gamma_b \,{\tilde{\cal F}}^{ba\, 0} \Big( 2X(2\eta_{11}\phi-g)\l_j +\eta_{11}\phi^2\Omega_j^0 -i\eta_{1A}\phi\Omega_j^A -4i\partial_I(Xb)\Omega_j^I\Big) \nonumber\\ &&+\ft12 {i} \varepsilon^{ij} \bar{\epsilon}_i\gamma_b \,{\tilde{\cal F}}^{ba\, A} \Big( i\eta_{1A} (2X \l_j +\phi\Omega_j^0) + 2\eta_{AB} \Omega_j^B \Big) \nonumber\\ && +i\varepsilon^{ij}\bar{\epsilon}_i\gamma_b\Omega_j^0 \, {\tilde{\hat B}}{}^{({\rm z})\, ba} \nonumber\\ && -\ft14 i \bar{\epsilon}_i\gamma_b \, T^{ba\, ij} \Big[ 2 |X|^2 \Big( 2\eta_{11}\phi - {\rm Re}\, g \Big)\l_j \nonumber\\ &&\hspace{24mm} +\bar{X}\Big( 2\eta_{11}\phi^2\Omega_j^0 +\phi\bar{X}\partial_{\bar{I}}\bar{g}\,\Omega_j^I -4i{\rm Re}[\partial_I(Xb)]\Omega_j^I \Big)\Big] \nonumber\\ && +\ft{3}{2} i \bar\eta^i\gamma^a \Big[ 2|X|^2 \Big( 2\eta_{11}\phi - {\rm Re}\, g \Big)\l_i \nonumber\\ &&\hspace{12mm} +\bar{X}\Big( 2\eta_{11}\phi^2\Omega_i^0 +\phi\bar{X}\partial_{\bar{I}}\bar{g}\,\Omega_i^I -4i{\rm Re}[\partial_I(Xb)]\Omega_i^I \Big) \Big] +{\rm h.c.}\,. 
\end{eqnarray} The variation of the covariant fields ${\hat V}^{({\rm z})}_a$ and ${\hat B}^{({\rm z})}_{ab}$ equals \begin{eqnarray} \d {\hat V}^{({\rm z})}_a &=& i \varepsilon^{ij} \bar{\epsilon}_i \gamma_a \Big( 2 X \l_j + \phi \Omega^0_j \Big)^{({\rm z})} + i \bar{\epsilon}^iD_a\l_i -\ft18{i}\bar{\epsilon}_i \gamma_a\sigma\cdot T^{ij} \l_j - \ft12{i} \bar{\eta}^i \gamma_a \l_i + {\rm h.c.}\,, \nonumber\\ \d {\hat B}^{({\rm z})}_{ab} &=& -4\bar{\epsilon}^i\sigma_{ab}|X|^2 \Big( (2\eta_{11}\phi- {\rm Re}\, g )\l_i \Big){}^{({\rm z})} \nonumber\\ && -2\bar{\epsilon}^i\sigma_{ab}\, \phi^{({\rm z})} \bar{X} \Big( 4\eta_{11}\phi \Omega_i^0 + \bar{X} \partial_{\bar{I}}\bar{g}\,\Omega_i^I \Big) \\ && - \varepsilon^{ij}\bar{\epsilon}_i\gamma_{[a} D_{b]}\Big( 2X(2\eta_{11}\phi-g)\l_j +\eta_{11}\phi^2\Omega_j^0 -i\eta_{1A}\phi\Omega_j^A -4i\partial_I(Xb)\Omega_j^I\Big) \nonumber\\ && + i \varepsilon^{ij}\bar{\epsilon}_i\gamma_{[a} {\hat V}^{({\rm z})}_{b]} \Big( 2\eta_{11}( 2X\l_j +\phi\Omega_j^0) -i\eta_{1A}\Omega_j^A \Big) \nonumber\\ && + i \Big( \eta_{11} {\cal F}_{ab}(V) + \ft{1}{2} \eta_{1A} {\cal F}_{ab}^A \Big) \bar{\epsilon}^i\l_i \nonumber\\ && - \varepsilon^{ij} \bar\eta_i \sigma_{ab} \Big( 2X(2\eta_{11}\phi-g)\l_j +\eta_{11}\phi^2\Omega_j^0 -i\eta_{1A}\phi\Omega_j^A -4i\partial_I(Xb)\Omega_j^I\Big) + {\rm h.c.}\, .\nonumber \end{eqnarray} The same structure is repeated as one goes higher up in the central-charge hierarchy. It was already observed in \cite{vt1} that the transformations of the higher-$z$ fields involve objects both at the next and at the preceding level. The transformations of the basic vector-tensor fields as given in (\ref{momrules}) are special in this respect. They involve only the next level as there is no lower level. The consistency of this is ensured by the gauge transformations of the fields $V_\mu$ and $B_{\mu\nu}$, which allows for a truncation of the central charge hierarchy from below. \setcounter{equation}{0} \section{Invariant actions involving vector-tensor multiplets} In this section we present the construction of invariant actions for the vector-tensor multiplet, using the multiplet calculus described in section 2. We start by constructing a general linear multiplet depending on the vector-tensor fields and the background vector-multiplet components. {}From this linear multiplet we construct the associated supergravity actions. Their dual description in terms of vector multiplets alone, which requires the use of field equations, is the issue of the following section. \subsection{The linear multiplet} It is possible to form products of vector-tensor multiplets, using the background vector multiplets judiciously, so as to form $N=2$ linear multiplets. One starts by constructing the lowest component $L_{ij}$ of the linear multiplet in terms of vector-tensor fields as well as the background fields, which must have weights $w=2$ and $c=0$ and transform into a spinor doublet under $Q$-supersymmetry. We also note that $L_{ij}$ must transform as a real vector under chiral SU(2) transformations. The only vector-tensor component which transforms under SU(2) is the fermion $\l_i$. For the vector multiplets, only the fermions $\Omega_i^I$ and the auxiliary fields $Y_{ij}^I$ transform nontrivially under SU(2). 
Therefore, the most general possible linear multiplet must be based on an $L_{ij}$ of the following form \begin{eqnarray} L_{ij} &=& X{\cal A}\,\bar{\l}_i\l_j +\bar{X}\bar{{\cal A}}\,\varepsilon_{ik}\varepsilon_{jl}\bar{\l}^k\l^l +X{\cal B}_I\,\bar{\l}_{(i}\Omega_{j)}^I +\bar{X}\bar{{\cal B}}_{\bar{I}}\, \varepsilon_{ik}\varepsilon_{jl}\bar{\l}^{(k}\Omega^{Il)} \nonumber\\ & & +{\cal C}_{IJ}\,\bar{\Omega}_i^I\Omega^J_j +\bar{{\cal C}}_{\bar{I}\bar{J}}\, \varepsilon_{ik}\varepsilon_{jl}\bar{\Omega}^{Ik}\Omega^{Jl} +{\cal G}_IY_{ij}^I \,, \label{ansatz}\end{eqnarray} where ${\cal A}$, ${\cal B}_I$, ${\cal C}_{IJ}$ and ${\cal G}_I$ are functions of $\phi$, $X^I$ and $\bar{X}^I$. In this section the index $I$ does not take the value $I=1$. In order that $L_{ij}$ has weights $w=2$ and $c=0$, the functions $\cal A$ and ${\cal G}_I$ must have weights $w=c=0$, while ${\cal B}_I$ and ${\cal C}_{IJ}$ have weights $w=-c=-1$. Obviously, the reality condition on $L_{ij}$ requires that ${\cal G}_I$ be real. As before, we suppress the superscript zeroes of the central-charge vector multiplet for the sake of clarity. We also expect the linear multiplet to transform only under the central charge and not under the gauge transformations associated with the other vector multiplets, but this is not important for most of the construction. Requiring that $L_{ij}$ transforms into a spinor doublet as indicated in (\ref{linear}), puts stringent requirements on each of the functions ${\cal A}(\phi,X^I,\bar{X}^I)$, ${\cal B}_I(\phi,X^I,\bar{X}^I)$, ${\cal C}_{IJ}(\phi,X^I,\bar{X}^I)$ and ${\cal G}_I(\phi,X^I,\bar{X}^I)$, which take the form of coupled first-order, linear differential equations. These equations are exactly the same as in the rigid case, which were given in \cite{vt2}. We will not repeat them here but immediately present their solution, which is a linear combination of three distinct solutions, each with an independent physical interpretation. The most interesting of these is given as follows, \begin{eqnarray} [{\cal A}{}]_1 &=& \eta_{11}(\phi+i\zeta) -\ft12 g \,, \nonumber\\ {}[{\cal B}_I{}{}]_1 &=& -\ft12(\phi+i\zeta)\partial_I g -2i\partial_I b \,,\nonumber\\ {}[{\cal C}_{IJ}]_1 &=& -\ft12 i(\phi+i\zeta)\partial_I\partial_J(Xb) \,, \nonumber\\ {}[{\cal G}_I{}]_1 &=& {\rm Re}\Big\{ [\ft13\eta_{11}(\phi+i\zeta)^3 -\ft12 i\zeta(\phi+i\zeta)g]\d_I{}^0 +\ft12(\phi+i\zeta)X\partial_I(g\phi+4ib)\Big\} \,, \label{first}\end{eqnarray} where \begin{equation} \zeta(\phi, X^I,\bar X^I)=\frac{{\rm Im}(\phi g+4ib)} {2\eta_{11}\phi-{\rm Re}\,g} \,. \label{zetadef}\end{equation} In terms of the action, which will be discussed shortly, this solution provides the couplings which involve the vector-tensor fields. The remaining two solutions, which we discuss presently, give rise either to a total divergence or to interactions which involve only the background fields. The latter of these correspond to previously known results. 
The second solution takes the form, \begin{eqnarray} {}[{\cal A}{}]_2 &=& i\eta_{11}\zeta' -i\a \,, \nonumber\\ {}[{\cal B}_I{}]_2 &=& -\ft12 i\zeta'\partial_I g -2i\partial_I\gamma \,, \nonumber\\ {}[{\cal C}_{IJ}{}]_2 &=& \ft12\zeta'\partial_I\partial_J(Xb) \,,\nonumber\\ {}[{\cal G}_I{}]_2 &=& {\rm Re}\Big\{ 2i X \phi \partial_I \gamma + \ft i2 \zeta' X \phi \partial_I g - 2 \zeta' \partial_I (Xb)\Big\}\,, \label{second}\end{eqnarray} where $\gamma=\ft14 i\a_A{X^A}/{X}$ is a holomorphic homogeneous function of the background scalars $X^A$ and $X^0$; $\a$ and $\a_A$ are arbitrary real parameters. Furthermore \begin{equation} \zeta'(\phi, X^I,\bar X^I)=\frac{2\a\phi+4{\rm Re}\,\gamma} {2\eta_{11}\phi-{\rm Re}\,g} \,. \label{zpdef} \end{equation} Note that this solution could be concisely absorbed into the first solution by redefining $g\to g+2i\a$ and $b\to b+\gamma$. In fact, this second solution indicates that the functions $g$ and $b$ are actually defined modulo these shifts. In terms of the action, this ambiguity is analogous to the shift of the theta angle in an ordinary Yang-Mills theory. The third and final solution is given by \begin{eqnarray} {}[{\cal A}{}]_3 &=& 0 \,, \nonumber\\ {}[{\cal B}_I{}]_3 &=& 0 \,, \nonumber\\ {}[{\cal C}_{IJ}{}]_3 &=& -\ft18 i\partial_I\partial_J(f(X)/X) \,, \nonumber\\ {}[{\cal G}_I{}]_3 &=& -\ft12{\rm Im}\,\partial_I(f(X)/X ) \,, \label{third}\end{eqnarray} where $f(X)$ is a holomorphic function of $X^0$ and $X^A$, of degree 2. In terms of the action, this solution corresponds to interactions amongst the background vector multiplets alone. Since the possible vector multiplet self-couplings have been fully classified, this solution does not provide us with new information. The function $f(X)$ provides the well-known holomorphic prepotential for describing the background self-interactions. All solutions have in common that they are homogeneous functions of $X^I$ and $\bar X^I$: $\cal A$ and ${\cal G}_I$ are of degree 0 and ${\cal B}_I$ and ${\cal C}_{IJ}$ are of degree $-1$. This is a result of the fact that the field $\phi$ has $w=0$. Furthermore we note the identities, \begin{equation} X^I\,{\cal B}_I = X^I\,{\cal C}_{IJ}=0\,, \end{equation} which ensure that $L_{ij}$ is invariant under $S$-supersymmetry, in accord with (\ref{linear}). Now that we have determined the scalar triplet $L_{ij}$, in terms of the specific functions ${\cal A}(\phi,X^I,\bar{X}^I)$, ${\cal B}_I(\phi,X^I,\bar{X}^I)$, ${\cal C}_{IJ}(\phi,X^I,\bar{X}^I)$, and ${\cal G}_I(\phi,X^I,\bar{X}^I)$ given above, we can generate the remaining components of the linear multiplet, $\varphi_i,\, G$, and $E_\mu$, by varying (\ref{ansatz}) with respect to supersymmetry. Given the complexity of the transformation rule for $\l_i$ found in (\ref{momrules}), it is clear that a fair amount of work is involved in carrying out this process. However, since we are only interested in the bosonic part of the action, we need only the bosonic parts of $E_a$ and $G$, viz. (\ref{linaction}).
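As a small illustration of the identities noted above, the following sketch (ours, not part of the original text) checks $X^I{\cal B}_I=X^I{\cal C}_{IJ}=0$ for the first solution (\ref{first}). Since the scalar prefactor $\phi+i\zeta$ carries no index $I$, it can be treated as an inert symbol; the identities then reduce to Euler relations for the degree-zero functions $g$ and $b$. A single background multiplet, labelled $A=2$, is assumed, with $X$ identified with $X^0$:
\begin{verbatim}
# Symbolic check (ours) of X^I B_I = X^I C_IJ = 0 for the first solution:
# the prefactor (phi + i*zeta) carries no index I, so we keep it as an
# inert symbol s; only the Euler relations for g and b matter here.
import sympy as sp

X0, X2, s = sp.symbols('X0 X2 s')               # X^0 (= X), X^2, s = phi + i*zeta
e12, e22 = sp.symbols('eta12 eta22', real=True)

g = sp.I * e12 * X2 / X0                             # degree 0
b = -sp.Rational(1, 4) * sp.I * e22 * X2**2 / X0**2  # degree 0
Xs = [X0, X2]

B = [-sp.Rational(1, 2) * s * sp.diff(g, XI) - 2 * sp.I * sp.diff(b, XI)
     for XI in Xs]                                   # [B_I]_1
C = [[-sp.Rational(1, 2) * sp.I * s * sp.diff(X0 * b, XI, XJ)
      for XJ in Xs] for XI in Xs]                    # [C_IJ]_1

print(sp.simplify(sum(XI * BI for XI, BI in zip(Xs, B))))   # -> 0
print([sp.simplify(sum(Xs[i] * C[i][j] for i in range(2)))
       for j in range(2)])                                  # -> [0, 0]
\end{verbatim}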
The higher components of the linear multiplet are then given by \begin{eqnarray} \varphi^i &=& -\bar{X}(\hbox{\ooalign{$\displaystyle D$\cr$\hspace{.03in}/$}}\phi+i \hat V\!\!\llap/\,\,{}^{({\rm z})}) (\bar {\cal A}\l^i+\ft12 \bar{\cal B}_{\bar{I}}\Omega^{Ii}) +{\cal G}_I\hbox{\ooalign{$\displaystyle D$\cr$\hspace{.03in}/$}}\Omega^{Ii} \nonumber\\ & & -\ft{i}{2}\varepsilon^{ij}\sigma\cdot ({\cal F}(V)-i\phi{\cal F}^0)({\cal A}\l_j+\ft12 {\cal B}_I\Omega^I_j) \nonumber\\ & & +\ft12\varepsilon^{ij}\sigma\cdot{\cal F}^I(X{\cal B}_I\l_j+2{\cal C}_{IJ}\Omega_j^J) \nonumber\\ & & -\hbox{\ooalign{$\displaystyle D$\cr$\hspace{.03in}/$}}\bar{X}^I(\bar{X} \bar{\cal B}_{\bar{I}}\l^i +2\bar{\cal C}_{\bar{I}\bar{J}}\Omega^{Ji}) \nonumber\\ & & -|X|^2\phi^{({\rm z})}\varepsilon^{ij}(2{\cal A}\l_j+{\cal B}_I\Omega_j^I) \nonumber\\ & & +\ft12 Y^{Iij}\Big( (\partial_\phi {\cal G}_I)\l_j+(\partial_J {\cal G}_I)\Omega_j^J\Big) +{\rm 3\,fermion\ terms}\,, \nonumber\\ G &=&\bar X \bar{\cal A}\,(D_a\phi+i\hat V^{({\rm z})}_a) (D^a\phi+i{\hat V}^{a({\rm z})}) \nonumber\\ & & +2\bar{X}\bar{\cal B}_{\bar{I}}\,D_a\bar{X}^I (D^a\phi+i{\hat V}^{a ({\rm z})}) \nonumber\\ & & +4\bar{\cal C}_{\bar{I}\bar{J}}\,D_a\bar{X}^I\,D^a\bar{X}^J -2{\cal G}_I \,D_aD^a\bar{X}^I \nonumber\\ & & +\frac{1}{4X}({\cal F}(V)^--i\phi{\cal F}^{0-})_{ab} \big({\cal A}({\cal F}(V)^--i\phi{\cal F}^{0-}) +2i X{\cal B}_I{\cal F}^{I-}\big)^{ab} \nonumber\\ & & -{\cal C}_{IJ}{\cal F}^{I-}_{ab}{\cal F}^{J-ab} -4\bar{X}|X|^2 {\cal A} (\phi^{({\rm z})})^2 \nonumber\\ & & -\ft14(\partial_{(I}{\cal G}_{J)}+X^{-1}P_{(I}\,\partial_\phi {\cal G}_{J)})\, Y_{ij}^IY^{Jij}\nonumber\\ & & -\ft12 {\cal G}_I \,{\cal F}^{I+}_{ab}\, T^{ab}_{ij}\varepsilon^{ij} +{\rm fermion\ terms}\,,\nonumber\\ E_a &=& {\rm Re}\Big(-4|X|^2\phi^{({\rm z})} (\bar{\cal A}\,(D_a\phi+i {\hat V}_a^{({\rm z})}) + {\cal B}_I \,D_a X^I) \nonumber\\ & & \hspace{.3in} -2i(D^b\phi + i {\hat V}^{({\rm z})\, b})({\cal A}\,({\cal F}(V)^-_{ab} -i \phi{\cal F}_{ab}^{0-}) +iX {\cal B}_I{\cal F}_{ab}^{I-}) \nonumber\\ & & \hspace{.3in} -2D^b X^I(i{\cal B}_I \,({\cal F}(V)_{ab}^- -i \phi{\cal F}_{ab}^{0-}) -4{\cal C}_{IJ}\,{\cal F}_{ab}^{J-}) \nonumber\\ & & \hspace{.3in} -2 {\cal G}_I \,D^b({\cal F}_{ab}^{-I} -\ft14 \bar{X}^I T^{ij}_{ab} \varepsilon_{ij}) \Big) + {\rm fermion\ terms} \,. \label{lincomponents} \end{eqnarray} Here we used the notation \begin{equation} P_I = -\ft12\phi\,\d_I{}^0 +i {{\rm Im}\Big( \phi X\partial_I g +4i\partial_I(Xb)\Big)\over 2(2\eta_{11} \phi - {\rm Re}\, g)} \,. \label{epdef} \end{equation} The appearance of terms containing $T_{ab}^{ij}$ may seem strange because this field does not appear in the transformation rules for $\l_i$ and $\Omega_i$. However, this field appears in the variation of $\hbox{\ooalign{$\displaystyle D$\cr$\hspace{.03in}/$}} \Omega_i$ and in the Bianchi identities for ${\cal F}^I_{ab}$, which have to be used to obtain $G$ and $E_a$. Having derived the complete linear multiplet we can construct the action. \subsection{The action} Now we want to use the linear multiplet components derived above in the action formula (\ref{linaction}). Since this linear multiplet transforms under the central charge we need to use the central-charge vector multiplet in the action formula, as explained in section 2. This yields an action that is both invariant under local supersymmetry and local gauge transformations. 
Carrying out this calculation we note the following term in the Lagrangian density, \begin{equation} {\cal L} = 4 e \bar X{{\cal C}}_{IJ}\, D^a X^I D_a X^J -2e {\cal G}_I\, X\,D_a D^a \bar{X}^I \cdots \,, \end{equation} which we rewrite by splitting off a total derivative. This leads to derivatives of the function ${\cal G}_I$, which we rewrite using its explicit form (or the differential equations of which it is a solution). After this manipulation, the bosonic terms of the full action read, \begin{eqnarray} e^{-1} {\cal L} &=& -2 {\cal G}_I\, X \bar{X}^I (\ft16 {\cal R} - D)\nonumber\\ & & + |X|^2{\cal A}\,(\partial_\mu \phi -i{\hat V}^{({\rm z})}_\mu)^2 +2|X|^2{\cal B}_I\,D^\mu X^I (\partial_\mu\phi -i{\hat V}_\mu^{({\rm z})}) \nonumber\\ & & - 4 X {{\cal C}}_{IJ}\, D^\mu X^I D_\mu \bar{X}^J -2 \bar{X}(X {\cal B}_I + {\cal A}\, P_I) \partial_\mu \phi \, D^\mu X^I\nonumber\\ & &-2 X ({\cal B}_I \,P_J D_\mu X^I + \bar {\cal B}_{\bar I}\, \bar P_{\bar J}\,D_\mu \bar{X}^I)\, D^\mu \bar{X}^J + 2 {\cal G}_I\, D_\mu X \,D^\mu \bar{X}^I \nonumber\\ & & + {\cal A}\,({\cal F}(V)^{-\, \mu\nu}-i\phi{\cal F}^{-\, 0\mu\nu})\Big( \ft14({\cal F}(V)_{\mu\nu}^--i\phi{\cal F}^{-\, 0}_{\mu\nu}) +iW^{0}_{\mu}(\partial_\nu \phi -i{\hat V}_\nu^{({\rm z})})\Big) \nonumber\\ & & +iX{\cal B}_I\,{\cal F}^{-\, I \mu\nu}\Big( \ft12({\cal F}(V)_{\mu\nu}^--i\phi{\cal F}^{-\, 0}_{\mu\nu}) +iW^0_\mu (\partial_\nu \phi - i{\hat V}_\nu^{({\rm z})})\Big) \nonumber\\ & & +i{\cal B}_I\,({\cal F}(V)^{-\, \mu\nu} - i\phi {\cal F}^{-\, 0\mu\nu})W^0_\mu D_\nu X^I\nonumber\\ & & -{\cal C}_{IJ}\,{\cal F}^{I-\mu\nu}\Big( X {\cal F}^{J-}_{\mu\nu} + 4 W^0_\mu D_\nu X^J\Big) \nonumber \\ & & -|X|^2 {\cal A}\,(W_\mu^{0}\,W^{\mu\,0} + 4|X|^2)(\phi^{({\rm z})})^2 \nonumber\\ & & -\ft14(X\,\partial_{(I}{\cal G}_{J)} +P_{(I}\,\partial_\phi{\cal G}_{J)})Y_{ij}^IY^{Jij} -\ft14{\cal G}_I\,Y_{ij}^0 Y^{Iij}\nonumber\\ & &- \ft12 {\cal G}_I X {\cal F}^{I+}_{ab}\, T^{ab}_{ij}\varepsilon^{ij} +{\cal G}_I\,W^{0}_a \,D_b( {\cal F}^{-I\,ab} - \ft14 \bar{X}^I T^{ab\,ij} \varepsilon_{ij} ) + {\rm h.c.}\,, \label{lagrangian} \end{eqnarray} where we have made the terms proportional to $W^0_\mu$ in the covariant derivatives explicit. The above result describes the coupling of a vector-tensor multiplet to $n$ vector multiplets. Note that each term involves a factor of the functions ${\cal A}(\phi,\,X^I,\,\bar{X}^I)$, ${\cal B}_I(\phi,\,X^I,\,\bar{X}^I)$, ${\cal C}_{IJ}(\phi,\,X^I,\,\bar{X}^I)$, ${\cal G}_I(\phi,\,X^I,\,\bar{X}^I)$ or $P_I(\phi,\,X^I,\,\bar{X}^I)$, which were given explicitly in the previous subsection. This form of the action would be a suitable starting point to consider the breaking of superconformal gravity into Poincar\'e gravity. An additional compensator, e.g.\ a hypermultiplet, would be needed to be able to define a gauge for the dilatations. The procedure would then be completely analogous to the case described in \cite{DWLVP}. However, it is not the purpose of this paper to go into the details of this. In the general case described above, the functions ${\cal A}$, ${\cal B}_I$, ${\cal C}_{IJ}$ and ${\cal G}_I$, which define the Lagrangian, are linear superpositions of three distinct terms, one of which describes the local couplings of the vector-tensor multiplet components, another which is a total derivative, and one which codifies the self-interactions of the background.
As a result of this, the Lagrangian (\ref{lagrangian}) can be written as a sum of three analogous pieces: a vector-tensor piece, a total-derivative piece, and a background piece. Now that we have given the action in terms of the functions ${\cal A}$, ${\cal B}_I$, ${\cal C}_{IJ}$ and ${\cal G}_I$, it is instructive to give the solutions for the two inequivalent representations described in section \ref{s:vttrans}. \vspace{.1in} \noindent{\it The nonlinear vector-tensor multiplet:}\\ As described above, when the parameter $\eta_{11}$ does not vanish, the tensor field involves a coupling to the Chern-Simons form $V\wedge {\rm d}V$, which is quadratic in terms of vector-tensor fields. Consequently, the corresponding transformation rules contain significant nonlinearities. As was shown in \cite{vt2}, in this case it is possible to remove the parameter $\eta_{1A}$, and therefore the $V\wedge {\rm d} W^A$ Chern-Simons couplings. Without loss of generality, we then define $\eta_{11}=1$ and $\eta_{1A}=0$. In this case the functions ${\cal A}(\phi,X^I,\bar{X}^I)$, ${\cal B}_I(\phi,X^I,\bar{X}^I)$, ${\cal C}_{IJ}(\phi,X^I,\bar{X}^I)$, and ${\cal G}_I(\phi,X^I,\bar{X}^I)$ which define the linear multiplet and, more importantly, the vector-tensor Lagrangian (\ref{lagrangian}) are given by the following expressions \begin{eqnarray} {\cal A} &=& \phi+i\phi^{-1}(b+\bar{b})\,, \nonumber\\ {\cal B}_I &=& -2i\partial_Ib \,,\nonumber\\ {\cal C}_{IJ} &=& -\ft12 i(\phi +i\phi^{-1} (b+\bar{b}))\partial_I\partial_J(Xb) -\ft18 i\partial_I\partial_J(X^{-1}f) \,,\nonumber\\ {\cal G}_I &=& {\rm Re}\Big(\ft13\phi^3\,\d_I{}^0 +2i\phi X\partial_I b -2\phi^{-1}(b+\bar{b})\,\partial_I(Xb)\Big) -\ft12{\rm Im}\,\partial_I\Big( X^{-1}f\Big) \,. \label{nonlin}\end{eqnarray} For the sake of clarity, we have absorbed the parameters $\a$ and $\a_A$ into the functions $b$ and $g$ in the manner described immediately after equation (\ref{zpdef}). Substituting these functions in the Lagrangian (\ref{lagrangian}), it is easy to see that the action contains, besides the total derivative and terms that depend only on the background vector multiplet fields, a cubic part and a linear part in vector-tensor fields. This is the immediate generalization to a background with more than one vector multiplet of the Lagrangian described in \cite{vt1}. \vspace{.1in} \noindent{\it The linear vector-tensor multiplet:}\\ As described previously, if $\eta_{11}=0$, implying the absence of the $V\wedge {\rm d}V$ Chern-Simons coupling, we obtain a vector-tensor multiplet which is distinct from the nonlinear case just discussed. In this case, it is not possible to perform a field redefinition to remove all of the $\eta_{1A}$ parameters and the supersymmetry transformation rules are linear in terms of the vector-tensor component fields.
The functions ${\cal A}(\phi,X^I,\bar{X}^I)$, ${\cal B}_I(\phi,X^I,\bar{X}^I)$, ${\cal C}_{IJ}(\phi,X^I,\bar{X}^I)$, and ${\cal G}_I(\phi,X^I,\bar{X}^I)$ which define the linear multiplet and, more importantly, the vector-tensor Lagrangian (\ref{lagrangian}) are now given by the following expressions \begin{eqnarray} {\cal A} &=& -\ft12 g \,,\nonumber\\ {\cal B}_I &=& -\frac{1}{g+\bar{g}}\Big( \phi\bar{g}\partial_Ig +2i(g+\bar{g})\stackrel{\leftrightarrow}{\partial}_I(b+\bar{b})\Big)\,, \nonumber\\ \nonumber\\ {\cal C}_{IJ} &=& -\frac{1}{g+\bar{g}} \Big( i\phi\bar{g}+2(b+\bar{b})\Big)\partial_I\partial_J(Xb) -\ft18 i\partial_I\partial_J(X^{-1}f)\,, \nonumber\\ {\cal G}_I &=& \frac{1}{g+\bar{g}}{\rm Re}\Big\{ \phi\bar{g}X\partial_I(\phi g+4ib) -2i(b+\bar{b})\partial_I[X(\phi g+4ib)]\Big\} \,. \end{eqnarray} As above, for the sake of clarity we have absorbed the parameters $\a$ and $\a_A$ into the functions $b$ and $g$ in the manner described immediately after equation (\ref{zpdef}). Substituting these functions into the Lagrangian (\ref{lagrangian}), one obtains a Lagrangian that contains, besides the total derivative terms and a part that depends exclusively on the background mentioned above, a quadratic part and a linear part in vector-tensor fields. \setcounter{equation}{0} \section{Dual versions of vector-tensor actions} As we already mentioned in the introduction, a vector-tensor multiplet is classically equivalent to a vector multiplet. The theory which we have presented, involving one vector-tensor multiplet and $n$ vector multiplets is classically equivalent to a theory involving $n+1$ vector multiplets. Since these latter theories are well understood, it is of interest to determine what subset of vector multiplet theories are classically equivalent to vector-tensor theories. Furthermore, low-energy effective string Lagrangians with $N=2$ supersymmetry are usually described in terms of vector multiplets, such that by going to the vector multiplet language one can more easily verify which string theories are described by the vector-tensor multiplets we constructed above. A significant restriction along these lines has to do with the K\"ahler spaces on which the scalar fields of the theory may live. In the case of $N=2$ vector multiplets these consist of ``special K\"ahler" spaces, and the associated geometry is known as special geometry. For the case of effective Lagrangians corresponding to heterotic $N=2$ supersymmetric string compactifications, this space must contain, at least at weak string coupling, an SU(1,1)/U(1) coset factor parametrized in terms of the complex scalar corresponding to the axion/dilaton complex. According to a well-known theorem \cite{FVP} this uniquely specifies the special K\"ahler space. Perhaps not too surprisingly, the observations made in \cite{vt2} are not altered by going to local supersymmetry. Thus we will find that the vector-tensor multiplets we have been studying in the present article, fail to exhibit the SU(1,1)/U(1) factor, at least if one insists that it is the vector-tensor scalar and tensor field (the latter after a duality transformation, to be discussed below) that parametrize this subspace. Therefore it is impossible to associate this scalar and the tensor field with the (perturbative) heterotic dilaton-axion complex. However, they do play a natural role in the description of the non-perturbative heterotic string effects we alluded to in the introduction. 
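To make the linearity criterion just invoked concrete, the following sketch (ours, anticipating the prepotentials derived in this section) checks that the $\eta_{11}$ and $\eta_{1A}$ terms obstruct linearity in $X^1$, whereas a prepotential built from the $\eta_{AB}$ term alone is linear in $X^1$, as the theorem of \cite{FVP} requires for an SU(1,1)/U(1) factor. A single background field, labelled $A=2$, is assumed, and the coefficient names are illustrative:
\begin{verbatim}
# Illustration (ours) of the linearity criterion: an SU(1,1)/U(1) factor
# parametrized by X^1/X^0 requires d^2 F / d(X^1)^2 = 0.
import sympy as sp

X0, X1, X2 = sp.symbols('X0 X1 X2')
e11, e12, e22 = sp.symbols('eta11 eta12 eta22', real=True)

# prepotential of the vector-tensor type derived below (one field A = 2)
F_vt = -(sp.Rational(1, 3) * e11 * X1**3
         + sp.Rational(1, 2) * e12 * X1**2 * X2
         + e22 * X1 * X2**2) / X0
# prepotential linear in X^1 (dilaton-axion form)
F_lin = -e22 * X1 * X2**2 / X0

print(sp.simplify(sp.diff(F_vt, X1, 2)))   # -> -(2*eta11*X1 + eta12*X2)/X0
print(sp.simplify(sp.diff(F_lin, X1, 2)))  # -> 0, linear in X^1 as required
\end{verbatim}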
One goes about constructing the dual vector multiplet formulation, in the usual manner, by introducing a Lagrange multiplier field $a$, which, upon integration, enforces the Bianchi identity on the field strength $H_\mu$. The relevant term to add to the Lagrangian is therefore \begin{eqnarray} e^{-1} {\cal L}(a) &=& a\, D_\mu H^\mu \nonumber\\ &&+ \ft14 i a \Big[\eta_{11}\, \tilde{\cal F}_{\mu\nu}(V)\, {\cal F}^{\mu\nu}(V) +\eta_{1A}\,\tilde {\cal F}_{\mu\nu}(V)\, {\cal F}^{\mu\nu A} +\eta_{AB}\, \tilde{\cal F}_{\mu\nu}^{A}\, {\cal F}^{\mu\nu B} + 2 \tilde {\cal F}^{0}_{\mu\nu}\hat B^{\mu\nu\,({\rm z})}\Big] \nonumber\\ &&+\ft1{16}i a\Big[T_{ij}^{\mu\nu}\Big( 2\eta_{11}\, \phi X\,{\cal F}_{\mu\nu}(V) +\eta_{1A}(X^A \, {\cal F}_{\mu\nu}(V) +i \phi X {\cal F}_{\mu\nu}^A) +2\eta_{AB}\, X^A\,{\cal F}_{\mu\nu}^B \nonumber \\ && \hspace{1.8cm} + 2X \hat B^{({\rm z})}_{\mu\nu}+X \, {\cal F}^{0}_{\mu\nu} (\eta_{11} \phi^2 -\phi\,g -4ib)\Big) - {\rm h.c.}\Big] \,. \label{aterm}\end{eqnarray} Note that we have dropped the explicit fermionic terms, as we will do in the remainder of this section. Including the Lagrange multiplier term, we treat $H_\mu$ as unconstrained and integrate it out in the action, thereby trading the single on-shell degree of freedom represented by $B_{\mu\nu}$ for the real scalar $a$. Doing this, we obtain a dual theory involving only vector multiplets. To perform these operations, it is instructive to note that all occurrences of $H_\mu$ in (\ref{lagrangian}) and (\ref{aterm}) are most conveniently written in terms of $\hat V_\mu^{({\rm z})}$, which can be done using (\ref{vzbz}). Because we are suppressing the fermions in what follows, we will henceforth drop the caret on $V_\mu^{(\rm z)}$. All such terms can then be collected, and written as follows, \begin{equation} {\cal L}( V_\mu^{({\rm z})})= \ft14e(2\eta_{11}\phi-{\rm Re}\,g) \Big( W^{0\,\mu} W^{0\,\nu}-(W_\l^0 W^{0\,\l}+4|X|^2)g^{\mu\nu}\Big) \Big( V_\mu^{({\rm z})} V_\nu^{({\rm z})} -2V_\mu^{({\rm z})}\partial_\nu(a-\zeta)\Big) \,, \label{lvz}\end{equation} where $\zeta$ was defined in (\ref{zetadef}). It is interesting how the terms involving $V_\mu^{({\rm z})}$ factorize into the form given in (\ref{lvz}). The equation of motion for $H_\mu$ is conveniently written in terms of $V_\mu^{({\rm z})}$, which follows immediately from (\ref{lvz}). It is given by the following simple expression, \begin{equation} V_\mu^{({\rm z})}=\partial_\mu(a-\zeta) \,. \label{vzeom} \end{equation} We also impose the equations of motion for the auxiliary fields, $\phi^{({\rm z})}=Y_{ij}^I=0$ (up to fermionic terms). After substituting these solutions, we manipulate the result into the familiar form for the bosonic Lagrangian involving vector multiplets, \begin{eqnarray} e^{-1} {\cal L} &=& \ft12i (F_I \bar X^I - X^I \bar F_I) \Big( -\ft16 {\cal R} + D \Big) +\ft12 i\big({\cal D}_\mu F_I\,{\cal D}^\mu\bar{X}^I -{\cal D}_\mu X^I\,{\cal D}^\mu\bar{F}_I\big)\nonumber \\ &&- \ft18 i {\bar F}_{IJ} F_{\mu\nu}^{+ I} F^{+\mu\nu\, J} - \ft1{16}i(F_I - X^J\bar F_{JI}) F_{\mu\nu}^{+ I} T^{\mu\nu}_{ij} \varepsilon^{ij} \nonumber\\ &&+ \ft{1}{128}i(F_I - X^J\bar F_{JI})X^I \Big( T_{\mu\nu ij} \varepsilon^{ij} \Big)^2 + {\rm h.c.} \, , \label{aform} \end{eqnarray} characterized by a holomorphic function $F(X^0,X^1,X^A)$, which is homogeneous of degree two. Here the field strengths are equal to $F_{\mu\nu}^I = 2\partial_{[\mu}W^I_{\nu]}- g f_{JK}{}^{\!I}W_\mu^JW_\nu^K$. In (\ref{aform}), a subscript $I$ denotes differentiation with respect to $X^I$.
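As a consistency check on this structure (ours, not part of the original text), the degree-two homogeneity of $F$ implies the Euler relations $X^I F_I = 2F$ and $X^J F_{JI} = F_I$, which underlie the form of (\ref{aform}); the sketch below verifies them for a sample prepotential of the type obtained below, with illustrative coefficients:
\begin{verbatim}
# Euler relations (ours) for a degree-two homogeneous prepotential:
#   X^I F_I = 2 F   and   X^J F_{JI} = F_I .
import sympy as sp

X0, X1, X2 = sp.symbols('X0 X1 X2')
e11, e22, alpha = sp.symbols('eta11 eta22 alpha', real=True)

F = -(sp.Rational(1, 3) * e11 * X1**3 + e22 * X1 * X2**2) / X0 - alpha * X1**2
Xs = [X0, X1, X2]
F_I = [sp.diff(F, XI) for XI in Xs]

print(sp.simplify(sum(XI * FI for XI, FI in zip(Xs, F_I)) - 2 * F))   # -> 0
print([sp.simplify(sum(XJ * sp.diff(FI, XJ) for XJ in Xs) - FI)
       for FI in F_I])                                                # -> [0, 0, 0]
\end{verbatim}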
The natural bosons in the dual theory are found to be \begin{eqnarray} X^1 &=& X^0\Big((a-\zeta)+i\phi\Big)\,, \nonumber\\ W_\mu^1 &=& V_\mu+(a-\zeta)W_\mu^0 \,, \label{defW1} \end{eqnarray} and one can check that these transform as components of a common vector multiplet. For the general case, the dual theory obtained in this manner is described by the following holomorphic prepotential, \begin{eqnarray} F(X^0, X^1,X^A) &=& -\frac{1}{X^0}\Big( \ft13\eta_{11}X^1X^1X^1 +\ft12\eta_{1A}X^1X^1X^A +\eta_{AB}X^1X^AX^B \Big) \nonumber\\ & & -\a X^1X^1 +\a_A X^1X^A +f(X^0,X^A) \,. \label{prepot}\end{eqnarray} The quadratic terms proportional to $\a$ and $\a_A$ (defined in section~4.1) give rise to total derivatives since their coefficients are real. The term involving the function $f(X^0,X^A)$ represents the self-interactions of the background vector multiplets. The first three terms in (\ref{prepot}) encode the couplings of the erstwhile vector-tensor fields, $\phi$ and $a$, and it is these in which we are most interested. As mentioned above, it is relevant to investigate whether the K\"ahler space described by this prepotential function can contain an SU(1,1)/U(1) factor parametrized by the field $X^1/X^0$. According to the theorem of \cite{FVP}, this requires that $X^1/X^0$ appears linearly in the prepotential. This is obviously not the case for (\ref{prepot}), as we have quadratic and cubic terms which cannot be removed by absorbing some of the other fields into the would-be dilaton field $X^1/X^0$. As discussed earlier in this paper, the best one can do is to remove {\it either} $\eta_{11}$ or $\eta_{1A}$. There exists an obstruction to removing both of these. We recall that these parameters are related to the Chern-Simons couplings of the tensor field in the dual formulation. The obstruction to removing the unwanted terms in the prepotential derives from the inability to formulate an interacting off-shell vector-tensor theory without any such Chern-Simons couplings. In the present supergravity context it is important to note that the duality transformation we just described does not interfere with the fields of the Weyl multiplet. This can be seen by noting that (\ref{lvz}), (\ref{vzeom}) and (\ref{defW1}) are completely identical to the relations found in \cite{vt2} in the rigid supersymmetric case. This implies that the Weyl multiplet is not involved in the duality transformation and can be kept off-shell. The vector multiplets are not realized off-shell after the duality transformation, but the auxiliary fields $Y^I_{ij}$ can be reinstated afterwards. In this respect it is instructive to compare our results to the analysis performed in \cite{siebel}. There, the most general vector-multiplet theories admitting a (reverse) dualization into an antisymmetric tensor theory were considered. They were found to precisely comprise the cases described here, plus the $\eta_{11}=0$, $\eta_{1A}=0$ case which is relevant for weakly coupled heterotic strings. However, in this last case the dualization into an antisymmetric tensor theory can no longer be carried out with the Weyl multiplet as a spectator. In particular, one is forced to first eliminate the U(1) chiral gauge field $A_\mu$, which in the Poincar\'e theory plays the role of an auxiliary field. Irrespective of these considerations, we note that the results we obtained in this article are a concise description of two very different situations.
As described in detail in section 3, depending on whether the parameter $\eta_{11}$ is vanishing or not, indicating the absence or presence, respectively, of a $V\wedge {\rm d}V$ Chern-Simons coupling to the tensor field, the theory takes on very distinct characters. It is instructive, then, to summarize our results independently for each of these two cases. For the nonlinear vector-tensor multiplet, we obtain a dual description involving only vector multiplets, characterized by the following holomorphic prepotential, \begin{eqnarray} F &=& -\frac{X^1}{X^0}\Big( \ft13 \eta_{11} X^1X^1 +\eta_{AB}X^AX^B \Big) -\a X^1X^1 +\a_A X^1X^A +f(X^0,X^A) \,. \end{eqnarray} As already mentioned above, the quadratic terms proportional to $\a$ and $\a_A$ represent total derivatives, and the last term involves the background self-interactions. Notice that in this case the prepotential is cubic in $X^1$. No higher-dimensional tensor theory is known that gives rise to this coupling. For the linear vector-tensor multiplet the dual description in terms of only vector multiplets is characterized by the following prepotential, \begin{eqnarray} F &=& -\frac{X^1}{X^0}\Big( \ft12\eta_{1A}X^1X^A +\eta_{AB}X^AX^B \Big) -\a X^1X^1 +\a_A X^1X^A +f(X^0,X^A)\,. \nonumber\\ \end{eqnarray} Again, as discussed above, the quadratic terms involving $\a$ and $\a_A$ represent total derivatives, while the last term involves the background self-interactions. Notice that in this case the prepotential has a term quadratic in $X^1$, which cannot be suppressed. Such a term also arises from the reduction of six-dimensional tensor multiplets to four dimensions. In that case, the presence of the quadratic term is inevitable, because it originates from the kinetic term of the tensor field \cite{FerMinSag}. Observe that we have at least three abelian vector fields coupling to the vector-tensor multiplet, namely $W^0_\mu$, $W^1_\mu$ and $\eta_{1A}W^A_\mu$. The work presented in this paper represents an exhaustive analysis of the $N=2$ vector-tensor multiplet coupled to supergravity and a number of background vector multiplets. One of these vector multiplets provides the gauge field that couples to the central charge. Although we considered only a single vector-tensor multiplet, our methods can be straightforwardly applied to theories where several of these multiplets are present. We have presented the complete and general superconformal transformation rules in this context, and have shown that these actually include two distinct cases, one of which is nonlinear in the vector-tensor components, and the other of which is linear. The difference between these two cases is encoded in the coefficients of the Chern-Simons couplings, denoted by $\eta_{IJ}$. Furthermore we have constructed a supersymmetric action for this system, and exhibited its bosonic part. The dual descriptions in terms of vector multiplets have been obtained, and the respective prepotential functions exhibited. \vspace{1cm} \noindent {\bf Acknowledgement} \noindent We thank F. Brandt, N. Dragon, E. Ivanov, S.M. Kuzenko, B.A. Ovrut and E. Sokatchev for informative discussions.\\ Work supported by the European Commission TMR programme ERBFMRX-CT96-0045, in which P.~C. is associated to Leuven, B.~d.W. and B.~K. to Utrecht, M.~F. to Berlin and P.~T. to Torino. \\ P.~C. and R.~S. thank the FWO (Belgium) and B.~K. and M.~F. the FOM (The Netherlands) for financial support. P.~T. is a postdoctoral fellow in the above TMR programme. \pagebreak
\section{Introduction} The modified Newtonian dynamics (MOND) proposes that the law of inertia or gravity takes on a specific non-Newtonian form at accelerations well below a definite universal value (Milgrom 1983a,b,c). As an alternative to dark matter, the simple MOND formula with one new fixed parameter (the critical acceleration $a_o$) has been quite successful in predicting the form of spiral galaxy rotation curves from the observed distribution of detectable matter (Begeman et al. 1990, Sanders 1996) as well as the magnitude of the conventional mass discrepancy in galaxy clusters (Milgrom 1983c) and in superclusters (Milgrom 1997). MOND subsumes the global scaling relationships for galaxies-- the Tully-Fisher relation for spirals and the Faber-Jackson relation for ellipticals, as well as an equivalent gas temperature-mass relation for clusters of galaxies (Sanders 1994). MOND stabilizes rotationally supported thin disks (Brada 1996), explains the presence of a maximum critical surface density in spiral galaxies and ellipticals (Milgrom 1983b), and predicts the observed large conventional mass discrepancy in low-surface brightness systems (Milgrom 1983b, McGaugh \& de Blok 1997). However, an argument often directed against MOND is that, as a theory, it is ad hoc and incomplete. In particular, MOND makes no predictions with respect to cosmology and cosmogony. Even though the near coincidence of the empirically determined $a_o$ with $cH_o$ is suggestive of a cosmological basis for MOND, the structure of that cosmology is not at all evident. The naive expectation is that a hypothesis which posits such a radical departure from Newtonian dynamics (and presumably General Relativity) on the scale of galaxies and clusters of galaxies would surely lead to a highly unconventional cosmology which would be inconsistent with the phenomenological successes of the standard Big Bang-- most notably the nucleosynthesis of the light elements in their observed abundances and the large and small scale isotropy of the Universe at the epoch of recombination. Such criticism cannot be addressed by an incomplete theory. The reason for this incompleteness is that MOND at the present time lacks a relativistic extension; there is no credible general theory of gravity which predicts the MOND phenomenology in the appropriate limit. In fact, non-standard scalar-tensor theories have been proposed as a theoretical underpinning of MOND (Bekenstein and Milgrom 1984, Bekenstein 1988, Romatka 1992, Sanders 1997). Two of these, phase-coupling gravity (Bekenstein 1988) and stratified (preferred frame) theories with aquadratic scalar field Lagrangians (Bekenstein \& Milgrom 1984, Sanders 1997), do yield sensible cosmologies-- isotropic and homogeneous and similar to the low-density Friedmann models (Sanders 1989, Sanders 1997). Although such theories do have the considerable advantage of predictive power on scales other than extra-galactic, the fact is that they are unnatural; these non-standard scalar-tensor theories are as contrived as the ad hoc modification of Newton's laws which they presume to replace. Thus it is perhaps premature to work out fully the cosmological implications of such complicated and tentative theories. Must, then, considerations of MOND cosmology be postponed until the final theory is in place? Even before further theoretical developments, it may be possible to draw some preliminary conclusions about a MOND universe by considering a finite expanding spherical region.
It is well-known that the application of Newtonian dynamics to such a region leads to the Friedmann equation for the evolution of the cosmic scale factor. Even the cosmological constant can be included as an additional fluid with negative pressure. It might be expected that some insight into a pure MOND cosmology might be gained by such a procedure using Milgrom's formula instead of Newton's. An interesting start in this direction was made by Felten (1984). He pointed out that with MOND, the physical size of the expanding region cannot be factored out. This means that uniform expansion of a spherical region is not possible; that the dynamical history of such a region in the Universe depends upon its physical size. This would suggest that an isotropic and homogeneous universe, as described by the Robertson-Walker (RW) metric, is not possible in the context of MOND. Moreover, due to the effective logarithmic potential implied by MOND, any finite-size region will re-collapse in a finite time. The universe, out to the present horizon, will eventually re-collapse regardless of its mean density. In a low-density Universe a region with a present size of 20 to 30 Mpc would just now be turning around, and this, as stressed by Felten, is roughly the observed scale of large-scale structure-- voids and superclusters. In the present paper I continue the discussion of Felten on MOND cosmology in the context of finite expanding spherical regions. Three assumptions underlie this discussion, all of which were more or less implicit in the work of Felten. The first is that the dynamics of such a region are not influenced by the exterior universe-- that there is, in effect, an equivalent of the Birkhoff theorem for the relativistic theory of MOND. This assumption is the most shaky. Scalar-tensor theories of modified dynamics violate the strong equivalence principle, which means that no dynamical system is isolated from its environment. The time-variation of the gravitational constant due to universal expansion is one example of the possible effect of the rest of the Universe on the spherical region. Here the assumption is that any such effects which may be present in the final theory will play a minor role in the dynamical history of the finite volume. The second assumption is that the modified dynamics holds for all accelerations below $a_o$-- that there is no return to Newtonian attraction or inertia at very much lower accelerations, as in cosmological PCG. This then will be an exploration of the cosmology of pure MOND. Finally, it is assumed that the critical acceleration $a_o$ is independent of cosmic time (not true in cosmological PCG). The fact that $a_o\approx cH_o$ would suggest that $a_o$ is time dependent (as is the Hubble parameter). However, this expression could have one of several meanings, as pointed out by Milgrom (1994). One possible basis is that $a_o\approx c{\sqrt{\Lambda}}$ where $\Lambda \approx {H_o}^2$ is the cosmological constant. Such an interpretation would be consistent with the assumption of no time variation of $a_o$. Alternatively, the numerical coincidence could arise from anthropic considerations: as will be shown below, structure develops when $cH\rightarrow a_o$. Having made these assumptions, I re-derive Felten's expression for the evolution of an isolated spherical region dominated by non-relativistic, pressure-less matter. I then demonstrate that, because of the acceleration threshold, modified dynamics can only be valid inside a critical radius $r_c$.
This critical radius expands faster than the scale factor and the horizon, so that in the early universe the size of the region in which MOND applies is smaller than the horizon scale. This is the principal result of the present paper. Friedmann cosmology, characterized by uniform expansion, applies on the scale of the horizon, while MOND, in which the expansion is highly non-uniform, applies on sub-horizon scales. Therefore, the usual Friedmann equation is valid for evolution of the universal scale factor, but the Felten equation is valid for spherical regions smaller than $r_c$. This means that, at any epoch, while the Universe overall is homogeneous, density inhomogeneities should be present on the scale of $r_c$. Only recently in the history of the Universe has the size of the region dominated by MOND expanded to include a significant fraction of the observable Universe, $c/H_o$ ($r_c$ at present depends upon the value of the cosmological constant); i.e., the Universe on large scale has only become ``MONDIAN'' at late cosmic time. These results remain qualitatively the same when radiation is included. In the early universe, at the epoch of nucleosynthesis, MOND regions are very much smaller than the horizon. When pressure gradient forces are considered, the density and expansion of the universe remain highly uniform during the radiation-dominated era and identical to that of the standard hot Big Bang, so all results concerning primordial nucleosynthesis are retained. Moreover, MOND-induced inhomogeneities at the epoch of recombination are many orders of magnitude smaller than those implied by the observed fluctuations in the background radiation. Regions of larger comoving size and mass enter the regime of modified dynamics at later times. Re-collapse of finite size regions dominated by modified dynamics proceeds rapidly once non-relativistic matter dominates the energy density, even in a very low density universe. This is due to the effective logarithmic gravitational potential implied by MOND. At the epoch of matter-radiation equality the mass enclosed within a MOND-dominated region is $\approx 10^9$ M$_\odot$, comparable to a low mass galaxy. This gives a preferred mass scale to the first objects which collapse and virialize and may explain why galaxy mass objects are the fundamental virialized building blocks. The expansion of MOND-dominated regions to include larger and larger comoving scales leads to a scenario of structure formation which is extremely hierarchical and ``bottom up'', with the smallest objects forming first-- star clusters and low mass galaxies-- and larger structures forming later by a series of mergers. Massive galaxies should be in place as virialized objects by a redshift of 5 to 10. The largest objects just now being virialized are the rich clusters, and supercluster-scale objects would only now be separating out of Hubble expansion. The scale-dependent deceleration induced by MOND implies that the Universe, at the present epoch, is inhomogeneous on large scales, with a mean density of galaxies about a given galaxy decreasing out to hundreds of Mpc. The scale for cross-over to homogeneity depends upon the value of the cosmological constant. \section{Dynamics of an Isolated Spherical Region} The Cosmological Principle, which postulates the isotropy and homogeneity of the Universe, permits separations between physical objects to be described by a universal dimensionless scale factor which is only a function of cosmic time. It is well-known (e.g.
see Peebles 1993) that the Friedmann equation for the evolution of the scale factor can be derived by considering the Newtonian equation of motion for an isolated uniform spherical region of radius $r$: $$\ddot{r} = -{{GM}\over {r^2}}. \eqno(1)$$ Here $M$ is the active gravitational mass given, in the weak-field static limit of the Einstein field equations, by $$M = {{4\pi r^3}\over 3}(\rho + 3p)\eqno(2)$$ where $\rho$ and $p$ are the density and pressure of the smooth fluid. Combining eqs. 1 and 2 we have $$\ddot{r} = - {{4\pi G r}\over 3}{(\rho + 3p)} \eqno(3)$$ which is supplemented by the energy conservation equation $${{d\rho}\over{(\rho+p)}} = -{{3dr}\over r}\eqno(4)$$ and an equation of state. Obviously one may write $r = Lx$ where $L$ is a fixed length scale and $x$ is a time-dependent scale factor (here normalized to be 1 at the present epoch). Then $L$ disappears in eqs.\ 3 and 4. Taking the fluid to be a mixture of non-relativistic pressure-less matter ($p=0$), radiation ($p_r = {1\over3} \rho_r$), and vacuum energy density ($p_v = -\rho_v = -{3\lambda {H_o}^2}/{8\pi G}$) and integrating eq.\ 3 we find the usual dimensionless Friedmann equation for the time-evolution of the scale factor, $$ h^2=\Bigl({\dot{x}\over x}\Bigr)^2 = {\Omega_o}x^{-3} + {\Omega_r}x^{-4} - (\Omega_o +\Omega_r+\lambda-1)x^{-2} + \lambda \eqno(5)$$ where $\lambda$ is the dimensionless cosmological constant, $$\Omega_o = {{8\pi G \rho_o}\over{3{H_o}^2}}\eqno(6)$$ is the density parameter for non-relativistic matter (with $\rho_o$ being the present mean density of matter) and $$\Omega_r = {{8\pi G a{T_o}^4}\over {3{H_o}^2c^2}}\eqno(7)$$ is the density parameter for the cosmic background radiation, where $T_o$ is the temperature of the cosmic blackbody radiation at the present epoch (2.73 K) and $a$ is the radiation density constant. The quantity $(\Omega_o+\Omega_r + \lambda -1)$ is the integration constant which is to be identified with the curvature of space-time. The quantity $h$ is the Hubble parameter in units of the present Hubble parameter $H_o$ and time is in units of the Hubble time $\rm{{H_o}^{-1}}$. MOND posits that for accelerations below a critical value $a_o$, the true gravitational force $g$ is related to the Newtonian gravitational force $g_n$ as $${\bf g}\mu(g/a_o) = {\bf g_n} \eqno(8)$$ (Milgrom 1983a), where $\mu(x)$ is an unspecified function such that $\mu(x)\rightarrow 1$ if $x\gg1$ and $\mu(x)=x$ if $x\ll1$. Thus in the high acceleration limit the gravitational force is the usual Newtonian force, but in the low acceleration limit $g = \sqrt{g_n a_o}$ (this may also be written as a modification of the law of inertia where $F = ma\mu(a/a_o)$ replaces the usual expression). Because we are interested here only in a broad view of the overall dynamics of a MOND universe, we will assume that the transition in $\mu(x)$ between the two asymptotic limits occurs abruptly at $x=1$. In the low acceleration limit the MOND equivalent of eq.\ 3 becomes $$\ddot{r} = -\Bigl[{{4\pi G a_o}\over 3}(\rho+3p)r\Bigr]^{1\over 2}\eqno(9)$$ where now, obviously, a constant length scale cannot be factored out.
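As a small numerical illustration of eq.\ 5 (ours, not part of the original text), the age of the model universe follows from $t_0 = {H_o}^{-1}\int_0^1 dx/(x\,h(x))$; with the parameter values adopted later in this paper it reproduces the age quoted below:
\begin{verbatim}
# Numerical age of the model universe from eq. 5 (ours); parameter values
# are those adopted later in the paper: H_0 = 75 km/s/Mpc, Omega_0 = 0.02,
# Omega_r = 4.48e-5, lambda = 0.
import numpy as np
from scipy.integrate import quad

H0 = 75 * 1.0e5 / 3.0857e24          # Hubble parameter in s^-1
Om0, Omr, lam = 0.02, 4.48e-5, 0.0

def h(x):
    """Dimensionless Hubble parameter h(x) of eq. 5."""
    return np.sqrt(Om0 * x**-3 + Omr * x**-4
                   - (Om0 + Omr + lam - 1.0) * x**-2 + lam)

# integrand ~ x/sqrt(Omr) near x = 0, so a tiny lower cutoff is harmless
integral, _ = quad(lambda x: 1.0 / (x * h(x)), 1.0e-12, 1.0)
t0 = integral / H0                   # age in seconds
print(f"t0 = {t0 / 3.156e7:.2e} yr") # ~1.26e10 yr, as quoted below
\end{verbatim}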
Neglecting for the moment radiation and vacuum energy density and taking the equation of state only for non-relativistic pressure-less matter, the conservation equation implies that $$\rho = \rho_o (r/r_o)^{-3}\eqno(10)$$ where $r_o$ is the comoving radius of the spherical region (i.e., the radius the spherical region would have at present if it continued to expand according to eq.\ 5) and $\rho_o$ is the present mean density in the equivalent Friedmann model universe. Then eq.\ 9 becomes $$\ddot{r} = -\Bigl[{{\Omega_o}\over 2}{H_o}^2{r_o}^3 a_o\Bigr]^{1\over2} r^{-1}. \eqno(11)$$ This equation may be integrated once to give the equivalent of the Friedmann equation $${\dot{r}}^2 = {u_i}^2 - \Bigl[{{2\Omega_o}}{H_o}^2{r_o}^3 a_o \Bigr]^{1\over 2} \ln(r/r_i)\eqno(12)$$ where $r_i$ is an initial radius of the sphere and $u_i$ is the expansion velocity at this initial radius. From the form of eq.\ 12 it is obvious that at some maximum radius $r_m$ the expansion will stop and the spherical region will re-collapse. This is given by $$r_m/r_i = e^{q^2} \eqno(13a)$$ where $$q^2 = {{{u_i}^2}\over{(2\Omega_o{H_o}^2{r_o}^3 a_o)^{1\over2}}}\,. \eqno(13b)$$ This is the expression derived by Felten (1984) written in a somewhat different form. At first sight, it may seem odd to use terms such as $\Omega_o$ which are valid for Friedmann cosmology but have no obvious relevance to MOND cosmology. But it is proven below that MOND on small scale is consistent with Friedmann on large scale. \section{A critical length scale for Modified Dynamics} I now consider the meaning of the initial radius $r_i$ in eqs.\ 12 and 13. Looking back at the Newtonian expression eq.\ 3 we see that, at any epoch characterized by some value of density and pressure, the acceleration of the radius of the shell increases linearly with $r$. This implies that there should be some critical radius $r_c$, beyond which the acceleration exceeds the MOND acceleration $a_o$ and the dynamics is Newtonian. That is to say, on all scales greater than $r_c$, the usual Newtonian expressions, eqs.\ 3 and 5, apply to the expansion of a spherical region and the evolution of the scale factor. This critical length scale is given by $$r_c = \sqrt{{GM}\over a_o} \eqno(14)$$ where again $M$ is the active gravitational mass. Making use of eqs.\ 4-7, eq.\ 14 becomes $$r_c = a_o\Bigl|{{{\Omega_o{H_o}^2}\over{2x^3}} + {{\Omega_r{H_o}^2}\over {x^4}} - {\lambda{H_o}^2}}\Bigr|^{-1} \eqno(15a)$$ or $$r_c = {{2a_o}\over{\Omega_o{H_o}^2}}x^3 \eqno(15b)$$ when the universe is matter dominated, $$r_c = {{a_o}\over{\Omega_r{H_o}^2}}x^4 \eqno(15c)$$ when the universe is radiation dominated, and $$r_c = {{a_o}\over{\lambda{H_o}^2}} \eqno(15d)$$ when the universe is vacuum energy dominated. Therefore, at any epoch characterized by a scale factor $x$, modified dynamics can only apply in regions smaller than $r_c$. This critical radius grows faster than both the scale factor and the horizon, $l_h$, in the radiation- and matter-dominated regimes (i.e., $r_c \propto t^2$ in both regimes, whereas $l_h \propto t$). In Fig.\ 1 $r_c$ is plotted against the scale factor. Here and elsewhere below the cosmological parameters are taken to be $H_o = 75$ km/(s Mpc), $\Omega_o = 0.02$, $\Omega_r = 4.48\times 10^{-5}$ and $a_o = 1.2\times 10^{-8}$ cm/s$^2$. This value of $\Omega_o$ is consistent with the baryonic content of the Universe implied by considerations of primordial nucleosynthesis (Walker et al. 1991) and with estimates of the stellar mass in galaxies and intra-cluster hot gas (Carlberg et al.
1998); in the context of MOND this would be the total matter content of the Universe (i.e., no significant contribution from non-baryonic dark matter). The density parameter in radiation, $\Omega_r$, is exactly that for a black body of 2.73 K and the assumed Hubble parameter. The value of the acceleration parameter, $a_o$, is that determined from fitting to the extended rotation curves of nearby galaxies (Begeman et al. 1990). For the purposes of this plot the cosmological constant has been set to zero. The age of a model universe with this assumed H$_o$ and $\Omega_o$ would be $1.26 \times 10^{10}$ years, which is consistent with the recent determinations of the ages of globular clusters in light of the new cluster distance scale (Chaboyer et al. 1997). Also shown in Fig.\ 1 is the horizon scale ($\approx ct$) as a function of scale factor determined by numerical integration of eq.\ 5. It is evident that at early epochs $r_c$ is much smaller than the horizon size. This suggests that any causally connected region of the Universe can be isotropic and homogeneous with the expansion governed by the usual Friedmann equation, eq.\ 5. That is to say, for spatial separations larger than $r_c$, it is valid to apply a Universal scale factor and, presumably, the RW metric. However, about a typical point in the Universe there exists a smaller volume with radius $r_c$ within which modified dynamics and eq.\ 12 apply; i.e., in which separations cannot be described in terms of a scale factor and which expands at a slower rate than the universe at large. Thus at any epoch inhomogeneities must be present on scales smaller than $r_c$. For a universal scale factor greater than 0.23, the critical MOND radius $r_c$ exceeds the horizon scale. This means that the entire causally connected Universe has become MONDIAN at a redshift of about 3.3 and can no longer be described by the RW metric. This interpretation does contain a logical conundrum. In an actual MOND universe with Friedmann expansion on a horizon scale but slower MOND expansion on sub-horizon scales, not every point in the fluid can possibly be a center of MOND-dominated expansion and collapse; it is not possible that a horizon-scale volume can expand while smaller spherical regions about every point within that volume re-collapse. This is a paradox of the present non-relativistic treatment which can only be resolved in a more complete theory-- a theory involving fluctuations in which local peaks probably play the role of seeds for MOND-dominated expansion and re-collapse with voids developing elsewhere. But, in any case, it is likely that $r_c$ is the approximate scale below which there exist MOND-induced inhomogeneities at any epoch in an evolving Universe. Accepting this interpretation, we note that larger and larger masses enter the MOND regime as the universe evolves. Given that $r_o$ is the comoving scale of mass $M_c$ and taking $x = r_c/r_o$, we have $$M_c = {{\Omega_o{H_o}^2}\over {2G}}\Bigl({{r_c}\over {x}}\Bigr)^3. \eqno(16)$$ This is also shown in Fig.\ 1 where it is evident that objects of galaxy mass ($10^{11}$ M$_\odot$) enter the MOND regime at $x \approx 7\times 10^{-3}$ corresponding to a redshift of 140. Obviously then the appropriate value for the initial radius, $r_i$ in eq.\ 13a, would be the radius at which eq.\ 12 first applies to the expansion of the spherical region; i.e., $$r_i = r_c.\eqno(17)$$ This would be about 14 kpc for the galaxy-size region.
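These numbers are straightforward to reproduce; the following Python fragment is a minimal numerical sketch (cgs units; standard assumed values for $G$ and M$_\odot$; the parameter values quoted above) of eqs.\ 15a and 16.
\begin{verbatim}
G, Msun, Mpc = 6.674e-8, 1.989e33, 3.086e24   # cgs (assumed constants)
H0, a0 = 75 * 1.0e5 / Mpc, 1.2e-8
Om, Or = 0.02, 4.48e-5

def r_c(x):
    """Critical MOND radius of eq. 15a with lambda = 0 (cm)."""
    return a0 / (Om * H0**2 / (2 * x**3) + Or * H0**2 / x**4)

def M_c(x):
    """Mass enclosed in the MOND region, eq. 16 (g)."""
    return Om * H0**2 / (2 * G) * (r_c(x) / x)**3

x = 7.1e-3                  # z ~ 140
print(M_c(x) / Msun)        # ~1e11 Msun: a galaxy mass enters the MOND regime
print(r_c(x) / 3.086e21)    # ~14 kpc: the corresponding initial radius r_i
\end{verbatim}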
Moreover, the initial expansion velocity would be given by the Hubble expansion on a scale of $r_i$: $${u_i} = Hr_i = r_iH_o\sqrt{\Omega_o(r_o/r_i)^3 + \Omega_r(r_o/r_i)^4 +(1-\Omega_o)(r_o/r_i)^2} \eqno(18)$$ which is about 320 km/s for the $10^{11}$ M$_\odot$ region. We then find, from eq.\ 13, that $q^2 = 1.25$ and $r_m/r_i = 3.5$; i.e., after entering the MOND regime at a redshift in excess of 100, a galaxy mass would only expand by a factor of about four before re-collapsing. The time-scale for this expansion is given by $$\Delta t = \sqrt{\pi}\,q\,{H^{-1}}\Bigl({{r_m}\over{r_i}}\Bigr)\,{\rm erf}(q)\eqno(19)$$ (Felten 1984); i.e., objects of galaxy size and smaller enter the MOND regime early and fall out of Hubble expansion on a time-scale comparable to the age of the Universe at that epoch. For the $10^{11}$ M$_\odot$ sphere this would be approximately $3\times 10^8$ years. Re-collapse and virialization might take three or four times longer, so we see that with modified dynamics in the context of a low density Friedmann universe, galaxies should be in place as virialized objects when the Universe is about $10^9$ years old corresponding to a redshift of 9 or 10. \section{The early radiation dominated Universe} In the early universe the size of spherical regions dominated by modified dynamics is small compared to the horizon. Taking eq.\ 15c and noting that the temperature of the black body radiation scales with the inverse of the scale factor, we have $$r_c = {{a_o}\over{\Omega_r{H_o}^2}}\Bigl({{T_o}\over T}\Bigr)^4 \eqno(20)$$ which, for the assumed cosmology, becomes $$r_c = 4.54\times 10^{31}\Bigl({{T_o}\over T}\Bigr)^4\,\,{\rm cm}. \eqno(21)$$ With $T_o = 2.73$ K and $T=10^9$ K, corresponding to the epoch of nucleosynthesis, we find $r_c = 2.5\times 10^{-3}$ cm. The scale of the horizon in the radiation-dominated era is given by $$l_h \approx 0.5\,(\Omega_r{H_o}^2)^{-{1\over 2}}\Bigl({{T_o}\over T}\Bigr)^2 c \eqno(22)$$ which is $10^{13}$ cm at the epoch of nucleosynthesis. That is to say, the expansion of the Universe as a whole is identical to that of the standard Big Bang with the scale of modified dynamics being 15 orders of magnitude smaller than the horizon size. But because nucleosynthesis occurs on a smaller scale still-- that of internucleon spacing ($\approx 10^{-7}$ cm)-- we must consider the fate of these small regions of non-standard dynamics. Taking eqs.\ 4 and 9 but now with the equation of state for radiation, we find $$\ddot {r} = -(\Omega_r{H_o}^2a_o{r_o}^4)^{1\over 2}r^{-{3\over 2}} \eqno(23)$$ which integrates to $${\dot{r}}^2 = {u_i}^2 - 4(\Omega_r{H_o}^2{a_o}{r_o}^4)^{1\over 2} \Bigl[{1\over{{r_i}^{1\over 2}}} - {1\over{r^{1\over 2}}}\Bigr]. \eqno(24a)$$ Given eq.\ 18 for $u_i$ and that $x = r_c/r_o$ we find with eq.\ 15c that $${u_i}^2 = (\Omega_r{H_o}^2{a_o}^2{r_o}^4)^{1\over 3}.\eqno(24b)$$ Then it is straightforward to show that, in the absence of other considerations, any such region will expand more slowly than the universe as a whole and will re-collapse after expanding by a factor $r_m/r_i = 16/9$. This might well seem devastating not just for primordial nucleosynthesis, but also for the overall homogeneity of the early universe. There is, however, another physical effect which must be considered. The slower expansion of these regions on the scale of $r_c$ will very rapidly lead to density and hence pressure gradients which will resist re-collapse.
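Before estimating those gradients, the numbers quoted for eqs.\ 21 and 22 are easy to verify; the following Python fragment is a minimal sketch (cgs units; parameter values as adopted above).
\begin{verbatim}
Mpc = 3.086e24
H0, a0, c = 75 * 1.0e5 / Mpc, 1.2e-8, 2.998e10
Or, T0, T = 4.48e-5, 2.73, 1.0e9            # T = 1e9 K: nucleosynthesis

r_c = a0 / (Or * H0**2) * (T0 / T)**4       # eq. 20; prefactor ~4.5e31 cm
l_h = 0.5 * (Or * H0**2)**-0.5 * (T0 / T)**2 * c   # eq. 22
print(r_c, l_h)   # ~2.5e-3 cm versus ~1e13 cm: ~15 orders of magnitude apart
\end{verbatim}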
Because the gravitational acceleration in these regions is so small, only small density gradients are required to keep these regions expanding with the Universe at large. The gravitational acceleration in these small MOND regions is (by definition) on the order of $a_o$. So the pressure gradient required to resist re-collapse can be estimated from $${1\over\rho}{{dp}\over {dr}} = a_o. \eqno(25)$$ Setting $p = {1\over 3} \rho c^2$ and $dr = r_c$ and making use of eq.\ 20 we estimate the corresponding density fluctuation over this scale to be $${{\delta \rho}\over {\rho}} = {{3{a_o}^2}\over{\Omega_r{H_o}^2c^2}} \Bigl({{T_o}\over T}\Bigr)^4 \eqno(26)$$ which implies ${\delta\rho}/{\rho} \approx 10^{-31}$ when $T=10^9$ K. In the early radiation-dominated universe, modified dynamics results in no significant deviation from homogeneity and the thermal history is identical to that of the standard Big Bang. This means that all of the results on nucleosynthesis in the standard model carry over to a MOND cosmology. At the epoch of recombination (T = 4000 K), where radiation still dominates in low $\Omega_o$ models, we find $\delta \rho/\rho \approx 4\times 10^{-10}$. That is to say, the MOND-induced inhomogeneities would be five orders of magnitude less than the density fluctuations implied by the observed fluctuations in the CMB. The argument on pressure gradients resisting MOND collapse can be re-framed in terms of the critical Jeans mass for gravitational instability. The Jeans mass is, effectively, identical to the virial mass which, in the context of MOND, is given by $$M_J = {9\over{Ga_o}}{c_s}^4\eqno(27)$$ where $c_s$ is the sound speed in the fluid being considered (Milgrom 1989). Before decoupling of matter and radiation, $c_s = c/\sqrt{3}$ implying that $M_J\approx c^4/(Ga_o)$ which, given that $a_o\approx cH_o$, is on the order of the total mass of the present observable Universe. Obviously this is vastly greater than the mass enclosed in a MOND region (at the epoch of nucleosynthesis this is approximately $10^{-12}$ g); MOND-dominated gravitational collapse is clearly an impossibility before hydrogen recombination. After recombination, the Jeans mass of the baryonic component becomes $$M_J = {9\over{Ga_o}}{\Bigl({{kT_m}\over m}\Bigr)}^2 \eqno(28)$$ where $k$ is the Boltzmann constant, $m$ is the mean atomic mass, and $T_m$ is the temperature of the matter. But collapse still cannot occur until the Jeans mass falls below the critical mass in a MOND-dominated region. From eqs.\ 16 and 20 this is found to be $$M_c = {{{a_o}^3\Omega_o{H_o}^2}\over{2G(\Omega_r{H_o}^2)^3}} {\Bigl({{T_o}\over T}\Bigr)}^9 \eqno(29)$$ in the radiation-dominated regime. We see from eqs.\ 28 and 29 that while the Jeans mass decreases with the square of the radiation temperature, the critical MOND mass rapidly increases as the temperature falls. This is illustrated in Fig.\ 2 after the epoch of recombination ($T_{rec}\approx 4000$ K) assuming that $T_m = T^2/T_{rec}$ (true for a non-relativistic monatomic fluid). The MOND critical mass becomes comparable to the Jeans mass (eqs.\ 28 and 29) when the radiation temperature has fallen to $$T = \Bigl[{1\over{18}}{{\Omega_o{H_o}^2} \over{(\Omega_r{H_o}^2)^3}} \Bigl({m\over{k}}\Bigr)^2{a_o}^4{T_o}^9{T_{rec}}^2 \Bigr]^{1\over{13}}; \eqno(30)$$ with the cosmology assumed above this is $2.5\times 10^3$ K, or somewhat later than the epoch of recombination (this value is obviously quite insensitive to the actual values of the cosmological parameters).
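Both of these estimates are easy to check numerically; the following Python fragment is a sketch (cgs units; the mean atomic mass $m$ is taken, as an assumption, to be the proton mass) that evaluates eq.\ 26 at two epochs and the crossover temperature of eq.\ 30.
\begin{verbatim}
Mpc = 3.086e24
c, k, m = 2.998e10, 1.381e-16, 1.673e-24   # m ~ proton mass (assumed)
H0, a0 = 75 * 1.0e5 / Mpc, 1.2e-8
Om, Or, T0, Trec = 0.02, 4.48e-5, 2.73, 4000.0

def contrast(T):
    """MOND-induced density contrast in the radiation era (eq. 26)."""
    return 3 * a0**2 / (Or * H0**2 * c**2) * (T0 / T)**4

print(contrast(1e9), contrast(4000))   # ~1e-31 and ~4e-10

# Temperature at which the Jeans mass meets the critical MOND mass (eq. 30):
T_x = ((1 / 18) * Om * H0**2 / (Or * H0**2)**3
       * (m / k)**2 * a0**4 * T0**9 * Trec**2)**(1 / 13)
print(T_x)                             # ~2.5e3 K
\end{verbatim}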
The corresponding value of the Jeans mass and critical MOND mass is about $10^5$ M$_\odot$. This means that shortly after recombination pressure gradients are no longer effective in preventing MOND-induced collapse. However, it is argued below that $M_J<M_c$ is a necessary but not a sufficient condition for MOND-dominated expansion and collapse. \section{The formation of structure} A low density universe ($\Omega_o\ll1$) will remain radiation-dominated until well after recombination; for the cosmology assumed here this occurs at $x = 4.48\times 10^{-3}$ ($z=222$). It is clear from Fig.\ 2 that regions having a mass less than $4\times 10^9$ M$_\odot$ enter the MOND regime while radiation still dominates the energy density of the Universe. Taken at face value then, it would seem that one should apply eq.\ 24 to the MOND-dominated expansion of regions above the Jeans mass which enter the MOND regime in the period between recombination and matter-radiation density equality. However, during this period the horizon is much larger than the scale over which MOND applies (at matter-radiation equality the horizon is 4 Mpc but the scale of modified dynamics is only 3 kpc). After recombination, the photons are decoupled from the matter, so it would be impossible for a MOND region to re-collapse while the passive gravitational mass in radiation still dominates the gravitational deceleration; the photons free-stream to the horizon. MOND-dominated expansion and re-collapse as described by eq.\ 12 would begin for all masses between 300 M$_\odot$ (the Jeans mass) and $4\times10^9$ M$_\odot$ (the critical MOND mass) at the epoch of matter-radiation equality (indicated by the heavy solid line in Fig.\ 2). The actual mass in a MOND-dominated region at the epoch of matter-radiation equality, $M_e$, is extremely sensitive to the cosmological parameters. This may be determined from eqs.\ 15 and 16 (setting 15b equal to 15c) and is found to be $$ M_e = {{32{a_o}^3{\Omega_r}^6}\over{G{\Omega_o}^8{H_o}^4}}. \eqno(31)$$ Combining with eq.\ 7 we find $$M_e = {3.7\times 10^9}{\Bigl[\Bigl({{\Omega_o}\over{0.02}}\Bigr) \Bigl({{H_o}\over{75}}\Bigr)^2\Bigr]}^8\,\,{\rm M_\odot}. \eqno(32)$$ Matching the observed light element abundances with the predictions of primordial nucleosynthesis (Walker et al. 1991) implies that $0.018<\Omega_o{(H_o/75)}^2<0.027$. Then with eq.\ 32 we find that $4.3\times 10^8\,M_\odot<M_e< 3.7\times 10^{10}\, M_\odot$. It is interesting that the mass scale over which MOND applies at the epoch of matter-radiation equality-- when MOND collapse can begin and significant inhomogeneities can form-- corresponds to that of low to moderate mass galaxies. Perhaps this offers some explanation for the fact that the lowest-mass virialized building-blocks of the Universe are galaxies (the existing globular clusters and dwarf galaxies re-collapsed simultaneously and may have survived due to incomplete merging). But in any case, for objects of any mass scale, the separation from the Hubble expansion and subsequent re-collapse can be described by eq.\ 12 (the Felten equation) with the initial radius being the critical MOND radius given by eq.\ 15 for objects of mass greater than $10^9$ M$_\odot$, and the scaled comoving radius ($r = xr_o$) at the epoch of matter-radiation equality for lower mass regions.
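For a given mass scale this machinery is easy to run; the following Python sketch (cgs units; the entry epoch $x\approx 7\times10^{-3}$ for a $10^{11}$ M$_\odot$ region is taken from the discussion of eq.\ 16, and the curvature term in the Hubble rate is taken as $1-\Omega_o-\Omega_r$, consistent with eq.\ 5) evaluates eqs.\ 13, 18, 19 and 31.
\begin{verbatim}
import math

G, Msun, Mpc = 6.674e-8, 1.989e33, 3.086e24
H0, a0 = 75 * 1.0e5 / Mpc, 1.2e-8
Om, Or = 0.02, 4.48e-5

M = 1e11 * Msun                              # a galaxy-mass region
r_o = (2 * G * M / (Om * H0**2))**(1 / 3)    # comoving radius (eq. 16)
x = 7.1e-3                                   # entry epoch, z ~ 140
r_i = x * r_o                                # ~14 kpc

H = H0 * math.sqrt(Om / x**3 + Or / x**4 + (1 - Om - Or) / x**2)
u_i = H * r_i                                            # eq. 18
q2 = u_i**2 / math.sqrt(2 * Om * H0**2 * r_o**3 * a0)    # eq. 13b
q = math.sqrt(q2)
dt = math.sqrt(math.pi) * q / H * math.exp(q2) * math.erf(q)  # eq. 19
print(u_i / 1e5, q2, math.exp(q2))   # ~320 km/s, ~1.3, r_m/r_i ~ 3.5
print(dt / 3.156e7)                  # ~3e8 yr to maximum expansion

M_e = 32 * a0**3 * Or**6 / (G * Om**8 * H0**4)   # eq. 31
print(M_e / Msun)                                # ~3.7e9 Msun
\end{verbatim}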
This dynamical history is shown for various mass scales in Fig.\ 3, which is a plot of the scale factor ($r/r_o$) of regions of different mass as a function of cosmic time to the point of maximum expansion. These curves are determined from numerical integrations of eq.\ 12. The cosmic scale factor corresponding to Friedmann expansion (eq.\ 5) is also shown (again the cosmological term has been set to zero). The vertical dotted line shows the epoch of matter-radiation equality for this particular cosmological model. It is evident that objects of globular cluster mass ($10^5$ M$_\odot$) re-collapse very soon after matter-radiation equality; maximum expansion is reached at a cosmic time of $2.3\times 10^7$ years corresponding to a redshift of 156. Massive galaxies ($10^{11}$ M$_\odot$) reach this point of maximum expansion at $t = 3\times 10^8$ years or $z=26$. Clusters of galaxies ($10^{14}$ M$_\odot$) begin to re-collapse at $2.65\times 10^9$ years ($z=3$). The mass which is just turning around at the present epoch is $3.7\times 10^{15}$ M$_\odot$. The comoving scale is 66 Mpc but the present radius would be 29 Mpc. This would correspond to the mass and scale of superclusters as noted by Felten (1984). Taken literally the implication would be that a region of 30 Mpc about a typical observer should be collapsing rather than expanding, which is evidently not the case locally. This result, which might be taken as an argument against a pure MOND cosmology, neglects likely complications arising in a real Universe filled with significant density enhancements and peculiar accelerations. In the fully-developed MOND universe at the present epoch, the large scale inhomogeneities and resulting tides will most likely cause large aspherical distortions of the developing structure. Thus, the effects of distant matter cannot be ignored; i.e., the fundamental assumption underlying this treatment of isolated regions breaks down. Given the enhanced tidal effects and the fact that, in MOND, the internal dynamics of a region is affected by the external acceleration field, the ``external field effect'' (Milgrom 1983a), the growth of pancakes, filaments and voids would seem natural. The present turnaround radius of 30 Mpc may only give an estimate of the scale of structure which has significantly separated out of the present Hubble flow, as suggested by Felten (1984). Since the entire present Universe is MONDIAN (in the absence of a dynamically significant cosmological constant), on all scales out to the horizon the expansion cannot be uniform; the mean value of the Hubble parameter grows with scale. This also implies that the mean density of matter within a spherical region should decrease out to the horizon. This may be determined by integrating eq.\ 12 for spheres with comoving radii larger than 66 Mpc (corresponding to the current shell which is just turning around at 29 Mpc) out to the horizon scale of 4000 Mpc. The result is shown in Fig.\ 4, which is a log-log plot of the ratio of the mean density inside a finite spherical volume to the mean density of the Universe as a function of the present radius of the sphere (not the comoving radius). It is evident that the average density smoothly increases from the horizon down to a scale of 30 Mpc, where it is 10 times larger than the mean density.
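The supercluster-scale numbers just quoted admit a similar sanity check; the following sketch (Python, cgs units) evaluates the mass of eq.\ 16 whose comoving radius is 66 Mpc, together with the crude overdensity obtained by simply comparing the comoving and present radii (the full integration of eq.\ 12 behind Fig.\ 4 is not reproduced here).
\begin{verbatim}
G, Msun, Mpc = 6.674e-8, 1.989e33, 3.086e24
H0, Om = 75 * 1.0e5 / Mpc, 0.02

r_o = 66 * Mpc
M = Om * H0**2 * r_o**3 / (2 * G)   # eq. 16 with x = 1
print(M / Msun)                     # ~3.7e15 Msun
print((66 / 29)**3)                 # ~12: roughly the factor-of-ten overdensity
\end{verbatim}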
The literal and naive interpretation of Fig.\ 4, and that which would seem most consistent with the treatment of isolated spherical regions, is that this would represent the density distribution about a single observer in a MOND Universe. But if we require, consistent with the Cosmological Principle, that the observer have no special position, then an equally valid interpretation is that the average density distribution about any observer in the MOND universe declines smoothly to the horizon-- that the matter distribution is non-analytic (fractal) and does not imply a special position for one observer (Coleman \& Pietronero 1991). It has been claimed, from analysis of redshift catalogues, that the mean density of galaxies about a given galaxy does decrease with scale (e.g.\ Coleman \& Pietronero 1991) out to cosmological distances. Although this claim is controversial (Peebles 1993), it is generally consistent with expectations for a pure MOND cosmology. The actual radius of cross-over to homogeneity would depend upon the value of a possible cosmological constant. With a cosmological term large enough to be dynamically significant ($\lambda\approx 1$), the critical radius for modified dynamics (eq.\ 15a), after first becoming infinitely large, asymptotically approaches a constant value: $r_c\rightarrow (1/6)(c/H_o)$, i.e., about 1/6 the current horizon if $H_o\approx$ 75 km/(s Mpc) (the observed value of $a_o$ would then correspond to $cH_o/6$). In this case we might expect large scale homogeneity and uniform exponential expansion of the Universe on sub-horizon scales larger than several hundred Mpc. \section{Conclusions} Although there is not yet a plausible candidate for a general theory of gravity which predicts the MOND phenomenology in the limit of low accelerations, consideration of the dynamics of a finite spherical volume may contain elements of a realistic MOND cosmology. Perhaps the most interesting conclusion that can be drawn from such an exercise is that modified dynamics over finite separations is compatible with Friedmann cosmology on large scales; until relatively recently in the history of the Universe MOND could not dominate the dynamical evolution of the Universe in general. At earlier epochs, the scale over which MOND applies, $r_c$ (within which the deceleration of Hubble expansion is less than $a_o$), is smaller than the size of a causally connected region, which implies that the Universe as a whole can be isotropic and adequately described by the RW metric with expansion governed by the usual Friedmann equation. However, the fact that the expansion is slower in MOND-dominated regions implies that inhomogeneities must be present at any epoch on a scale of $r_c$ and smaller. In the early radiation-dominated Universe, the magnitude of these MOND-induced inhomogeneities is very small ($\delta \rho/\rho \approx 10^{-31}$ when $T=10^9$ K), because of the small pressure gradients required to restore uniform expansion. Therefore, the thermal and dynamical history of the early MOND Universe is exactly that of the standard Big Bang and all predictions relevant to the nucleosynthesis of the light elements carry over to MOND cosmology. After non-relativistic matter dominates the mass density of the Universe (which can be rather late in a low density Universe), MOND cosmology diverges from that of standard cosmology. At the epoch of matter-radiation equality, objects with mass up to $4\times 10^9$ M$_\odot$ rapidly collapse to form virialized objects.
The fact that this mass scale, which is the mass in the MOND regime at radiation-matter equality, is comparable to that of low-mass galaxies seems significant: objects of this mass would be the principal virialized building blocks in the Universe. Moreover, this mass scale emerges naturally from the basic dynamics; astrophysical considerations such as cooling vs. collapse time scales do not play a role. Although objects of smaller mass (down to $10^2$ M$_\odot$) collapse and virialize first, a process probably accompanied by star formation, these objects would rapidly merge into the larger collapsing regions. This suggests that galaxy formation is primarily dissipationless; that the stellar content of galaxies may be in place before the galaxies actually form. Of course, early star formation could be limited by processes such as photo-dissociation of H$_2$ as in standard scenarios (Haiman et al. 1997); these self-limiting processes could keep much of the matter content of the universe in gaseous form as seems to be implied by the observations of rich clusters. Many of these low-mass galaxies would merge to form more massive objects as larger and larger scales come into the MOND regime. A spherical region with the mass of a large galaxy ($10^{11}$ M$_\odot$) reaches maximum expansion and begins to re-collapse at a redshift of 26, which implies that large galaxies should be in place as virialized objects by a redshift of 5 to 10. This is earlier than the epoch of galaxy formation in the standard CDM paradigm (Frenk et al. 1988). Moreover, from Fig.\ 3 it is evident that regions with the size and mass of a cluster have reached maximum expansion by a cosmic age of $2.7\times 10^9$ years corresponding to a redshift of three. This means that by $z=3$ not only do massive galaxies exist but they are also significantly clustered (the density of the $10^{14}$ M$_\odot$ region would be enhanced by a factor of 6.5 over the mean at this redshift). This may be relevant to the observation of luminous galaxies at $z=3$, which is remarkable not only because they are there but also because of the apparent degree of clustering (Steidel et al. 1997). Such observations may be able to distinguish between the cosmogony sketched here and that of the standard CDM paradigm. The largest objects being virialized now would be clusters of galaxies with masses in excess of $10^{14}$ M$_\odot$. Superclusters would only now be reaching maximum expansion. Such a scenario of structure formation is hierarchical in the extreme and as such bears a resemblance to the more standard (CDM) scenarios of the build-up of structure. But here, dissipationless dark matter is not required to enhance structure formation; structure forms inevitably on the MOND scale of $r_c$ because of the effective logarithmic potential. In the present Universe regions approaching the horizon scale would be subject to a scale-dependent deceleration due to modified dynamics. This would lead to a Universe in which the mean run of density about a galaxy decreases smoothly to a cosmic scale. The actual scale for approach to homogeneity would depend upon the value of the cosmological constant; for $\lambda \approx 1$ corresponding to a zero curvature Universe, the density and expansion of the Universe would be more or less uniform on scales greater than several hundred Mpc. In such a Universe, the Hubble parameter would also be, on average, scale dependent, increasing with the separation between objects out to some significant fraction of the Hubble radius.
However, these are the aspects of MOND cosmology which most depend upon the unknown properties of the underlying theory-- such as the value of the cosmological constant and whether or not MOND phenomenology saturates at some lower value of acceleration, with the attraction returning to inverse square. The details of cosmology and cosmogony sketched here are dependent upon the assumptions which underlie this procedure: no effects of the surrounding universe on the finite spherical volume, no return to Newtonian dynamics at lower accelerations, no variation of $a_o$ with cosmic time. A different set of assumptions is also plausible and would lead to a different MOND cosmology. The remarkable aspect of the cosmology resulting from these assumptions is the fact that the pre-recombination dynamical and thermal evolution is identical to that of the standard Big Bang. But in any case, it seems inevitable that in a MOND cosmology, structure formation proceeds much more rapidly and efficiently than in standard cosmologies due to the effective logarithmic potential. Friedmann expansion on the horizon scale combined with modified dynamics on smaller scales suggests that density peaks may be required to play the role of seeds or centers about which MOND-dominated expansion and re-collapse occurs. Apart from this, primordial density fluctuations have played no role in the present discussion of structure formation. In the correct relativistic theory this will probably not be the case; for example, in stratified aquadratic scalar-tensor theory (Sanders 1997), there are no effects of modified dynamics in the absence of scalar field gradients; in a perfectly homogeneous Universe, the metric is RW and structure never develops. This suggests that in a proper theory fluctuations may be essential for the development of structure, which may then proceed, qualitatively, as described above. One should be cautious about pushing these results too far. In standard theory, the Newtonian dynamics of an expanding region takes on cosmological significance only in retrospect, that is, after the application of General Relativity and the construction of a relativistic cosmology. Here, the order is reversed-- the rules of MOND are applied to a finite spherical region before the development of the appropriate relativistic theory. But it is possible that many of the aspects of a fully relativistic cosmology may be previewed, at least in a qualitative sense, by such an exercise. While this remains to be seen, it is of considerable interest that the resulting cosmology does seem to reconcile the extreme homogeneity of the early radiation-dominated Universe with early galaxy formation and the extreme range of structure observed in the matter-dominated era. Moreover, the fact that this can be accomplished without the necessity of invoking hypothetical non-baryonic dark matter is entirely consistent with the original motivation for modified dynamics as an alternative to dark matter on the scale of galaxies and galaxy clusters. \acknowledgments I am very grateful to J.D. Bekenstein and M. Milgrom for helpful comments on this work.
\section{Introduction}\label{sec:intro} The Gelfand-Shilov spaces were first introduced as a useful set of functions for the study of Cauchy problems in partial differential equations. These functions are convenient in this setting because of their smoothness and the regularity conditions imposed on them. For instance, some partial differential equations are ill-posed in the Schwartz space $\mathscr{S}$ or its dual $\mathscr{S}'$, the space of tempered distributions (see e.g. \cite[p.~160-163]{hormander} for notation), but are well-posed in suitable Gelfand-Shilov spaces. One such example is the Euler-Tricomi equation $D_t^2 f + t D_x^2 f = 0$, and another example may be found in \cite{lewy}. The fact that some partial differential equations are only well-posed in Gelfand-Shilov spaces exemplifies the need to determine properties of functions in those spaces. Gelfand-Shilov spaces can also be useful in the study of pseudo-differential operators \cite{cappiello}, which in turn have uses in, for instance, quantum theory \cite{quantum} and signal processing \cite{signal}. The Gelfand-Shilov spaces $S_s^\sigma$, $S_s$ and $S^\sigma$ of Roumieu type (cf. \cite{eijndhoven,gel,chung}) and $\Sigma_s^\sigma$, $\Sigma_s$ and $\Sigma^\sigma$ of Beurling type (cf. \cite{pilip}) can be considered as refinements of the Schwartz space $\mathscr{S}$, where we impose analyticity-like smoothness conditions. The strength of these conditions depends on the parameters $s$ and $\sigma$. The smaller $s$ is, the faster the functions must vanish at infinity, and a smaller $\sigma$ imposes stronger conditions on the growth of the derivatives (meaning the Fourier transform vanishes faster). In the one-parameter spaces, functions have sub-exponential decay and their Fourier transforms tend to zero faster than the reciprocal of any polynomial, or vice versa. In the two-parameter spaces, both the functions and their Fourier transforms have sub-exponential decay. If $s$ and $\sigma$ are sufficiently small, the only function found in $S_s^\sigma$ or $\Sigma_s^\sigma$ is $f(x) \equiv 0$, and the spaces are considered trivial. There are more general Gelfand-Shilov spaces, such as the $S_{M_p}^{N_p}$-spaces whose properties are explored in \cite{chung}, for instance. In this paper, we are mostly interested in discussing the properties of the one-parameter spaces $S_s$ and $S^\sigma$, their duals $(S_s)'$ and $(S^\sigma)'$, as well as the corresponding spaces $\Sigma_s$ and $\Sigma^\sigma$ and their duals. More specifically, we establish growth estimates on elements in these spaces and their Fourier transforms. Additionally, we find estimates involving the short-time Fourier transform, which provide an alternative characterization of Gelfand-Shilov spaces. Such estimates exist for the two-parameter spaces (cf. \cite{grochenig}) and here we extend characterizations of this type to one-parameter spaces. We find that the short-time Fourier transform admits sub-exponential decay in one parameter, and tends to zero faster than reciprocals of polynomials in the other. Corresponding estimates are found for the duals of one-parameter spaces as well.
We also examine Toeplitz operators on these one-parameter spaces, where the symbol $a(x,\xi)$ of the operator lies in different one-parameter spaces in each variable. We find conditions such that the Toeplitz operator is continuous on $S_s$, $S^\sigma$ and their respective duals. We also determine when the two-parameter spaces are nontrivial. These results are well-known for $S_s^\sigma$-spaces, but for the $\Sigma_s^\sigma$-spaces, we find that the space is nontrivial if and only if $s+\sigma > 1$, as opposed to the condition $s+\sigma \geq 1$, $(s,\sigma)\neq (\frac12,\frac12)$ often cited in other works (cf. \cite{toft2}). This result, initially suggested by Andreas Debrouwere, directly contradicts statements found in previous works. The paper is structured as follows. In Section \ref{sec:prelim}, we introduce notations, definitions and preliminary propositions regarding Gelfand-Shilov spaces necessary to obtain results in subsequent sections. These preliminary results can either be found in \cite{chung,eijndhoven,gel} or are simple enough to be left as an exercise for the reader. In Section \ref{sec:nontriv}, we determine for which $s$ and $\sigma$ the space $\Sigma_s^\sigma$ is nontrivial. In Section \ref{sec:stft}, we obtain growth estimates for the short-time Fourier transform of functions in $S_s$, $S^\sigma$ and $\Sigma_s$, $\Sigma^\sigma$. In Section \ref{sec:dual}, we show how these results can be used to characterize the duals of these one-parameter spaces via the short-time Fourier transform as well. Lastly, in Section \ref{sec:top}, we find conditions on the symbol of Toeplitz operators so that the operator is continuous on one-parameter spaces and their duals. \section{Preliminaries}\label{sec:prelim} We begin by defining the spaces we will devote the most attention to in this paper. These are the so-called Gelfand-Shilov spaces. There is a clear and intuitive correspondence between the spaces of Roumieu type and those of Beurling type, and the order in which the definitions are listed is meant to highlight this correspondence. \begin{definition} Suppose $s,\sigma>0$. \begin{enumerate}[label=(\roman*)] \item $S_s(\mathbb{R}^n)$ consists of all $f\in C^\infty(\mathbb{R}^n)$ for which there is an $h>0$ such that \begin{equation}\label{Sscond} \sup_{x\in \mathbb{R}^n} | x^\alpha D^\beta f(x) | \leq C_\beta h^{|\alpha|} \alpha!^s, \quad \forall \alpha,\beta\in\mathbb{N}^n, \end{equation} where $C_\beta$ is a constant depending only on $\beta$. \item $\Sigma_s(\mathbb{R}^n)$ consists of all $f\in C^\infty(\mathbb{R}^n)$ such that \eqref{Sscond} holds for every $h>0$, where $C_\beta = C_{h,\beta}$ depends on $h$ and $\beta$. \item $S^\sigma(\mathbb{R}^n)$ consists of all $f\in C^\infty(\mathbb{R}^n)$ for which \begin{equation}\label{Ssigmacond} \sup_{x\in \mathbb{R}^n} | x^\alpha D^\beta f(x) | \leq C_\alpha h^{|\beta|} \beta!^\sigma, \quad \forall \alpha,\beta\in\mathbb{N}^n, \end{equation} holds for some $h>0$, where $C_\alpha$ is a constant depending only on $\alpha$. \item $\Sigma^\sigma(\mathbb{R}^n)$ consists of all $f\in C^\infty(\mathbb{R}^n)$ such that \eqref{Ssigmacond} holds for every $h>0$, where $C_\alpha = C_{h,\alpha}$ depends on $h$ and $\alpha$.
\item $S_s^\sigma(\mathbb{R}^n)$ consists of all $f\in C^\infty(\mathbb{R}^n)$ for which there are constants $h>0$ and $C>0$ such that \begin{equation}\label{GScond} \sup_{x \in \mathbb{R}^n}|x^\alpha D^\beta f(x) | \leq C h^{|\alpha + \beta|} \alpha!^s \beta!^\sigma, \quad \forall \alpha,\beta\in\mathbb{N}^n. \end{equation} \item $\Sigma_s^\sigma(\mathbb{R}^n)$ consists of all $f\in C^\infty(\mathbb{R}^n)$ such that \eqref{GScond} holds for all $h>0$, where $C=C_h$ depends only on $h$. \end{enumerate} \end{definition} Trivially, we see that $S_s^\sigma \subseteq S_s\cap S^\sigma$, $\Sigma_s^\sigma \subseteq \Sigma_s \cap \Sigma^\sigma$, and $\Sigma_s^\sigma \subseteq S_s^\sigma$ for all such $s$ and $\sigma$. In fact, we have $S_s^\sigma = S_s\cap S^\sigma$ (cf. \cite{eijndhoven}) and $\Sigma_s^\sigma=\Sigma_s\cap\Sigma^\sigma$ (this is a well-known result that follows by analogous arguments, but an explicit proof can be found in \cite{albin} for instance). \begin{definition} Let $\mathscr{F}$ denote the Fourier transform given by \begin{equation*} (\mathscr{F}f)(\xi) = \hat{f}(\xi) = \frac{1}{(2\pi)^{n/2}} \int_{\mathbb{R}^n} f(x) e^{- i \langle x,\xi \rangle} \, dx, \end{equation*} and let $\mathscr{F}^{-1}$ be the corresponding inverse Fourier transform \begin{equation*} (\mathscr{F}^{-1}f)(x) = \frac{1}{(2\pi)^{n/2}} \int_{\mathbb{R}^n} f(\xi) e^{i \langle x,\xi \rangle} \, d\xi. \end{equation*} If $f$ is a generalized function, then $\mathscr{F}$ denotes the adjoint operator of the Fourier transform defined above. \end{definition} Here we list some basic properties of Gelfand-Shilov spaces in the form of two propositions. The first proposition establishes sub-exponential decay of derivatives in Gelfand-Shilov spaces, and the second establishes how Fourier transforms work in Gelfand-Shilov spaces. For both of the following two propositions, (a) can be found in \cite{eijndhoven,gel,chung}, and (b) follows by analogous arguments. \begin{prop} \label{prop:Ssalt} Suppose $s>0$ and $f\in C^\infty(\mathbb{R}^n)$. Then \begin{enumerate}[label=(\alph*)] \item $f\in S_s(\mathbb{R}^n)$ if and only if there are constants $C_\beta,r>0$ such that \begin{equation}\label{eq:Ssalt} |D^\beta f(x)| \leq C_\beta e^{- r |x|^{1/s}} \end{equation} for all multi-indices $\beta$; \item $f\in\Sigma_s(\mathbb{R}^n)$ if and only if for every $r>0$ \begin{equation}\label{eq:Ssigalt} |D^\beta f(x)| \leq C_{r,\beta} e^{- r |x|^{1/s}} \end{equation} holds for all multi-indices $\beta$, where $C_{r,\beta}>0$ depends only on $r$ and $\beta$. \end{enumerate} \end{prop} \begin{prop} \label{prop:FT} Suppose $s,\sigma>0$. \begin{enumerate}[label=(\alph*)] \item If $s+\sigma\geq 1$, then $f\in S_s^\sigma(\mathbb{R}^n)$ if and only if $\hat{f}\in S_\sigma^s(\mathbb{R}^n)$. Moreover, $f\in S_s(\mathbb{R}^n)$ if and only if $\hat{f}\in S^s(\mathbb{R}^n)$. \item If $s+\sigma>1$, then $f\in\Sigma_s^\sigma(\mathbb{R}^n)$ if and only if $\hat{f}\in\Sigma_\sigma^s(\mathbb{R}^n)$. Moreover, $f\in \Sigma_s(\mathbb{R}^n)$ if and only if $\hat{f}\in \Sigma^s(\mathbb{R}^n)$. \end{enumerate} \end{prop} We also include the following basic result. \begin{prop}\label{prop:SsubSig} If $s,\sigma >0$, $s<s_1$ and $\sigma<\sigma_1$, then $S_s^\sigma(\mathbb{R}^n) \subseteq \Sigma_{s_1}^{\sigma_1}(\mathbb{R}^n)$. \end{prop} We will now discuss the topology of Gelfand-Shilov spaces. To do this we need the following definition. 
\begin{definition}\label{def:indproj} Suppose $V_j$, $j=0,1,2,\dots $, are Banach spaces, $$V= \bigcap_{j\geq 0} V_j$$ and $$W = \bigcup_{j\geq 0} V_j.$$ \begin{enumerate}[label=(\alph*)] \item Let $i_j:V\rightarrow V_j $ be inclusion maps. We say that the \emph{projective limit} is the space $V$ with the smallest possible topology such that $i_j$ is continuous for all $j$. We write this as $$ V = \projlim_{j\geq 0} V_j. $$ \item Suppose further that $V_j\hookrightarrow V_{j+1}$, meaning that $V_j$ is continuously embedded in $V_{j+1}$, and let $\tilde{i}_j:V_j \rightarrow W$ be the inclusion maps. We say that the \emph{inductive limit} is the space $W$ with the greatest possible locally convex topology such that $\tilde{i}_j$ is continuous for all $j$. We write this as $$ W = \indlim_{j\geq 0} V_j. $$ \end{enumerate} \end{definition} With these definitions and propositions in mind, we can construct topologies on $S_s$, $S^\sigma$, $\Sigma_s$ and $\Sigma^\sigma$ and define their duals. For more information on topological vector spaces, see for instance \cite{Schaefer}. \begin{definition}\label{def:top} \begin{enumerate} \item Let $V_{s,r,N}(\mathbb{R}^n)$ consist of all $f\in C^\infty(\mathbb{R}^n)$ such that \begin{equation*} ||f||_{s,r,N}= \sup_{x\in\mathbb{R}^n, |\alpha|\leq N} \left|D^\alpha f(x)e^{r|x|^{1/s}}\right| < \infty. \end{equation*} \item Let $V_{r,M}^\sigma(\mathbb{R}^n)$ consist of all $f\in C^\infty(\mathbb{R}^n)$ such that \begin{equation*} ||f||_{r,M}^\sigma= \sup_{\xi\in\mathbb{R}^n, |\beta|\leq M} \left| D^\beta \hat{f}(\xi)e^{r|\xi|^{1/\sigma}}\right| < \infty. \end{equation*} \end{enumerate} \end{definition} We see that \begin{equation*} S_s(\mathbb{R}^n) = \indlim_{r>0}\left(\projlim_{N\geq 0} V_{s,r,N}(\mathbb{R}^n) \right) \end{equation*} and \begin{equation*} S^\sigma(\mathbb{R}^n) = \indlim_{r>0}\left(\projlim_{M\geq 0} V_{r,M}^\sigma(\mathbb{R}^n) \right), \end{equation*} which implies \begin{equation*} S_s(\mathbb{R}^n) = \bigcup_{r>0}\left(\bigcap_{N\geq 0} V_{s,r,N}(\mathbb{R}^n) \right),\quad S^\sigma(\mathbb{R}^n) = \bigcup_{r>0}\left(\bigcap_{M\geq 0} V_{r,M}^\sigma(\mathbb{R}^n) \right). \end{equation*} For the $\Sigma_s$- and $\Sigma^\sigma$-spaces, we obtain \begin{equation*} \Sigma_s(\mathbb{R}^n) = \projlim_{r>0}\left(\projlim_{N\geq 0} V_{s,r,N}(\mathbb{R}^n) \right) \end{equation*} and \begin{equation*} \Sigma^\sigma(\mathbb{R}^n) = \projlim_{r>0}\left(\projlim_{M\geq 0} V_{r,M}^\sigma(\mathbb{R}^n) \right) \end{equation*} which implies \begin{equation*} \Sigma_s(\mathbb{R}^n) = \bigcap_{r>0}\left(\bigcap_{N\geq 0} V_{s,r,N}(\mathbb{R}^n) \right),\quad \Sigma^\sigma(\mathbb{R}^n) = \bigcap_{r>0}\left(\bigcap_{M\geq 0} V_{r,M}^\sigma(\mathbb{R}^n) \right). \end{equation*} \begin{remark} While $\Sigma_s$ and $\Sigma^\sigma$ are Fréchet spaces for all $s,\sigma>0$, the same is not known to be true for $S_s$ and $S^\sigma$ in current literature. \end{remark} This leads us to define the dual spaces of $S_s$ and $S^\sigma$ in the following way. \begin{definition}\label{def:Sdual} We denote the action of a functional $f$ in such a dual space on a test function $\psi$ in the corresponding space by $f(\psi) = \langle f, \psi \rangle$. \begin{enumerate}[label=(\roman*)] \item We say that $u\in (S_s)'(\mathbb{R}^n)$ if for every $r>0$ there is an $N\geq 0$ such that \begin{equation*} |\langle u, f \rangle | \leq C_N \sum_{|\alpha|\leq N} || D^\alpha f e^{r|\cdot|^{1/s}} ||_\infty, \end{equation*} for any $f\in S_s(\mathbb{R}^n)$.
\item We say that $u\in (S^\sigma)'(\mathbb{R}^n)$ if for every $r>0$ there is an $N\geq 0$ such that \begin{equation*} |\langle u, f \rangle | \leq C_N \sum_{|\alpha|\leq N} || D^\alpha \hat{f} e^{r|\cdot|^{1/\sigma}} ||_\infty, \end{equation*} for any $f\in S^\sigma(\mathbb{R}^n)$. \end{enumerate} \end{definition} Similarly, we define the dual spaces of $\Sigma_s$ and $\Sigma^\sigma$ as follows. \begin{definition}\label{def:Sigdual} \begin{enumerate}[label=(\roman*)] \item We say that $u\in (\Sigma_s)'(\mathbb{R}^n)$ if there is an $r_0>0$ and an $N\geq 0$ such that \begin{equation*} |\langle u, f \rangle | \leq C \sum_{|\alpha|\leq N} || D^\alpha f e^{r_0|\cdot|^{1/s}} ||_\infty, \end{equation*} for any $f\in \Sigma_s(\mathbb{R}^n)$. \item We say that $u\in (\Sigma^\sigma)'(\mathbb{R}^n)$ if there is an $r_0>0$ and an $N\geq 0$ such that \begin{equation*} |\langle u, f \rangle | \leq C_N \sum_{|\alpha|\leq N} || D^\alpha \hat{f} e^{r_0|\cdot|^{1/\sigma}} ||_\infty, \end{equation*} for any $f\in \Sigma^\sigma(\mathbb{R}^n)$. \end{enumerate} \end{definition} \begin{remark} Since $S_0(\mathbb{R}^n) = C_c^\infty(\mathbb{R}^n)$, the space of compactly supported smooth functions (cf. \cite[p.~170]{gel}), we have $(S_0)'(\mathbb{R}^n) = \mathscr{D}'(\mathbb{R}^n)$. Since $S_s(\mathbb{R}^n)$ is continuously embedded and dense in $S_{s'}(\mathbb{R}^n)$ for $s\leq s'$, we thus have $$(S_s)'(\mathbb{R}^n) \subseteq \mathscr{D}'(\mathbb{R}^n)$$ for all positive $s$. By Proposition \ref{prop:FT}, we therefore have $$(S^\sigma)'(\mathbb{R}^n) \subseteq \mathscr{F}\mathscr{D}'(\mathbb{R}^n)$$ for all positive $\sigma$. \end{remark} In Definitions \ref{def:Sdual} and \ref{def:Sigdual}, we can replace the $L^\infty(\mathbb{R}^n)$-norm with the $L^2(\mathbb{R}^n)$-norm by the arguments of \cite[p.~134]{eijndhoven}. We can extend this further with Hölder's inequality to obtain the following equivalent definitions of the dual spaces. \begin{prop}\label{prop:Ssdual} Suppose $1\leq p \leq \infty$. \begin{enumerate}[label=(\roman*)] \item $u\in (S_s)'(\mathbb{R}^n)$ if and only if for every $r>0$ there is an $N\geq 0$ such that \begin{equation*} |\langle u, f \rangle | \leq C_N \sum_{|\alpha|\leq N} || D^\alpha f e^{r|\cdot|^{1/s}} ||_p, \end{equation*} for any $f\in S_s(\mathbb{R}^n)$. \item $u\in (S^\sigma)'(\mathbb{R}^n)$ if and only if for every $r>0$ there is an $N\geq 0$ such that \begin{equation*} |\langle u, f \rangle | \leq C_N \sum_{|\alpha|\leq N} || D^\alpha \hat{f} e^{r|\cdot|^{1/\sigma}} ||_p, \end{equation*} for any $f\in S^\sigma(\mathbb{R}^n)$. \end{enumerate} \end{prop} Replacing the $L^\infty$-norm with $L^p$-norms, $1\leq p < \infty$, is possible for the $\Sigma_s$ and $\Sigma^\sigma$ duals by similar arguments. For $f\in (S_s)'$ or $f\in (S^\sigma)'$ we will also consider $(f,\psi)$, by which we mean the continuous extension of the regular inner product of $L^2$ given by \begin{equation*} (f,\psi)_2 = \int_{\mathbb{R}^n} f(y) \overline{\psi(y)} \, d y \end{equation*} to $f\in(S_s)'$, $\psi\in S_s$ or $f\in(S^\sigma)'$, $\psi\in S^\sigma$. The fact that this inner product can be extended continuously to duals of Gelfand-Shilov spaces follows by the fact that $(S_s,L^2,(S_s)')$ forms a \emph{Gelfand triple} (cf. \cite{bannert}). The same is true of $S^\sigma$, $\Sigma_s$ and $\Sigma^\sigma$. We also recall the following definition of the short-time Fourier transform, which plays a pivotal role in several of the characterizations in this paper.
\begin{definition} The \emph{short-time Fourier transform} of $f\in(S_s)'(\mathbb{R}^n)$ with window function $\phi\in S_s(\mathbb{R}^n)$ is given by \begin{equation*} V_\phi f(x,\xi) = \mathscr{F}\Big[ f \overline{\phi(\cdot - x)} \Big] (\xi) = (2\pi)^{-n/2}(f, \phi(\cdot-x)e^{i\langle \cdot,\xi\rangle}). \end{equation*} For $f$ belonging to $(S^\sigma)'(\mathbb{R}^n)$, $(\Sigma_s)'(\mathbb{R}^n)$ or $(\Sigma^\sigma)'(\mathbb{R}^n)$, we define the short-time Fourier transform by replacing each occurrence of $S_s$ above with $S^\sigma$, $\Sigma_s$ and $\Sigma^\sigma$, respectively. \end{definition} \section{Non-triviality of $\Sigma_s^\sigma$-spaces}\label{sec:nontriv} In this section, we determine when $\Sigma_s^\sigma$-spaces are nontrivial. Similar results have already been established for $S_s^\sigma$-spaces (cf. \cite{gel}). By nontrivial we mean that the space contains a function which is not constantly equal to zero. To establish non-triviality conditions, we will need two propositions. The following proposition follows by similar arguments to those in \cite[p.~172-175]{gel}. \begin{prop}\label{prop:SigCest} If $s,\sigma >0$, $\sigma < 1$ and $f\in \Sigma_s^\sigma(\mathbb{R}^n)$, then $f$ can be continued analytically to an entire function on $\mathbb{C}^n$. Moreover, for every $a,b>0$, \begin{equation*} |f(x+i y)| \leq C \exp{\left(-a|x|^{1/s} + b |y|^{1/(1-\sigma)}\right)} \end{equation*} for some constant $C=C_{a,b}$. \end{prop} We also find the following result in \cite[p.~228-233]{gel}. \begin{prop}\label{prop:Striv} For positive $s$ and $\sigma$, the space $S_s^\sigma(\mathbb{R}^n)$ is nontrivial if and only if $s+\sigma \geq 1$. \end{prop} With these propositions in mind, we prove the main result of this section. In previous works the condition ``$s+\sigma \geq 1$, $(s,\sigma) \neq (1/2,1/2)$'' is employed instead of ``$s+\sigma > 1$''. Here, we prove that the correct condition is $s+\sigma > 1$. \begin{thm} \label{thm:Sigtriv} Suppose $s,\sigma >0$. Then the space $\Sigma_s^\sigma(\mathbb{R}^n)$ is nontrivial if and only if $s+\sigma > 1$. \end{thm} \begin{proof} Since $\Sigma_s^\sigma \subseteq S_s^\sigma$, it follows by Proposition \ref{prop:Striv} that $\Sigma_s^\sigma$ is trivial whenever $s+\sigma < 1$. Furthermore, Proposition \ref{prop:Striv} combined with Proposition \ref{prop:SsubSig} implies that $\Sigma_s^\sigma$ is nontrivial when $s+\sigma > 1$. Thus we need only consider the case $s+\sigma = 1$.
Since $s$ and $\sigma$ are both assumed to be positive, we must have $\sigma < 1$. By Proposition \ref{prop:SigCest}, it is then true that \begin{equation*} |f(z)| \leq C_{a,b} \exp{\left(-a|x|^{1/s} + b |y|^{1/s}\right)} \end{equation*} for every $a,b>0$, where $z=x+i y$. Moreover \begin{equation*} |f(i z)| \leq C_{a,b} \exp{\left(-a|y|^{1/s} + b |x|^{1/s}\right)} \end{equation*} and therefore \begin{equation*} | f(z)\cdot f(i z) | \leq C_{a,b}^2 \exp{\left( (b-a)( |x|^{1/s} + |y|^{1/s}) \right)}. \end{equation*} Since this inequality holds for all $a,b>0$, then by picking $a>b$ we see that $g(z) = f(z)\cdot f(i z)$ is bounded and tends to zero as $|x|,|y|\rightarrow \infty$. By Proposition \ref{prop:SigCest}, $f$ is an entire function, thus so is $g$. Hence Liouville's theorem implies that $g \equiv 0$. But this implies that $f\equiv 0$ as well: if $f$ were not identically zero, the zero sets of $z\mapsto f(z)$ and $z\mapsto f(iz)$ would both be closed with empty interior, and by Baire's theorem their union could not cover all of $\mathbb{C}^n$, contradicting $g\equiv 0$. This completes the proof. \end{proof} \section{Characterizations by short-time Fourier transform}\label{sec:stft} We now move on to the characterization of $S_s$- and $S^\sigma$-spaces in terms of their short-time Fourier transforms. This is detailed in the following theorem, which is the main result of this section. \begin{thm}\label{thm:SsSTFT} Suppose $s,\sigma> 0$. \begin{enumerate}[label=(\roman*)] \item Let $\phi\in S_s(\mathbb{R}^n)\setminus\{0\}$. Then $f\in S_s(\mathbb{R}^n)$ if and only if there is an $r>0$ such that \begin{equation}\label{eq:STFT3} |V_\phi f(x,\xi) | \leq C_N (1+|\xi|^2)^{-N} e^{- r |x|^{1/s}} \end{equation} for every $N\geq 0$. \item Let $\phi\in S^\sigma(\mathbb{R}^n)\setminus\{0\}$. Then $f\in S^\sigma(\mathbb{R}^n)$ if and only if there is an $r>0$ such that \begin{equation}\label{eq:STFT4} |V_{\phi} f(x,\xi) | \leq C_N (1+|x|^2)^{-N} e^{- r |\xi|^{1/\sigma}} \end{equation} for every $N\geq 0$. \end{enumerate} \end{thm} For the proof, we will need the following three lemmas. These lemmas follow by basic computations and are left for the reader to prove. \begin{lemma}\label{lem:twistconv} For $f\in S_s(\mathbb{R}^n)$ ($f\in\Sigma_s(\mathbb{R}^n)$) and $\phi_1,\phi_2,\phi_3 \in S_s(\mathbb{R}^n)$ ($\phi_1,\phi_2,\phi_3 \in \Sigma_s(\mathbb{R}^n)$), \begin{equation*} (\phi_3,\phi_1) V_{\phi_2} f(x,\xi) = \frac{1}{(2\pi)^{n/2}} \iint V_{\phi_1}f(x-y,\xi-\eta) V_{\phi_2} \phi_3 (y,\eta) e^{-i\langle x-y,\eta \rangle} \, d y \, d \eta. \end{equation*} \end{lemma} \begin{lemma}\label{lem:absolutineq} If $s>0$ then there is a $C \geq 1$ such that \begin{equation}\label{eq:absolutineq} C^{-1} (|x|^{1/s} + |y|^{1/s} ) \leq |y|^{1/s} + |y-x|^{1/s} \leq C (|x|^{1/s} + |y|^{1/s} ) \end{equation} for every $x,y\in\mathbb{R}^n$. \end{lemma} \begin{lemma}\label{lem:absolutineq3} For any $\xi,\eta\in \mathbb{R}^n$ and any $N\geq 0$, there is a constant $C>0$ such that \begin{equation*} (1+|\xi-\eta|^2)^{-N} \leq C (1+|\xi|^2)^{-N} (1+|\eta|^2)^N. \end{equation*} \end{lemma} \begin{proof}[Proof of Theorem \ref{thm:SsSTFT}] Suppose that $f\in S_s$. For every $N\geq 0$, we have \begin{align*} |(1+|\xi|^2)^N V_\phi f(x,\xi) | &= \frac{1}{(2\pi)^{n/2}} \left| \int_{\mathbb{R}^n} f(y) \overline{\phi(y-x)} (1+|\xi|^2)^N e^{-i\langle y,\xi \rangle} \, d y \right| \\ &=\frac{1}{(2\pi)^{n/2}} \left| \int_{\mathbb{R}^n} f(y) \overline{\phi(y-x)} (1-\Delta)^N e^{-i\langle y,\xi \rangle} \, d y \right| \\ &= \frac{1}{(2\pi)^{n/2}} \left| \sum_{\gamma_0+|\gamma|=N} \dfrac{N!}{\gamma_0!
\gamma!} \int_{\mathbb{R}^n} f(y) \overline{\phi(y-x)} D^{2\gamma} e^{-i\langle y, \xi \rangle} \, d y \right| \end{align*} where $\gamma'=(\gamma_0,\gamma)\in \mathbb{N}^{1+n}$ and the derivatives are taken with respect to $y$. Integration by parts together with the Leibniz formula yields \begin{align*} |(1+|\xi|^2)^N V_\phi f(x,\xi) | &= \frac{1}{(2\pi)^{n/2}} \left|\sum_{\gamma_0+|\gamma|=N} \dfrac{N!}{\gamma_0! \gamma!} \int_{\mathbb{R}^n} D^{2\gamma} \left[ f(y) \overline{\phi(y-x)} \right] e^{-i \langle y, \xi \rangle}\, d y \right| \\ &=\frac{1}{(2\pi)^{n/2}} \left|\sum_{\gamma',\alpha} c^N_{\gamma',\alpha} \int D^{\alpha} f(y) D^{2\gamma-\alpha} \overline{\phi(y-x)} e^{-i\langle y, \xi\rangle} d y \right| \\ &\leq \frac{1}{(2\pi)^{n/2}} \sum_{\gamma',\alpha} c^N_{\gamma',\alpha} \int\left| D^{\alpha} f(y) D^{2\gamma-\alpha} \overline{\phi(y-x)} \right| \, d y, \end{align*} where $\sum_{\gamma',\alpha}=\sum_{|\gamma'|=N} \sum_{\alpha\leq \gamma}$ and where $c^N_{\gamma',\alpha} = \dfrac{N!}{\gamma_0! \gamma!}\binom{2\gamma}{\alpha}$. By Proposition \ref{prop:Ssalt} there are $C_{\gamma,\alpha}, a>0$ such that \begin{align*} \int\left| D^{\alpha} f(y) D^{2\gamma-\alpha} \overline{\phi(y-x)} \right| \, d y &\leq C_{\gamma,\alpha} \int e^{- a(|y|^{1/s}+|y-x|^{1/s})} \, d y, \end{align*} and by Lemma \ref{lem:absolutineq} there is a $c>0$ such that \begin{equation*} \int\left| D^{\alpha} f(y) D^{2\gamma-\alpha} \overline{\phi(y-x)} \right| \, d y \leq C'_{\gamma,\alpha} e^{-ac|x|^{1/s}} \end{equation*} since $\int e^{-ac|y|^{1/s}} d y < \infty$. Hence, with $r=a c>0$ and $C_{N,\gamma',\alpha} = c^N_{\gamma',\alpha}C'_{\gamma,\alpha}>0$ we obtain \begin{align*} |(1+|\xi|^2)^N V_\phi f(x,\xi) | &\leq \frac{1}{(2\pi)^{n/2}} \sum_{\gamma',\alpha} C_{N,\gamma',\alpha} e^{-r|x|^{1/s}} \\ &\leq C_N e^{-r|x|^{1/s}}, \end{align*} where $C_N = \frac{1}{(2\pi)^{n/2}}\sum_{\gamma',\alpha} C_{N,\gamma',\alpha}$. Thus \eqref{eq:STFT3} holds for every $N\geq 0$. Now suppose that \eqref{eq:STFT3} holds for every $N\geq 0$. This condition implies that $f\in \mathscr{S}$ (cf. \cite{grochenig}). In particular $f\in C^\infty$, hence by Proposition \ref{prop:Ssalt}, the result follows if there is an $r>0$ such that \eqref{eq:Ssalt} holds for every multi-index $\beta$. Consider $V_\phi[D^\beta f](x,\xi)$. Integrating by parts and applying the Leibniz formula gives \begin{align*} V_\phi[D^\beta f](x,\xi) &= \frac{(-1)^{|\beta|}}{(2\pi)^{n/2}}\int f(y) D^\beta \left( \overline{\phi(y-x)} e^{-i\langle y,\xi \rangle} \right) \, d y \\ &= \sum_{\alpha\leq\beta} \frac{C_{\alpha,\beta}}{(2\pi)^{n/2}} \int f(y) D^{\alpha}\overline{\phi(y-x)} \xi^{\beta-\alpha} e^{-i\langle y,\xi \rangle}\, d y \\ &= \sum_{\alpha\leq\beta} C_{\alpha,\beta} \xi^{\beta-\alpha} V_{D^{\alpha}\phi} f(x,\xi), \end{align*} where $C_{\alpha,\beta} = (-1)^{|\beta|} \binom{\beta}{\alpha}$. By Lemma \ref{lem:twistconv} we therefore have \begin{equation}\label{eq:twist1} V_\phi[D^\beta f](x,\xi) =\sum_{\alpha\leq\beta} C'_{\alpha,\beta} \xi^{\beta-\alpha} \iint V_\phi f(x-y,\xi-\eta) V_{D^\alpha\phi}\phi(y,\eta) e^{-i\langle x-y,\eta \rangle} d y \, d \eta, \end{equation} where $C'_{\alpha,\beta}=(2\pi)^{-n/2} (\phi,\phi)^{-1}C_{\alpha,\beta}$. For now, we consider only the double integral \begin{equation*} I_\alpha = \iint V_\phi f(x-y,\xi-\eta) V_{D^\alpha\phi}\phi(y,\eta) e^{-i\langle x-y,\eta \rangle} d y \, d \eta \end{equation*} from the right-hand side of the previous equation.
Note that \begin{equation*} V_{D^\alpha \phi}\phi(x,\xi) = e^{-i\langle x,\xi\rangle} V_{\overline{\phi}} [D^\alpha\overline{\phi}](-x,\xi), \end{equation*} and since $D^\alpha\overline{\phi},\overline{\phi}\in S_s\setminus\{0\}$ the first part of this theorem now implies that there is an $r_1 > 0$ such that \begin{equation*} |V_{D^\alpha \phi}\phi(x,\xi)| \leq C_{\alpha,N_1} (1+|\xi|^2)^{-N_1} e^{-r_1 |x|^{1/s}} \end{equation*} for every $N_1\geq 0$ and every $\alpha$. (Note that all the derivatives of $\overline{\phi}$ fulfill Proposition \ref{prop:Ssalt} with the same exponent, hence we can use the same $r_1>0$ for every $\alpha$.) By assumption, \eqref{eq:STFT3} holds for all $N\geq 0$. For any given $N$, pick $N_1 > N + n/2$. We now obtain \begin{align*} I_\alpha &\leq A_{\alpha,N} \iint (1+|\xi-\eta|^2)^{-N} e^{- r |x-y|^{1/s}}(1+|\eta|^2)^{-N_1} e^{-r_1 |y|^{1/s}} d y d \eta \\ &= A_{\alpha,N} I_1 I_2 \end{align*} where $A_{\alpha,N} = C_N C_{\alpha,N_1}$, $$I_1 = \int e^{- r |x-y|^{1/s}} e^{-r_1 |y|^{1/s}} d y$$ and $$I_2 = \int (1+|\xi-\eta|^2)^{-N}(1+|\eta|^2)^{-N_1} d \eta.$$ In order to estimate $I_1$, we let $r_2 = \min\{r,r_1\}$ and apply Lemma \ref{lem:absolutineq} to obtain $c>0$ such that \begin{align*} I_1 &\leq \int e^{-r_2( |y-x|^{1/s} + |y|^{1/s})} d y \\ &\leq e^{- r_2 c |x|^{1/s}} \int e^{- r_2 c |y|^{1/s}} d y \\ &= B e^{- r_2 c |x|^{1/s}}, \end{align*} where $B = \int e^{- r_2 c |y|^{1/s}} d y < \infty$. Since $N_1 > N + n/2$, Lemma \ref{lem:absolutineq3} gives \begin{align*} I_2 &= \int (1+|\xi-\eta|^2)^{-N}(1+|\eta|^2)^{-N_1} d \eta \\ &\leq C (1+|\xi|^2)^{-N}\int (1+|\eta|^2)^{N - N_1} d \eta \\ &= B_N (1 + |\xi|^2)^{- N}, \end{align*} where $B_N = C \int (1+|\eta|^2)^{N - N_1} d \eta < \infty$. Combining these estimates, we get \begin{equation*} I_\alpha \leq B_{\alpha,N} (1+|\xi|^2)^{-N} e^{- r_2 c |x|^{1/s}} \end{equation*} for every $N\geq 0$, where $B_{\alpha,N} = A_{\alpha,N} B B_N $. Combining this with \eqref{eq:twist1}, we obtain \begin{equation} \label{eq:twist2} | V_\phi [D^\beta f](x,\xi) | \leq \sum_{\alpha\leq\beta} B_{\alpha,\beta,N} |\xi^{\beta - \alpha}| (1+|\xi|^2)^{-N} e^{- r_2 c |x|^{1/s}} \end{equation} for every $N\geq 0$, where $B_{\alpha,\beta,N} = |C'_{\alpha,\beta}| B_{\alpha,N}$. We now integrate both sides of \eqref{eq:twist2} with respect to $\xi$. Note that \begin{equation*} |V_\phi f(x,\xi)| = (2\pi)^{-n/2}\left|\left(\hat{f}_{-x}*\psi\right) (\xi)\right|, \end{equation*} where $\psi = \mathscr{F}\left[\,\overline{\phi}\,\right]$, and since $\mathscr{F}[f_a](\eta) = e^{-i\langle a, \eta \rangle}\hat{f}(\eta)$, \begin{align*} (2\pi)^{-n/2} \int_{\mathbb{R}^n}\left|\left(\hat{f}_{-x}*\psi\right) (\xi) \right|\, d \xi &\geq (2\pi)^{-n/2} \left| \int_{\mathbb{R}^n}\left(\hat{f}_{-x}*\psi\right) (\xi) \, d \xi \right| \\ &= (2\pi)^{-n/2} \left|\iint \hat{f}_{-x}(\eta)\psi(\xi - \eta) \, d \eta \, d \xi \right| \\ &= (2\pi)^{-n/2} \left| \int e^{i\langle x, \eta \rangle}\hat{f}(\eta) \, d \eta \int \psi(\xi - \eta) \, d \xi \right| \\ &= |f(x)|\left|\int \psi(\xi - \eta) \, d \xi\right|. \end{align*} Since $\int \psi(\xi - \eta) \, d \xi = (2\pi)^{n/2}\,\overline{\phi(0)}$ is finite, and may be assumed nonzero after translating $\phi$ if necessary, we therefore obtain \begin{equation}\label{eq:twist3} |D^\beta f(x)| \leq C_\phi \int | V_\phi [D^\beta f](x,\xi) | d \xi, \end{equation} for some constant $C_\phi > 0$.
Moreover, if we fix $N > (|\beta|+n)/2$, then \begin{equation*} \int \left|\xi^{\beta - \alpha}\right| (1+|\xi|^2)^{-N} d \xi = D_{\alpha,\beta} < \infty \end{equation*} for each $\alpha \leq \beta$ and thus, with $r' = r_2 c$, \begin{equation} \label{eq:twist4} \int | V_\phi [D^\beta f](x,\xi) | d \xi \leq \sum_{\alpha\leq\beta} B_{\alpha,\beta,N} D_{\alpha,\beta} e^{- r' |x|^{1/s}}. \end{equation} Finally let $C_\beta = C_\phi\sum_{\alpha\leq\beta} B_{\alpha,\beta,N} D_{\alpha,\beta}$. Then combining \eqref{eq:twist3} with \eqref{eq:twist4} now yields \begin{equation*} |D^\beta f(x) | \leq C_\beta e^{- r' |x|^{1/s}} \end{equation*} for every multi-index $\beta$. This completes the proof of (i). To prove (ii), we first note that by Proposition \ref{prop:FT}, $f\in S^\sigma$ and $\phi\in S^\sigma\setminus\{0\}$ if and only if $\hat{f}\in S_\sigma$ and $\hat{\phi}\in S_\sigma\setminus\{0\}$. By (i), we therefore have $f\in S^\sigma$ if and only if \begin{equation*} |V_{\hat{\phi}} \hat{f}(x,\xi)| \leq C_N (1+|\xi|^2)^{-N} e^{-r |x|^{1/\sigma}} \end{equation*} for every $N\geq 0$. Since $V_{\hat{\phi}} \hat{f}(x,\xi) = e^{-i\langle x,\xi \rangle} V_{\phi}f (-\xi,x)$, this condition can be rewritten as \begin{equation*} |V_{\phi} f(-\xi,x)| \leq C_N (1+|\xi|^2)^{-N} e^{-r |x|^{1/\sigma}}. \end{equation*} Performing a variable substitution now yields \eqref{eq:STFT4}. This completes the proof. \end{proof} Utilizing Proposition \ref{prop:Ssalt}(b) instead of Proposition \ref{prop:Ssalt}(a), we obtain the following characterizations of $\Sigma_s$ and $\Sigma^\sigma$ in terms of the short-time Fourier transform by analogous arguments. \begin{thm} Suppose $s,\sigma> 0$. \begin{enumerate}[label=(\roman*)] \item Let $\phi\in \Sigma_s(\mathbb{R}^n)\setminus\{0\}$. Then $f\in \Sigma_s(\mathbb{R}^n)$ if and only if for every $r>0$ and every $N\geq 0$, \begin{equation*} |V_\phi f(x,\xi) | \leq C_{r,N} (1+|\xi|^2)^{-N} e^{- r |x|^{1/s}}. \end{equation*} \item Let $\phi\in \Sigma^\sigma(\mathbb{R}^n)\setminus\{0\}$. Then $f\in \Sigma^\sigma(\mathbb{R}^n)$ if and only if for every $r>0$ and every $N\geq 0$, \begin{equation*} |V_\phi f(x,\xi) | \leq C_{r,N} (1+|x|^2)^{-N} e^{- r |\xi|^{1/\sigma}}. \end{equation*} \end{enumerate} \end{thm} Using these short-time Fourier transform characterizations, we can obtain characterizations for the one-parameter spaces similar to those of \cite{chung} for the two-parameter spaces. \begin{thm}\label{thm:SsFT} Suppose $s,\sigma>0$ and $f\in C^\infty(\mathbb{R}^n)$. \begin{enumerate}[label=(\alph*)] \item $f\in S_s(\mathbb{R}^n)$ if and only if there is an $r>0$ such that \begin{equation}\label{eq:Ssaltchar} |f(x)|\leq C e^{-r|x|^{1/s}}, \quad |\hat{f}(\xi)|\leq C_N (1+|\xi|^2)^{-N} \end{equation} for every $N\geq 0$. \item $f\in S^\sigma(\mathbb{R}^n)$ if and only if there is an $r>0$ such that \begin{equation}\label{eq:Ssigaltchar} |f(x)|\leq C_N (1+|x|^2)^{-N} , \quad |\hat{f}(\xi)|\leq C e^{-r|\xi|^{1/\sigma}} \end{equation} for every $N\geq 0$. \end{enumerate} \end{thm} \begin{proof} Fix $\phi\in S_s(\mathbb{R}^n)\setminus\{0\}$. Suppose first that $f\in S_s$. Then, by Theorem \ref{thm:SsSTFT}, there is an $r>0$ such that \eqref{eq:STFT3} holds for every $N\geq 0$. Integrating both sides of this estimate with respect to $\xi$ (for a fixed $N>n/2$) yields \begin{equation*} \int |V_\phi f(x,\xi)| \, d \xi \leq C' e^{-r|x|^{1/s}}.
\end{equation*} On the other hand, since $V_\phi f = (\hat{f},\hat{\phi}(\cdot-\xi)e^{-i\langle \cdot, x \rangle})$ up to a phase factor, \begin{align*} \int |V_\phi f(x,\xi)| \, d \xi &\geq (2\pi)^{-n/2}\left| \int e^{i\langle x,\eta\rangle} \hat{f}(\eta) \int \overline{\hat{\phi}(\xi-\eta)}\, d \xi \, d \eta \right| \\ &= C_\phi |f(x)|, \end{align*} hence we obtain $$ |f(x)| \leq C e^{-r|x|^{1/s}}.$$ Starting instead by integrating both sides of \eqref{eq:STFT3} with respect to $x$ yields the inequalities \begin{align*} C_\phi'' |\hat{f}(\xi)| &= (2\pi)^{-n/2} \left| \int f(y) e^{-i\langle y,\xi \rangle} \int \overline{\phi(y-x)} \, d x \, d y \right| \\ &\leq \int |V_\phi f(x,\xi) |\, d x \\ &\leq C'_N (1+|\xi|^2)^{-N}, \end{align*} hence we obtain $$ |\hat{f}(\xi)|\leq C_N (1+|\xi|^2)^{-N}$$ for every $N\geq 0$. Suppose instead that $f$ fulfills \eqref{eq:Ssaltchar} for some $r>0$ and every $N\geq 0$. Then $$ |V_\phi f(x,\xi)| \leq (2\pi)^{-n/2}\int |f(y)| |\phi(y-x)| \, d y. $$ By assumption and Proposition \ref{prop:Ssalt}, there are $C_0,r_1,r_2 > 0$ such that $$ \int |f(y)| |\phi(y-x)| \, d y \leq C_0 \int e^{-r_1 |y|^{1/s}}e^{-r_2|y-x|^{1/s}} \, d y \leq C_1 e^{-r|x|^{1/s}} $$ for some $r>0$, where we use Lemma \ref{lem:absolutineq} for the last inequality. Hence $$ |V_\phi f(x,\xi)| \leq C e^{-r |x|^{1/s}}. $$ Using the same strategy once more but starting with the fact that $$ |V_\phi f (x,\xi) | \leq (2\pi)^{-n/2} \int |\hat{f}(\eta)| |\hat{\phi}(\eta - \xi)| \, d \eta, $$ and this time utilizing Lemma \ref{lem:absolutineq3} instead, we obtain $$ |V_\phi f(x,\xi) | \leq C_N (1+|\xi|^2)^{-N} $$ for every $N\geq 0$. Combining both of these inequalities we see that for every $N\geq 0$, $$ |V_\phi f(x,\xi)|^2 \leq C_N (1+|\xi|^2)^{-N} e^{- r |x|^{1/s}}, $$ and in particular for $N=2k$, $k\geq 0$, $$ |V_\phi f(x,\xi)|^2 \leq C_{2k} (1+|\xi|^2)^{-2k} e^{- r |x|^{1/s}}, $$ thus $$ |V_\phi f(x,\xi) | \leq C_k' (1+|\xi|^2)^{-k} e^{-r' |x|^{1/s}} $$ for all $k\geq 0$, where $C_k' = \sqrt{C_{2k}}$ and $r'=r/2$. By Theorem \ref{thm:SsSTFT}, this shows that $f\in S_s$, which completes the proof of (a). To prove (b), simply perform Fourier transforms in light of Proposition \ref{prop:FT} and apply (a). \end{proof} As with the other results, we state the corresponding theorem for the $\Sigma_s$- and $\Sigma^\sigma$-spaces but omit its proof as it follows by analogous arguments. \begin{thm}\label{thm:SigsFT} Suppose $s,\sigma>0$ and $f\in C^\infty(\mathbb{R}^n)$. \begin{enumerate}[label=(\alph*)] \item $f\in \Sigma_s(\mathbb{R}^n)$ if and only if for every $r>0$ and $N\geq 0$, \begin{equation}\label{eq:Sigmasaltchar} |f(x)|\leq C_r e^{-r|x|^{1/s}}, \quad |\hat{f}(\xi)|\leq C_N (1+|\xi|^2)^{-N}. \end{equation} \item $f\in \Sigma^\sigma(\mathbb{R}^n)$ if and only if for every $r>0$ and every $N\geq 0$, \begin{equation}\label{eq:Sigmasigaltchar} |f(x)|\leq C_N (1+|x|^2)^{-N} , \quad |\hat{f}(\xi)|\leq C_r e^{-r|\xi|^{1/\sigma}}. \end{equation} \end{enumerate} \end{thm} \section{Characterizations of dual spaces}\label{sec:dual} We now move on to the characterization of the duals of the one-parameter spaces $S_s$ and $S^\sigma$. These duals were defined in Section \ref{sec:prelim}. Using the results of Section \ref{sec:stft}, we now arrive at the following equivalent topologies. \begin{prop} Suppose $s,\sigma > 0$.
\begin{enumerate}[label=(\roman*)] \item Let $\phi\in S_s(\mathbb{R}^n)\setminus\{0\}$, let \begin{equation*} p^{\phi}_{s,r,N}(f) = \sup_{x,\xi\in\mathbb{R}^n} \left| V_\phi f(x,\xi) (1+|\xi|^2)^{N}e^{r |x|^{1/s}}\right| \end{equation*} and let $B_{s,r,N}(\mathbb{R}^{n})$ be the Banach space consisting of all $f\in C^\infty(\mathbb{R}^{n})$ such that $p^{\phi}_{s,r,N}(f)$ is finite. Then $$ S_s(\mathbb{R}^n) = \indlim_{r>0}\left(\projlim_{N\geq 0} B_{s,r,N}(\mathbb{R}^{n}) \right) $$ where the equality holds in a topological sense as well. \item Let $\phi\in S^\sigma(\mathbb{R}^n)\setminus\{0\}$, let \begin{equation*} q^{\phi,\sigma}_{r,M}(f) = \sup_{x,\xi\in\mathbb{R}^n} \left| V_\phi f(x,\xi) (1+|x|^2)^{M}e^{r |\xi|^{1/\sigma}}\right| \end{equation*} and let $B^\sigma_{r,M}(\mathbb{R}^{n})$ be the Banach space consisting of all $f\in C^\infty(\mathbb{R}^{n})$ such that $q^{\phi,\sigma}_{r,M}(f)$ is finite. Then $$ S^\sigma(\mathbb{R}^n) = \indlim_{r>0}\left(\projlim_{M\geq 0} B^\sigma_{r,M}(\mathbb{R}^{n}) \right) $$ where the equality holds in a topological sense as well. \end{enumerate} \end{prop} \begin{proof} The equivalence of the semi-norms $p^{\phi}_{s,r,N}$ and $||\cdot||_{s,r,N}$, as well as that of $q^{\phi,\sigma}_{r,M}$ and $||\cdot||^\sigma_{r,M}$, is established implicitly in the proof of Theorem \ref{thm:SsSTFT}. \end{proof} This proposition gives us the following equivalent definitions for the dual spaces $(S_s)'$ and $(S^\sigma)'$. \begin{cor} Suppose $s,\sigma > 0$. \begin{enumerate}[label=(\roman*)] \item Let $\phi\in S_s(\mathbb{R}^n)\setminus\{0\}$ and let $u\in \mathscr{D}'(\mathbb{R}^n)$. Then $u\in (S_s)'(\mathbb{R}^n)$ if and only if for every $r>0$ there is an $N\geq 0$ such that \begin{equation*} |\langle u, f \rangle|\leq C_N p^{\phi}_{s,r,N}(f) \end{equation*} for all $f\in S_s(\mathbb{R}^n)$. \item Let $\phi\in S^\sigma(\mathbb{R}^n)\setminus\{0\}$ and $u\in\mathscr{F}\mathscr{D}'(\mathbb{R}^n)$. Then $u\in (S^\sigma)'(\mathbb{R}^n)$ if and only if for every $r>0$ there is an $M\geq 0$ such that \begin{equation*} |\langle u, f \rangle|\leq C_M q^{\phi,\sigma}_{r,M}(f) \end{equation*} for all $f\in S^\sigma (\mathbb{R}^n)$. \end{enumerate} \end{cor} This brings us to the following characterization of the duals via short-time Fourier transforms, which is the main result of this section. \begin{thm}\label{thm:SsSTFTdual} Suppose $s,\sigma>0$. \begin{enumerate}[label=(\roman*)] \item Let $\phi\in S_s(\mathbb{R}^n)\setminus\{0\}$ and $f\in \mathscr{D}'(\mathbb{R}^n)$. Then $f\in(S_s)'(\mathbb{R}^n)$ if and only if for every $r>0$ there is an $N_0\geq 0$ such that \begin{equation}\label{eq:STFTdual1} |V_\phi f(x,\xi)| \leq C_{r}(1+|\xi|^2)^{N_0} e^{r|x|^{1/s}}. \end{equation} \item Let $\phi\in S^\sigma(\mathbb{R}^n)\setminus\{0\}$ and $f\in \mathscr{F}\mathscr{D}'(\mathbb{R}^n)$.
Then $f\in(S^\sigma)'(\mathbb{R}^n)$ if and only if for every $r>0$ there is an $N_0\geq 0$ such that \begin{equation}\label{eq:STFTdual2} |V_\phi f(x,\xi)| \leq C_{r}(1+|x|^2)^{N_0} e^{r|\xi|^{1/\sigma}}. \end{equation} \end{enumerate} \end{thm} \begin{proof} Suppose $f\in (S_s)'$, $\phi\in S_s\setminus\{0\}$. Then by Proposition \ref{prop:Ssdual} and the Leibniz formula, there is an $N_0>0$ such that \begin{align*} |V_\phi f(x,\xi)| &= |(f,\phi(\cdot-x)e^{i\langle\cdot,\xi\rangle})| \\ &\leq C_{N_0} \sum_{|\alpha|\leq N_0} || D^\alpha \left(\phi(\cdot-x)e^{i\langle \cdot,\xi\rangle}\right) e^{r|\cdot|^{1/s}}||_2\\ &= C_{N_0} \sum_{|\alpha|\leq N_0}\sum_{\gamma\leq\alpha}\binom{\alpha}{\gamma} || (D^\gamma \phi)(\cdot-x) \xi^{\alpha-\gamma} e^{i\langle \cdot,\xi\rangle} e^{r|\cdot|^{1/s}}||_2 \\ &\leq C'_{N_0} (1+|\xi|^2)^{N_0} \sum_{|\alpha|\leq N_0}\sum_{\gamma\leq\alpha}\binom{\alpha}{\gamma} || (D^\gamma \phi)(\cdot-x) e^{r|\cdot|^{1/s}}||_2. \end{align*} By Proposition \ref{prop:Ssalt}, there is an $r_0>0$ such that $|D^{\gamma}\phi(y-x)|\leq C_\gamma e^{-r_0|y-x|^{1/s}}$, hence \begin{align*} |V_\phi f(x,\xi)| &\leq C'_{N_0} (1+|\xi|^2)^{N_0} \sum_{|\alpha|\leq N_0}\sum_{\gamma\leq\alpha}\binom{\alpha}{\gamma} || e^{-r_0|\cdot-x|^{1/s}} e^{r|\cdot|^{1/s}}||_2. \end{align*} By Lemma \ref{lem:absolutineq}, there is a $c\geq 1$ such that \begin{equation*} - r_0|y-x|^{1/s} \leq r_0 |x|^{1/s} - r_0/c \cdot |y|^{1/s}. \end{equation*} Let $r\in (0, r_0 /(2c))$. Then \begin{equation*} - r_0|y-x|^{1/s} \leq - 2 c r|y-x|^{1/s} \leq 2 c r |x|^{1/s} - 2 r |y|^{1/s}, \end{equation*} and \begin{align*} |V_\phi f(x,\xi)| &\leq C'_{N_0} (1+|\xi|^2)^{N_0} \sum_{|\alpha|\leq N_0}\sum_{\gamma\leq\alpha}\binom{\alpha}{\gamma} || e^{2cr |x|^{1/s} - 2 r |\cdot|^{1/s} + r|\cdot|^{1/s}}||_2 \\ &=C'_{N_0} (1+|\xi|^2)^{N_0} e^{2 c r |x|^{1/s}} \sum_{|\alpha|\leq N_0}\sum_{\gamma\leq\alpha}\binom{\alpha}{\gamma} || e^{-r|\cdot|^{1/s}}||_2 \\ &\leq C''_{N_0} (1+|\xi|^2)^{N_0} e^{2 c r |x|^{1/s}}. \end{align*} Clearly, the inequality still holds if we let $r>r_0/(2c)$ (the right hand side only becomes larger), hence we have shown that the inequality is valid for all $r>0$, as was to be shown. Now suppose that for every $r>0$ there is an $N_0 \geq 0$ such that \eqref{eq:STFTdual1} holds. Then by Moyal's identity, for every $\varphi \in S_s $ \begin{equation*} |(f,\varphi)|\leq ||\phi||_2^{-2} |(V_\phi f, V_\phi \varphi)|, \end{equation*} hence \begin{equation*} |(f,\varphi)|\leq ||\phi||_2^{-2} \int \int |V_\phi f(x,\xi)|\cdot |V_\phi \varphi(x,\xi)| \, d x \, d \xi. \end{equation*} By assumption combined with Theorem \ref{thm:SsSTFT}, for every $r,r_1>0$ and any $N_1\geq 0$ there is an $N_0\geq 0$ such that \begin{align*} |(f,\varphi)|&\leq C_{N_0}||\phi||_2^{-2} \int \int (1+|\xi|^2)^{N_0} e^{r|x|^{1/s}} |V_\phi\varphi(x,\xi)| \, d x \, d \xi \\ &= C_{N_0}||\phi||_2^{-2} \int \int (1+|\xi|^2)^{(N_0-N_1)} e^{(r-r_1)|x|^{1/s}} \left|V_\phi\varphi(x,\xi)(1+|\xi|^2)^{N_1} e^{r_1|x|^{1/s}}\right| \, d x \, d \xi \\ &\leq C_{N_0}||\phi||_2^{-2} p^{\phi}_{s,r_1,N_1}(\varphi)\int \int (1+|\xi|^2)^{-(N_1-N_0)} e^{-(r_1-r)|x|^{1/s}} \, d x \, d \xi. \end{align*} Pick $N_1$ such that $N_1>N_0 + n/2$ and pick $r$ such that $r<r_1$. Then we obtain \begin{align*} |(f,\varphi)|&\leq C_{N_0,N_1,r}||\phi||_2^{-2} p^{\phi}_{s,r_1,N_1}(\varphi) \end{align*} for all $r_1>r$. By picking $r>0$ arbitrarily small, we thus obtain \begin{align*} |(f,\varphi)|&\leq C'_{N_1,r_1}\cdot p^{\phi}_{s,r_1,N_1}(\varphi) \end{align*} for all $r_1>0$, and hence $f\in (S_s)'(\mathbb{R}^n)$.
This completes the proof of (i). The proof of (ii) is very similar, utilizing the fact that $\phi\in S^\sigma$ is equivalent to $\hat{\phi}\in S_\sigma $, and the fact that $V_\phi f(x,\xi) = (\hat{f},\hat{\phi}(\cdot-\xi)e^{-i\langle \cdot,x \rangle})$ up to a phase factor. \end{proof} Lastly we include the corresponding result for the dual spaces of $\Sigma_s$ and $\Sigma^\sigma$, which follows by analogous arguments. \begin{thm} Suppose $s,\sigma>0$. \begin{enumerate}[label=(\roman*)] \item Let $\phi\in \Sigma_s(\mathbb{R}^n)\setminus\{0\}$ and $f\in\mathscr{D}'(\mathbb{R}^n)$. Then $f\in(\Sigma_s)'(\mathbb{R}^n)$ if and only if there is an $r_0>0$, a $C>0$ and an $N_0\geq 0$ such that \begin{equation*} |V_\phi f(x,\xi)| \leq C(1+|\xi|^2)^{N_0} e^{r_0|x|^{1/s}}. \end{equation*} \item Let $\phi\in \Sigma^\sigma(\mathbb{R}^n)\setminus\{0\}$ and $f\in\mathscr{F}\mathscr{D}'(\mathbb{R}^n)$. Then $f\in(\Sigma^\sigma)'(\mathbb{R}^n)$ if and only if there is an $r_0>0$, a $C>0$ and an $N_0\geq 0$ such that \begin{equation*} |V_\phi f(x,\xi)| \leq C(1+|x|^2)^{N_0} e^{r_0|\xi|^{1/\sigma}}. \end{equation*} \end{enumerate} \end{thm} \section{Continuity of Toeplitz operators}\label{sec:top} We will now look at Toeplitz operators on one-parameter Gelfand-Shilov spaces. To analyze these, we will need to consider functions on $\mathbb{R}^{2n}$ which belong to different one-parameter Gelfand-Shilov spaces in their different ($n$-dimensional) variables. To make sense of these, we begin by examining the spaces where each variable belongs to a two-parameter Gelfand-Shilov space. These spaces are defined as follows. \begin{definition} Suppose $s_1,s_2,\sigma_1,\sigma_2 > 0$. Then $S_{s_1,\sigma_2}^{\sigma_1,s_2}(\mathbb{R}^{2 n})$ consists of every $f\in C^{\infty}(\mathbb{R}^{2n})$ for which there is an $h>0$ such that \begin{equation*} \sup \dfrac{\left| x^{\alpha_1} \xi^{\alpha_2} D_x^{\beta_1} D_\xi^{\beta_2} f(x,\xi) \right|}{h^{|\alpha_1 + \alpha_2 + \beta_1 + \beta_2|} (\alpha_1!)^{s_1} (\alpha_2!)^{\sigma_2} (\beta_1!)^{\sigma_1} (\beta_2!)^{s_2}} < \infty \end{equation*} for every $\alpha_1,\alpha_2,\beta_1,\beta_2\in \mathbb{N}^n,$ where the supremum is taken over $x,\xi\in\mathbb{R}^n$. \end{definition} We can interpret this as a space where functions belong to $S_{s_1}^{\sigma_1}(\mathbb{R}^{n})$ in the $x$-variable and $S_{\sigma_2}^{s_2}(\mathbb{R}^{n})$ in the $\xi$-variable. Note that $S_{s,s}^{\sigma,\sigma}(\mathbb{R}^{2 n}) = S_s^\sigma(\mathbb{R}^{2 n})$. With the notations $S_{s}^\infty = S_s$ and $S_\infty^\sigma = S^\sigma$, we can construct similar spaces where the functions belong to the one-parameter spaces in single variables instead. These are the spaces we will focus on in this section. \begin{definition} Suppose $s,t> 0$. Then $S_{s,\infty}^{\infty,t}(\mathbb{R}^{2 n})$ consists of every $f\in C^\infty(\mathbb{R}^{2 n})$ for which there is an $h>0$ such that \begin{equation}\label{eq:doubledef} \sup \dfrac{\left| x^{\alpha_1} \xi^{\alpha_2} D_x^{\beta_1} D_\xi^{\beta_2} f(x,\xi) \right|}{h^{|\alpha_1 + \beta_2|}(\alpha_1!)^s (\beta_2!)^t} \leq C_{\beta_1,\alpha_2} \end{equation} for every $\alpha_1,\alpha_2,\beta_1,\beta_2\in \mathbb{N}^n,$ where the supremum is taken over $x,\xi\in\mathbb{R}^n$ and where $C_{\beta_1,\alpha_2}$ is a constant depending only on $\beta_1$ and $\alpha_2$.
\end{definition} In a similar way, for $\sigma,\tau>0$ we let $S_{\infty,\tau}^{\sigma,\infty}(\mathbb{R}^{2 n})$ consist of every $f\in C^\infty(\mathbb{R}^{2 n})$ for which there is an $h>0$ such that \begin{equation}\label{eq:doubledef2} \sup \dfrac{\left| x^{\alpha_1} \xi^{\alpha_2} D_x^{\beta_1} D_\xi^{\beta_2} f(x,\xi) \right|}{h^{|\beta_1+\alpha_2|}(\beta_1!)^{\sigma}(\alpha_2!)^{\tau}} \leq C_{\alpha_1,\beta_2} \end{equation} for every $\alpha_1,\alpha_2,\beta_1,\beta_2\in \mathbb{N}^n$. Here, the supremum is taken over $x,\xi\in\mathbb{R}^n$ and $C_{\alpha_1,\beta_2}$ is a constant depending only on $\alpha_1$ and $\beta_2$. We also consider the duals of these spaces, which we construct as follows. Let $||f||_{h,N,M}$ be the supremum in \eqref{eq:doubledef} taken over $x,\xi\in\mathbb{R}^n$ and $\alpha_1,\beta_2\in\mathbb{N}^n$, but only over $|\beta_1|\leq N$ and $|\alpha_2|\leq M$. With \[ V_{h,N,M}(\mathbb{R}^{2 n}) = \{f\in C^\infty(\mathbb{R}^{2 n}): ||f||_{h,N,M}<\infty \},\] we observe that \begin{equation} S_{s,\infty}^{\infty,t}(\mathbb{R}^{2 n}) = \indlim_{h>0} \left(\projlim_{N,M\geq 0} V_{h,N,M}(\mathbb{R}^{2 n}) \right). \end{equation} It is therefore natural to set \begin{equation} (S_{s,\infty}^{\infty,t})'(\mathbb{R}^{2 n}) = \projlim_{h>0} \left(\indlim_{N,M\geq 0} (V_{h,N,M})'(\mathbb{R}^{2 n}) \right). \end{equation} We construct the space $(S_{\infty,\tau}^{\sigma,\infty})'(\mathbb{R}^{2 n})$ analogously. Similar to the characterizations of $S_s$ and $S^\sigma$ via the Fourier transform in Theorem \ref{thm:SsFT}, we can characterize the double spaces $S_{s,\infty}^{\infty,t}$ and $S_{\infty,\tau}^{\sigma,\infty}$ as follows. \begin{prop}\label{prop:doublespace} Suppose $s,t,\sigma,\tau>0$ and $f\in C^{\infty}(\mathbb{R}^{2 n})$. \begin{enumerate}[label=(\alph*)] \item $f\in S_{s,\infty}^{\infty,t}(\mathbb{R}^{2 n})$ if and only if there is an $r>0$ such that \begin{equation} |f(x,\xi)|\leq C_N (1+|\xi|^2)^{-N} e^{-r |x|^{1/s}}, \quad |\hat{f}(\eta,y)|\leq C_N (1+|\eta|^2)^{-N} e^{- r|y|^{1/t}} \end{equation} for every $N\geq 0$. \item $f\in S_{\infty,\tau}^{\sigma,\infty}(\mathbb{R}^{2 n})$ if and only if there is an $r>0$ such that \begin{equation} |f(x,\xi)|\leq C_N (1+|x|^2)^{-N} e^{-r |\xi|^{1/\tau}}, \quad |\hat{f}(\eta,y)|\leq C_N (1+|y|^2)^{-N} e^{- r|\eta|^{1/\sigma}} \end{equation} for every $N\geq 0$. \end{enumerate} \end{prop} \begin{proof} This result follows directly from Theorem \ref{thm:SsFT}. \end{proof} With this in mind we now consider Toeplitz operators on one-parameter Gelfand-Shilov spaces. \begin{definition}\label{def:top} Let $s,\sigma>0$, $\phi_1,\phi_2 \in S_s(\mathbb{R}^n)$ and $a\in S_{s,\infty}^{\infty,s}(\mathbb{R}^{2 n})$. The Toeplitz operator $Tp_{\phi_1,\phi_2}(a)$ is given by \begin{equation}\label{eq:top} (Tp_{\phi_1,\phi_2}(a) f, g)_{L^2(\mathbb{R}^{n})} = (a, \overline{V_{\phi_1} f}\cdot V_{\phi_2} g)_{L^2(\mathbb{R}^{2 n})} \end{equation} for every $f\in S_s(\mathbb{R}^n)$ and $g\in (S_s)'(\mathbb{R}^n)$. If instead $\phi_1,\phi_2\in S^\sigma(\mathbb{R}^n)$ and $a\in S^{\sigma,\infty}_{\infty,\sigma}(\mathbb{R}^{2 n})$, the Toeplitz operator is given by \eqref{eq:top} for every $f\in S^\sigma(\mathbb{R}^n)$ and $g\in (S^\sigma)'(\mathbb{R}^n)$.
\end{definition} We observe that the Toeplitz operator in \eqref{eq:top} can be expressed as $$ Tp_{\phi_1,\phi_2}(a)f = V_{\phi_2}^*(a \cdot V_{\phi_1} f)$$ and that this is a continuous operator from $S_s(\mathbb{R}^{n})$ to $S_s(\mathbb{R}^{n})$ when $a\in S_{s,\infty}^{\infty,s}(\mathbb{R}^{2 n})$, and from $S^\sigma(\mathbb{R}^{n})$ to $S^\sigma(\mathbb{R}^{n})$ when $a\in S_{\infty,\sigma}^{\sigma,\infty}(\mathbb{R}^{2 n})$. We now want to show that we can loosen the restriction on $a$ to instead be in the duals of $S_{s,\infty}^{\infty,s}(\mathbb{R}^{2 n})$ and $S_{\infty,\sigma}^{\sigma,\infty}(\mathbb{R}^{2 n})$. To do this, we need the following lemma. \begin{lemma}\label{lem:doubleFT} Let $\phi_1,\phi_2,f \in S_s(\mathbb{R}^n)$ and $g \in (S_s)'(\mathbb{R}^n)$. Then \begin{equation*} \mathscr{F} [\overline{V_{\phi_1} f} \cdot V_{\phi_2} g] (\eta,y) = e^{-i\langle y, \eta\rangle} V_{\phi_2} \phi_1 (y,-\eta) \cdot V_f g(-y,\eta). \end{equation*} \end{lemma} \begin{proof} We have \begin{align*} \mathscr{F} [\overline{V_{\phi_1} f} \cdot V_{\phi_2} g] (\eta,y) &= (2\pi)^{-2 n} \iiiint \overline{f(z)} \phi_1(z-x) g(w) \overline{\phi_2(w-x)} e^{i(z-w-y)\xi - i x\eta} \, d z d w d x d \xi \\ &= (2\pi)^{- n} \iint \overline{f(w+y)} \phi_1(w+y-x) g(w) \overline{\phi_2(w-x)} e^{- i x\eta} \, d w \, d x \\ &= (2\pi)^{- n} \iint \overline{f(s)} \phi_1(s-x) g(s-y) \overline{\phi_2(s-y-x)} e^{- i x\eta} \, d s \, d x \\ &= (2\pi)^{- n} \iint \overline{f(s)} \phi_1(t) g(s-y) \overline{\phi_2(t-y)} e^{- i (s-t)\eta} \, d s \, d t \\ &= (2\pi)^{- n} \iint \overline{f(z+y)} \phi_1(t) g(z) \overline{\phi_2(t-y)} e^{- i (z+y-t)\eta} \, d z \, d t \\ &= e^{-i\langle y, \eta\rangle} V_{\phi_2} \phi_1 (y,-\eta) \cdot V_f g(-y,\eta) \end{align*} where we apply the Fourier inversion theorem in the second step, apply the variable substitution $s = w + y$ in the third step, $t = s - x$ in the fourth step and $z = s - y$ in the fifth step. \end{proof} With this lemma in mind we move on to the main result of this section. \begin{thm} Suppose $\phi_1,\phi_2 \in S_s(\mathbb{R}^n)$. Then the following is true. \begin{enumerate}[label=(\alph*)] \item The definition of $Tp_{\phi_1,\phi_2}(a)$ is uniquely extendable to any $a\in (S_{s,\infty}^{\infty,s})'(\mathbb{R}^{2 n})$ and is then continuous from $S_s(\mathbb{R}^n)$ to $(S_s)'(\mathbb{R}^n)$. \item If $a\in (S_{s,\infty}^{\infty,s})'(\mathbb{R}^{2 n})$ then $Tp_{\phi_1,\phi_2}(a)$ is continuous on $S_s(\mathbb{R}^n)$ and uniquely extendable to a continuous operator on $(S_s)'(\mathbb{R}^n)$. \end{enumerate} \end{thm} \begin{proof} For (a), it is sufficient to show that $H = \overline{V_{\phi_1} f}\cdot V_{\phi_2} g \in S_{s,\infty}^{\infty,s}(\mathbb{R}^{2 n})$ whenever $f\in S_s(\mathbb{R}^n)$ and $g\in (S_s)'(\mathbb{R}^n)$ and for (b), the same statement is sufficient but with $f\in (S_s)'(\mathbb{R}^n)$ and $g\in S_s(\mathbb{R}^n)$. We begin by proving (a). By Theorem \ref{thm:SsSTFT} and Theorem \ref{thm:SsSTFTdual}, for every $N\geq 0$ and every $r>0$, there are $r_0 > 0$, $N_0 \geq 0$ and $C_{N,r} > 0$ such that \begin{align*} |H(x,\xi)| &= | V_{\phi_1} f (x,\xi)|\cdot |V_{\phi_2}g(x,\xi)| \\ &\leq C_{N,r} (1 + |\xi|^2)^{-(N-N_0)} e^{-(r_0-r) |x|^{1/s}}. \end{align*} Picking $r<r_0$ and noting that $N\geq 0$ is arbitrary gives the first inequality of Proposition \ref{prop:doublespace}. By Lemma \ref{lem:doubleFT}, \begin{align*} |\hat{H}(\eta,y)| &= | V_{\phi_2} \phi_1 (y,-\eta) \cdot V_f g(-y,\eta)|.
\end{align*} Applying Theorem \ref{thm:SsSTFT} and Theorem \ref{thm:SsSTFTdual} exactly as before, we now obtain the second inequality of Proposition \ref{prop:doublespace}. This completes the proof of (a). To prove (b), simply reverse the roles of $f$ and $g$ and the result follows. \end{proof} We also state the corresponding result for $a\in (S_{\infty,\sigma}^{\sigma,\infty})'(\mathbb{R}^{2 n})$, which follows by similar arguments. \begin{thm} Suppose $\phi_1,\phi_2 \in S^\sigma(\mathbb{R}^n)$. Then the following is true. \begin{enumerate}[label=(\alph*)] \item The definition of $Tp_{\phi_1,\phi_2}(a)$ is uniquely extendable to any $a\in (S_{\infty,\sigma}^{\sigma,\infty})'(\mathbb{R}^{2 n})$ and is then continuous from $S^\sigma(\mathbb{R}^n)$ to $(S^\sigma)'(\mathbb{R}^n)$. \item If $a\in (S_{\infty,\sigma}^{\sigma,\infty})'(\mathbb{R}^{2 n})$ then $Tp_{\phi_1,\phi_2}(a)$ is continuous on $S^\sigma(\mathbb{R}^n)$ and uniquely extendable to a continuous operator on $(S^\sigma)'(\mathbb{R}^n)$. \end{enumerate} \end{thm}
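We conclude with a small numerical illustration, which is no part of the formal development, of the short-time Fourier transform estimates underlying the results above. The following Python sketch (our own, with the grid sizes and the constant $C_3$ chosen ad hoc) evaluates $V_\phi f$ by quadrature for the one-dimensional Gaussian $f=\phi=e^{-y^2/2}\in S_{1/2}\subset S_s$, $s\geq 1/2$, compares it with the closed form $|V_\phi f(x,\xi)|=2^{-1/2}e^{-(x^2+\xi^2)/4}$, and checks an instance of the bound \eqref{eq:STFT3} with $s=1/2$, $r=1/4$ and $N=3$.
\begin{verbatim}
import numpy as np

y = np.linspace(-30.0, 30.0, 6001)        # quadrature grid
f = np.exp(-y**2 / 2.0)                   # f = phi = Gaussian, n = 1

def trapz(g):
    """Trapezoidal rule on the fixed grid y."""
    return np.sum((g[1:] + g[:-1]) * np.diff(y)) / 2.0

def V(x, xi):
    """V_phi f(x,xi) = (2 pi)^{-1/2} int f(y) conj(phi(y-x)) e^{-i y xi} dy."""
    return trapz(f * np.exp(-(y - x)**2 / 2.0)
                   * np.exp(-1j * y * xi)) / np.sqrt(2.0 * np.pi)

err = 0.0
for x in np.linspace(-6.0, 6.0, 25):
    for xi in np.linspace(-6.0, 6.0, 25):
        v = abs(V(x, xi))
        err = max(err, abs(v - np.exp(-(x**2 + xi**2) / 4.0) / np.sqrt(2.0)))
        # instance of the decay estimate: s = 1/2, r = 1/4, N = 3, C_3 = 100
        assert v <= 100.0 * (1.0 + xi**2)**(-3) * np.exp(-0.25 * abs(x)**2) + 1e-9
print("max |quadrature - closed form| =", err)
\end{verbatim}
The closed form follows from completing the square in the Gaussian integral; the assertion simply exhibits one admissible pair $(r,C_N)$ on the sampled grid.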
\section{Introduction} Fe-Al alloys are attractive materials that exhibit good oxidation and sulfidation resistance, excellent resistance to abrasive wear and erosion, high strength, relatively low density and high magnetic permeability~\cite{Knibloe1993,Eggersmann2000,Ikeda2001}. Due to these unique properties, Fe-Al alloys are promising high-temperature functional materials for magnetic and diffusion barrier applications and for corrosion-stable interconnections in microelectronics. {The functional properties of ordered alloys strongly depend on their phase structure. According to the experimental phase diagram (see Fig.~\ref{fig_0}), three bcc structures (A2, B2 and D0$_3$) with different degrees of order are formed in the iron-rich region depending on temperature and composition. This fact makes Fe-Al attractive for the study of ordering reactions, since the bcc structure is stable over a wide concentration and temperature range.} \begin{figure}[!htb] \centering \includegraphics[height=6cm,clip]{Fig_0.png} \hfil \caption{Experimental phase diagram of Fe$_{100-x}$Al$_x$ (${0\leq x \leq 50}$) adapted from~\cite{Stein}. The dash-dotted lines mark the transitions from ferromagnetic~(FM) to paramagnetic~(PM) $\alpha-$Fe and Fe$_3$Al. The dotted line between about 23 and 24 at.\% Al indicates the ($\alpha-$Fe + Fe$_3$Al)/Fe$_3$Al boundary. } \label{fig_0} \end{figure} In the concentration range up to $\approx19$~at.\% Al there is the $\alpha$-region with the disordered bcc A2~($\alpha$-Fe) phase, whereas the ordered phases based on the B2 (FeAl) and D0$_3$~(Fe$_3$Al) structures form at higher Al content~\cite{Knibloe1993,Ikeda2001,Leamy1967,Ershov2018}. The D0$_3$ phase is stable below 825~K for compounds with $18 < x < 37$~at.\% Al, while the B2 phase occurs in the concentration range between 23 and 54~at.\% Al depending on temperature (see Fig.~\ref{fig_0}). In general, there are two ``order-disorder'' (B2$\rightarrow$A2 and D0$_3\rightarrow$B2) transformations and one paramagnetic (PM)$\rightarrow$ferromagnetic (FM) (A$2^{\mathrm{PM}}\rightarrow$A$2^{\mathrm{FM}}$) transformation in Fe-Al compounds~\cite{Ikeda2001, Stein}. Both transformations (B2$\rightarrow$A2 and D0$_3\rightarrow$B2) are of second order; B2$\rightarrow$A2 is observed for compounds with $23 < x < 45$~at.\% Al, whereas D0$_3\rightarrow$B2 takes place in a narrow concentration range, with the maximal transition temperature of 818~K observed at 26.5~at.\%. {It should be noted that the Fe-Al phase diagram includes some features related to the areas of coexistence of these three structures. One of them is the small triangular area between 815 and 900~K for compositions with $21.3 < x < 22.8$~at.\% Al, where the FM disordered A2 phase coexists with the PM B2 phase~\cite{Ikeda2001, Stein, Okamoto}. The other feature is the two-phase (A2 and D0$_3$) region in the concentration range from 18 to 25~at.\% below 814~K. This region is characterized by a first-order ``order-disorder'' transition.} {The tetragonal magnetostriction $\lambda_{001}$ of Fe-Al alloys shows an unusual composition dependence, similar to that of Fe-rich Fe-Ga alloys~\cite{Clark_2008}. Room temperature magnetostriction measurements of Fe-Al alloys indicated a five-fold rise in magnetostriction with Al content up to 30\% Al. Nevertheless, the $\lambda_{001}$ values for Fe-Al are smaller than in the Fe-Ga system.
This finding can be explained by the smaller magnitude of the magnetoelastic constant $-b_1$ and by the decrease of the shear modulus $C^{\prime}$ with Al content, which is not as large as in Fe-Ga~\cite{Clark_2008}. } The aim of this paper is a comprehensive study of the structural and magnetic properties of Fe$_{100-x}$Al$_x$ ($5 \leq x \leq 25$~at.\%) alloys by zero-temperature density functional theory combined with finite-temperature Monte Carlo (MC) simulations. \section{Methodology and calculation details} {The \textit{ab initio} calculations were performed using the spin-polarized relativistic Korringa-Kohn-Rostoker code (SPR-KKR)~\cite{Ebert-sprkkr} implementing the KKR Green's function multiple scattering theory. The exchange-correlation energy was treated in the Perdew–Burke–Ernzerhof (PBE)~\cite{Perdew-1996} generalized gradient approximation (GGA).} {The single-site coherent potential approximation (CPA), which successfully describes the properties of many compositionally disordered alloy systems, was used to create the off-stoichiometric compositions. An impurity of each atom type, Fe or Al, is placed in an effective CPA medium, and the alloy is then described by the weighted average of the two impurity solutions. As shown in~\cite{Banhart}, although the CPA deals with disordered systems, it can describe long-range ordered systems too.} {The electronic structure calculations of Fe$_{100-x}$Al$_x$ ($5 \leq x \leq 25$~at.\%) were performed using the spin-polarized scalar-relativistic mode for three cubic crystal structures with different atomic order. The fully ordered D0$_3$ structure with space group $Fm\bar{3}m$ (No. 225) was created with a unit cell consisting of four atoms. The Al and Fe atoms were located at the $4a\ (0;\ 0;\ 0)$ Wyckoff position, and Fe atoms occupied the $4b\ (0.5;\ 0.5;\ 0.5)$ and $8c\ (0.25;\ 0.25;\ 0.25)$ Wyckoff positions. For stoichiometric Fe$_{75}$Al$_{25}$, the $4a$ site was fully occupied by Al. The partially ordered B2 phase with space group $Pm\bar{3}m$ (No. 221) was created using two atoms per unit cell, where Fe and Al atoms randomly occupied the $1b\ (0.5;\ 0.5;\ 0.5)$ site while the $1a\ (0;\ 0;\ 0)$ site was occupied by Fe atoms. For the disordered A2 phase ($Im\bar{3}m$ space group, No. 229), Fe and Al atoms were distributed at the $2a\ (0;\ 0;\ 0)$ Wyckoff position. The calculations of the optimized lattice constants were performed as single-point energy calculations at several different volumes for each structure under study. To evaluate the equilibrium volume more precisely, the total-energy curves as functions of volume were fitted with the Birch–Murnaghan equation of state.} {For the determination of the tetragonal shear modulus $C^{\prime}$, the cubic structures were deformed from their optimized geometries along the $z$~axis assuming a volume-conserving mode $\varepsilon_x= \varepsilon_y = - 1/2 \varepsilon_z$ in the range of $\varepsilon = \pm2\%$.} $C^{\prime}$ was calculated from the $\varepsilon_z$-dependent total energy $E_\mathrm{tot.}$ according to the following equation~\cite{Wang2013}, \begin{equation} C^{\prime}=\frac{(C_{11}-C_{12})}{2}= \displaystyle\frac{1}{3V_0}\frac{d^2E_\mathrm{tot.}}{d\varepsilon^2} \label{shear modulus} \end{equation} where $V_0$ is the equilibrium volume of the unit cell.
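The two fits just described (Birch–Murnaghan for $V_0$, quadratic in $\varepsilon$ for $C^{\prime}$ via Eq.~(\ref{shear modulus})) are straightforward to script. The following Python sketch uses synthetic placeholder numbers rather than the computed SPR-KKR energies and is meant only to illustrate the post-processing:
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

EV_A3_TO_GPA = 160.2176634                  # 1 eV/Angstrom^3 in GPa

def birch_murnaghan(V, E0, V0, B0, B0p):
    """Third-order Birch-Murnaghan equation of state E(V)."""
    eta = (V0 / V)**(2.0 / 3.0)
    return E0 + 9.0 * V0 * B0 / 16.0 * ((eta - 1.0)**3 * B0p
                                        + (eta - 1.0)**2 * (6.0 - 4.0 * eta))

# Synthetic E(V) points (placeholders for the single-point calculations)
V = np.linspace(10.5, 13.5, 9)                       # A^3/atom
E = birch_murnaghan(V, -8.0, 11.9, 1.10, 4.5)        # B0 in eV/A^3
E = E + np.random.default_rng(0).normal(0.0, 2e-4, V.size)

popt, _ = curve_fit(birch_murnaghan, V, E,
                    p0=(E.min(), V[np.argmin(E)], 1.0, 4.0))
E0, V0, B0, B0p = popt
print(f"V0 = {V0:.3f} A^3/atom, B0 = {B0 * EV_A3_TO_GPA:.1f} GPa")

# Synthetic E(eps) for eps_z in +-2% with eps_x = eps_y = -eps_z/2
eps = np.linspace(-0.02, 0.02, 9)
Cp_true = 0.40                                       # eV/A^3, placeholder
Eeps = E0 + 1.5 * V0 * Cp_true * eps**2              # E = E0 + (3/2) V0 C' eps^2

a2 = np.polyfit(eps, Eeps, 2)[0]                     # coefficient of eps^2
Cp = 2.0 * a2 / (3.0 * V0)                           # C' = (1/3V0) d2E/deps2
print(f"C' = {Cp * EV_A3_TO_GPA:.1f} GPa")
\end{verbatim}
All numerical values here (volumes, $B_0$, $C^{\prime}$) are illustrative assumptions, not results of this work.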
For the A2, B2 and D0$_3$ cubic structures, the magnetostrictive coefficient $\lambda_{001}$ was determined from the dependence of the magnetocrystalline anisotropy energy $E_\mathrm{{MCA}}$ on the tetragonal strain as~\cite{Wang2013} \begin{equation} \label{eq_lambda} \lambda_{001} = \frac{2}{3V_0 C^{\prime}}\frac{dE_{\mathrm{{MCA}}}}{d\varepsilon} = -\frac{b_1}{3C^{\prime}}, \end{equation} where $-b_1$ is the magnetoelastic constant. The $E_\mathrm{{MCA}}$ was calculated using the torque method implemented in the SPR-KKR package. {The Heisenberg exchange parameters $J_{ij}$ were evaluated within the KKR multiple-scattering formalism proposed by Liechtenstein et al.~\cite{Liechtenstein}. The spin-polarized scalar-relativistic mode and the PBE approach for the exchange-correlation potential were used to calculate the $J_{ij}$ parameters as functions of the distance between interacting atoms. Using the calculated $J_{ij}$ and Fe magnetic moments ($\mu^{\mathrm{Fe}}$) as input data, finite-temperature MC simulations of the Heisenberg model, without magnetic field and anisotropy terms, were performed:} \begin{equation} {\cal H}= - \sum_{ij}J_{ij}\,\mathbf{S}_i\cdot\mathbf{S}_j. \label{heis_ham} \end{equation} Here, $\mathbf{S}_{i}=\left(S_{i}^{x},\ S_{i}^{y},\ S_{i}^{z}\right)$ is a classical Heisenberg spin variable with $\left|\mathbf{S}_i\right|=1$. {The $J_{ij}$ were taken into account up to the sixth coordination sphere and can be ferromagnetic (FM, $J_{ij}>0$) or antiferromagnetic (AFM, $J_{ij}<0$).} The modelling was performed on a lattice of 3925 atoms with periodic boundary conditions using the Metropolis algorithm~\cite{LandauBinder}. {One MC step corresponded to $N$ attempted updates at randomly selected lattice sites, where $N$ is the number of atoms in the lattice. For a given temperature, the number of MC steps per site was $5\times10^5$.} The magnetic order parameter $m$ and the total magnetization $M$ are defined in the following way: \begin{equation} \begin{array}{ll} m^{\mathrm{Fe}}=\displaystyle\frac{1}{N^{\mathrm{Fe}}}\sqrt{\Big(\sum_{i}S_i^x\Big)^{2}+\Big(\sum_{i}S_i^y\Big)^{2}+\Big(\sum_{i}S_i^z\Big)^{2}},\\[1mm] M=3\mu^{\mathrm{Fe}}m^{\mathrm{Fe}}. \end{array} \end{equation} The Curie temperature ($T_C$) was estimated from the temperature dependence of the magnetization by plotting the function $M^{1/\beta}\left(T\right)$, which decreases almost linearly with increasing temperature. The intersection of the {$M^{1/\beta}\left(T\right)$} curve with the $T$ axis indicates $T_C$. Here, $\beta$ is the critical exponent, equal to $0.3646$ for the three-dimensional Heisenberg model~\cite{Huang-1987}. \section{Results and discussion} As the first step of our calculations, we performed the geometry optimization of the cubic crystal structures. {The calculated ground-state energies summarized in Table~\ref{tab1} show that the D0$_3$ structure has the lowest energy for all considered compounds. The energy difference between the B2 and D0$_3$ structures ($\Delta E_0^{\mathrm{B}2-\mathrm{D}0_3}$) is smaller than that between the A2 and D0$_3$ ones ($\Delta E_0^{\mathrm{A}2-\mathrm{D}0_3}$). Both $\Delta E_0^{\mathrm{B}2-\mathrm{D}0_3}$ and $\Delta E_0^{\mathrm{A}2-\mathrm{D}0_3}$ are increasing functions of the Al concentration, reaching 65~meV/atom and 101~meV/atom, respectively, for stoichiometric Fe$_{75}$Al$_{25}$.} \begin{table}[htb!]
\begin{center} \caption{The ground state energy $E_0$ (eV/atom) of Fe$_{100-x}$Al$_x$ (${5\leq x \leq 25}$~at.\%) with different cubic crystal structures, obtained via \textit{ab initio} calculations at zero temperature.} \label{tab1} \begin{tabular}{l|cc c c } \hline $x$, at.\% &D0$_3$ &B2$^{\mathrm{FM}}$ & {B2$^{\mathrm{NM}}$} & A2 \\ \hline 5 & $-33219.647$& $-33219.644$& $-33219.179$& $-33219.642$ \\ 10& $-31818.911$& $-31818.898$& $-31818.488$& $-31818.893$\\ 15& $-30418.175$& $-30418.148$& $-30417.800$& $-30418.137$\\ 18& $-29577.735$& $-29577.697$& $-29577.390$& $-29577.681$\\ 20& $-29017.442$& $-29017.397$& $-29017.117$& $-29017.376$\\ 21& $-28737.296$& $-28737.247$& $-28736.981$& $-28737.223$\\ 24& $-27896.859$& $-27896.797$& --& $-27896.764$\\ 25& $-27616.713$& $-27616.648$& $-27616.437$& $-27616.612$\\ \hline \end{tabular} \end{center} \end{table} {According to the experimental data, the B2 structure is paramagnetic~\cite{Miyatani}. However, \textit{ab initio} calculations generally predict a FM ground state for the B2 phase~\cite{Lechermann,Rhee}, as in our case (see Table~\ref{tab1}). There are two ways to explain this finding. One is the presence of defects in an experimental sample, which have a strong influence on the magnetic state. The other is the fact that the magnetic state depends strongly on the degree of chemical order~\cite{Lechermann}. As shown by Mohn \textit{et al.}~\cite{Mohn} and Rhee \textit{et al.}~\cite{Rhee}, the correct \textit{ab initio} prediction of the magnetic state can be achieved by describing exchange and correlation within the local-density approximation with a Hubbard term (LDA$+U$).} {In the present work, we performed electronic structure calculations for the nonmagnetic state of the B2 phase (labeled B2$^{\mathrm{NM}}$) with the exchange-correlation energy treated in the GGA approximation. As can be seen from Table~\ref{tab1}, the obtained ground state energies for the nonmagnetic state of B2 are higher than those for the disordered A2 structure. In the further discussion, all results for the B2 structure will be presented for the FM state.} Fig.~\ref{fig_1}~(a) shows the equilibrium lattice parameter $a_0$ ($a_0/2$ for D0$_3$) of the phases, corresponding to the minimum of the energy $E_0$, as a function of Al concentration for the D0$_3$, B2, and A2 structures of Fe$_{100-x}$Al$_x$ alloys, in comparison with available experimental data taken from~\cite{Leamy1967,Balagurov}. \begin{figure}[!htb] \centering \includegraphics[height=6cm,clip]{fig_1_new.png} \includegraphics[height=6cm,clip]{fig_2a.png} \hfil \caption{(Color online)~(a)~Calculated lattice parameters $a_0$ ($a_0/2$ for D0$_3$) and (b)~shear moduli as a function of the Al composition of Fe$_{100-x}$Al$_x$ (${5\leq x \leq 25}$~at.\%) alloys with A2, B2 and D0$_3$ structures. The experimental results at room temperature were taken from Leamy~et~al.~\cite{Leamy1967}, Balagurov~et~al.~\cite{Balagurov}, and Restorff~et~al.~\cite{Restorff}. } \label{fig_1} \end{figure} For all cubic structures calculated herein, the lattice parameter is found to increase with increasing Al content. The differences $a_0^{\mathrm{A}2}-a_0^{\mathrm{D}0_3}$ and $a_0^{\mathrm{D}0_3}-a_0^{\mathrm{B}2}$ also increase and are numerically quite similar. At 18~at.\% Al, the data for A2 and D0$_3$ are equally distant from the dashed experimental line. With further increase of the Al content, the slope of the theoretical curve $a_0^{\mathrm{A}2}(x)$ becomes higher than the experimental one.
The slope of the $a_0^{\mathrm{D}0_3}(x)$ curve is similar to the experimental one, in agreement with the neutron diffraction experiments of Balagurov et al.~\cite{Balagurov}. According to~\cite{Balagurov}, the D0$_3$ phase is the main structural phase in the range from 21 to 34~at.\%. The concentration dependences of the calculated shear moduli of the D0$_3$, B2, and A2 structures are shown in Fig.~\ref{fig_1}~(b). For comparison, experimental data taken from~\cite{Leamy1967,Restorff} are plotted on the graph. $C^{\prime}$ decreases linearly with increasing Al content, and the shear moduli of the considered structures are close to each other for Al content up to 10~at.\%. An anomaly in the $C^{\prime}$ behavior takes place for compounds with $x > 15$~at.\% Al. It could be related to the structural transformations which occur in the alloys, as indicated by the atomic volume changes (see Fig.~\ref{fig_1}~(a)). Thus, the difference in the $C^{\prime}$ behavior between D0$_3$ (B2) and A2 has been attributed to the transition from the disordered to the ordered phase~\cite{Cullen}. {Let us next consider the results of the calculations of the magnetocrystalline anisotropy energy $E_{\mathrm{MCA}}$ for Fe$_{100-x}$Al$_x$ (${5\leq x \leq 25}$~at.\%) with A2, D0$_3$ and B2 structures using the torque method. To demonstrate the tendency of the $E_{\mathrm{MCA}}(\varepsilon)$ behavior for all structures, in Fig.~\ref{fig_2} we show the results for compounds with $x=10,\ 20,\ 24$~at.\%. For the A2 and B2 structures, the strain dependences of $E_{\mathrm{MCA}}(\varepsilon)$ show linear behavior with a positive slope, which increases slightly with Al content for the A2 phase and is nearly constant for the B2 phase. In the case of the D0$_3$ structure, an increase in Al content to 20~at.\% leads to a linear decrease of the $E_{\mathrm{MCA}}(\varepsilon)$ curve: the slope changes from positive to negative for Fe$_{80}$Al$_{20}$.} \begin{figure}[!htb] \centering \includegraphics[height=6cm,clip]{fig_3.png} \hfil \caption{(Color online)~Calculated strain dependencies of magnetocrystalline anisotropy energy $E_{\mathrm{MCA}}$ of Fe$_{100-x}$Al$_x$ ($x=10,\ 20,\ 24$~at.\%) with D0$_3$, B2, and A2 structures.} \label{fig_2} \end{figure} The theoretical results for the tetragonal magnetostriction $\lambda_{001}$ calculated by Eq.~(\ref{eq_lambda}) are presented in Fig.~\ref{fig_3}. As can be seen, the A2 and B2 structures provide a positive tetragonal magnetostriction, while for D0$_3$ it is positive only up to 18~at.\%. \begin{figure}[!htb] \centering \includegraphics[height=6cm,clip]{fig_4.png} \hfil \caption{(Color online)~Calculated tetragonal magnetostriction constants of Fe$_{100-x}$Al$_x$ (${5\leq x \leq 25}$~at.\%) with D0$_3$, B2, and A2 structures. The experimental results were taken from Restorff~et~al.~\cite{Restorff}.} \label{fig_3} \end{figure} {This tendency follows from the $E_{\mathrm{MCA}}$ behavior. The higher theoretical shear moduli $C^{\prime}$ could explain the numerically smaller value of $\lambda_{001}$ for the A2 structure compared with the B2 phase.} A similar investigation of the tetragonal magnetostriction using the torque method was done for Fe-Ga alloys~\cite{Matyunina}. It was shown that in the region of Ga concentrations $21\leq x \leq 25$~at.\% the main contribution to the tetragonal magnetostriction comes from the B2 phase.
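Before turning to the exchange couplings and Curie temperatures, we illustrate the finite-temperature procedure of Section 2 with a minimal, self-contained Python sketch. It is not the production code: it uses a small simple-cubic toy lattice with a single nearest-neighbour coupling $J$ (placeholders for the bcc Fe-Al lattice with six coordination shells and the \textit{ab initio} $J_{ij}$) and far fewer MC steps than the $5\times10^5$ per site quoted above, but the Metropolis update and the $M^{1/\beta}$ extrapolation with $\beta=0.3646$ are the same in spirit:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
L, J, beta = 6, 1.0, 0.3646                 # toy lattice size, NN coupling

def random_spins(n):
    """n random unit vectors, uniform on the sphere."""
    s = rng.normal(size=(n, 3))
    return s / np.linalg.norm(s, axis=1, keepdims=True)

N = L**3
spins = random_spins(N)
idx = np.arange(N).reshape(L, L, L)
neigh = np.stack([np.roll(idx, sh, ax).ravel()
                  for ax in range(3) for sh in (1, -1)], axis=1)

def sweep(T):
    """One MC step: N Metropolis updates at randomly selected sites."""
    for _ in range(N):
        i = rng.integers(N)
        new = random_spins(1)[0]
        h = J * spins[neigh[i]].sum(axis=0)         # local exchange field
        dE = -np.dot(new - spins[i], h)             # H = -sum_ij J S_i.S_j
        if dE <= 0.0 or rng.random() < np.exp(-dE / T):
            spins[i] = new

temps = np.linspace(0.4, 2.4, 11)                   # k_B T in units of J
mags = []
for T in temps:
    for _ in range(200):                            # equilibration sweeps
        sweep(T)
    samples = []
    for _ in range(100):                            # measurement sweeps
        sweep(T)
        samples.append(np.linalg.norm(spins.mean(axis=0)))
    mags.append(np.mean(samples))

# T_C from the near-linear part of M^(1/beta)(T), as described in Section 2
y = np.array(mags) ** (1.0 / beta)
sel = (y > 0.1 * y[0]) & (y < 0.9 * y[0])
a, b = np.polyfit(temps[sel], y[sel], 1)
print("estimated T_C ~", -b / a, "J/k_B  (NN Heisenberg reference: ~1.44)")
\end{verbatim}
On this toy model the extrapolation reproduces the known nearest-neighbour Heisenberg value roughly; the production runs below use the full six-shell $J_{ij}$ and much longer statistics.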
{One of the earliest models proposed to explain the giant magnetostriction of Fe-Ga alloys was based on the pairing of Ga atoms as second-nearest neighbors (B2-like pairs)~\cite{Buschow-2012}. In the D0$_3-$like ordered phase, this pairing is strongly suppressed.} {The magnetic exchange coupling parameters $J_{ij}$ were calculated using the equilibrium lattice parameters and the SPR-KKR package employing the CPA. We considered the FM state of the D0$_3$, B2 and A2 structures of Fe$_{100-x}$Al$_x$ (${5\leq x \leq 25}$). In general, the $J_{ij}$ show damped oscillatory behavior for all compositions and structures considered. The largest FM interaction is observed between first-nearest-neighbor Fe-Fe pairs. Fig.~\ref{fig_4}(a)--(c) show the exchange coupling as a function of the distance $d/a$ between atoms $i$ and $j$ for the D0$_3$, B2, and A2 structures of Fe$_{76}$Al$_{24}$. The calculated $J_{ij}$ between the first neighbors of iron atoms as a function of Al concentration is depicted in Fig.~\ref{fig_4}~(d).} \begin{figure}[!htb] \centering \includegraphics[height=6cm,clip]{fig_4a.png} \includegraphics[height=6cm,clip]{fig_4b.png} \includegraphics[height=6cm,clip]{fig_4c.png} \includegraphics[height=6cm,clip]{fig_4d.png} \hfil \caption{(Color online)~Calculated exchange coupling parameters $J_{ij}$ as a function of distance ($d/a$) between atoms $i$ and $j$ for (a)~D0$_3$, (b)~B2, and (c)~A2 structures of Fe$_{76}$Al$_{24}$ alloy.~(d)~Calculated $J_{ij}$ between first neighbors of iron atoms as a function of Al concentration of Fe$_{100-x}$Al$_x$ (${5\leq x \leq 25}$). The subscripts of Fe atoms correspond to Wyckoff positions.} \label{fig_4} \end{figure} {As we mentioned in Section 2, in the fully ordered D0$_3$ structure the Fe atoms are located on the $4a$, $4b$, and $8c$ Wyckoff positions. In the B2 phase, Fe atoms occupy the $1b$ and $1a$ Wyckoff positions, and in A2 there is one type of iron on the $2a$ Wyckoff position. According to these locations, in Fig.~\ref{fig_4} all Fe atoms are marked with the corresponding subscripts. For the D0$_3$ and B2 structures, the intersublattice interactions Fe$_{4b}$-Fe$_{8c}$ and Fe$_{1b}$-Fe$_{1a}$ provide the largest contribution to the exchange due to the shorter distances compared to the intrasublattice interactions. As can be seen in Fig.~\ref{fig_4}~(d), for the D0$_3$ and A2 structures the $J_{ij}\left(x\right)$ curves are similar to each other, and the largest values of $J_{ij}$ are obtained for the composition Fe$_{75}$Al$_{25}$. In the case of the B2 structure, the $J_{ij}$ is a decreasing function of Al content.} The constants of the magnetic exchange interactions and the magnetic moments obtained from \textit{ab initio} calculations were used as input parameters to simulate the temperature dependences of the magnetization and to estimate the Curie temperatures using Monte Carlo simulations. The calculated Curie temperatures for the D0$_3$, B2 and A2 structures are listed in Table~\ref{tab2}. \begin{table}[htb!] \begin{center} \caption{Curie temperatures $T_C$ (K) of Fe$_{100-x}$Al$_x$ (${5\leq x \leq 25}$) with D0$_3$, B2, and A2 structures, obtained via MC calculations. For comparison, experimental data from~\cite{Stein} are shown.
}\label{tab2} \begin{tabular}{l|ccc|cc} \hline \multirow{2}{*}{$x$, at.\%}& \multicolumn{3}{c|}{MC calculations ($T_C$)}& \multicolumn{2}{c}{Experiment ($T_C$)}\\ & D0$_3$ & B2 & A2 & D0$_3$ & A2 \\ \hline 5 & 907 & 1235& 1213 & $-$ & 1035\\ 10& 1159 & 1350& 1258 & $-$ & 1018\\ 15& 1300 & 1360& 1238 & $-$ & 991\\ 18& 1328 & 1369& 1214 & $-$ & 967\\ 20& 1338 & 1349& 1193 & $-$ & 937\\ 21& 1345 & 1334& 1175 & $-$ & $-$\\ 24& 1250 & 1218& 1131 & 781 & $-$\\ 25& 1074 & 1206& 1127 & 758 &$-$\\ \hline \end{tabular} \end{center} \end{table} It follows from the experimental results that, for the A2 structure, the Curie temperature decreases continuously with increasing Al concentration. In contrast to experiment, the calculated Curie temperature for the A2 structure has a maximum at 10~at.\%. The Curie temperatures for the B2 and D0$_3$ phases also increase, up to 18~at.\% (for B2) and 21~at.\% (for D0$_3$), and then decrease. It should be noted that the calculated Curie temperatures are overestimated in comparison with the experimental data. This tendency could be explained by the overestimated magnetic exchange interaction parameters $J_{ij}$ (see Fig.~\ref{fig_4}). \section{Conclusion} A comprehensive investigation of the structural and magnetic properties of Fe$_{100-x}$Al$_x$ alloys ($5 \leq x \leq 25$~at.\%) was performed by means of \textit{ab initio} methods and Monte Carlo simulations. The experimentally observed crystal structures D0$_3$, B2, and A2 were considered. {The calculated ground-state energies showed that all considered phases are stable, with D0$_3$ being energetically favourable over the whole Al concentration range.} It was found that the calculated equilibrium lattice constants increase with Al content. In the range from 20 to 25~at.\%, the theoretical $a_0$ data for the D0$_3$ phase are closer to experiment, consistent with D0$_3$ being the main structural phase. The shear moduli $C^{\prime}$ of the A2, B2 and D0$_3$ structures decrease linearly with Al content. The significant difference between the D0$_3$ (B2) and A2 values of $C^{\prime}$ for Al contents above 15~at.\% could be related to the atomic volume changes. In the A2 and B2 structures, the tetragonal magnetostriction constants are positive, while in D0$_3$ this parameter is positive only up to 18~at.\%. This tendency follows the behaviour of the magnetocrystalline anisotropy energy as a function of Al content. It was shown that the obtained values of $\lambda_{001}$ for the B2 structure are closer to the experimental ones. This result could be explained by the proposed model of B2-like short-range order formation, which provides the giant magnetostriction in Fe-based alloys. Using the exchange interaction constants as input parameters, the Curie temperatures were estimated with the help of Monte Carlo simulations. The calculated Curie temperature for the A2 structure has a maximum at 10~at.\%, whereas the experimental Curie temperatures for the A2 structure decrease continuously with increasing Al concentration. \section*{Acknowledgment} This work was supported by the Ministry of Science and Higher Education of the Russian Federation within the framework of the Russian State Assignment under contract No.~{075-00992-21-00}. MVM gratefully acknowledges the Advanced science research foundation of the Chelyabinsk State University. \section*{References}
\section{Introduction} \paragraph*{Introduction.} The investigation of Lorentz invariance violation (LIV) in $2\nu\beta\beta$ decay is an interesting research topic that is currently included in the study of this process. The theoretical framework underlying the estimation of the LIV effects in various physical processes is the Standard-Model extension (SME) theory, which incorporates Lorentz invariance violating operators of arbitrarily large dimension \cite{CK-PRD55,CK-PRD58,K-PRD69,KR-RMP2011}. Of particular interest is the minimal SME, where LIV effects can occur only through operators of mass dimension four or less \cite{KR-RMP2011}, which represents the theoretical background of many investigations, including those in the neutrino sector. The operators that couple to neutrinos can affect the neutrino oscillations, the neutrino velocities, and the spectra of the electrons emitted in beta and double-beta decays \cite{KM-PRD69,Adam2012,Agnes-GALAXIES2021,Diaz-AHEP,Diaz-PRD89,DKL-PRD88}. Effects of LIV in the neutrino sector were first searched for in neutrino oscillation experiments such as Double-Chooz \cite{DC-PRD86}, MiniBoone \cite{MBoone-PLB718}, Ice Cube \cite{IC-PRD82}, MINOS \cite{Minos-PRD85} and SuperKamiokande \cite{SK-PRD91}, resulting in constraints on the LIV coefficients that control different couplings. However, according to the SME theory, the LIV effects in the neutrino sector can also be induced by the so-called oscillation-free operators of dimension three (countershaded effects), which do not affect the neutrino oscillations and hence cannot be measured in these experiments. They are controlled by an oscillation-free (of) coefficient with four components, one time-like $\aof$ and three space-like $(a^{(3)}_{\rm of})_{1m}$, with $m=0, \pm 1$. In particular, the LIV effects induced by the isotropic component of the countershaded operator can be searched for in double-beta decay (DBD) experiments. This is because, in these experiments, the neutrinos are not measured, and only a global effect given by neutrinos of all orientations can be detected \cite{Diaz-PRD88}. LIV signatures have recently been searched for in DBD experiments such as EXO \cite{EXO-200-PRD93}, GERDA \cite{GERDA-PhDThesis}, AURORA \cite{AURORA-2018}, NEMO-3 \cite{NEMO-3-2019,NEMO-3-PhdThesis}, CUORE \cite{CUORE-2019, CUORE-PhDThesis} and CUPID-0 \cite{CUPID-0-PRD100}, and the non-observation of the LIV effects resulted in constraints on the $\aof$ coefficient. Until recently, these investigations were based on predictions of the electron spectra made with approximate (analytical) Fermi functions, built from electron wave functions (w.f.) obtained within a point-like nucleus model \cite{Primakoff-1959,Haxton-1984,Doi-1985,Suhonen-1998} and without screening effects. In two previous papers, we provided predictions of the single and summed energy electron spectra and the angular correlation between electrons, as well as their deviations due to LIV, calculated with improved electron w.f. \cite{NIT-2020, NIT-2021}. First, in Ref. \cite{NIT-2020} we compared the results of calculating $2\nu\beta\beta$ decay observables using Fermi functions obtained with different methods. We found that the differences in the values of the phase-space factors and decay rates calculated with different Fermi functions can be up to $30\%$.
Thus, we concluded that the exact electron w.f., obtained by numerically solving the Dirac equation in a realistic Coulomb-type potential, including the finite nuclear size correction and screening effects, are required for the accurate calculation of the phase space factors and, further, of the electron spectra and their LIV deviations. Next, using this method, we provided theoretical summed energy electron spectra for experimental LIV analyses for the $^{100}\text{Mo}$ nucleus. Then, in Ref. \cite{NIT-2021}, we extended the analysis of the LIV effects to single electron spectra and angular correlations between electrons. We discussed the LIV deviations that may occur in these spectra, showing that they manifest differently for positive and negative values of the LIV coefficient $\aof$ and become more pronounced as the electron energy approaches the Q-value. We also proposed an alternative method to constrain $\aof$, namely through the measurement of the angular correlation coefficient. However, our analysis of the LIV effects in \cite{NIT-2021} was limited to the $^{100}$Mo nucleus, for which the single state dominance (SSD) hypothesis (i.e., only the first $1^+$ state in the intermediate odd-odd nucleus contributes to the DBD rate \cite{Abad-1984,Simkovic_2001,Domin-2005}) can be used in calculations. In this paper, we extend the previous analyses to all nuclei that are currently being studied in DBD experiments, namely $^{48}$Ca, $^{76}$Ge, $^{82}$Se, $^{100}$Mo, $^{110}$Pd, $^{116}$Cd, $^{130}$Te, $^{136}$Xe and $^{150}$Nd. We deduce the formulas for the LIV deviations and provide single electron spectra, summed energy electron spectra, and angular correlations between electrons, calculated with and without LIV contributions, for the quantities measured in $2\nu\beta\beta$ decay. In contrast to the $^{100}$Mo case, in most of the other studied nuclei several $1^+$ states with higher energies in the intermediate nucleus can also contribute to the decay rate (the higher states dominance, HSD, hypothesis). For these isotopes, the perturbation of the single electron spectra due to LIV may look different, as we will show. Next, we compare the electron and angular correlation spectra calculated with the inclusion of the LIV perturbations with their Standard Model (SM) forms and discuss the information that can be obtained about the strength versus observability of the LIV effects with the current experimental statistics. Finally, we present the alternative method of constraining $\aof$ from the measurement of the angular correlation coefficient and estimate the statistics that different double-beta decay experiments should reach in order to constrain this coefficient at the level of the current beta decay experiments. \paragraph*{Theoretical formalism} In this section we deduce the necessary formulas for the electron spectra, the angular correlation, and the angular correlation coefficient, as well as for their perturbations due to Lorentz invariance violation. LIV effects in the neutrino sector can be estimated by taking into account that the neutrino four-momentum is modified from its standard expression $q^{\alpha} = (\omega, {\bf q})$ to $q^{\alpha} = (\omega, {\bf q} + {\bf a}^{(3)}_{\rm of}-\mathring{a}^{(3)}_{\rm of} \bf \hat{q})$ \cite{KR-RMP2011,Diaz-PRD89,KT-PRL102}.
In $2\nu\beta\beta$ decay this induces a change in the total decay rate that can be expressed as a sum of two terms \cite{Diaz-PRD89}: \begin{equation} \Gamma_{\rm SME} = \Gamma_{\rm SM} + \delta \Gamma, \end{equation} where $\Gamma_{\rm SM}$ is the standard decay rate and $\delta \Gamma$ is the LIV contribution. The differential decay rate for the standard $2\nu\beta\beta$ decay process, for ground-state (g.s.) to g.s. transitions $0_{gs}^+\rightarrow0_{gs}^+$, can be expressed as \cite{Haxton-1984,Doi-1985,Tomoda-1991,Kotila-2012}: \begin{equation} d\Gamma_{\rm SM}=\left[\mathcal{A}+ \mathcal{B}\cos\theta_{12}\right]w_{\rm SM}d\omega_1d\varepsilon_1d\varepsilon_2d(\cos\theta_{12}) \label{eq:DiferentialRate} \end{equation} where $\varepsilon_{1,2}$ are the electron energies, $\omega_{1,2}$ are the antineutrino energies, and $\theta_{12}$ is the angle between the directions of the two emitted electrons. In what follows, we adopt natural units ($\hbar=c=1$). Within the SM framework, the term $w_{\rm SM}$ is given by: \begin{equation} w_{\text{SM}}=\frac{g_A^4G_F^4\left|V_{ud}\right|^4}{64\pi^7}\omega_1^2\omega_2^2p_1p_2\varepsilon_1\varepsilon_2 \end{equation} where $g_A$ is the axial-vector coupling constant, $G_F$ is the Fermi coupling constant, $V_{ud}$ is the first element of the Cabibbo-Kobayashi-Maskawa matrix and $p_{1,2}$ are the momenta of the electrons. The $\mathcal{A}$ and $\mathcal{B}$ quantities are products of nuclear matrix elements (NMEs) and phase-space factors (PSFs) for the $2\nu\beta\beta$ decay mode. Their explicit expressions can be found in many papers on DBD (see for example \cite{Doi-1985, Tomoda-1991, NIT-2021}). After the integration over the lepton energies, the derivative of the decay rate with respect to the cosine of the angle $\theta_{12}$ can be written as the sum of a spectrum part and an angular correlation part: \begin{equation} \label{SMDecayRate} \frac{d\Gamma_{\rm SM}}{d(\cos\theta_{12})}=\frac{1}{2}\left(\Gamma_{\rm SM} + \Lambda_{\rm SM} \cos\theta_{12} \right) = \frac{1}{2}\Gamma_{\rm SM}\left(1+ \kappa_{\rm SM} \cos\theta_{12}\right). \end{equation} Here, $\kappa_{\rm SM} = \Lambda_{\mathrm{SM}}/\Gamma_{\mathrm{SM}}$ is the angular correlation coefficient. $\Lambda_{\mathrm{SM}}$, the angular part of the decay rate, is also affected by LIV and, like the spectrum part, can be written as the sum of its SM form and the LIV deviation: \begin{equation} \Lambda_{\rm SME}=\Lambda_{\rm SM}+\delta\Lambda. \end{equation} We note that after integration over $\cos\theta_{12}$ only the spectrum part contributes to the total DBD decay rate. Using the closure approximation, the $2\nu\beta\beta$ decay rate can be expressed in a factorized form \cite{Haxton-1984,Doi-1985,Tomoda-1991}: \begin{align} \begin{aligned} \frac{\Gamma}{\ln 2}&=g_A^4\left|M\right|^2G, \\ \frac{\Lambda}{\ln 2}&=g_A^4\left|M\right|^2H, \label{eq:decayrate_factorization} \end{aligned} \end{align} \noindent where $M$ is the NME, which depends on the nuclear structure of the nuclei involved in the decay, and $G$ and $H$ are PSFs, which include the distortion of the electron w.f. by the Coulomb field of the daughter nucleus.
Since we refer to the LIV effects induced by the neutrino behavior, only the PSFs are subject to the LIV modifications, namely: \begin{eqnarray} G_{\rm SME}=G_{\rm SM}+\delta G, \\ H_{\rm SME}=H_{\rm SM}+\delta H. \end{eqnarray} The phase-space factors for the $2\nu\beta\beta$ transitions to final ground states can be written in a compact form as follows \cite{NIT-2021}: \begin{widetext} \begin{eqnarray} \begin{Bmatrix} \label{PSF1} G_{\rm SM}\\ \delta G \end{Bmatrix}=&& \frac{\tilde{A}^2G_F^2\left|V_{\text{ud}}\right|^2m_e^9}{96\pi^7\ln2}\frac{1}{m_e^{11}}\int_{m_e}^{E_I-E_F-m_e}d\varepsilon_{1}\varepsilon_{1}p_1\int_{m_e}^{E_I-E_F-\varepsilon_{1}}d\varepsilon_{2}\varepsilon_{2}p_2\nonumber\\ &&\times\int_{0}^{E_I-E_F-\varepsilon_{1}-\varepsilon_{2}}d\omega_1\omega_2^2a(\varepsilon_{1},\varepsilon_{2})\left[\langle K_N\rangle^2+\langle L_N\rangle^2+\langle K_N\rangle\langle L_N\rangle\right]\begin{Bmatrix} \omega_1^2\\ 4\mathring{a}^{(3)}_{\rm of}\omega_1 \end{Bmatrix} \\ \begin{Bmatrix} \label{PSF2} H_{\rm SM}\\ \delta H \end{Bmatrix}=&& \frac{\tilde{A}^2G_F^2\left|V_{\text{ud}}\right|^2m_e^9}{96\pi^7\ln2}\frac{1}{m_e^{11}}\int_{m_e}^{E_I-E_F-m_e}d\varepsilon_{1}\varepsilon_{1}p_1\int_{m_e}^{E_I-E_F-\varepsilon_{1}}d\varepsilon_{2}\varepsilon_{2}p_2\nonumber\\ &&\times\int_{0}^{E_I-E_F-\varepsilon_{1}-\varepsilon_{2}}d\omega_1\omega_2^2b(\varepsilon_{1},\varepsilon_{2})\left[\frac{2}{3}\langle K_N\rangle^2+\frac{2}{3}\langle L_N\rangle^2+\frac{5}{3}\langle K_N\rangle\langle L_N\rangle\right]\begin{Bmatrix} \omega_1^2\\ 4\mathring{a}^{(3)}_{\rm of}\omega_1 \end{Bmatrix} , \end{eqnarray} \end{widetext} where $m_e$ is the electron mass. The quantities $\langle K_N\rangle$ and $\langle L_N\rangle$ are kinematic factors that depend on the lepton energies ($\varepsilon$, $\omega$), the g.s. energy of the parent nucleus ($E_I$), and an averaged energy of the excited $1^+$ states in the intermediate nucleus ($\langle E_N \rangle$). Replacing the energies of the $1^+$ states with an average energy is called the closure approximation; it makes it possible to express the $2\nu\beta\beta$ decay rate as a product of the PSF and NME parts (see Eq.~\ref{eq:decayrate_factorization}). The expressions of the kinematic factors $\langle K_N \rangle$ and $\langle L_N \rangle$ are given in many papers on double-beta decay (see for example \cite{Haxton-1984}): \begin{align} \begin{aligned} \label{eq:KnDef} \langle K_N\rangle= {1\over \varepsilon_1+\omega_1+\langle E_N\rangle-E_I}+ {1\over \varepsilon_2+\omega_2+\langle E_N\rangle-E_I}\\ \langle L_N\rangle= {1\over \varepsilon_1+\omega_2+\langle E_N\rangle-E_I}+ {1\over \varepsilon_2+\omega_1+\langle E_N\rangle-E_I}. \end{aligned} \end{align} The energy $\langle E_N\rangle-E_I$ is determined from the approximation $\tilde{A}=W_0/2+\langle E_N\rangle -E_I$, where $\tilde{A}=1.12A^{1/2}$ (in MeV) gives the energy of the giant Gamow-Teller resonance in the intermediate nucleus and $W_0=E_I-E_F$, $E_F$ being the g.s. energy of the daughter nucleus. We note that in many calculations, simplified expressions of these factors are used, namely $\langle K_{N}\rangle \simeq \langle L_N \rangle \simeq 2/\tilde{A}$. With this approximation, the PSF formulas and their LIV deviations simplify considerably, but some accuracy is lost \cite{NIT-2020}.
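To make the structure of Eqs.~\ref{PSF1} concrete, the following Python sketch evaluates the triple integral for $G_{\rm SM}$ and for the coefficient of $\aof$ in $\delta G$ on a coarse grid, for $^{100}$Mo-like inputs. It is a schematic reconstruction, not the code used in this work: in particular, the quantity $a(\varepsilon_1,\varepsilon_2)$, built in our calculations from exact Dirac wave functions, is replaced here by a simple nonrelativistic Fermi-function stand-in, and all overall constants are dropped.
\begin{verbatim}
# Schematic evaluation of the triple integrals in G_SM and delta G.
# Energies are in units of m_e; overall constant prefactors are dropped.
# The exact a(e1,e2), built from Dirac wave functions in this work, is
# replaced by a crude nonrelativistic Fermi-function stand-in (assumption).
import numpy as np

ME   = 0.511                      # electron mass in MeV
ALPHA, ZD = 1/137.036, 44         # fine-structure constant; daughter Z (100Ru)
W0   = 3.0344/ME + 2.0            # E_I - E_F for 100Mo, in m_e units
AT   = 1.12*np.sqrt(100)/ME       # \tilde{A} in m_e units
DN   = AT - W0/2.0                # <E_N> - E_I from the closure approximation

def fermi(e):                     # Fermi-function stand-in for a(e1,e2)
    p  = np.sqrt(e*e - 1.0)
    nu = ALPHA*ZD*e/p
    return 2*np.pi*nu/(1.0 - np.exp(-2*np.pi*nu))

def kin(e1, e2, w1, w2):          # full <K_N>, <L_N> of Eq. (17)-type form
    K = 1.0/(e1 + w1 + DN) + 1.0/(e2 + w2 + DN)
    L = 1.0/(e1 + w2 + DN) + 1.0/(e2 + w1 + DN)
    return K*K + L*L + K*L

def psf(n=60):
    """Return (G_SM, dG) up to common constants; dG multiplies aof (m_e)."""
    gsm = dg = 0.0
    eps = np.linspace(1.0 + 1e-6, W0 - 1.0 - 1e-6, n)
    de  = eps[1] - eps[0]
    for e1 in eps:
        for e2 in eps:
            wmax = W0 - e1 - e2
            if wmax <= 1e-5:
                continue
            w1 = np.linspace(0.0, wmax, n)
            dw = w1[1] - w1[0]
            w2 = wmax - w1
            lep = (e1*np.sqrt(e1*e1 - 1.0)*fermi(e1)
                   * e2*np.sqrt(e2*e2 - 1.0)*fermi(e2))
            ker = w2*w2*kin(e1, e2, w1, w2)
            gsm += lep*np.sum(ker*w1*w1)*dw*de*de
            dg  += lep*np.sum(ker*4.0*w1)*dw*de*de
    return gsm, dg

G_sm, dG = psf()
print(f"G_SM = {G_sm:.4e}   deltaG/aof = {dG:.4e}   (arbitrary units)")
\end{verbatim}
Replacing kin() by the constant $3\,(2/\tilde{A})^2$ corresponds to the simplified treatment $\langle K_N\rangle \simeq \langle L_N\rangle \simeq 2/\tilde{A}$ mentioned above.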
To provide good predictions for the single and summed energy electron spectra and the angular correlation between electrons, as well as for their deviations due to LIV, accurate calculations of the $G_{\text{SM}}$ and $H_{\text{SM}}$ phase space factors and of their deviations are required. This implies accurate calculations of the integrals in Eqs.~\ref{PSF1} and \ref{PSF2}, which contain the factors $a(\varepsilon_1,\varepsilon_2)$ and $b(\varepsilon_1,\varepsilon_2)$. These quantities are built from electron wave functions obtained by solving the Dirac equation in a realistic Coulomb-type potential, including the finite nuclear size (FNS) and screening effects. The functions $a(\varepsilon_1,\varepsilon_2)$ and $b(\varepsilon_1,\varepsilon_2)$ are defined as \cite{Kotila-2012,NIT-2020} \begin{align} \label{KNLN} \begin{aligned} &a(\varepsilon_1,\varepsilon_2)=\left|\alpha^{-1-1}\right|^2+\left|\alpha_{11}\right|^2+\left|\alpha_{1}^{\hspace{0.16cm}-1}\right|^2+\left|\alpha^{-1}_{\hspace{0.35cm}1}\right|^2\\ &b(\varepsilon_1,\varepsilon_2)=-2\Re\{\alpha^{-1-1}\alpha_{11}^*+\alpha^{-1}_{\hspace{0.35cm}1}\alpha_{1}^{\hspace{0.16cm}-1*}\} \end{aligned} \end{align} with \begin{align} \label{eq:WavefunctionsProducts} \begin{aligned} \alpha^{-1-1}&=g_{-1}(\varepsilon_1)g_{-1}(\varepsilon_2), \alpha_{11} = f_{1}(\varepsilon_1)f_{1}(\varepsilon_2),\\ \alpha_{1}^{\hspace{0.16cm}-1}&=f_{1}(\varepsilon_1)g_{-1}(\varepsilon_2), \alpha^{-1}_{\hspace{0.35cm}1}= g_{-1}(\varepsilon_1)f_{1}(\varepsilon_2), \end{aligned} \end{align} \noindent where $f_{1}(\varepsilon)$ and $g_{-1}(\varepsilon)$ are the electron radial wave functions evaluated on the surface of the daughter nucleus: \begin{align} \begin{aligned} g_{-1}(\varepsilon)&=\int_{0}^{\infty}g_{-1}(\varepsilon,r)\delta(r-R)dr\\ f_{1}(\varepsilon)&=\int_{0}^{\infty}f_{1}(\varepsilon,r)\delta(r-R)dr, \end{aligned} \end{align} where $R=r_0A^{1/3}$, with $r_0=1.2$ fm, is the nuclear radius. In the PSF evaluation for LIV analyses, we included the full expressions of $\langle K_N \rangle$ and $\langle L_N \rangle$ from Eq.~\ref{eq:KnDef}, while in previous calculations their simplified expressions, mentioned above, were used. Our method of calculation and the comparison of the results with other methods are described in detail in \cite{NIT-2020}, where we showed that using exact electron wave functions instead of approximate ones is more reliable for calculating the PSF values. By differentiating the $2\nu\beta\beta$ decay rate expression with respect to the total energy of one electron ($\varepsilon_1$), one gets the single electron spectrum \cite{Doi-1985,Tomoda-1991,Kotila-2012}: \begin{equation} \frac{d\Gamma_{\rm SM}}{d\varepsilon_1} = C\frac{dG_{\rm SM}}{d \varepsilon_1}. \end{equation} Similarly, one gets the summed energy spectrum of the two electrons: \begin{equation} \frac{d\Gamma_{\rm SM}}{dK} = C\frac{dG_{\rm SM}}{d K}, \end{equation} where $ K\equiv \varepsilon_1 + \varepsilon_2 -2m_e $ is the total kinetic energy of the two electrons and $C$ is a constant including the nuclear matrix elements. Also, by differentiating the decay rate with respect to $\varepsilon_1$ and $\cos\theta_{12}$, one gets the angular correlation, $\alpha_{\text{SM}}$, between the two emitted electrons: \begin{align} \begin{aligned} &\frac{d\Gamma_\text{SM}}{d \varepsilon_1 d(\cos\theta_{12})}=C\frac{d G_{\rm SM}}{d\varepsilon_1} \left[1+\alpha_{\text{SM}}\cos\theta_{12}\right].
\end{aligned} \end{align} where $ \alpha_{\text{SM}} \equiv (dH_{\rm SM}/d\varepsilon_1)/(dG_{\rm SM}/d\varepsilon_1)$ is the SM angular correlation. In \cite{NIT-2021}, we calculated the expressions of these quantities and their LIV deviations: for the single electron spectrum, \begin{equation}\label{eq:SingleElectronSpectra} \frac{d\Gamma_{\rm SME}}{d\varepsilon_1} = C\frac{dG_{\rm SM}}{d \varepsilon_1}\left(1+\aof \chi^{(1)}(\varepsilon_1)\right), \end{equation} and for the summed energy electron spectrum, \begin{equation}\label{eq:SumElectronSpectra} \frac{d\Gamma_{\rm SME}}{dK} = C\frac{dG_{\rm SM}}{d K}\left(1+\aof \chi^{(+)}(K)\right). \end{equation} Here, \begin{equation}\label{eq:LIVdeviations} \chi^{(1)}(\varepsilon_1) = \frac{d(\delta G)}{d \varepsilon_1}/\frac{dG_{\rm SM}}{d \varepsilon_1} \end{equation} and \begin{equation} \chi^{(+)}(K) = \frac{d(\delta G)}{d K}/\frac{dG_{\rm SM}}{d K} \end{equation} are quantities that incorporate the deviations of the electron spectra from their standard (SM) forms. The relation between the LIV-perturbed angular correlation and its standard form can be extracted from the expression of the derivative of the decay rate with respect to the total energy of one electron and $\cos\theta_{12}$: \begin{align} \label{eq:DiffDecayRate_SME} \begin{aligned} &\frac{d\Gamma_\text{SME}}{d \varepsilon_1 d(\cos\theta_{12})}=C\frac{d G_{\mathrm{SM}}}{d\varepsilon_1}\times\\ &\left[1+\aof \chi^{(1)}(\varepsilon_1)+\left(\alpha_{\text{SM}}+\aof\frac{d(\delta H)/d\varepsilon_1}{dG_{\rm SM}/d\varepsilon_1}\right)\cos\theta_{12}\right], \end{aligned} \end{align} with \begin{equation} \label{eq:alpha_sme} \alpha_{\text{SME}} = \alpha_{\text{SM}} + \aof \frac{d(\delta H)/d\varepsilon_1}{dG_{\rm SM}/d\varepsilon_1}. \end{equation} Differentiating the decay rate expression with respect to $\cos\theta_{12}$, \begin{align} \label{eq:k_sme} \begin{aligned} &\frac{d\Gamma_\text{SME}}{d(\cos\theta_{12})}=CG_{\rm SM}\times\\ &\left[1+\aof\frac{\delta G}{G_{\rm SM}}+\left(\kappa_{\text{SM}}+\aof\frac{\delta H}{G_{\rm SM}}\right)\cos\theta_{12}\right], \end{aligned} \end{align} we can identify (in round brackets) the SME expression of the angular correlation coefficient, $\kappa_{\text{SME}}$, and its relation to the standard form. To treat the LIV deviation independently of $\aof$, we define $\xi_{\text{LIV}} \equiv \delta H/G_{\mathrm{SM}}$, in units of $\mathrm{MeV}^{-1}$. Finally, the LIV-perturbed angular correlation coefficient can also be written as \begin{equation} \label{LIVAngularCorrelationFactor} \kappa_{\rm SME}=\frac{\Lambda_{\rm SM}}{\Gamma_{\rm SM}}+\frac{\delta \Lambda}{\Gamma_{\rm SM}}. \end{equation} The first term on the r.h.s. of Eq.~\ref{LIVAngularCorrelationFactor} is the standard angular correlation coefficient, $\kappa_{\rm SM}$, and the second one is its LIV deviation. \paragraph*{Results and discussions.} We calculate the single and summed energy electron spectra, the angular correlation spectra, and the angular correlation coefficient, along with their LIV deviations from the standard forms, for all nuclei that are investigated in DBD experiments, i.e. $^{48}$Ca, $^{76}$Ge, $^{82}$Se, $^{100}$Mo, $^{110}$Pd, $^{116}$Cd, $^{130}$Te, $^{136}$Xe and $^{150}$Nd. As already mentioned, we use electron radial wave functions obtained as solutions of the Dirac equation in a Coulomb potential that encodes the finite nuclear size and the atomic screening effects of the daughter nucleus. We numerically solve the radial Dirac equation with the subroutine package RADIAL \cite{Salvat-1991,Salvat-CPC2019}.
Following this procedure, the truncation errors are completely avoided and the radial wave functions are obtained with the desired accuracy. Thus, the numerical solutions can be considered as exact for the given input potential. More details about the electrostatic potential that we use can be found in Refs. \cite{SM-2013,MPS-2015,NIT-2020}. In the calculations, we use either the SSD or the HSD hypothesis, as follows. The SSD hypothesis has been experimentally validated for the $^{82}\mathrm{Se}$ \cite{CUPID-0-PRL2019} and $^{100}\mathrm{Mo}$ \cite{NEMO-3-2019} nuclei, and we used it for these isotopes. This means that we replaced $\langle E_{N} \rangle$ in the formulas from the previous section with the energy of the first $1^+$ intermediate state ($E_{1_1^+}$). For the $^{150}\mathrm{Nd}$ nucleus, the dominant DBD transition also occurs through the first $1^+$ state in the intermediate nucleus, $^{150}\mathrm{Pm}$, but transitions through other $1^+$ states of higher energies also contribute and must be included in the calculation so that the DBD rate value is reproduced \cite{150Nd-PRC2011}. Thus, we calculated the single electron spectra using both hypotheses (SSD and HSD) for this nucleus; the two results are compared in Fig.~\ref{fig:150NdSSDvsHSD}. In Fig. \ref{fig:singlespectra} we present the normalized standard and LIV-perturbed spectra for all nuclei except $^{150}\mathrm{Nd}$. For the nuclei where the SSD hypothesis applies, we used the following values for the $1^+$ state energies ($E_{1_1^+}-E_I$): $-0.338$ MeV for $^{82}\mathrm{Se}$, $-0.343$ MeV for $^{100}\mathrm{Mo}$ and $-0.315$ MeV for $^{150}\mathrm{Nd}$. As can be seen, the main difference between the calculations for different nuclei occurs at low electron energies. For the nuclei where the HSD hypothesis applies, the LIV spectra first increase monotonically with increasing energy and reach their maxima at energies away from zero. On the other hand, for the isotopes where the SSD hypothesis applies, the LIV spectra (except that of $^{82}\mathrm{Se}$) show a local maximum as $\varepsilon_{1}\to0$. For more precise information, in Table~\ref{tab:KXiNevts} we give the positions of the global maxima of the LIV spectra for all nuclei. In conclusion, regardless of the hypothesis assumed, the overall effect of LIV on the single electron spectra of all nuclei is a shift of the spectra towards higher electron energies, as shown in Ref. \cite{NIT-2021} in the case of $^{100}\text{Mo}$. This effect is similar to that found in the summed energy electron spectra \cite{NIT-2020}. \begin{figure*} \centering{ \includegraphics[width=0.8\textwidth]{fig1a} \includegraphics[width=0.52\textwidth]{fig1b} } \caption{(Color online) Normalized $2\nu\beta\beta$ single electron spectra within the SM (solid lines) and the first-order contributions in $\aof$ due to LIV (dashed lines). See text for the hypothesis (SSD or HSD) assumed for each nucleus. } \label{fig:singlespectra} \end{figure*} \begin{figure*} \includegraphics[width=0.8\textwidth]{fig2} \caption{(Color online) Normalized single electron spectra within the SM (solid line) and the first-order contribution in $\aof$ due to LIV (dashed line) for the $2\nu\beta\beta$ decay of $^{150}\mathrm{Nd}$. We assumed the SSD hypothesis in the left panel and the HSD hypothesis in the right panel. } \label{fig:150NdSSDvsHSD} \end{figure*} \begin{figure*} \includegraphics[width=0.8\textwidth]{fig3} \caption{(Color online) The quantity $\chi^{(1)}(\varepsilon_1)$ depicted for the current limits of $\aof$ (dashed for the upper limit and dot-dashed for the lower limit). The solid line at $\chi^{(1)}(\varepsilon_1)= 0$ represents the SM prediction.
} \label{fig:Chi1allnuclei} \end{figure*} Next, we discuss other LIV signatures resulting from the comparison of the single and summed energy electron spectra and of the angular correlation perturbed by LIV with their standard forms. We note first that in previous works \cite{Diaz-PRD89, EXO-200-PRD93, NEMO-3-2019, NIT-2020} the LIV effects were presented by plotting separately, in the same figure, the normalized summed energy electron spectra calculated within the SM and their LIV deviations. Thus, as already mentioned, it was concluded that the LIV effects (if they exist) manifest as a global shift of the electron spectra to higher electron energies. Further, using the theoretical predictions for the summed energy electron spectra and the non-observation of such deviations, constraints on the LIV coefficient $\aof$ are deduced. Several DBD experiments reported such limits \cite{EXO-200-PRD93, GERDA-PhDThesis, AURORA-2018, CUPID-0-PRD100,NEMO-3-2019, NEMO-3-PhdThesis}. Besides the analyses of the summed energy electron spectra reported in these references, we presented in \cite{NIT-2021} another analysis of the LIV signatures, obtained by comparing the electron spectra (single and summed energy) and the angular correlation calculated with and without LIV contributions. This was done for the $^{100}\mathrm{Mo}$ nucleus, for which the SSD hypothesis holds. Here, we extend this analysis to all nuclei. Thus, in Fig.~\ref{fig:Chi1allnuclei} we plot the quantity $1 + \aof \chi^{(1)}(\varepsilon_1)$, which represents the ratio between the single electron spectrum calculated with the LIV contributions and its standard form, for all nuclei. The calculations are performed with two (extreme) sets of $\aof$ limits, namely those reported by the EXO collaboration, $-2.65\times 10^{-2}~\mathrm{MeV} \le \aof \le 7.6\times 10^{-3}~\mathrm{MeV}$~\cite{EXO-200-PRD93}, and those reported by the NEMO-3 collaboration, $-4.2\times 10^{-4}~\mathrm{MeV} \le \aof \le 3.5\times 10^{-4}~\mathrm{MeV}$~\cite{NEMO-3-2019}. Other limits reported until now can be found in \cite{KR-ARXIV, KR-RMP2011}. The horizontal line equal to $1$ represents the ratio in the absence of LIV effects, while the curves situated above or below this line represent the deviations when the LIV corrections are included. The position of the curves is dictated by the sign of the $\aof$ coefficient: above the horizontal unity line for positive values of $\aof$ and below this line for negative values of this coefficient. As we mentioned in \cite{NIT-2021}, the increasing divergence between the standard and the LIV-perturbed spectra is due to a slower decrease (in absolute value) of the LIV spectrum with respect to the standard one at the end of the energy interval (near the $Q$-value). As seen, for the $\aof$ limits reported in \cite{EXO-200-PRD93} the deviations of the single electron spectra due to LIV are quite pronounced (even for electron energies much lower than the $Q$-value), and they should already have been observed, which is not the case. For more stringent limits of $\aof$, such as those reported by NEMO-3 \cite{NEMO-3-2019}, the deviations are very small and cannot be seen with the current experimental statistics. However, in future DBD experiments, such as the SuperNEMO experiment, which targets $10^3$ times the statistics of NEMO-3 for $^{100}$Mo, these LIV deviations might be observed. These observations are valid for all the studied nuclei.
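The magnitude of these ratios can be explored with the following self-contained Python sketch, which recomputes $\chi^{(1)}(\varepsilon_1)$ for a $^{100}$Mo-like case under simplifying assumptions of ours: the simplified kinematic factors $\langle K_N\rangle \simeq \langle L_N\rangle \simeq 2/\tilde{A}$ (which cancel in the ratio) and a nonrelativistic Fermi function instead of the exact Dirac wave functions. It illustrates the trend only; it is not our production code.
\begin{verbatim}
# Ratio of the LIV-perturbed to the SM single electron spectrum,
# 1 + aof*chi1(e1), for the extreme experimental bounds on aof.
# Assumptions: simplified kinematic factors (cancelling in the ratio)
# and a nonrelativistic Fermi function instead of exact Dirac w.f.
import numpy as np

ME, Q = 0.511, 3.0344                 # MeV; Q-value of 100Mo
W0 = Q/ME + 2.0                       # in m_e units
ALPHA, ZD = 1/137.036, 44

def fermi(e):
    p = np.sqrt(e*e - 1.0)
    nu = ALPHA*ZD*e/p
    return 2*np.pi*nu/(1.0 - np.exp(-2*np.pi*nu))

def chi1(e1, n=200):
    """chi^(1)(e1) in MeV^-1; e1 in m_e units."""
    sm = liv = 0.0
    if W0 - e1 <= 1.0:
        return np.nan
    e2s = np.linspace(1 + 1e-6, W0 - e1 - 1e-6, n)
    de2 = e2s[1] - e2s[0]
    for e2 in e2s:
        w1 = np.linspace(0.0, W0 - e1 - e2, n)
        dw = w1[1] - w1[0]
        w2 = W0 - e1 - e2 - w1
        lep = e2*np.sqrt(e2*e2 - 1.0)*fermi(e2)
        sm  += lep*np.sum(w1*w1*w2*w2)*dw*de2
        liv += lep*np.sum(4.0*w1*w2*w2)*dw*de2
    return (liv/sm)/ME                # convert 1/m_e -> 1/MeV

for e1 in (1.2, 3.0, 6.0):            # sample electron energies in m_e units
    c = chi1(e1)
    for aof in (7.6e-3, -4.2e-4):     # EXO-200 upper / NEMO-3 lower bound (MeV)
        print(f"e1 = {e1:.1f} m_e, aof = {aof:+.1e} MeV: "
              f"spectrum ratio = {1 + aof*c:.4f}")
\end{verbatim}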
However, a drawback of the single electron spectra is that they can only be measured in DBD experiments with electron tracking systems. That is why we present a similar analysis for the summed energy electron spectra, which are measured in all DBD experiments and with higher statistics than the single electron spectra. In Fig.~\ref{fig:ChiSumAllnuclei}, we plot the ratio between the summed energy electron spectra calculated with the LIV contributions and their standard forms. One can see LIV effects with shapes similar to those in the case of the single electron spectra, and the same arguments are valid to explain them. From the analysis of the deviations of these predicted electron spectra, estimates of the magnitude and observability of the LIV effects for different statistics can be made. \begin{figure*} \includegraphics[width=0.8\textwidth]{fig4} \caption{(Color online) The quantity $\chi^{(+)}(K)$ depicted for the current limits of $\aof$. The same conventions as in Fig.~\ref{fig:Chi1allnuclei} are used.} \label{fig:ChiSumAllnuclei} \end{figure*} Further, we discuss the LIV effects on the angular correlation $\alpha$ and on the value of the angular correlation coefficient $\kappa$. In Fig.~\ref{fig:angcorr_allnuclei} the angular correlation spectra for all the nuclei are plotted with the same conventions as in Fig.~\ref{fig:ChiSumAllnuclei}. As seen, deviations of the angular correlation curves from their standard forms may manifest even at low electron energies, and they increase strongly in the vicinity of the $Q$-value for the $\aof$ values reported by EXO. Again, for the $\aof$ values reported by NEMO-3, these deviations cannot be seen with the current experimental statistics. We also note that, distinctly from the electron spectra, the total angular correlation spectrum exceeds the standard spectrum for negative values of $\aof$, because $\delta H$ is also negative, making the LIV contribution positive (see Eq.~\ref{eq:alpha_sme}). Regarding the theoretical electron and angular correlation spectra discussed above, we mention that we can provide upon request detailed numerical predictions of these spectra to be used in DBD experiments for the LIV investigation. \begin{figure*} \includegraphics[width=0.8\textwidth]{fig5} \caption{(Color online) The angular correlation spectrum plotted for the current limits of $\aof$. The same conventions as in Fig.~\ref{fig:Chi1allnuclei} are used.} \label{fig:angcorr_allnuclei} \end{figure*} Finally, we refer to the angular correlation coefficient, $\kappa$, defined in Eqs.~\ref{SMDecayRate} and \ref{LIVAngularCorrelationFactor} of the previous section. As shown in Ref. \cite{NIT-2021}, it can also be used to constrain the $\aof$ coefficient and to estimate quickly (albeit roughly) the number of $2\nu\beta\beta$ events needed to put a certain limit on $\aof$. $\kappa_{\text{SME}}$ can be determined in the DBD experiments with electron tracking systems by using the forward-backward asymmetry \cite{Arnold-2010}, \begin{align} A\equiv\frac{\int^0_{-1}\frac{d\Gamma}{dx}dx-\int^1_{0}\frac{d\Gamma}{dx}dx}{\Gamma} =\frac{N_{+}-N_{-}}{N_{+}+N_{-}}=-\frac{1}{2}\kappa_{\mathrm{SME}}, \end{align} where $x=\cos\theta_{12}$ and $N_{-}$ ($N_{+}$) is the number of $2\nu\beta\beta$ events with the angle $\theta_{12}$ smaller (larger) than $\pi/2$.
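The event-count estimates discussed next (and listed in Table~\ref{tab:KXiNevts}) can be reproduced, up to rounding, with the following Python sketch. It is our reconstruction of the procedure, under assumptions stated explicitly here: a purely statistical (binomial) uncertainty on the asymmetry, $\sigma_A = \sqrt{(1-A^2)/N}$ with $|A| = |\kappa_{\rm SM}|/2$, and a Gaussian 90\% CL quantile $z \simeq 1.645$ in the requirement $z\,\sigma_\kappa \le a_0\,|\xi_{\rm LIV}|$.
\begin{verbatim}
# Sketch: number of 2vbb events needed to constrain |aof| < a0 at 90% CL
# from the angular correlation coefficient alone.  Reconstruction with
# assumed binomial statistics: sigma_kappa = 2*sqrt((1 - A^2)/N),
# |A| = |kappa_SM|/2, and z*sigma_kappa <= a0*|xi_LIV| with z = 1.645.
import math

z, a0 = 1.645, 3.0e-5                 # 90% CL quantile; tritium-level bound (MeV)
table = {                             # (kappa_SM, xi_LIV [MeV^-1]), from Table I
    "48Ca":  (-0.7673, -3.4931),
    "76Ge":  (-0.5608, -4.9831),
    "100Mo": (-0.6690, -4.2939),
}
for nuc, (kappa, xi) in table.items():
    N = (2*z)**2 * (1.0 - kappa**2/4.0) / (a0*xi)**2
    print(f"{nuc}: N ~ {N:.3e} events")
\end{verbatim}
Running this sketch reproduces the order of magnitude of the fifth column of Table~\ref{tab:KXiNevts} and makes explicit the $N \propto a_0^{-2}$ scaling invoked below.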
Assuming that the experimental value of this coefficient is compatible at 90\% CL with the SM value and considering only statistical uncertainties in the number of events recorded, one can compute the number of events needed to constrain the $\aof$ coefficient to a specific value. In Table~\ref{tab:KXiNevts}, we give the values of $\kappa_{\mathrm{SM}}$ and $\xi_{\mathrm{LIV}}$ computed as described in the previous section. In the fifth column, we also give the number of events needed to constrain the upper limit of $\aof$ to the same value as that obtained from tritium decay (i.e., $|\aof| < 3\times 10^{-5}~\mathrm{MeV}$ \cite{KR-ARXIV}). We also indicate, by a subscript, the nuclei for which we have employed the SSD hypothesis. In these cases, the $\tilde{A}$ value has been taken from \cite{Kotila-2012}. The rest of the nuclei have been treated within the HSD hypothesis. We note that $\kappa_{\mathrm{SM}}$ and $\xi_{\mathrm{LIV}}$ do not follow the same behavior across the nuclei. As expected, the number of events necessary to constrain $\aof$ ($N_{\mathrm{ev}}$) is lowest where the modulus of $\xi_{\mathrm{LIV}}$ is highest, although the relation is not linear and $N_{\mathrm{ev}}$ varies significantly from one nucleus to another. We also remark that applying the same procedure for an $\aof$ limit stronger by one order of magnitude than the most stringent current limit \cite{NEMO-3-2019} leads to an increase of two orders of magnitude in the needed number of events. This implies that, in the near future, the DBD experiments can be expected to improve the best current upper limit of the $\aof$ coefficient only by a modest factor. \begin{table*} \begin{ruledtabular} \begin{tabular}{cccccc} Nucleus & $Q$-value (MeV)& $\kappa_{\text{SM}}$ & $\xi_{\text{LIV}}\left(\mathrm{MeV}^{-1}\right)$ & $N_{\mathrm{ev}}\times 10^{-8}$ $(|\aof| < 3\times 10^{-5}~\mathrm{MeV})$ & $\varepsilon_{1}^{\mathrm{max}}$ (MeV)\\ $^{48}$Ca & 4.2681\cite{Qvalue-48Ca} & -0.7673 & -3.4931 & 8.4060 & 0.671\\ $^{76}$Ge & 2.0391\cite{Qvalue-76Ge} & -0.5608 & -4.9831 & 4.4625 & 0.181\\ $^{82}$Se$_{\text{SSD}}$ & 2.9979\cite{Qvalue-82Se} & -0.6585 & -4.3121 & 5.7670 & 0.197\\ $^{100}$Mo$_{\mathrm{SSD}}$ & 3.0344\cite{Qvalue-100Mo} & -0.6690 & -4.2939 & 5.7932 & 0\\ $^{110}$Pd & 2.0179\cite{Qvalue-110Pd} & -0.5788 & -5.0765 & 4.2760 & 0.120\\ $^{116}$Cd & 2.8135\cite{Qvalue-116Cd} & -0.6726 & -4.3332 & 5.6808 & 0.192\\ $^{130}$Te & 2.5275\cite{Qvalue-130Te} & -0.6514 & -4.6013 & 5.0779 & 0.220 \\ $^{136}$Xe & 2.4587\cite{Qvalue-136Xe} & -0.6483 & -4.6828 & 4.9082 & 0.198\\ $^{150}$Nd$_{\text{SSD}}$ & 3.3367\cite{Qvalue-150Nd} & -0.7218 & -4.1323 & 6.1258 & 0 \\ $^{150}$Nd$_{\text{HSD}}$ & 3.3367 & -0.7357 & -3.9734 & 6.5869 & 0.375 \\ \end{tabular} \end{ruledtabular} \caption{\label{tab:KXiNevts} $\kappa_{\text{SM}}$ and $\xi_{\text{LIV}}$ computed as described in the text for all nuclei. The $Q$-values used in the calculations are also displayed. The fifth column contains the expected number of events needed to constrain $\aof$ to the current limit obtained from tritium decay \cite{KR-ARXIV}. The last column contains the positions of the maxima of the LIV single electron spectra.} \end{table*} \paragraph*{Conclusions.} We analyze the LIV effects on the single electron spectra, summed energy electron spectra, and angular correlation between electrons in $2\nu\beta\beta$ decay for all the experimentally interesting nuclei. We derive the formulas for the LIV contributions to these spectra and to the angular correlation, and we provide theoretical predictions of them to be used for constraining the LIV coefficient $\aof$.
Next, we analyze different signatures that could be probed in the DBD experiments. First, we confirm that the overall effect of LIV is to shift the single and summed energy electron spectra to higher electron energies for all the studied nuclei. Next, we highlight other LIV signatures that can be analyzed by comparing the electron and angular correlation spectra computed with and without LIV contributions, and we show that from this comparison one can get information about the observability of the LIV effects given the current experimental statistics. Then, the alternative method of constraining $\aof$ from the measurement of the angular correlation coefficient is discussed. In this regard, we estimate the statistics that each of the DBD experiments, studying different nuclei, should reach in order to constrain $\aof$ at the level of the current beta decay experiments. We hope that our work improves the theoretical support for, and further stimulates, the search for LIV in DBD. \paragraph*{Acknowledgments.} The figures for this article have been created using the SciDraw scientific figure preparation system \cite{SciDraw}. This work has been supported by grants of the Romanian Ministry of Research, Innovation and Digitalization through the project PN19-030102-INCDFM and the CNCS-UEFISCDI project no. 99/2021 within PN-III-P4-ID-PCE-2020-2374.
\section{Introduction} \subsection{Limit sets, minimal sets, and the geometry of their complements} A \emph{limit set} $\mathcal L$ of a holomorphic foliation $\mathcal{F}$ on a complex surface is a closed saturated subset contained in the closure of every leaf of $\mathcal{F}$; it is unique if it exists. Limit sets have in general a fractal structure. Classical examples arise with Riccati foliations, namely foliations on surfaces transverse to a rational fibration. In this context, the limit set corresponds to the limit set of the monodromy group, which can be any Kleinian group. Examples of strict limit sets in the non-linear context have been discovered recently, e.g. the Jouanolou foliation of degree two on $\mathbb P^2$ and its perturbations, see Alvarez-Deroin \cite{Alvarez}. More generally, a \emph{minimal set} $\mathcal M$ is a closed saturated subset such that every leaf of $\mathcal{F}$ contained in $\mathcal M$ is dense in $\mathcal M$. Complements of minimal sets display very interesting geometrical properties. For instance, Grauert's classical example of a pseudoconvex but not Stein domain arises as the complement of minimal sets of irrational linear foliations on complex tori \cite{Pet}. Recall that a domain of a complex manifold is \emph{Stein} if it is biholomorphic to a domain of a complex affine space \cite{H}. Remarkably, this is characterized by the existence of a proper and strictly plurisubharmonic exhaustion function on the domain \cite{Grauert}. Such exhaustions can be used to perform Hartogs fillings, which allow one to extend analytic objects. For instance, pseudoconvex subsets of $\mathbb P^n$ being Stein \cite{Takeuchi}, one can extend to $\mathbb P^n$ the Cauchy-Riemann foliation of analytic Levi-flat hypersurfaces. Lins Neto exploited that powerful idea to show that there is no Levi-flat hypersurface in $\mathbb P^n$ for $n \geq 3$ \cite{LinsNeto}. That delicate question remains open in $\mathbb P^2$. In another nice work, Brunella proved that the complement of a compact saturated set avoiding the singular set of $\mathcal{F}$ is a modification of a Stein domain once the normal bundle $N_\mathcal{F}$ has positive curvature near the compact set \cite{BrunellaToulouse} (see \cite{Pet} for the notion of modification). In the context of Levi-flat hypersurfaces, Canales reached positivity for $N_\mathcal{F}$ by using dynamical properties of the Cauchy-Riemann foliation, and managed to adapt Brunella's arguments \cite{Canales}. In particular, she recovered, by a different method, Diederich-Ohsawa's convexity of complements of Levi-flat hypersurfaces which are limit sets of Riccati foliations with real monodromy \cite{DO1,DO2}. \subsection{Main result} The novelty of our study is that we deal with limit sets that contain singular points of the foliation and which may have a fractal structure. We work with foliations on a compact K\"ahler surface satisfying \vspace{0.2cm} \textit{(*) every singular point of \(\mathcal{F}\) is hyperbolic (the eigenvalues of the vector field are not \(\mathbb R\)-collinear) and \( \mathcal{F} \) does not carry any foliation cycle. } \vspace{0.2cm} This condition is generic in many algebraic families of foliations; for instance, it is satisfied on a real Zariski open dense subset in the moduli space of degree \(d\) holomorphic foliations of \(\mathbb P^2\). Moreover, under the condition (*), Dinh-Nguyen-Sibony \cite{DNS_unique_ergodicity} proved that there exists on the surface a unique $\mathcal{F}$-directed harmonic current.
That implies that $\mathcal{F}$ has a limit set $\mathcal L$, given by the support of the harmonic current. We obtain the following result. \begin{theorem} \label{t: convexity II} Let \(\mathcal{F}\) be a holomorphic foliation of a compact K\"ahler surface satisfying property (*). If the limit set $\mathcal L$ is thin (in particular if it has zero Lebesgue measure), then its complement is a modification of a Stein domain. \end{theorem} The proof consists in showing that the normal bundle $N_\mathcal{F}$ supports a metric with positive curvature in a neighborhood of $\mathcal L$; we proceed in three steps, explained below. We then complete the proof by constructing a proper strictly plurisubharmonic exhaustion function near $\mathcal L$. This fourth step requires adapting Brunella's arguments \cite{BrunellaToulouse} to our singular context; the fact that the singular points are linearizable is crucial. In the \emph{first step} we prove that the curvature is positive along the foliation: we follow the approach of \cite{Deroin-Kleptsyn}, using the absence of foliation cycles and the Hahn-Banach theorem. It is interesting to notice that we thereby get a quick proof of the negativity of the Lyapunov exponent, established in \cite{Nguyen3}. The \emph{second step} establishes positivity of the curvature in neighborhoods of the singular points. Here we use suitable notions of positivity for currents, the vanishing of the Lelong numbers of the harmonic current \cite{Nguyen2} and the uniqueness of the harmonic current \cite{DNS_unique_ergodicity}. The \emph{third step} is based on the thin property, which is exploited to reach positivity for $N_\mathcal{F}$ in a neighborhood of $\mathcal L$ outside the singular points. This is a potential-theoretic condition on the fractal structure of the limit set. Following Doob, a closed subset \(K\subset {\bf D}\) is thin if for every point \(p\in K\), the probability that a Brownian trajectory starting at \(p\) stays in \(K\) during a positive amount of time is zero. The limit set $\mathcal L$ is thin if its image under local first integrals with values in the unit disc is thin. We shall use that thin sets have neighborhoods whose first eigenvalue with respect to the Dirichlet problem can be made arbitrarily large; this property actually characterizes them. \subsection{Kleinian groups and Julia sets} For Riccati foliations, the limit set is the whole surface or has zero Lebesgue measure. This is a consequence of the solution of Ahlfors' conjecture for Kleinian groups; see the combination of works due to Ahlfors \cite{Ahlfors}, Agol \cite{Agol}, Calegari-Gabai \cite{Calegari_Gabai} and Canary \cite{Canary}. Since a limit set with zero Lebesgue measure is thin (Lemma \ref{lemma: positiveLeb}), Theorem \ref{t: convexity II} implies: \begin{corollary}\label{c: Riccati} The complement of the limit set of a Riccati foliation is a modification of a Stein domain. \end{corollary} This result extends the works of Diederich-Ohsawa and Canales mentioned above, which concern Levi-flat hypersurfaces. We do not know of any example of a foliation having a limit set which is not the whole ambient surface but has positive Lebesgue measure. Observe that this is not the case for the Julia sets of polynomial mappings acting on $\mathbf{C}$; see Buff-Ch\'eritat \cite{BC} and Avila-Lyubich \cite{AL}. This motivates the following general questions. \begin{question} Is it true that the limit set of a holomorphic foliation is either the whole ambient space or is thin? Is it true that Julia sets of rational mappings are thin?
\end{question} These questions are out of reach for the present work, but to motivate them, we prove the following result. \begin{theorem}\label{t: Julia polynomial} The Julia set of a polynomial mapping acting on $\mathbf{C}$ is thin.\end{theorem} \subsection{Organization of the article} In Section \ref{s: preliminaries}, we present general facts on holomorphic foliations and harmonic currents, and explain with Proposition \ref{l: Hahn-Banach} how functional analysis enters the picture. In the next sections, as specified above, we show in three steps that the normal bundle $N_\mathcal{F}$ supports a metric with positive curvature in a neighborhood of $\mathcal L$. In Section~\ref{s: positivity normal bundle}, Theorem \ref{t: positivity normal bundle} and its improvement Theorem \ref{c: positivity II} correspond to the first two steps. Thin sets are introduced in Section \ref{s: thin sets}; one can then proceed to the third step in Section \ref{s: positivity all directions}. The delicate construction of the proper and strictly plurisubharmonic exhaustion function near $\mathcal L$ occupies Section \ref{s: convexity}; the non-trivial monodromy near the singular points leads us to introduce our so-called $m$-functions. The last section is dedicated to the proof of Theorem \ref{t: Julia polynomial}. \\ \emph{Acknowledgements.} We thank Alano Ancona for pointing out the reference \cite{BG} on the thin property and Misha Lyubich for discussions about that property for Julia sets. We also thank the Research in Paris program of the Institut Henri Poincar\'e for the very nice working conditions offered to us during fall 2021. \section{Preliminaries} \label{s: preliminaries} \subsection{Tangent bundle, leaves, first integrals} A holomorphic foliation \(\mathcal{F}\) on a smooth K\"ahler surface \(S\) is the data of a line bundle \( T\mathcal{F} \rightarrow S\) and a morphism \(\pi : T\mathcal{F} \rightarrow TS\) which vanishes only at a finite number of points. Such points are called \emph{singularities} of \( \mathcal{F}\), and their set is denoted by \(\text{sing}(\mathcal{F})\). Let \(S^*:= S\setminus \text{sing}(\mathcal{F}) \) be the regular part of \(\mathcal{F}\). The bundle \(T\mathcal{F}\) is called the \emph{tangent bundle} of $\mathcal{F}$. The vector fields on \(S\) that are images of local sections of \(T\mathcal{F}\) by \( \pi \) form a subsheaf of \(\mathcal O (TS)\) called the \emph{tangent sheaf} of \(\mathcal{F}\). The \emph{leaves} of the foliation are the equivalence classes of the relation on $S^*$ defined by: two points are equivalent if they belong to the same integral curve of a locally defined vector field belonging to the tangent sheaf of \(\mathcal{F}\). A local \emph{first integral} of \(\mathcal{F}\) is a function \( t : W \rightarrow {\bf C}\) defined on an open subset of \(S\), which is constant along the leaves of the restriction of $\mathcal{F}$ to \(W\). On a neighborhood \(W\) of any regular point \(p\), there is a biholomorphism \( (z, t) : W \rightarrow {\bf D}\times {\bf D}\) to the bidisc that maps the tangent sheaf of \(\mathcal{F}\) to the sheaf of horizontal vector fields of the bidisc; it is called a \emph{foliation chart}. The function \( t: W \rightarrow {\bf D}\) is then a local first integral of the foliation and it is a submersion. Any other first integral in \(W\) is a function of \(t\). In a foliation chart, a set of the form \( {\bf D}\times \{t\}\) is called a \emph{plaque}.
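For concreteness, let us record an elementary example. For the foliation of \({\bf C}^2\) generated by the linear vector field \( V = ax\partial_x + by\partial_y\) with \( a,b \neq 0\), any local branch of the (in general multivalued) function \( t = y\, x^{-b/a}\), defined away from the axes, is a local first integral: \[ V\cdot t \,=\, ax\,\partial_x t + by\,\partial_y t \,=\, -b\,y\,x^{-b/a} + b\,y\,x^{-b/a} \,=\, 0 . \] The multivaluedness of such first integrals is precisely the source of the non-trivial monodromy near the singular points alluded to in the introduction.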
\subsection{Normal bundle} The normal bundle of $\mathcal{F}$ is defined on $S^*$ by \( N_{\mathcal{F}} = TS / \pi (T\mathcal{F}) \). Any metric \( m \) on \( N_{\mathcal{F}} \) has locally the following form \begin{equation}\label{eq: varphi} m = \exp (2\varphi_m)\ | dt| ^2 ,\end{equation} where \( t \) is a local submersion defining $\mathcal{F}$. On every plaque, the function \( \varphi_m\) is well-defined up to an additive constant, hence any derivative of \(\varphi_m\) is well-defined (in the sequel, we will consider the gradient and Laplacian of \(\varphi_m\) with respect to a metric on \( T\mathcal{F}\)). We introduce on $S^*$ the leafwise \(1\)-form \begin{equation}\label{eq: eta} \eta_m := d_{\mathcal{F}} \varphi_m . \end{equation} We will use that if \( m = \exp (2\psi)\, m' \) is another smooth metric on \( N_{\mathcal{F}}\), then \begin{equation} \label{eq: change of metric} \eta_m = d_{\mathcal{F}} \psi + \eta_{m'} . \end{equation} If \(\omega \) is a local non-vanishing section of the dual line bundle \( N_{\mathcal{F}}^*\), which we think of as a holomorphic one-form on \(S\) that vanishes on \( T\mathcal{F}\), the modulus \(|\omega|\) defines a hermitian metric on \( N_{\mathcal{F}}\). Writing in local charts \( \omega = f(z,t) dt \) for a holomorphic function \( f\), we have the local expression $$ \eta _{|\omega|} = d_{\mathcal{F}} \log |f(z,t)| .$$ On the other hand, the form \( \alpha_{\omega} = d_{\mathcal{F}} \log f \) defines a local section of the canonical bundle \( K_{\mathcal{F}} := T^* \mathcal{F}\), which depends only on \(\omega\), and which satisfies the equation \begin{equation} \label{eq: alpha} d\omega = \alpha_{\omega} \wedge \omega . \end{equation} With this form at hand, we have the formula \begin{equation} \label{eq: eta omega} \eta_{|\omega|} = \Re \alpha_{\omega}.\end{equation} \subsection{Singular points}\label{sub:singpoints} In a neighborhood of a singular point \( p \), there is a germ of vector field \(V\) belonging to the tangent sheaf of the foliation, which vanishes only at \( p \). By Hartogs' lemma, such a vector field is the \( \pi \)-image of a generating section of \(T\mathcal{F}\). Any other germ of vector field with these properties differs from \(V\) by multiplication by a non-vanishing holomorphic function. We will say that the singularity \( p \) is \emph{hyperbolic} if the two eigenvalues of \(V\) are not \({\bf R}\)-collinear. By the Poincar\'e linearization theorem, see e.g. \cite[Theorem 5.5, p. 50]{IY}, there exist coordinates \((x,y)\) onto the bidisc \( \mathbf{D}\times \mathbf{D} \) in which \begin{equation}\label{eq: linearization} V = a x \partial _x + b y \partial _y .\end{equation} Such coordinates \((x,y)\) are called \emph{linearization coordinates}. As for the tangent bundle, the normal bundle extends in a unique way to a line bundle on $S$. It is sufficient to verify this for the conormal bundle: in linearization coordinates around a singular point $p$, the holomorphic \(1\)-form \[\omega = a x dy - b y dx \] vanishes exactly on \(T\mathcal{F}\) in \(S^*\), hence defines a section of \( N_{\mathcal{F}} ^* \) in \(S^*\) that does not vanish. This section extends as a section of \(N_{\mathcal{F}}^*\) defined at $p$ and does not vanish there either, as claimed. A smooth metric \( m \) on \( N_{\mathcal{F}} \) has the following expression close to a singularity \begin{equation}\label{eq: smooth metric on NF} m := \exp (2 \psi) \ |\omega|^2 \end{equation} where \(\psi\) is a smooth function (including at the singularity).
We then have \[ \eta_{m} = d_{\mathcal{F}} \psi + \Re \alpha_{\omega} \] where \( \alpha_{\omega}\) is a section of \( K_{\mathcal{F}}\), given in linearization coordinates by \begin{equation}\label{eq: alpha singular} \alpha_\omega=\frac{a+b}{a} \left(\frac{dx}{x}\right)_{|\mathcal{F}} = \frac{a+b}{b} \left(\frac{dy}{y}\right)_{|\mathcal{F}} . \end{equation} \subsection{Harmonic currents}\label{ss: harmonic current} A current of bidimension $(1,1)$ on $S$ is a continuous linear form on the space of smooth \((1,1)\)-forms \(\Omega^{1,1}(S)\). In this article, every current will be of bidimension $(1,1)$. \begin{definition} Let \( \mathcal P \) be a closed subset of \(TS \). A $(1,1)$-form $\omega$ is \(\mathcal P\)-positive if \( \omega (u, iu ) \geq 0\) for every \( u\in \mathcal P\) (written $\omega_{\vert \mathcal P} \geq 0$). A current $T$ is \(\mathcal P\)-positive if \(T (\omega) \geq 0\) for every \(\mathcal P\)-positive $(1,1)$-form $\omega$. \end{definition} If \(\mathcal P \subset \mathcal P'\), then any \(\mathcal P\)-positive current is \(\mathcal P'\)-positive. In particular, every $\mathcal P$-positive current is positive in the usual sense. \begin{lemma}\label{lem: largestrict} Let $T$ be a non trivial current. Then $T$ is \(\mathcal P\)-positive if and only if $T(\omega) > 0$ as soon as $\omega_{\vert \mathcal P} > 0$. \end{lemma} \begin{proof} Let $\kappa$ be a K\"ahler form on $S$ and let $\omega$ be such that $\omega_{\vert \mathcal P} > 0$. Since $\mathcal P$ is closed, there exists $\epsilon >0$ such that $(\omega - \epsilon \kappa)_{\vert \mathcal P} \geq 0$. If $T$ is non trivial and \(\mathcal P\)-positive, we get $T(\omega) \geq \epsilon T(\kappa) > 0$ as desired. Reciprocally, consider $\omega + \epsilon \kappa$ and let $\epsilon$ tend to zero. \end{proof} A current \(T\) is \textit{harmonic} if it is \(dd^c\)-closed, namely \(T ( d d^c f ) = 0 \) for every smooth function \(f\in C^\infty (S)\). Skoda \cite{Skoda} proved that if \( T\) is a positive harmonic current defined in a neighborhood of the origin in \({\bf C}^2\), then \begin{equation} \label{eq: monotonicity trace measure} r\mapsto I_T(r):=\frac{1}{r^2} \int_{|x|^2 + |y|^2 \leq r^2} T \wedge i (dx \wedge d\overline{x} + dy \wedge d\overline{y} ) \end{equation} is non-decreasing. The limit when \(r\) tends to \(0\) therefore exists; it is the Lelong number of \(T\) at the origin, and it does not depend on the coordinate system. In particular, a positive harmonic current does not put any mass on points. A harmonic current \( T\) defines an element \( [T]\) in the dual \( H^{1,1}_{BC} (S, \mathbf{R}) ^* \) of the Bott-Chern cohomology group, which is isomorphic to \( H^{1,1} (S, \mathbf{R}) ^*\) by the \(dd^c\)-lemma. By duality, it defines a class \([T]\) in \( H^{1,1} (S, \mathbf{R}) \). Its intersection with the Chern class \(c_1(L)\) of a line bundle \( L\) is defined by \begin{equation} \label{eq: intersection with harmonic current } [T] \cdot c_1(L) = T ( \Theta_{m} ) \end{equation} where \( m \) is any hermitian metric on \(L\) and \(\Theta_{m} = - \frac{1}{2} d d^c \log m(s) \) is the curvature form of \(m\), \(s\) being a local non-vanishing holomorphic section of $L$, see \cite{Ghys}. We shall denote this intersection by \( T\cdot L\). The following application of the Hahn-Banach separation principle will be crucial. \begin{proposition} \label{l: Hahn-Banach} Let \(\mathcal P\subset TS\) be a closed subset.
A line bundle \(L\) has a hermitian metric $m$ whose curvature satisfies $(\Theta_m)_{\vert \mathcal P} >0$ if and only if \( T\cdot L >0\) for every non trivial \(\mathcal P\)-positive harmonic current \(T\). \end{proposition} \begin{proof} If $m$ is a hermitian metric on \(L\) such that $(\Theta_m)_{\vert \mathcal P} >0$, then Lemma \ref{lem: largestrict} gives \( T \cdot L >0\) for every non trivial \(\mathcal P\)-positive harmonic current \(T\). Reciprocally, assume that \( T \cdot L >0\) for every non trivial \(\mathcal P\)-positive harmonic current \(T\). Let \( E \) be the Banach space of \( (1,1)\)-forms on \(S\) with continuous coefficients, equipped with the topology of uniform convergence, \( F\subset E\) be the subspace of \( dd^c \)-exact smooth \((1,1)\)-forms, \(\mathcal C\subset E\) be the convex open cone formed by continuous \((1,1)\)-forms $\omega$ such that $\omega_{\vert \mathcal P} > 0$, and \(m' \) be a hermitian metric on \(L\). If \( \Theta_{m'} + F\) does not intersect \(\mathcal C\), the Hahn-Banach separation theorem asserts that there exist a continuous linear functional \( T : E\rightarrow {\bf R}\) and \(s\in {\bf R}\) such that for any \( x\in \Theta_{m'} +F\) and any \(y\in \mathcal C\), \( T(y) > s \geq T(x)\). Since \(\mathcal C\) is a cone and \(T\) is bounded from below by $s$ on \(\mathcal C\), the infimum of \(T\) on \(\mathcal C\) is \(0\), so we can assume that \(s=0\). Note also that \(T\) is bounded from above on \(F\), and since \(F\) is a linear subspace, it must vanish identically on \(F\). So \(T\) is a non trivial \(\mathcal P\)-positive harmonic current and \( T \cdot L = T(\Theta_{m'}) \leq 0\), a contradiction. Hence \( \Theta_{m'} + F\) intersects \(\mathcal C\); since this set is the set of curvatures of smooth hermitian metrics on \(L\), there exists a hermitian metric $m$ on \(L\) whose curvature belongs to $\mathcal C$, that is, $(\Theta_m)_{\vert \mathcal P} >0$. \end{proof} \begin{remark}\label{rk: imp} We also have that if $T \cdot L \leq 0$ for every non trivial \(\mathcal P\)-positive harmonic current $T$, then there exists for every $\varepsilon > 0$ a hermitian metric $m_\varepsilon$ on $L$ such that $(\Theta_{m_\varepsilon})_{\vert \mathcal P} \leq \varepsilon \kappa_{\vert \mathcal P}$ ($\kappa$ being a fixed K\"ahler form on $S$). The changes consist in replacing \(\overline{\mathcal C}\) by \(\mathcal C\) and in using the relevant Hahn-Banach theorem to get \( T(y) < 0 < T(x)\). The contradiction implies that $-\Theta_{m'} + F$ intersects \(\overline{\mathcal C}\). \end{remark} \subsection{Directed harmonic currents}\label{ss: directed harmonic current} Let \(\mathcal P_{\mathcal F} \) be the closure of the image of \( T\mathcal F\) in \(TS\); this is the union of the tangent spaces of \(\mathcal F\) in the regular part of $\mathcal{F}$ and of the tangent spaces of \(S\) at the singular points of $\mathcal{F}$. We say that a current is \emph{directed} if it is \(\mathcal P_{\mathcal F}\)-positive. A closed directed current is called a \emph{foliation cycle}, a terminology due to Sullivan \cite{Sullivan}. The following lemma shows that our definition of directed harmonic currents coincides with the one employed by Berndtsson-Sibony \cite{BS} and Dinh-Nguyen-Sibony \cite{DNS_unique_ergodicity}. \begin{lemma} Let $T$ be a harmonic current. Then $T$ is directed if and only if $T \wedge \Omega = 0$ for every smooth $1$-form $\Omega$ locally defining $\mathcal{F}$. \end{lemma} \begin{proof} It suffices to work on $S^*$ since harmonic currents $T$ do not put any mass on points. Assume that $T$ is directed.
Let $U$ be a small open neighborhood of a regular point of $\mathcal{F}$, and let $(z,t)$ be holomorphic coordinates on $U$ such that $dt$ defines $\mathcal{F}$. Since $T$ is positive, we can write on $U$: $$T = \alpha i dz\wedge d\bar z + \beta i dz \wedge d\bar t + \bar \beta i dt \wedge d\bar z + \gamma i dt \wedge d\bar t , $$ where $\alpha, \gamma$ are positive measures and $\beta$ is a complex measure. The form $\eta = - i dt \wedge d\bar t$ satisfies $\eta_{\vert \mathcal P_{\mathcal F}} = 0$, hence $- \alpha = T \wedge \eta$ is a positive measure, therefore $\alpha =0$. Now let $\vert \beta \vert$ be the variation of $\beta$ and let us write $\beta = h \vert \beta \vert$, where $h$ is a measurable function satisfying $\vert h \vert =1$ on $U$. Then $\eta = h i d\bar t \wedge dz + \bar h i d \bar z \wedge dt$ satisfies $\eta_{\vert \mathcal P_{\mathcal F}} = 0$. Hence $-2 \vert \beta \vert = T \wedge \eta$ is a positive measure, which gives $\beta = 0$. Finally, $T = \gamma i dt \wedge d\bar t$ on $U$, hence $T \wedge dt = 0$ as desired. Reciprocally, if $T \wedge \Omega = 0$ for a $1$-form $\Omega$ defining $\mathcal{F}$, then $T$ has the form $\int h_c [L_c] d\nu(c)$, where $h_c$ is a non negative harmonic function, $[L_c]$ is the current of integration on $\{ t = c\}$ and $\nu$ is a positive measure, see \cite[Proposition 1.6]{BS}. Hence $T$ is directed. \end{proof} The existence of directed harmonic currents on foliated compact complex surfaces has been established by Berndtsson-Sibony \cite{BS}. Nguyen \cite{Nguyen2} proved that for foliations satisfying (*), the Lelong number of every directed harmonic current vanishes everywhere. Dinh, Nguyen and Sibony \cite{DNS_unique_ergodicity} recently established that every compact K\"ahler foliated surface satisfying (*) admits a unique directed harmonic current up to multiplication by a constant. Fornaess and Sibony \cite{FS} previously proved the same result when $S = \mathbb P^2$. The support of this current is a closed saturated subset \(\mathcal L\) contained in the closure of every leaf; we call it the \emph{limit set} of $\mathcal{F}$. \section{Positivity of the normal bundle along the foliation} \label{s: positivity normal bundle} In this section, we first prove that the normal bundle of $\mathcal{F}$ is positive along the foliation. The proof is an adaptation of \cite[Section 3.1.1]{Deroin-Kleptsyn} to our singular context. It is interesting to notice that this provides a quick proof of the negativity of the Lyapunov exponent, established in \cite{Nguyen3}. Secondly, we improve Theorem \ref{t: positivity normal bundle} by gaining positivity for the normal bundle near the singular set; the arguments rely on Proposition \ref{l: Hahn-Banach} and on the vanishing of the Lelong numbers of the directed harmonic current. \begin{theorem} \label{t: positivity normal bundle} Let $S$ be a foliated compact K\"ahler surface satisfying (*). The unique directed harmonic current satisfies $T \cdot N_\mathcal{F} > 0$. Hence (by Proposition \ref{l: Hahn-Banach}) the normal bundle $N_\mathcal{F}$ carries a hermitian metric $m$ whose curvature satisfies \((\Theta_m)_{\vert \mathcal P_\mathcal{F}} > 0\). \end{theorem} Before proving Theorem \ref{t: positivity normal bundle}, we establish Lemma \ref{l: IBP} below. Let \( m\) be a smooth metric on \( N_{\mathcal{F}} \) and let \( v_{m}\) be its associated volume form, considered as a global non negative \( (1,1) \)-form on \(S^*\) whose kernel is the tangent bundle \(T\mathcal{F}\).
In local coordinates (see Equation \eqref{eq: varphi}), we have \begin{equation} \label{eq: vm} v_m := \exp (2\varphi_m) \frac{i}{2} dt\wedge d\overline{t} . \end{equation} \begin{lemma} \label{l: IBP} The integral \begin{equation} \label{eq: integral} \int_{S^*} \left( d _{\mathcal{F}} d^c_{\mathcal{F}} \varphi_m + 2 d_{\mathcal{F}}\varphi_m \wedge d^c_{\mathcal{F}} \varphi_m \right) \wedge v_m \end{equation} is absolutely convergent, and vanishes. \end{lemma} \begin{proof} We first prove that the integral \eqref{eq: integral} is absolutely convergent. The problem occurs, of course, close to the singular points of \(\mathcal{F}\). There we have \( m = \exp (2\psi) |\omega|^2\) where \( \psi \) is a smooth function, and hence \(v_m = \exp (2\psi) \frac{i}{2} \omega\wedge \overline{\omega}\). Recall that \(\alpha_\omega\) is the section of \( K_{\mathcal{F}}\) given by \eqref{eq: alpha singular}. We then have \[ d_{\mathcal{F}}d_{\mathcal{F}}^c \varphi_m = d_{\mathcal{F}}d_{\mathcal{F}}^c \psi , \ \ d_{\mathcal{F}} \varphi _m = \Re \alpha_{\omega} + d_{\mathcal{F}} \psi , \text{ and } d_{\mathcal{F}} ^c \varphi _m = \frac{1}{2\pi} \Im \alpha_{\omega} + d_{\mathcal{F}}^c \psi.\] Using the relation \(d\omega = \alpha_\omega \wedge \omega\), one verifies that the forms $d _{\mathcal{F}} d^c_{\mathcal{F}} \varphi_m \wedge v_m$ and $d_{\mathcal{F}}\varphi_m \wedge d^c_{\mathcal{F}} \varphi_m \wedge v_m$ are smooth near the singular set, which provides the absolute convergence of \eqref{eq: integral}. We now prove that \eqref{eq: integral} vanishes. From \eqref{eq: vm} we have \( dv_m = 2 d_{\mathcal{F}} \varphi_m \wedge v_m \), so that \begin{equation}\label{eq: computation of primitive} d \left( d_{\mathcal{F}}^c \varphi_m \wedge v_m \right) = \left( d _{\mathcal{F}}d_{\mathcal{F}}^c \varphi_m + 2 d_{\mathcal{F}} \varphi_m \wedge d^c_{\mathcal{F}} \varphi_m \right) \wedge v_m.\end{equation} For any compact domain \( U \subset S^*\) with smooth boundary, the Stokes formula together with \eqref{eq: computation of primitive} yields \begin{equation}\label{eq: IBP} \int _U \left( d _{\mathcal{F}}d_{\mathcal{F}}^c \varphi_m + 2 d_{\mathcal{F}} \varphi_m \wedge d^c_{\mathcal{F}} \varphi_m\right) \wedge v_m = \int_{\partial U} d_{\mathcal{F}} ^c\varphi_m \wedge v_m.\end{equation} For \(r >0\) small, take for $U$ the complement \(U_r\) of the Euclidean balls of radius \(r\) around the singular points in the linearization coordinates \((x,y)\). By \eqref{eq: IBP}, to prove that \eqref{eq: integral} vanishes, it suffices to prove that the integral \begin{equation*} \label{eq: boundary term} \int _{\partial U_r} d_{\mathcal{F}} ^c\varphi_m \wedge v_m \end{equation*} tends to zero as \( r\) tends to \(0\). This is a consequence of the fact that the \(3\)-form \( d_{\mathcal{F}} ^c\varphi_m \wedge v_m\) is smooth. \end{proof} \begin{proof}[Proof of Theorem \ref{t: positivity normal bundle}] Let \(T\) be the unique directed harmonic current and assume that \( T \cdot N_{\mathcal{F}}\leq 0\). Let \(\kappa\) be a K\"ahler form on \( S\). By Remark \ref{rk: imp} (which applies since, by uniqueness, every non trivial \(\mathcal P_{\mathcal F}\)-positive harmonic current is a positive multiple of \(T\)), there exists a family \( \{m^{\varepsilon}\} _{\varepsilon >0} \) of metrics on \(N_{\mathcal{F}}\) whose curvature forms satisfy \(\left(\Theta_{m^\varepsilon}\right)_{|\mathcal P_\mathcal{F}} \leq \varepsilon \kappa_{|\mathcal P_\mathcal{F}} \).
We normalize each \(m^{\varepsilon}\) so that \begin{equation} \label{eq: normalization} \int \kappa \wedge v^{\varepsilon} = 1,\end{equation} where \(v^{\varepsilon} \) is the transverse volume form associated to \( m^{\varepsilon} \). Introduce the currents \begin{equation} \label{eq: approximate foliation cycles} V^\varepsilon (\omega): = \int \omega \wedge v^\varepsilon \text{ for every } \omega\in \Omega^{1,1} (S). \end{equation} By compactness of the set of normalized currents equipped with the weak topology, we can find a sequence \(\varepsilon_n\rightarrow 0\) such that \( V^{\varepsilon_n} \) converges to a normalized current \(V\). We claim that \( V\) is a foliation cycle. Since \(\Theta_{m^\varepsilon} = - \frac{1}{2} d d^c \log m^\varepsilon(s)\) (see Section \ref{ss: harmonic current}) and \(\left(\Theta_{m^\varepsilon}\right)_{|\mathcal P_\mathcal{F}} \leq \varepsilon \kappa_{|\mathcal P_\mathcal{F}} \), the limit current \( V\) is \(\mathcal P_{\mathcal F}\)-positive. It remains to prove that \( V\) is closed. Let \( \alpha \) be a \( 1\)-form on \(S\). Since \[ d(\alpha \wedge v^{\varepsilon}) = d\alpha \wedge v^{\varepsilon} - \alpha \wedge dv^{\varepsilon}= d\alpha \wedge v^{\varepsilon} - 2 \alpha \wedge d_\mathcal{F} \varphi^{\varepsilon} \wedge v^{\varepsilon}, \] the Stokes formula gives \[V^{\varepsilon} (d\alpha ) = 2 \int \alpha \wedge d_\mathcal{F} \varphi^{\varepsilon} \wedge v^{\varepsilon} . \] In particular, we get by the Cauchy-Schwarz inequality \[ |V^{\varepsilon} (d\alpha)| \leq 2 \left( \int \alpha \wedge \alpha^* \wedge v^\varepsilon \right)^{1/2} \left( \int d_{\mathcal{F}}\varphi^{\varepsilon} \wedge d^c _{\mathcal{F}}\varphi^{\varepsilon} \wedge v^\varepsilon \right)^{1/2} \] where \( \alpha ^* (u)= -\alpha (iu)\). The restriction of \(\alpha \wedge \alpha^*\) to \(\mathcal{F}\) is bounded by a constant times the K\"ahler form \(\kappa\), hence \[ |V^\varepsilon (d\alpha) | \leq C(\alpha) \left( \int d_{\mathcal{F}}\varphi^\varepsilon \wedge d^c _{\mathcal{F}}\varphi^\varepsilon \wedge v^\varepsilon \right)^{1/2} ,\] where \( C(\alpha)\) does not depend on \(\varepsilon\). By Lemma \ref{l: IBP}, we have \[ \int_{S^*} \left( d _{\mathcal{F}} d^c_{\mathcal{F}} \varphi^{\varepsilon} + 2 d_{\mathcal{F}}\varphi^{\varepsilon} \wedge d^c_{\mathcal{F}} \varphi^{\varepsilon} \right) \wedge v^\varepsilon = 0 , \] and since \( - d_{\mathcal{F}} d^c _{\mathcal{F}} \varphi^{\varepsilon} \) is the restriction of the curvature of \(m^{\varepsilon} \) to \(\mathcal{F}\), we get from \eqref{eq: normalization} \[ 2 \int _{S^*} d_{\mathcal{F}}\varphi^{\varepsilon} \wedge d^c_{\mathcal{F}} \varphi^{\varepsilon} \wedge v^\varepsilon =\int_{S^*} \Theta_{m^\varepsilon} \wedge v^{\varepsilon} \leq \varepsilon \int_{S^*} \kappa \wedge v^\varepsilon = \varepsilon . \] We infer that \( V^{\varepsilon} (d\alpha) \rightarrow_{\varepsilon\rightarrow 0} 0\), and consequently \( V\) is closed. This contradicts the assumption (*) and ends the proof of Theorem \ref{t: positivity normal bundle}. \end{proof} We now prove the following technical but fundamental refinement of Theorem \ref{t: positivity normal bundle}, which permits us to gain positivity for $N_\mathcal{F}$ in a neighborhood of the singular set. \begin{theorem} (Improvement of Theorem \ref{t: positivity normal bundle}) \label{c: positivity II} Let \(M >0\) be a constant.
There exists a hermitian metric \(m\) on \( N_{\mathcal{F}}\) whose curvature satisfies \((\Theta_m)_{\vert \mathcal P_\mathcal{F}} > 0\) and such that for each singular point \( p\in \text{sing} (\mathcal{F}) \), there exist linearization coordinates \( (x_p, y_p) : U_p \rightarrow {\bf B} \) from a neighborhood of \(p\) onto the unit ball, such that the foliation in these coordinates is defined by the vector field \eqref{eq: linearization}, and such that in restriction to each \( U_p\) we have: \[ \Theta_m > M (i dx_p \wedge d\overline{x_p} + i dy_p \wedge d\overline{y_p} ) .\] \end{theorem} \begin{proof} Let us work near a singular point $p$ and introduce linearization coordinates \( (x,y) : U \rightarrow {\bf B} \) near $p$, see Section \ref{sub:singpoints}. For each \( r\in (0, 1) \), let \( U_r := \{ |x|^2+|y|^2< r^2\} \) and let $$ \mathcal P_{\mathcal F, r}:= \mathcal P_{\mathcal F} \cup \overline{T U_r} . $$ We recall that $I_T(r)$ is defined in Equation (\ref{eq: monotonicity trace measure}). \begin{lemma}\label{l: estimation} Let \(M>0\) be a constant. There exist arbitrarily small radii \(r>0\) such that for every non trivial \( \mathcal P_{\mathcal F, r}\)-positive harmonic current \(T_r\), we have \[ T_r \cdot N_{\mathcal F} > 4M I_{T_r}(r). \] \end{lemma} \begin{proof} Suppose to the contrary that for any sufficiently small \(r>0\) there exists a non trivial \(\mathcal P_{\mathcal F, r}\)-positive harmonic current \(T_r\) such that \(T_r \cdot N_{\mathcal F} \leq 4M I_{T_r}(r)\). We can assume that \( T_r\) has total mass \(1\) by normalizing (namely \(T_r (\kappa) = 1\) for a fixed K\"ahler form \(\kappa\) on \(S\)). Since the set of positive currents of mass one is compact for the weak convergence, there exists a sequence of radii \(r_n >0\) tending to \(0\) such that \( T_{r_n}\) weakly converges to a current \(T\). Notice that \(T\) is harmonic, of mass one, and that \( T (\omega) \geq 0\) for every $\omega$ satisfying \(\omega_{\vert \mathcal P_\mathcal{F}} \geq 0 \). Hence $T$ is directed; in particular \( T \cdot N_{\mathcal F} >0\) by Theorem \ref{t: positivity normal bundle}. Let \(\varepsilon >0\) be fixed, and let \(0<r<\varepsilon\). Since \(I_{T_r}\) is non decreasing, we get \( I_{T_r} (r) \leq I_{T_r} (\varepsilon) \), so that \(I_{T_r} (\varepsilon) \geq \frac{T_r \cdot N_{\mathcal F}}{4M}\) for any \(0<r < \varepsilon\). Applying this to \(r= r_n\) and letting \(n\) go to \(+\infty\) (the intersection \(T_{r_n}\cdot N_{\mathcal F}\), being a pairing with a fixed smooth curvature form, converges to \(T\cdot N_{\mathcal F}\)) yields \( I_T (\varepsilon ) \geq \frac{T \cdot N_{\mathcal F}}{4M} > 0 \). Being true for every \(\varepsilon>0\), we get that \( \nu ( T, p) \geq \frac{T \cdot N_{\mathcal F}}{4M} \), which contradicts the vanishing of the Lelong number of \(T\) at \(p\). \end{proof} Let $M >0$ be given and let $r$ be a small radius provided by Lemma \ref{l: estimation}. Let $$ U_p:= U_{r/2} \textrm{ and } (x_p, y_p) = (2x/ r, 2y/r) : U_p \to {\bf B} . $$ Let \(\omega \) be a smooth non negative \((1,1)\)-form with the following properties: \begin{itemize} \item the support of $\omega$ is contained in \(U_r\), \item $\omega$ is equal to \( i (dx_p \wedge d\overline{x_p} + dy_p \wedge d\overline{y_p} ) \) for \( (x,y) \in U_{r/2}\), \item $\omega$ is bounded by \(i (dx_p \wedge d\overline{x_p} + dy_p \wedge d\overline{y_p} )\) for \( (x,y) \in U_r\). \end{itemize} For any \( \mathcal P_{\mathcal F, r}\)-positive harmonic current \(T_r \), we have \[ T_r (\omega) \leq \frac{4}{r^2} \int_{U_r} T_r \wedge i (dx \wedge d\overline{x} + dy \wedge d\overline{y} )\leq 4 I_{T_r}(r).
\] Denoting by \(\Theta_m\) the curvature form of a hermitian metric \(m\) on \(N_{\mathcal F}\), and letting \( \eta := \Theta_m - M \omega \), we get from Lemma \ref{l: estimation} that \[ T_r (\eta) >0 \text{ for any } \mathcal P_{\mathcal F, r}\text{-positive harmonic current } T_r .\] Theorem \ref{c: positivity II} then follows from Lemma \ref{l: Hahn-Banach} applied with $\mathcal P = \mathcal P_{\mathcal F, r}$. \end{proof} \section{Thin sets in the complex plane} \label{s: thin sets} In this section, we study some geometric and stochastic properties of closed sets $\Lambda$ in a Riemann surface \(C\). For every set \( V\subset C \), and any path \(\gamma : [0,+\infty) \rightarrow C\) starting at a point \(\gamma (0) \in V\), denote by \( T_V( \gamma)\) the largest time up to which \(\gamma\) remains in \(V\), namely \[ T_V (\gamma) := \sup \{ t\geq 0 \ |\ \gamma ([0, t] ) \subset V \} . \] Let $\mathbb P^x$ be the Wiener measure on the set of continuous paths starting at $x$. \begin{definition} [Doob] A closed subset \(\Lambda \subset C\) is \emph{thin} if for every point \( x\in \Lambda\), a Brownian trajectory starting at \(x\) almost surely leaves \(\Lambda\) at arbitrarily small times (that is, $T_\Lambda(\gamma) =0$ for every $x \in \Lambda$ and $\mathbb P^x$-almost every $\gamma$). \end{definition} We refer to the book \cite[Chapter II, p.79]{BG} for more on this notion. Note that this definition does not depend on the choice of the hermitian metric on \(C\), by conformal invariance of Brownian motion. In the sequel, for \(x \in C\) we will denote by \(\Gamma^x\) the set of continuous paths \(\gamma : [0,\infty) \rightarrow C\) such that \(\gamma (0)=x\), and for every \(t\geq 0\) \[E^{x,t}_{\Lambda} := \{ \gamma \in \Gamma^x, T_{\Lambda} (\gamma) \geq t \}.\] \begin{lemma}\label{lemma: positiveLeb} If \(\Lambda \) has zero Lebesgue measure then it is thin. \end{lemma} \begin{proof} Let us prove that if $\Lambda$ is not thin, then its Lebesgue measure is positive. By assumption there exist $x \in \Lambda$ and $t>0$ such that $\mathbb P^x ( E_{\Lambda}^{x,t} ) > 0$. Let $\mathbb P^{x,t}_\Lambda$ be the restriction of $\mathbb P^x$ to $E_{\Lambda}^{x,t} $. Then $(\pi_t)_* \mathbb P^{x,t}_\Lambda \leq (\pi_t)_* \mathbb P^x$, where $\pi_t : \Gamma^x \to C$ is defined by $\pi_t(\gamma) = \gamma(t)$. Since $(\pi_t)_* \mathbb P^x$ is absolutely continuous with respect to the Lebesgue measure on $C$ and since the support of $(\pi_t)_* \mathbb P^{x,t}_\Lambda$ is included in $\Lambda$, the Lebesgue measure of $\Lambda$ is positive. \end{proof} The following result is presumably well known, but we provide a complete proof since we are not aware of a reference. \begin{proposition}\label{p: thin implies large first eigenvalues} A compact subset \( \Lambda\subset \mathbf{C}\) is thin if and only if $\Lambda$ possesses relatively compact open neighborhoods $V$ with smooth boundaries whose first eigenvalue with respect to the Dirichlet problem is arbitrarily large. \end{proposition} Let us recall some facts about the Dirichlet problem; we refer to \cite[Chapter I]{Chavel} for a general account. Let $D$ be a bounded domain in $\mathbf{C}$ with smooth boundary. An eigenvalue for the Dirichlet problem on $D$ is a real number $\lambda$ such that $\Delta \varphi + \lambda \varphi =0$ for some bounded $C^2$ function $\varphi$ with zero boundary values. These eigenvalues form a sequence of positive numbers which tends to infinity; let $\lambda_1(D)$ denote the first (smallest) eigenvalue.
The first eigenvalue is decreasing with respect to inclusion, that is, $\lambda_1(D_1) \leq \lambda_1(D_2)$ if $D_2 \subset D_1$. We shall use two features concerning $\lambda_1$. The first one is that \(\mathbb E^x (\exp (\lambda T_D ) ) < \infty \) for every \(x\in D \) if and only if \(\lambda_ 1 (D) \geq \lambda\), see \cite[Section 3]{Sullivan_positivity}. The second one is that the norm of the diffusion operator $P^t_D$ (see for instance Equation (\ref{eq: HS2}) below) is equal to $e^{-t \lambda_1(D)}$, see \cite[Section 4.7]{PS}. If $V$ is an open subset of $\mathbf{C}$ with smooth boundary, then $\lambda_1(V)$ is the infimum of $\lambda_1(D)$ where $D$ runs over the connected components of $V$. In particular, the norm of $P^t_V$ is equal to $e^{-t \lambda_1(V)}$. Let us introduce open neighborhoods of $\Lambda$ which will be used in the proofs of Propositions \ref{p: thin implies large first eigenvalues} and \ref{p: technical characterization}. For every $\varepsilon >0$, we define \( \Lambda^\varepsilon :=\{ d_\mathbf{C}(\cdot , \Lambda) <\varepsilon\} \) and fix an open neighborhood \(V_\varepsilon\) of $\Lambda$ with smooth boundary satisfying \begin{equation}\label{eq: V} \Lambda \subset V_\varepsilon \subset \Lambda^\varepsilon . \end{equation} \begin{proof}[Proof of Proposition \ref{p: thin implies large first eigenvalues}] Assume that $\Lambda \subset \mathbf{C}$ is a thin compact set. We claim that for every \(t>0\) and \(\delta>0\), there exists \(\varepsilon>0\) such that for every \(x \in \Lambda^\varepsilon\), \begin{equation} \label{eq: quantitative thin property} \mathbb P^x ( T_{\Lambda^\varepsilon} >t ) \leq \delta. \end{equation} Indeed, assume to the contrary that \eqref{eq: quantitative thin property} does not hold: there exist \(t>0\) and \(\delta>0\) so that for every \(\varepsilon >0\), there exists \(x_\varepsilon \in \Lambda^\varepsilon\) such that \begin{equation} \label{eq: contradiction} \mathbb P^{x_\varepsilon} ( T_{\Lambda^\varepsilon} >t ) > \delta.\end{equation} By compactness of \(\Lambda\), we can find a sequence of positive numbers \(\varepsilon_n \) that tends to \(0\) and such that \(x_{\varepsilon_n} \) tends to some \(x\in \Lambda\) when \(n\) goes to infinity. The triangle inequality immediately yields that \begin{equation}\label{eq: triangular inequality consequence} x-y + E _{\Lambda^\eta} ^{y,t} \subset E_{\Lambda^{\eta+ |x-y|}} ^{x,t} ,\end{equation} where \(z+E _{\Lambda^\eta} ^{y,t}\) denotes the set of paths of the form \(t \mapsto z + \gamma (t) \) with \(\gamma \in E _{\Lambda^\eta} ^{y,t}\). Observe that \(\mathbb P^{x_{\varepsilon_n}} (E_{\Lambda^{\varepsilon_n}} ^{x_{\varepsilon_n}, t })=\mathbb P^{x_{\varepsilon_n}} ( T_{\Lambda^{\varepsilon_n}} \geq t ) > \delta\) by \eqref{eq: contradiction}.
Hence, together with \eqref{eq: triangular inequality consequence} and the equivariance of the Wiener measures \(\mathbb P^x\) with respect to translations, we get \[ \mathbb P^x (E ^{x,t}_{\Lambda^{ \varepsilon_n + |x-x_{\varepsilon_n}|}} ) \geq \mathbb P ^x (x-x_{\varepsilon_n}+ E ^{x_{\varepsilon_n},t} _{\Lambda^{ \varepsilon_n}} )=\mathbb P^{x_{\varepsilon_n}} ( E ^{x_{\varepsilon_n}, t}_{\Lambda^{ \varepsilon_n}} )>\delta.\] Setting \(\eta_n:= \varepsilon_n + |x-x_{\varepsilon_n}|\) and taking if necessary a subsequence so that \(\eta_n\) is decreasing, we get that \(E ^{x, t} _{\Lambda^{\eta_n}}\) is also decreasing for inclusion, and this yields \[ \mathbb P^x (\cap _ n E^{x,t}_{\Lambda^{ \eta_n}} ) \geq \delta .\] But since $\Lambda$ is closed, the intersection \( \cap _ n E^{x,t}_{\Lambda^{ \eta_n}}\) is the set of continuous paths \(\gamma: [0,+\infty) \rightarrow \mathbf{C}\) so that \(\gamma (0) = x\) and \(\gamma ([0,t]) \subset \Lambda\), which therefore has \(\mathbb P^x\)-measure at least \(\delta\). This contradicts the thinness of \(\Lambda\), hence \eqref{eq: quantitative thin property} holds. We now obtain, by iterating \eqref{eq: quantitative thin property} and using the Markov property, that for every \(k \geq 1 \) and every \(x\in \Lambda ^{\varepsilon}\), \[ \mathbb P^x ( T_{\Lambda^\varepsilon} >kt ) \leq \delta^k.\] In particular for every \(\lambda >0\) and \(x\in \Lambda^{\varepsilon}\), we get \[ \mathbb E^x (\exp (\lambda T_{\Lambda^\varepsilon} ) ) \leq \sum _{k\geq 0} \int _{kt < T_{\Lambda^\varepsilon} \leq (k+1) t} \exp (\lambda T_{\Lambda^\varepsilon} ) d\mathbb P^x \leq \text{cst} + \sum _{k \geq 1} \delta ^k \exp (\lambda (k+1) t) <+\infty\] if \(\log \delta + \lambda t <0\). This condition can be fulfilled by appropriately choosing the constants \( t, \delta >0\). For the corresponding value of \(\varepsilon\) we get the convergence of \(\mathbb E^x (\exp (\lambda T_{\Lambda^\varepsilon} ) )\). Now let $D$ be a connected component of $V_\varepsilon$ defined in (\ref{eq: V}). Since $D \subset \Lambda^\varepsilon$, we get $T_D \leq T_{\Lambda^\varepsilon}$, hence \(\mathbb E^x (\exp (\lambda T_D ) )\) converges for every \(x\in D \). That proves \(\lambda_ 1 (D) \geq \lambda\) by \cite[Section 3]{Sullivan_positivity}, and thus \(\lambda_ 1 (V_\varepsilon) \geq \lambda\). Conversely, let $\Lambda \subset \mathbf{C}$ be a compact subset having relatively compact open neighborhoods with smooth boundaries and arbitrarily large first eigenvalues. By the monotonicity property of the first eigenvalue, \(\lambda_1(V_\varepsilon)\) tends to \(+\infty\) when \(\varepsilon\) tends to zero. We proceed by contradiction, assuming that $\Lambda$ is not thin. First we fix a relatively compact open neighborhood \(V\) of \(\Lambda\) with smooth boundary that contains every \(V_\varepsilon\). We keep the notation of the proof of Lemma \ref{lemma: positiveLeb}. For every $x \in \Lambda$ and $t >0$ such that $ \mathbb P^x(E_{\Lambda} ^{x,t}) >0$, let $\mu_{\Lambda}^{x,t} := (\pi_t)_* \mathbb P^{x,t}_\Lambda$ and let $q_\Lambda (x,\cdot,t)$ be the density of $\mu_{\Lambda}^{x,t}$ with respect to the Lebesgue measure on $\mathbf C$. The latter is bounded above by the heat kernel $p(x,\cdot, t)$ for the euclidean distance on $\mathbf{C}$, which is a continuous function. Let \begin{equation}\label{eq: HS} P_{\Lambda}^t(f)(x) := \int_{\Lambda} f(y) d \mu_{\Lambda} ^{x,t}(y) ; \end{equation} this defines a compact self-adjoint operator on $L^2(V)$.
Similarly, let $\mu_{V_\varepsilon}^{x,t} := (\pi_t)_* \mathbb P^{x,t}_{V_\varepsilon}$ and $q_{V_\varepsilon} (x,\cdot,t)$ be the density of $\mu_{V_\varepsilon}^{x,t}$ with respect to the Lebesgue measure. Note that $q_{\Lambda} \leq q_{V_\varepsilon} \leq p$ since $E_{\Lambda} ^{x,t} \subset E_{V_\varepsilon}^{x,t}$. Let us define \begin{equation}\label{eq: HS2} P_{V_\varepsilon}^t (f)(x) := \int_{V_\varepsilon} f(y) d \mu_{V_\varepsilon}^{x,t} (y) \end{equation} and prove that $P_{V_\varepsilon}^t$ converges to $P_{\Lambda}^t$ in operator norm on $L^2(V)$. First observe that one can integrate over $V$ instead of $\Lambda$ and $V_\varepsilon$ in (\ref{eq: HS}) and (\ref{eq: HS2}) without modifying the definitions. Now for every $f \in L^2(V)$ of norm one, \begin{equation}\label{eq: strongconv} \norm { P_{V_\varepsilon}^t (f) - P_{\Lambda}^t (f) }^2 \leq \iint_{V \times V} (q_{V_\varepsilon} - q_{\Lambda} )^2 dxdy \leq M \iint_{V \times V} (q_{V_\varepsilon} - q_{\Lambda} ) dxdy , \end{equation} where the first inequality uses Cauchy-Schwarz, and the last inequality uses $q_{\Lambda} \leq q_{V_\varepsilon} \leq p$, $M$ being an upper bound of $p(\cdot,\cdot,t)$ on $V \times V$. Since $\cap _{\varepsilon >0} E_{V_\varepsilon}^{x,t} = E_\Lambda^{x,t}$ for every $x \in \Lambda$, the right hand side of Equation (\ref{eq: strongconv}) tends to zero as $\varepsilon$ tends to zero, by dominated convergence. In particular, the norm of $P_{V_\varepsilon}^t$ tends to the norm of $P_\Lambda^t$, which is positive because $\Lambda$ is not thin. Since this norm equals $e^{-t\lambda_1(V_\varepsilon)}$, the eigenvalues $\lambda_1(V_\varepsilon)$ remain bounded, a contradiction. The compact set $\Lambda$ is thus thin, and the proof is complete. \end{proof} We will need the following rather technical result. \begin{proposition} \label{p: technical characterization} A closed subset \(\Lambda \subset \mathbf{D}\) is thin if and only if there exists a sequence of smooth functions \( f_n : \mathbf{D} \rightarrow \mathbf{R}\) such that, in restriction to any compact subset of \( \Lambda \), we have, uniformly: \begin{itemize} \item \(f_n\) converges to \(0\), \item \( \Delta f_n\) tends to \( +\infty\), \item \( \vert \nabla f_n \vert ^2 = o (\Delta f_n )\). \end{itemize} \end{proposition} \begin{proof} Since the desired convergences hold in restriction to compact subsets, we can assume that \(\Lambda\subset \mathbf{D}\) itself is compact. We first prove that if \(\Lambda\) is thin then there exists such a sequence of functions. We borrow notation from the proofs of Lemma \ref{lemma: positiveLeb} and Proposition \ref{p: thin implies large first eigenvalues}. Fix \(\lambda >0\), and let \(\varepsilon = \varepsilon (\lambda) >0\) be small enough so that \(\lambda _1(V_\varepsilon) >\lambda\). Let \begin{equation} \label{eq: expectation of exponentiel of hitting time} \psi _{\lambda, \varepsilon} (x) := \mathbb E^x \left( \exp ( \lambda T_{V_{\varepsilon}} ) \right) : V_\varepsilon \rightarrow \mathbf{R} ; \end{equation} it is well defined since \( \lambda_1(V_\varepsilon) > \lambda \). By \cite[Section 3]{Sullivan_positivity}, the family $(\psi _{\lambda, \varepsilon})_\lambda$ satisfies \begin{equation} \left\{ \begin{array}{l} \Delta \psi_{\lambda, \varepsilon} + \lambda \psi_{\lambda, \varepsilon } =0 \\ \psi_{\lambda, \varepsilon} = 1 \text{ on } \partial V_{\varepsilon} \end{array} \right. \end{equation} and uniformly converges to \( 1\) on \(\Lambda\) when $\lambda$ tends to $\infty$ (hence when \(\varepsilon\) tends to \(0\)), because $\Lambda$ is thin.
Let \begin{equation} \label{eq: function phi} \varphi_{\lambda, \varepsilon } : = -\log \psi_{\lambda, \varepsilon}.\end{equation} The family $( \varphi_{\lambda, \varepsilon})_\lambda$ uniformly converges to 0 on \(\Lambda\), and we have \begin{equation} \Delta \varphi_{\lambda, \varepsilon} = - \frac{\Delta \psi_{\lambda,\varepsilon}} {\psi_{\lambda,\varepsilon}} + \vert \nabla \varphi_{\lambda,\varepsilon} \vert ^2 = \lambda + \vert \nabla \varphi_{\lambda,\varepsilon} \vert ^2 . \end{equation} So we deduce \begin{equation}\label{eq: bounds varphi} \Delta \varphi_{\lambda, \varepsilon}\geq \lambda \text{ and } \vert \nabla \varphi_{\lambda, \varepsilon} \vert ^2 \leq \Delta \varphi_{\lambda,\varepsilon} .\end{equation} Now for every positive integer \(n\), define \begin{equation} \label{eq: fn} f_n := \frac{1}{n} \varphi _{n^2, \varepsilon_n} \end{equation} where \(\varepsilon_n\) is small enough so that \( \lambda_1 ( V_{\varepsilon_n} ) > n^2 \). Equation \eqref{eq: bounds varphi} then implies \[ \Delta f_n \geq n \text{ and } \vert \nabla f_n \vert ^2 \leq \frac{1}{n} \Delta f_n , \] which concludes the first part of the proof. Conversely, consider a sequence of functions \(f_n\) as in the statement of Proposition \ref{p: technical characterization}, and let \(\lambda > 0 \). For \(n\) large enough, there exists a relatively compact open neighborhood \(V_n\) of \(\Lambda\) on which \begin{equation} \label{eq: condition} \Delta f_n \geq 2\lambda \text{ and } \Delta f_n \geq 2 \vert \nabla f_n \vert ^2 . \end{equation} We can assume that \( V_n \) has smooth boundary. Set \(\psi := \exp ( - f_n) \). The function \(\psi\) is positive on $V_n$ and satisfies \[ -\Delta \psi = \psi \left( \Delta f_n - \vert \nabla f_n \vert ^2 \right) \geq \lambda \psi.\] This inequality, together with Lemma \ref{l: reciproque} below, implies $\lambda_1(V_n) \geq \lambda$. We then conclude by applying Proposition \ref{p: thin implies large first eigenvalues}. \end{proof} \begin{lemma} \label{l: reciproque} Let \( V \subset \mathbf{C}\) be a relatively compact open set with smooth boundary, and let \( \psi : V\rightarrow \mathbf{R}\) be a positive function such that \( - \Delta \psi \geq \lambda \psi\) for some constant \(\lambda\). Then, \(\lambda_1(V) \geq \lambda\). \end{lemma} \begin{proof} Let $D$ be a connected component of $V$ and let \(\chi : D \rightarrow \mathbf{R}\) be an eigenfunction for the first eigenvalue of $D$. We thus have \begin{equation*} \left\{ \begin{array}{l} \Delta \chi + \lambda_1(D) \chi =0 \\ \chi = 0 \text{ on } \partial D. \end{array} \right. \end{equation*} By Courant's nodal domain theorem \cite[Section I.5]{Chavel}, the function $\chi$ does not vanish on $D$; we can assume that $\chi$ is positive. Green's formula reads \begin{equation*} \label{eq: Green} \int_D \left( \chi \Delta \psi - \psi \Delta \chi \right) dv = \int_{\partial D} \left( \chi \nabla \psi - \psi \nabla\chi\right) \cdot n_{ext} , \end{equation*} where \( n_{ext}\) is the exterior normal vector to \(\partial D\) and \( v\) is the Lebesgue measure on \(\mathbf{C}\). Since \(\chi \) vanishes on \(\partial D\) and is positive on \(D\), the exterior normal derivative \( \nabla \chi \cdot n_{ext}\) is nonpositive, so the boundary integral reduces to \( -\int_{\partial D} \psi \, \nabla\chi \cdot n_{ext} \geq 0\); hence $\int_D \left( \chi \Delta \psi - \psi \Delta \chi \right) dv \geq 0$. Moreover, \[ \int _D \left( \chi \Delta \psi - \psi \Delta \chi \right) dv \leq \int_ D \left( -\lambda \chi \psi +\lambda _1(D) \chi \psi\right) dv = (-\lambda + \lambda_1(D) ) \int_D \chi \psi dv .
\] That proves $\lambda_1(D) \geq \lambda$ since \(\chi \) and \(\psi\) are positive on \(D\). Taking into account every connected component $D$ of $V$, we get $\lambda_1(V) \geq \lambda$ as desired. \end{proof} \section{Positivity of the normal bundle in all directions at each point of the limit set}\label{s: positivity all directions} The goal of this part is to use the thin property to gain positivity for the normal bundle $N_\mathcal{F}$ in all directions at each point of the limit set $\mathcal L$. The proof is inspired by Brunella's article \cite{BrunellaToulouse}, see also \cite{Canales}. We denote by \(\mathcal L\) the limit set of \(\mathcal{F}\). Let \(\Lambda \subset \mathbf{D}\) be the image of \(\mathcal L \cap U\) by a local first integral \( t : U\rightarrow \mathbf{D}\); we call such a set \(\Lambda\) a transversal set of \(\mathcal L\). We will say that \(\mathcal L\) is thin if every transversal set \(\Lambda\) is thin. Recall that the thin property is a local property, so, by minimality, \(\mathcal L\) is thin if and only if one of its transversal sets is thin. \begin{theorem}(Improvement of Theorem \ref{c: positivity II}) \label{c: metric of positive curvature III} Assume that \(\mathcal L \) is thin. For every $p \in \text{sing}(\mathcal F)$, let $(x_p,y_p) : U_p \to \bf B$ be linearization coordinates provided by Theorem \ref{c: positivity II} and let \(M >0\). There exists a hermitian metric \(m\) on \(N_{\mathcal F}\) such that \begin{enumerate} \item the curvature of $m$ is positive on \(T_q S\) for every \(q \in \mathcal L \), \item the curvature of $m$ is bounded from below by \( M (i dx_p\wedge d\overline{x_p} + i dy_p \wedge d\overline{y_p} ) \) on \( U_p \) for every \(p\in \text{sing} (\mathcal F)\). \end{enumerate} \end{theorem} Let \( U_p(r) := \{ |x_p|^2+|y_p|^2< r^2 \} \) and $U(r) := \bigcup_p U_p (r)$. Let \( (V_j)_{j \in J}\) be a finite covering of $S \setminus U(1/\sqrt {16})$ by foliated charts such that $V := \bigcup_{j} V_j$ does not intersect $U(1/\sqrt{32})$. In particular, $\partial V \subset U(1/\sqrt 8)$. These special properties of $(V_j)_{j \in J}$ will be used in Section \ref{s: convexity}. Let \( \rho _j : S \rightarrow {\bf R^+}\) be smooth functions whose support is contained in \(V_j\) such that $\sum_j \rho_j$ does not vanish on $V$. We can choose \(\rho_j\) satisfying \begin{equation} \label{eq: estimates partition of unity} \vert { D^k \rho _j } \vert \leq C \rho _j^{1/2} \end{equation} for every $j \in J$ and $k=1,2$, where \(D^k\rho_j \) denotes the \(k\)-th derivative of $\rho_j$. Let \( (z_j, t_j) : V_j \rightarrow {\bf D}\times {\bf D}\) be foliated coordinates, and let \( m \) be the hermitian metric on \(N_{\mathcal F}\) constructed in Theorem \ref{c: positivity II}. The curvature of $m$ is positive in restriction to \(\mathcal P_{\mathcal F}\), hence we have \[ m = \exp (- \varphi_j) \ |dt_j| \textrm{ on } V_j ,\] where \(\varphi_j \) is a smooth function which is strictly subharmonic along the leaves. \begin{proof}[Proof of Theorem \ref{c: metric of positive curvature III}] For every \(j\in J\), let \(\Lambda_j\subset {\bf D}\) be the image of \( \mathcal L\cap V_j\) by the map \(t_j: V_j \rightarrow {\bf D}\).
Since the set \(\Lambda_j\) is thin, there exists by Proposition \ref{p: technical characterization} a sequence \((f_n^j)_n\) of smooth functions \(f_n^j :{\bf D} \rightarrow {\bf R} \) satisfying the following properties uniformly in restriction to \( \Lambda_j \) when \(n\) tends to infinity: \begin{itemize} \item[(i)] \(f_n^j\) converges to \(0\), \item[(ii)] \( \Delta_{t_j} f_n^j = (f_n^j)_{t_{j}\overline{t_{j}}} \) tends to \( +\infty\), \item[(iii)] \( \vert \nabla_{t_j} f_n^j \vert ^2 = o (\Delta_{t_j} f_n ^j )\). \end{itemize} Let us define \[ f_n := \sum_{j\in J} \rho_j \, f_n^j \circ t_j : S \to \mathbf{R}^+ \] and the hermitian metrics on \( N_{\mathcal F} \) \[ m_n := \exp (- f_n) m .\] Note that $m_n = m$ on $S \setminus V$ since the support of $f_n$ is included in $V$. In particular, by Theorem \ref{c: positivity II}, the curvature of $m_n$ is bounded from below by \( M (i dx_p\wedge d\overline{x_p} + i dy_p \wedge d\overline{y_p} ) \) on \( U_p \setminus V \) for every \(p\in \text{sing} (\mathcal F)\). It remains to prove that, for $n$ large enough, the curvature of $m_n$ is positive on \(T_q S\) for every \(q \in \mathcal L \cap V \). Let \(j_0 \in J \) and $J_0 := \{ j \in J , V_j \cap V_{j_0} \neq \emptyset \}$. We have on $V_{j_0}$: $$ m_n = \exp ( -\phi_n ^{j_0} ) |dt_{j_0}| , \textrm{ where } \phi_n ^{j_0}:= \varphi_{j_0}+f_n . $$ Let us introduce the following derivatives on $V_{j_0}$: $$ \alpha_n := (\phi_n^{j_0} )_{z_{j_0}\overline{z_{j_0}}} \ \ , \ \ \beta_n := (\phi_n^{j_0} )_{z_{j_0}\overline{t_{j_0}}} \ \ , \ \ \gamma_n := (\phi_n^{j_0} )_{t_{j_0}\overline{t_{j_0}}} .$$ We have to show that \( \alpha_n , \gamma_n \), and \( \alpha_n \gamma_n - \vert \beta_n \vert ^2 \) are positive on $\mathcal L$. In the remainder $f_n^j$ simply stands for $f_n^j \circ t_j$, and we denote $$ \alpha := (\varphi_{j_0})_{z_{j_0}\overline{z_{j_0}}} \ \ , \ \ \beta := (\varphi_{j_0})_{z_{j_0}\overline{t_{j_0}}} \ \ , \ \ \gamma := (\varphi _{j_0})_{t_{j_0}\overline{t_{j_0}}} . $$ The notation \( o_{\mathcal L} (1)\) refers to a function that tends to zero when the argument tends to a point of \( {\mathcal L} \). \vspace{0.2cm} \textit{Positivity of $\alpha_n$:} Since \( f_n^j\) only depends on \( t_{j_0}\), we have on $V_{j_0}$: \begin{equation}\label{eq: estimates1} \alpha_n = \alpha + (f_n)_{z_{j_0}\overline{z_{j_0}}} = \alpha + \sum _{j\in J_0}( \rho_j ) _{z_{j_0}\overline{z_{j_0}}} f_n^j = \alpha + o_{\mathcal L} (1) , \end{equation} where the last equality comes from property $(i)$. Since $\alpha$ is a positive function on $\mathcal L$, the function $\alpha_n$ is positive on $\mathcal L$ for $n$ large enough. \vspace{0.2cm} In order to show the positivity of $\gamma_n$ and $\alpha_n \gamma_n - \vert \beta_n \vert^2$ on $\mathcal L$, we introduce \[ \Delta_n^{j_0} : = \sum_{j \in J_0} \rho_j (f_n^j ) _{t_{j_0} \overline{t_{j_0}}} : V_{j_0} \to \mathbf{R} ,\] which tends to \(+\infty\) at each point of \(\mathcal L \cap V_{j_0}\) (recall that \(\sum_j \rho_j\) does not vanish on \(V\)).
By properties $(i), (ii), (iii)$ and Equation (\ref{eq: estimates partition of unity}), we get $$ \sum_{j\in J_0} \vert D^2 \rho_j \vert f_n^j \leq C \sum_{j\in J_0} \rho_j^{1/2} f_n^j \leq C \Big(\sum_{j\in J_0} \rho_j f_n^j\Big)^{1/2} \Big( \sum_{j\in J_0} f_n^j \Big)^{1/2} = o_{\mathcal L} ((\Delta_n^{j_0})^{1/2}) , $$ $$ \sum_{j\in J_0} \vert D^1 \rho_j \vert \vert \nabla_{t_j} f_n^j \vert \leq C (\# J_0) ^{1/2} \Big( \sum_{j\in J_0} \rho_j \vert \nabla_{t_j} f_n^j \vert ^2\Big)^{1/2} = o_{\mathcal L}((\Delta_n^{j_0})^{1/2}) .$$ \vspace{0.2cm} \textit{Positivity of $\gamma_n$:} the preceding estimates and the computation \[ \gamma_n = \gamma + \Delta_n ^{j_0} + \sum _{j\in J_0} \left( (\rho_j)_{t_{j_0}\overline{t_{j_0}}} f_n^j + 2\Re \left((\rho_j)_{t_{j_0}} (f_n^j )_{\overline{t_{j_0}}} \right) \right) \] imply \begin{equation} \label{eq: estimates4} \gamma_n = \gamma + \Delta_n ^{j_0} + o_{\mathcal L} \left( ( \Delta_n^{j_0} ) ^{1/2} \right) . \end{equation} Since $\Delta_n ^{j_0}$ tends to $+ \infty$ on $\mathcal L$, $\gamma_n$ is positive on $\mathcal L$ for $n$ large enough. \vspace{0.2cm} \textit{Positivity of $\alpha_n \gamma_n - \vert \beta_n \vert^2$:} here we have \begin{equation}\label{eq: estimates3} \beta_n = \beta + \sum _{j\in J_0} \left( (\rho_j ) _{z_{j_0}\overline{t_{j_0}}} f_n^j + (\rho_j )_{z_{j_0}} (f_n^j)_{\overline{t_{j_0}}} \right) = \beta + o_{\mathcal L} \left( (\Delta_n^{j_0})^{1/2} \right) . \end{equation} By using \eqref{eq: estimates1}, \eqref{eq: estimates3} and \eqref{eq: estimates4}, we obtain that $\alpha_n \gamma_n - \vert \beta_n \vert^2 = \alpha \Delta_n^{j_0} + o_{\mathcal L} ( \Delta_n^{j_0} )$, which tends to $+ \infty$ on $\mathcal L$ since $\alpha$ is positive on $\mathcal L$. That completes the proof of Theorem \ref{c: metric of positive curvature III}.\end{proof} \section{Convexity of the complement of the limit set: proof of Theorem \ref{t: convexity II}} \label{s: convexity} We prove in this section that $S \setminus \mathcal L$ is strongly pseudoconvex, hence it is a modification of a Stein manifold by Grauert's Theorem \cite[Theorem 2]{Grauert}. Namely, we have to prove that there exists a proper and strictly plurisubharmonic function $h : \mathcal V \setminus \mathcal L \to \mathbf{R}$, where $\mathcal V$ is a neighborhood of $\mathcal L$ in $S$. We shall follow Brunella's construction \cite[Section 3.1]{BrunellaToulouse} in the non-singular setting, and perform an additional analysis near the singularities of $\mathcal{F}$. \subsection{Introduction of $m$-functions} \begin{definition}\label{def: mfunction} Given a hermitian metric \(m \) on \(N_{\mathcal F}\), and an open set \( Y \subset S^* = S\setminus \text{sing} (\mathcal{F})\), a function \( f: Y \setminus \mathcal L\rightarrow {\bf R} \) is called an \(m\)-function if at any point \( p \in Y \cap \mathcal L\), there exists a local submersion \( t : W_p \rightarrow {\bf C} \) defining the foliation \(\mathcal F\) on a neighborhood \( W_p \) of \(p\), such that \begin{equation} \label{eq: log distance} f = \varphi - \log d_{\bf C}( t , \Lambda) + o_{\mathcal L \cap Y}(1) \textrm{ on } W_p \cap Y \setminus \mathcal L , \end{equation} where \(\Lambda = t (\mathcal L)\), \(d_{\bf C}\) is the euclidean distance on ${\bf C}$ and \(m= \exp (-\varphi ) |dt|\). \end{definition} Note that \( - \log d_{\bf C} (t , \Lambda) \) is plurisubharmonic on $W_p \setminus \mathcal L$ since it is equal to \(\sup_{\xi \in \Lambda} - \log {| t -\xi |} \). In particular, $dd^c f \geq dd^c \varphi$.
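For instance (a model case, stated here only as an illustration), if \(\mathcal L \cap W_p\) is the single leaf cut out by \(t = t_0\), so that \(\Lambda = \{t_0\}\), then \eqref{eq: log distance} reads \[ f = \varphi - \log |t - t_0| + o_{\mathcal L \cap Y}(1) \quad \textrm{on } W_p \cap Y \setminus \mathcal L , \] so that, up to the weight \(\varphi\) of the metric, an \(m\)-function is the classical logarithmic potential with pole along the leaf.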
\begin{lemma} (\cite[Lemma 3.2]{BrunellaToulouse}) \label{lemme: BruTou} Two \(m\)-functions \(f: Y \setminus \mathcal L \rightarrow {\bf R}\) and \(f ' : Y' \setminus \mathcal L \rightarrow {\bf R} \) satisfy \[ f - f ' = o _{\mathcal L \cap Y\cap Y'} (1) .\] \end{lemma} \begin{proof} Let \(p\in Y\cap Y' \cap \mathcal L\). By definition, there exists a neighborhood \(W_p\) of \(p\) and two submersions \( t, t' : W_p \rightarrow {\bf C} \) defining the foliation \(\mathcal F\) such that if \( \Lambda = t( \mathcal L \cap W_p) , \ \Lambda' = t' (\mathcal L \cap W_p)\) and \( m = \exp (-\varphi ) |dt| = \exp (-\varphi ') |dt '|\), then \[ f'- f = \log \left( \left \lvert \frac{dt' }{dt}\right\rvert \cdot \frac{d_{\bf C} ( t , \Lambda) }{d_{\bf C} ( t' , \Lambda') } \right) + o_{\mathcal L \cap Y \cap Y'}(1) . \] However, \[ d_{\bf C} ( t' , \Lambda' ) = \left \lvert \frac{dt' }{dt} \right \rvert \cdot d_{\bf C} ( t , \Lambda) \cdot (1+o_{\mathcal L \cap Y\cap Y'}(1)) , \] so the claim follows. \end{proof} \begin{definition}\label{def: constant} Let \(p\in \text{sing} (\mathcal{F})\) and \(U_p\) be linearization coordinates near $p$ provided by Theorem \ref{c: metric of positive curvature III}. The foliation $\mathcal{F}$ is thus defined in these coordinates by $\omega = axdy - by dx$. Let \( E\) be the elliptic curve defined as the quotient of the restriction of \(\mathcal{F}\) to the complement of the two separatrices $\{ xy =0 \}$ in \(U_p \). Let $I$ be the quotient map $$ I : (x,y) \in U_p \setminus \{ xy = 0\} \mapsto \log {y^a \over x^b} \in E. $$ We fix a non zero holomorphic \(1\)-form \(\eta\) on \(E\) and denote by \(\Lambda\) the \(I\)-image of \(\mathcal L\cap U_p \setminus \{ xy=0 \} \). \end{definition} \begin{lemma} \label{l: third step} Let \(m\) be a smooth hermitian metric on the restriction of \( N_{\mathcal{F}}\) to \(U_p\), that we write in the form \begin{equation}\label{eq: expression m} m= \exp (-\xi ) |I^* \eta| ,\end{equation} where \( \xi\) has some logarithmic singularities on the separatrices. Let us define \[F_p := \xi - \log d_E(I, \Lambda) .\] \begin{enumerate} \item $F_p$ is an \(m\)-function near any point of \(\mathcal L\cap U_p \setminus \{ xy=0 \} \). \item Near a point of a separatrix, $F_p$ is not an \(m\)-function, but differs from a genuine \(m\)-function by a bounded function. More precisely, there exists a constant $C_p$ such that for every compact set \( K\subset U_p \setminus \{p\}\), every \(m\)-function \(f: K\rightarrow {\bf R} \) and every \( \delta >0\), there exists a neighborhood \(\mathcal V\) of \( K\cap \mathcal L\) such that \( \norm{F_p -f}_{\infty, \mathcal V}\leq C_p+ \delta \). \end{enumerate} \end{lemma} \begin{proof} In a neighborhood of a point of \(U_p \cap \mathcal L\) not belonging to the separatrices, a transverse coordinate for the foliation \(\mathcal F\) is locally given by the first integral \(I\). Hence $F_p$ is an $m$-function near those points. Denote by \( (S_k)_{k=1,2}\) the germs of separatrices passing through \(p\). On each \(S_k\), the closed meromorphic form \( I^* \eta \) has a pole and can be written locally \( \alpha_k \frac{dt_k}{t_k}\), where \(t_k \) is a local submersion defining $\mathcal{F}$ such that \( S_k = \{ t_k=0 \} \). Let us be more explicit: for $(x_0 , 0) \in U_p$ with $x_0 \neq 0$, a local submersion defining $\mathcal{F}$ near $(x_0,0)$ is given by $t = y / x^{b/a}$. Then $I^* \eta = d \log {y^a \over x^b} = {1 \over xy}(ax dy - by dx)$ is equal to $a \, {dt \over t}$ outside $\{ t = 0 \}$, since ${dt \over t} = {dy \over y} - {b \over a}{dx \over x} = {1 \over a} \, d \log {y^a \over x^b}$, as claimed.
Note that the local submersions $t_k$ cannot be globalized due to the effect of monodromy, but they are well-defined up to multiplication by a constant. We denote by \( \widetilde{\Lambda}_k \) the \(t_k\)-image of \( \mathcal L\). For $k = 1,2$, let us consider the function locally defined outside \(S_k\) by \[ \widetilde{ u_k } := \log \left \lvert \frac{\alpha_k}{t_k} \right\rvert + \log \frac{d_{\bf C} (t_k, \widetilde{\Lambda}_k)}{d_E( I , \Lambda) } . \] It is invariant by the holonomy map $h : t \mapsto e^{2i\pi {b \over a}}\, t$ produced by turning around the separatrix $\{ y = 0 \}$; this is due to the fact that $\widetilde{\Lambda}_k$ is $h$-invariant. Hence there exists a continuous function \( u _k : E \rightarrow {\bf R} \) such that \( \widetilde{u_k} = u_k \circ I\). Let \(C_p := \max (C_1,C_2)\), where \( C_k := \max _E |u_k|\). Now if we write $m$ as \( \exp (- \psi _k ) |dt_k |\), then $$ \tilde f := \psi_k - \log d_{\bf C} (t_k, \widetilde{\Lambda}_k ) $$ defines an \(m\)-function near every point of $S_k$. But \eqref{eq: expression m} and \( I^* \eta = \alpha_k \frac{dt_k}{t_k}\) yield \( \xi - \psi_k = \log\left( \frac{ |\alpha _k |} {|t_k|}\right) \). Hence the function $$F_p = \xi - \log d_E(I, \Lambda)$$ satisfies \(F_p - \tilde f = \widetilde{u_k}\), which is bounded by $C_p$. One gets the stated property for an arbitrary $m$-function $f$ by using Lemma \ref{lemme: BruTou}. \end{proof} \subsection{Proof of Theorem \ref{t: convexity II}} Let \(M\) be a constant larger than \(10 C_p\) (see Lemma \ref{l: third step} for the definition of $C_p$) for each \(p\in \text{sing} (\mathcal{F})\) and let \(m\) be a hermitian metric on \(N_\mathcal{F}\) provided by Theorem \ref{c: metric of positive curvature III} for that constant $M$. The curvature of $m$ is thus positive on \(T_q S\) for every \(q \in \mathcal L \), and is bounded from below by \( M (i dx_p\wedge d\overline{x_p} + i dy_p \wedge d\overline{y_p} ) \) on \( U_p \) for every singular point $p$. We use the notations of Section \ref{s: positivity all directions}. On every $V_j$, $m = e^{- \varphi_j} \vert dt_j \vert$ where $\varphi_j$ is strictly plurisubharmonic. Let us consider an \(m\)-function $$ f_j = \varphi_j - \log d_{\bf C}( t_j , \Lambda_j ) + o_{\mathcal L \cap V_j}(1) : V_j\setminus \mathcal L \to \mathbf{R} $$ and set \[ h_j := f_j + \varepsilon \rho_j . \] We choose \(\varepsilon\) small enough such that \( h_j\) remains strictly psh (this is possible since $dd^c \rho_j$ is bounded and $dd^c f_j \geq dd^c \varphi_j$) and such that $\varepsilon \rho_j$ is smaller than $C_p /10 $ for every \(p\in \text{sing} (\mathcal{F})\). Now for every \(p\in \text{sing} (\mathcal{F})\), consider \[ h_p := F_p + 2 C_p - 4 C_p (|x_p|^2+ |y_p|^2) : U_p \setminus \mathcal L\rightarrow {\bf R} , \] where \( F_p \) is provided by Lemma \ref{l: third step}. It is strictly plurisubharmonic by the choice of \(M\). Let us define \[ h : \mathcal V \setminus \mathcal L \rightarrow {\bf R} \ \ , \ \ h(q) := \sup _{ \nu \in J_q \cup P_q } h_\nu (q) , \] where $J_q := \{ \nu \in J, q \in V_\nu \}$ and $P_q := \{ \nu \in \text{sing}(\mathcal{F}) , q\in U_\nu \}$. Note that $P_q$ has at most one element. Since $h$ is proper on $ \mathcal V \setminus \mathcal L$, the next proposition shows that $S \setminus \mathcal L$ is strongly pseudoconvex, which completes the proof of Theorem \ref{t: convexity II}.
We shall follow the arguments of \cite[Lemma 3.3]{BrunellaToulouse}, but we have to adapt them to take into account the singular set of $\mathcal{F}$. This is where the delicate construction of $h_p$ (and its comparison with $h_j$ provided by Lemma \ref{lem: est}) enters the picture. \begin{proposition} $h$ is continuous and strictly psh near every $q \in \mathcal V \setminus \mathcal L$. \end{proposition} \begin{proof} There are several cases, depending on the position of $q$, described below. For each of them, the reader will verify that $h$ can be rewritten as the supremum of a family of continuous and strictly plurisubharmonic functions all defined on some neighborhood $O_q$ of $q$. Recall that \( U_p(r) = \{ |x_p|^2+|y_p|^2< r^2 \} \), so that $U_p = U_p(1)$. Let us begin with $q \in S \setminus U(1/\sqrt{16})$, for which there are three cases: a) Some neighborhood $O_q$ of $q$ satisfies $O_q \subset W$ or $O_q \cap W = \emptyset$ for every $W \in \{ V_j , j \in J \} \cup \{ U_p , p \in \text{sing}(\mathcal{F}) \}$. b) The set $J_q^\partial := \{ j \in J , q \in \partial V_j\}$ is not empty. Recall that the support of the nonnegative smooth function \( \rho _j \) is contained in \(V_j\) and that $\sum_j \rho_j$ does not vanish on $\bigcup_j V_j$. Let us fix $j_0 \in J_q$ such that $\rho_{j_0}(q) > 0$ and let $j \in J_q^\partial$. Since $\rho_j(q) = 0$ and $f_j - f_{j_0} = o_{\mathcal L \cap V_j \cap V_{j_0}}(1)$ by Lemma \ref{lemme: BruTou}, we get $$ f_j + \varepsilon \rho_j < f_{j_0} + \varepsilon \rho_{j_0} \ (\textrm{hence } h_j < h_{j_0}) \ \textrm{ on some } O_q \cap V_j \cap V_{j_0} .$$ c) The set $\{ p \in \text{sing}(\mathcal{F}) , q \in \partial U_p \}$ is not empty, let $p$ denote its single element. The first item of Lemma \ref{lem: est} below implies for every $j_0 \in J_q$: $$h_p < h_{j_0} - C_p/2 \ \textrm{ on some } O_q \cap U_p \cap V_{j_0} . $$ To finish it remains to consider $q \in U_p(1/\sqrt{16})$ for some $p \in \text{sing} (\mathcal{F})$. For every $j \in J_q$, the second item of Lemma \ref{lem: est} implies $$ h_j < h_p - 3 C_p/10 \textrm{ on some } O_q \cap U_p(1/\sqrt 8) \cap V_j , $$ hence $h$ is simply equal to $h_p$ on $U_p(1/\sqrt{16})$, and we are done. \end{proof} \begin{lemma} \label{lem: est} There exists a neighborhood \( \mathcal V \) of \(\mathcal L\) such that for every $p \in \text{sing} (\mathcal{F})$, we have \begin{enumerate} \item if $V_j \cap \partial U_p \neq \emptyset$, $h_p \leq h_j - 9 C_p / 10$ on $\mathcal V \cap V_j \cap \partial U_p$, \item if $V_j \cap U_p(1/\sqrt 8) \neq \emptyset$, $h_j \leq h_p - 3 C_p / 10$ on $\mathcal V \cap V_j \cap U_p(1/\sqrt 8)$. \end{enumerate} \end{lemma} \begin{proof} By applying Lemma \ref{l: third step} with $f = f_j$ and $\delta = C_p / 10$, we obtain $F_p \leq f_j + 11 C_p / 10$, hence \begin{equation*}\label{hphq1} h_p = F_p - 2 C_p \leq f_j - 9 C_p / 10 \textrm{ on } \mathcal V \cap V_j \cap \partial U_p ; \end{equation*} the first point of Lemma \ref{lem: est} then follows from $f_j \leq h_j$ on $V_j$. For the second one, we first use Lemma \ref{l: third step} as before and then the upper bound $F_p \leq h_p - 3C_p /2$ on $U_p(1/\sqrt 8)$ to get \begin{equation*}\label{hphq2} h_j = f_j + \varepsilon \rho_j \leq (F_p + 11 C_p / 10) + C_p / 10 \leq h_p - 3 C_p / 10 \textrm{ on } \mathcal V \cap V_j \cap U_p(1/\sqrt 8) , \end{equation*} which completes the proof. \end{proof} \section{The Julia set of a polynomial mapping is thin} The present section is devoted to the proof of Theorem~\ref{t: Julia polynomial}.
Actually, it is a particular case of the following statement. \begin{proposition}\label{p:C-K} Let $K\subset \mathbf{C}$ be a compact set that coincides with the boundary of the infinite connected component of its complement, \[ K=\partial ((\mathbf{C}\setminus K)_{\infty}). \] Then $K$ is thin. \end{proposition} Indeed, let $P$ be a polynomial mapping and $J$ be its Julia set. Then, the infinite connected component $(\mathbf{C} \setminus J)_{\infty}$ is the basin of attraction of the point at infinity, and $J$ is its boundary, see \cite[Lemma 17.1]{M}. The remainder of this section is devoted to the proof of Proposition~\ref{p:C-K}. We start with a geometric assertion. Consider the continuous path $\gamma_0:[1,2]\to \mathbf{C}$ defined as the piecewise-affine path joining the points (see Fig.~\ref{f:gamma-0}) \[ \gamma_0(1)=(-2+i), \quad \gamma_0(1.25)=(1+i), \quad \gamma_0(1.5)=(1-i), \] \[ \gamma_0(1.75)=(-1-i),\quad \gamma_0(2)=(-1+2i). \] \begin{figure} \includegraphics[height=5cm]{path-0.pdf} \caption{Path $\gamma_0$ and its $\frac{1}{2}$-neighbourhood}\label{f:gamma-0} \end{figure} \begin{lemma}\label{l:1} Any continuous path $\gamma : [1,2]\to \mathbf{C}$ such that $\| \gamma - \gamma_0\|_{C([1,2])}< 1/2$ separates $0$ from $\infty$, that is, $0$ belongs to a bounded connected component of the complement $\mathbf{C}\setminus \gamma([1,2])$. \end{lemma} \begin{proof} See Fig.~\ref{f:gamma-0}; note that the pieces $\gamma |_{[1,1.25]}$ and $\gamma |_{[1.75,2]}$ must intersect inside the dotted square, so that $\gamma([1,2])$ contains a closed loop around $0$. \end{proof} Now recall that any nonempty open subset in the Banach space $C([1,2])$ has positive Wiener measure, see e.g. \cite[Exercise~1.8]{MP}. For the remainder of the proof, we fix $p_0>0$ such that $\|\gamma-\gamma_0\|_{C([1,2])}<\frac{1}{3}$ holds with probability at least $p_0$ for Brownian paths $\gamma : \mathbf{R}^+ \to \mathbf{C}$ starting at $0$. The next proposition asserts that conditioning a Brownian path to reach a given point at some large moment of time affects its renormalized behaviour near the starting moment less and less. \begin{proposition}\label{p:gamma-close} Let $b \in \mathbf{C}$ and $T > 0$. Then, conditionally on $\gamma(T)=b$ (or on any conditioning of \(\gamma\) on $[T,+\infty)$), there exists $$T' = F(b,T) < T/2$$ such that the path $$\gamma'(t):= \frac{1}{\sqrt{T'}} \gamma(T' t)$$ (which can be seen as a renormalized restriction of $\gamma$ to $[T',2T']$) satisfies \begin{equation}\label{eq:close} \|\gamma'-\gamma_0\|_{C([1,2])}<\frac{1}{2} \end{equation} with probability at least $p_0 / 2$. \end{proposition} \begin{proof} The law of $\gamma|_{[0,T]}$ conditionally on $\gamma(T)=b$ is the same as the law of \[ B_t+ \frac{t}{T} (b-B_T), \quad t\in [0,T], \] where $B_t$ is a Brownian motion. This implies that $\gamma'|_{[1,2]}$ is distributed as \begin{equation}\label{eq:T'} \frac{1}{\sqrt{T'}} B_{T't} + \frac{\sqrt{T'} t}{T} (b-B_T), \quad t\in [1,2]. \end{equation} Let $C > 0$ be such that $| b-B_T | <C \cdot T$ holds with probability at least $1- p_0 / 2$. Now, fixing $T'$ small enough such that $\sqrt{T'} \cdot 2C < 1/6$, we ensure that with probability at least $1-p_0 / 2$, the second summand in~\eqref{eq:T'} does not exceed $1/6$ for any $t\in[1,2]$. Meanwhile, by Brownian scaling, the first summand is again a Brownian motion, so it is $1/3$-close to $\gamma_0$ on $[1,2]$ with probability at least~$p_0$ by the choice of $p_0$.
Hence, with probability at least $p_0- p_0 / 2=p_0 / 2$ one has $$ \|\gamma'-\gamma_0\|_{C([1,2])} \le \|\gamma_0 -\frac{1}{\sqrt{T'}} B_{T't} \|_{C([1,2])} + \| \frac{\sqrt{T'} t}{T} (b-B_T) \|_{C([1,2])} , $$ which is smaller than $1/3 + 1/6 = 1/2$. \end{proof} The event described in Proposition \ref{p:gamma-close} implies that the path has left $K$: \begin{lemma}\label{l:leave} Let $T'$ be given, and assume that the path \[ \gamma'(t):= \frac{1}{\sqrt{T'}} \gamma(T' t) \] satisfies~\eqref{eq:close}. Then for any $x_0\in K$ the path $x_0+ \gamma(t)$, $t\in [T',2T']$, cannot be contained in~$K$. \end{lemma} \begin{proof} If one had $x_0+ \gamma([T',2T']) \subset K$, then due to Lemma~\ref{l:1} there would be a neighborhood of~$x_0$ consisting of points that one cannot connect to infinity without crossing $K$. And this would contradict the assumption that arbitrarily close to $x_0$ there are points of the unbounded connected component~$(\mathbf{C}\setminus K)_{\infty}$ of the complement. \end{proof} We conclude the proof of Proposition~\ref{p:C-K} by iteratively looking at the Brownian path closer and closer to $t=0$. Namely, let us show that for any $x_0\in K$ and for arbitrarily small $\delta>0$, almost surely the path $x_0+\gamma(t)$, $t\in[0,\delta]$, is not contained in~$K$ (where $\gamma(t)$ is the standard Brownian path starting at $0$). Indeed, let us construct a sequence of random times, defined by \[ T_0:=\delta, \quad T_n=F(\gamma(T_{n-1}),T_{n-1}), \quad n=1,2,\dots. \] Observe that $T_{n+1} < {1 \over 2}T_n$. Let $\gamma_n(t):= \frac{1}{\sqrt{T_n}} \gamma(T_n t)$ for $t\in [1,2]$, and let \[ A_n := \{ \text{\eqref{eq:close} holds for } \gamma'=\gamma_n\}. \] Consider also the $\sigma$-algebras $\mathcal{F}_n$, generated by $T_{n}$ and $\gamma|_{[T_{n},\infty)}$. Then, on one hand, the event $A_n$ is $\mathcal{F}_n$-measurable. On the other hand, due to the choice of the function~$F$ and to Proposition~\ref{p:gamma-close}, conditionally on any event in $\mathcal{F}_n$ the probability of $A_{n+1}$ is at least $p_0/2$. Hence, for the events \[ B_n:=\{\forall i=1,2,\dots, n \quad A_i \text{ does not hold}\} \] one has \[ {\mathbb P}( B_{n+1} ) \le (1-p_0/2) \cdot {\mathbb P}( B_{n} ), \] and thus \[ {\mathbb P}(B_n) \le (1-p_0/2)^n. \] Thus, almost surely, at least one of the events $A_n$ takes place. By Lemma~\ref{l:leave}, this implies that $x_0+\gamma([T_n,2T_n])$ is not contained in $K$, and hence the path $x_0+\gamma(t)$ leaves~$K$ no later than $2T_n<T_0=\delta$. That completes the proof of Proposition~\ref{p:C-K}.
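Let us finally note that the thinness established by Proposition~\ref{p:C-K} can be observed experimentally. The following minimal Monte Carlo sketch (an illustration of ours, not part of the proof; all numerical choices are arbitrary) takes for $K$ the unit circle, which satisfies $K=\partial((\mathbf{C}\setminus K)_{\infty})$, and estimates the probability that a Brownian path started at a point of $K$ stays in the neighborhood $K^\varepsilon$ up to a fixed time $t$; in accordance with the quantitative thin property \eqref{eq: quantitative thin property}, this probability decays quickly as $\varepsilon$ decreases.
\begin{verbatim}
# Monte Carlo sketch (illustration only, not part of the proof).
# K = unit circle, a compact set with K = boundary of the unbounded
# component of its complement, hence thin by Proposition p:C-K.
# We estimate P^x(T_{K^eps} > t) for a point x of K and decreasing eps;
# the time t, step count and sample size are arbitrary choices.
import numpy as np

rng = np.random.default_rng(1)
x0 = 1.0 + 0.0j                       # a point of K
t, n_steps, n_paths = 0.01, 400, 10000
dt = t / n_steps

for eps in (0.3, 0.1, 0.05):
    steps = rng.standard_normal((n_paths, n_steps)) \
        + 1j * rng.standard_normal((n_paths, n_steps))
    paths = x0 + np.cumsum(np.sqrt(dt) * steps, axis=1)
    # did the discretized path stay in the eps-neighborhood of K up to t?
    stayed = np.all(np.abs(np.abs(paths) - 1.0) < eps, axis=1)
    print(f"eps={eps}: estimated P(T > {t}) = {stayed.mean():.4f}")
\end{verbatim}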
\section{Introduction}\label{intro} Astronomical masers form under specific physical conditions and are useful to trace different environments in the interstellar medium (ISM). Interstellar masers are often found in environments such as star forming regions (SFRs) and supernova remnants (SNRs). For example, a detection of a collisionally pumped 1720 MHz hydroxyl (OH) maser has traditionally been used as a tracer of shocked regions produced by the interaction of a SNR with a neighboring molecular cloud (MC) \citep[e.g.,][]{claussen1997, frail1998, yusef2003}. Other examples are the radiatively pumped Class II methanol (CH$_3$OH) maser lines, which are typically found near young massive stars. In addition, collisionally pumped Class I CH$_3$OH masers are typically found associated with SNRs and outflows in SFRs \citep[e.g.,][]{beuther2002, voronkov2006, cyganowski2009, fontani2010, sjouw2010, pihl2014, sanna2015}. Similar to the 1720 MHz OH masers, the Class I 36 and 44 GHz CH$_3$OH maser transitions have also been detected near shocked regions where SNRs are known to be interacting with MCs \citep[e.g.,][]{sjouw2010, pihl2011, pihl2014}. Recent modeling of Class I CH$_3$OH masers in a SNR environment shows that optimal masing conditions for the 44 GHz transition are temperatures $\ge50$ K and densities between $10^4-10^6$ cm$^{-3}$. Similar temperatures but slightly higher densities in the range of $10^5-10^7$ cm$^{-3}$ are the optimal masing conditions for the 36 GHz transition \citep{mcewen2014, nesterenok2016}. Because of the large overlap in conditions, these transitions can be found co-spatially, but brighter 36 GHz CH$_3$OH masers are expected to trace higher density regions (e.g., near the actual shock front in SNR/MC interaction regions). This has been supported by observations of bright 36 GHz CH$_3$OH masers lining a known shock front in Sgr A East \citep{sjouw2010,pihl2011}. The Sgr A East SNR is located within the inner 12 pc of our complex Galactic Center (GC) and is known to be interacting with two different giant MCs, M$-$0.02$-$0.07 (a.k.a.\,the 50 km\,s$^{-1}$ cloud) and M$-$0.13$-$0.08 (a.k.a.\,the 20 km\,s$^{-1}$ cloud) \citep[e.g.,][]{mezger1989, coilho2000, amo2011}. The details of gas transport in and around the GC are not well understood, but different molecular line observations help uncover interaction regions between different environments. For example, the 50 and 20 km\,s$^{-1}$ clouds seem to be connected through a `Molecular Ridge', as suggested by an extensive NH$_3$ study from \citet{coilho2000}, as well as by single-dish 36 GHz CH$_3$OH observations \citep{szcz1991}. In addition, observations by \citet{mcgary2001} suggest possible connections between the MCs and the Circumnuclear Disk (CND) via streamers. From NH$_3$ observations by \citet{coilho2000} it was deduced that Sgr A East is pushing the 50 km\,s$^{-1}$ MC eastward and away from us along the line of sight, while simultaneously expanding into the southern 20 km\,s$^{-1}$ MC. The southern region of Sgr A East is also interacting with the SNR G359$-$0.09 \citep{coilho2000,sjouw2008}. These interactions between Sgr A East and its surrounding environment produce collisionally compressed regions of gas, some of which are found to be associated with Class I 36 and 44 GHz CH$_3$OH, as well as collisionally pumped 1720 MHz OH maser emission \citep[e.g.,][]{yusef1996, sjouw2010, pihl2011}.
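To make the overlap between the two Class I windows concrete, the following toy Python sketch (our own schematic restatement of the parameter ranges quoted above, not the actual excitation calculations of \citet{mcewen2014} or \citet{nesterenok2016}) reports which of the two transitions falls inside its quoted optimal window for a given temperature and density:
\begin{verbatim}
# Toy restatement of the quoted optimal masing windows (illustrative
# only; the real pumping calculations are in McEwen et al. 2014 and
# Nesterenok 2016).  T in K, n in cm^-3.
def masing_candidates(T, n):
    lines = []
    if T >= 50 and 1e4 <= n <= 1e6:
        lines.append("44 GHz CH3OH")
    if T >= 50 and 1e5 <= n <= 1e7:
        lines.append("36 GHz CH3OH")
    return lines

# Both transitions overlap at intermediate densities, while only the
# 36 GHz line survives at the highest densities:
print(masing_candidates(80, 5e5))  # ['44 GHz CH3OH', '36 GHz CH3OH']
print(masing_candidates(80, 5e6))  # ['36 GHz CH3OH']
\end{verbatim}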
It is often speculated that supersonic motions from expanding SNRs may trigger star formation (SF) in neighboring MCs \citep[e.g.,][]{reynoso2001, cich2014}. Details of the triggering process and what conditions are necessary have never been clearly outlined or confirmed, partly due to the complexity of the regions in the inner Galaxy and the present lack of star forming regions detected in the GC. Different stages of SF can be traced with detections of various maser species; for example, the 6.7 GHz and 44 GHz CH$_3$OH maser lines have been found closely associated with HII regions, outflows, and H$_2$O masers (typical tracers of SF) (e.g., \citealt{kurtz2004,moscadelli2007,sanna2010}). It is not clear whether Class I CH$_3$OH masers trace a specific evolutionary stage of SF. Therefore, the detection of maser lines near these regions (such as bright Class I CH$_3$OH or 22 GHz H$_2$O lines) may unveil new sites of SF activity in the GC, as well as the conditions necessary for it. In addition, the proper understanding of the physical conditions in these regions may also be important for cosmic ray modeling \citep[e.g.,][]{drury1994,abdo2010,cristofari2013}. By combining maser observations of SNRs and their surrounding environments with modeling of the conditions necessary for the formation of CH$_3$OH masers in SNRs, the physical conditions of the gas where CH$_3$OH is detected can be constrained \citep{mcewen2014}. In this context, by studying the distribution of different maser transitions near Sgr A East, we aim to investigate the presence of distinct gas motions in the GC, as well as possibly uncover new sites of star formation. In this study, we report on an extensive 44 GHz CH$_3$OH maser emission survey of Sgr A East and its surrounding environment taken with the Very Large Array (VLA). A 36 GHz CH$_3$OH maser survey towards the same region, along with a full analysis of the combined 36 and 44 GHz maser detections, will be reported in a future publication. \section{44~GH\MakeLowercase{z} VLA Observations and Calibration}\label{calib} We report on the results from Q-band VLA observations (project code S3115) of the SNR Sgr A East and its surrounding environment. The Q-band B configuration observations were taken on April 20 and 23, 2011 to observe the $J=7_0\rightarrow6_1A^+$ rotational transition of CH$_3$OH at the rest frequency of 44.069 GHz. Figure\,1 displays the 25 pointing positions overlaid on a 1720 MHz continuum image of the Sgr A region. The VLA primary beam is $1.02'$ at 44 GHz, with a typical synthesized beam size of $0.38''\times0.19''$. The 25 pointings covered a region of roughly $8'\times6'$. Although the area is not Nyquist sampled, regions in between different pointings were also searched for potential masers by imaging beyond the primary beam. Such candidate sources were considered real only if detected in more than one pointing. The 44 GHz observations were separated into 256 frequency channels covering 16 MHz of bandwidth. We sampled a velocity range between $-24$ and 84 km\,s$^{-1}$ with a resolution of about $0.4$ km\,s$^{-1}$. Sgr A*, located in pointing A00, was used for phase calibration and 3C286 was used as a flux and bandpass calibrator. The total on-source time for each pointing, including both days of observation, was about 8 minutes. Each pointing was individually imaged with 2048$\times$2048 pixels of $0.036''$ for the central 250 channels ($-23.2$ to 82.7 km\,s$^{-1}$). Typical rms noise values were around 2.9 mJy\,beam$^{-1}$ per channel.
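As a quick consistency check of the quoted spectral setup (a sketch of standard Doppler arithmetic, not taken from the observing scripts), 256 channels over 16 MHz at 44.069 GHz indeed give $\sim$0.4 km\,s$^{-1}$ channels and $\sim$109 km\,s$^{-1}$ of total velocity coverage:
\begin{verbatim}
# Consistency check of the spectral setup quoted above (illustrative).
C_KM_S = 2.99792458e5       # speed of light [km/s]
NU0_HZ = 44.069e9           # CH3OH 7(0)-6(1)A+ rest frequency [Hz]
BW_HZ, NCHAN = 16e6, 256

chan_hz = BW_HZ / NCHAN                    # 62.5 kHz per channel
chan_kms = C_KM_S * chan_hz / NU0_HZ       # ~0.43 km/s, as quoted
span_kms = C_KM_S * BW_HZ / NU0_HZ         # ~108.8 km/s, i.e. -24 to 84
print(f"channel: {chan_kms:.2f} km/s, coverage: {span_kms:.1f} km/s")
\end{verbatim}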
The data were reduced, calibrated, and imaged using standard procedures in NRAO's Astronomical Image Processing System (AIPS) pertaining to spectral line data. Fields with bright maser sources ($>1$ Jy\,beam$^{-1}$) were self-calibrated in order to improve the dynamic range of the final maps, and to minimize the number of false detections due to side-lobes. The peak flux densities were also corrected for primary beam attenuation using the AIPS task PBCOR. \begin{figure} \begin{center} \includegraphics[scale=.52]{f1.pdf} \caption{44 GHz VLA observing pointing positions towards the Sgr A region. The ring-like structure pointed at by the arrow outlines the radio continuum emission of the Sgr A East SNR.} \label{f1} \end{center} \end{figure} \section{Results}\label{results} \subsection{Identification Method} Sgr A East is located in a complex and chemically rich environment, and many Class I CH$_3$OH masers have been previously detected towards various regions near this SNR \citep{yusef2008, sjouw2010, pihl2011, yusef2013}. In order to search each pointing efficiently, an automated search method using a variety of AIPS tasks was developed to identify maser candidates with flux densities exceeding 10 times the rms noise. These candidates were then sorted according to the confidence in the detection, which depended on the size of the emission region and its signal-to-noise ratio. The highest ranked candidates were then manually inspected to determine whether each was actual emission. Spectral profiles (flux density versus velocity) were produced for the brightest pixel in each region (Fig.\,7). \subsection{Maser Identification} A total of 318 44 GHz CH$_3$OH emission regions were identified with 100\% confidence in the 25 pointings, all exceeding the 10$\sigma$ rms noise limit. The spectral parameters of each emission region are presented in Table 1. The coordinates of each source can be found in Columns 3 and 4 (with an estimated positional accuracy of $\sim0.5''$), which correspond to the peak brightness, I$_{peak}$, of each emission region (Column 5). Figure\,2 shows a histogram of the distribution of the peak flux density values of the maser emission, the majority of which are $\lesssim0.8$ Jy\,beam$^{-1}$. The emission was detected across almost the entire region observed, with some regions of high concentration. A large abundance of maser emission was detected toward the NE and to the SW, as well as a small amount encompassing the SNR. The maser emission extends to the east about $2'$ (4.6 pc) from the NE boundary of the SNR shell and about $5' 15''$ (12.1 pc) to the south of the SW boundary of the SNR shell. The majority of the emission is unresolved, and the brightness temperatures, T$_b$, listed in Column 8 are lower limits calculated using the half power beam width of the VLA in B configuration. The minimum brightness temperatures range from $1.9\times10^3$ to $5.2\times10^5$ K. The 44 GHz CH$_3$OH masers in this study that have previously detected counterparts are indicated in Table 1 with [R]. The peak velocities vary across the region observed, ranging from about $-13$ to 72 km\,s$^{-1}$, and are listed in Column 6. Figure\,3 shows two simple histograms of the velocity of the maser emission in the NE and SW regions observed, separated by a declination of $-29^{o}$ $02'$ $30''$ (between rows E and F in Fig.\,1). The color scheme in Fig.\,3 represents the different velocities of the emission.
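The brightness-temperature lower limits of Column 8, and the thermal line-width comparison used in the next paragraph, both follow from textbook Rayleigh--Jeans formulae. The sketch below is our own implementation of those formulae, not the authors' script; the example flux density is illustrative, and the tabulated values may use slightly different per-pointing beam parameters:
\begin{verbatim}
# Rayleigh-Jeans brightness temperature of an unresolved source and the
# corresponding thermal FWHM line width (sketch of textbook formulae;
# the 1 Jy/beam example flux is illustrative).
import math

def tb_lower_limit(S_mJy, nu_GHz, bmaj_as, bmin_as):
    # T_b = 1.222e3 S / (nu^2 theta_maj theta_min),
    # with S in mJy, nu in GHz, beam axes in arcsec
    return 1.222e3 * S_mJy / (nu_GHz**2 * bmaj_as * bmin_as)

def thermal_fwhm_kms(T, mass_amu=32.0):
    # sqrt(8 ln2 kT/m) for CH3OH (32 amu), in km/s
    k, amu = 1.380649e-23, 1.66053907e-27
    return math.sqrt(8 * math.log(2) * k * T / (mass_amu * amu)) / 1e3

# a 1 Jy/beam peak in the 0.38" x 0.19" beam at 44.069 GHz:
print(f"T_b >= {tb_lower_limit(1000.0, 44.069, 0.38, 0.19):.3g} K")
# at the lowest tabulated T_b of 1,900 K (~1.65 km/s, consistent
# with the ~1.6 km/s quoted in the text):
print(f"thermal FWHM: {thermal_fwhm_kms(1900.0):.2f} km/s")
\end{verbatim}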
The full width at half maximum (FWHM) of the brightest peak is listed in Column 7 and was estimated from the number of channels at half I$_{peak}$, with an error of half a channel ($\pm0.2$ km\,s$^{-1}$). A Gaussian fit was not reliable for estimating the FWHM because most of the peaks are only a couple of channels wide. Instead, we used the number of channels at half peak to approximate an upper limit to the FWHM for each peak. The FWHM of each emission peak listed in Table 1 is narrow, ranging from about 0.4 to 3.0 km\,s$^{-1}$ (1 to 7 channels). These line-widths have similar values to previously detected 44 and 36 GHz CH$_3$OH masers in Sgr A East and other SNRs \citep{sjouw2010, pihl2011}. The estimated thermal line-width using the lowest calculated T$_b$ (1,900 K) is about 1.6 km\,s$^{-1}$. Two sources (119 and 160) have measured FWHMs close to the thermal line-width. A Gaussian was fit to the spectral profiles of these two sources, but the errors in the fits were large; therefore we conclude that 119 and 160 are not thermal sources. The measured FWHMs of the remaining emission regions are less than the calculated thermal line-widths for their minimum T$_b$; therefore, we conclude that all the peaks listed in Table 1 are maser emission and non-thermal in nature. Although the majority of masers detected were found to be single peaks of emission, as can be seen in Fig.\,7, some of the spectra show multiple spectral peaks at a single position ($\sim$27\%). The sources in Table 1 noted with [M] display multiple spectral features detected at the same position, some with complicated and broad structure in their spectral profiles. Some of the multiple peak features may imply that we are observing partly unresolved structures within the VLA beam. The emission regions with multiple peaks cover the entire region observed but are mostly concentrated in pointings BB and BC to the NE and EI and EG to the SW. Given the sensitivity of the VLA and the beam size in B-configuration, our observations may not be sensitive to thermal or extended sources below $4\sigma$, corresponding to a brightness temperature less than $\sim$370 K. \begin{figure} \begin{center} \includegraphics[scale=.25]{f2.pdf} \caption{A histogram showing the peak flux density distribution of the maser emission.} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[scale=.43]{f3.pdf} \caption{Two histograms showing the velocity distribution of the maser emission in the NE (top) and SW (bottom) regions observed. The masers corresponding to these regions are separated by a declination of $-29^{o}$ $02'$ $30''$. The different colors correspond to the velocity of the masers. This color scheme is used in later figures to indicate the velocity of each maser along with their position. } \end{center} \end{figure} \section{Discussion} Various regions in the Sgr A East environment have previously been searched for different maser transitions. Four collisionally pumped maser transitions have been found, namely the 22 GHz H$_2$O, 1720 MHz OH, 36 GHz CH$_3$OH, and 44 GHz CH$_3$OH lines \citep{yusef1996, sjouw2002, yusef2008, sjouw2010, pihl2011, yusef2013}. The distribution trends of all these maser species in a few regions will be briefly discussed. \subsection{CH$_3$OH Maser Distribution} Figure\,4 shows the positions of the 44 GHz CH$_3$OH maser sources (crosses) detected from this survey within the observed region indicated by the black dashed line, overlaid on a 1720 MHz continuum image.
Previously detected 36 GHz (circles) and 44 GHz (triangles) CH$_3$OH, 1720 MHz OH (squares), and 22 GHz H$_2$O (plus signs) masers are also plotted \citep{yusef1995, yusef1996, sjouw2002, yusef2008, sjouw2010, pihl2011, yusef2013}. The color of each symbol represents the velocity bin of the maser according to the scheme in Fig.\,3; for example, light blue symbols represent velocities $>55$ km\,s$^{-1}$. Typical beam sizes and channel rms values from the previous observations are as follows: $15''$ and 15 mJy\,beam$^{-1}$ \citep{yusef1996}, $2.5''\times1.9''$ and 15 mJy\,beam$^{-1}$ \citep{sjouw2002}, $5.8''\times3.9''$ and 14.1 mJy\,beam$^{-1}$ \citep{pihl2008}, $0.2-0.4''$ and 10-12 mJy\,beam$^{-1}$ \citep{sjouw2010}, $1.3''\times0.5''$ and 15-20 mJy\,beam$^{-1}$ \citep{pihl2011}, and $1.8''\times0.7''$ and 2.5 mJy\,beam$^{-1}$ \citep{yusef2013}. The previously detected 44 GHz CH$_3$OH masers from \citet{pihl2011} and \citet{yusef2008} were also detected in this survey, at positions consistent within our positional accuracy and with velocities within $\pm2$ km\,s$^{-1}$ of our listed V$_{peak}$. Figure\,4 shows that the majority of the emission has velocities around 10 km\,s$^{-1}$ in the SW region and around $45$ km\,s$^{-1}$ in the NE region of the Sgr A East shell. The most noteworthy result from this survey is the large number of CH$_3$OH masers detected within the inner parsecs of our GC. The CH$_3$OH maser emission detected in other Galactic SNRs interacting with MCs pales in comparison to Sgr A East. For example, in a targeted search towards the SNR W28, only a few 36 and 44 GHz CH$_3$OH masers were found. Towards the SNR G1.4$-$0.1, which is interacting with at least two MCs, only 36 GHz CH$_3$OH maser emission was detected, and none at 44 GHz \citep{pihl2014}. It is widely accepted that CH$_3$OH forms on the surfaces of icy dust grains and is then released into the gas phase via some heating mechanism, for example, through UV radiation, shocks from cloud-cloud interactions, SNR-cloud interactions, expanding HII regions, and young and old stellar outflows \citep[e.g.,][]{garrod2008, whittet2011, ruiz2016, yusef2013}. This enhancement of CH$_3$OH detected near Sgr A East may not be surprising because the GC is extremely chemically rich and subject to shocks. In addition, the enhancement of CH$_3$OH may be driven by cosmic ray irradiation, more so than in other regions of the Galaxy where the cosmic ray ionization rate is lower \citep{yusef2013,pihl2014,mills2015}. \begin{figure*} \begin{center} \includegraphics[scale=1.7]{f4.pdf} \caption{Positions of the 44 GHz CH$_3$OH masers (crosses) overlaid on a 1720 MHz continuum image of the Sgr A East environment. In addition, previously detected 44 GHz CH$_3$OH (triangles), 36 GHz CH$_3$OH (circles), 22 GHz H$_2$O (plus-signs), and 1720 MHz OH (squares) masers are overlaid; for details see \citet{yusef1995,yusef1996,sjouw2002,yusef2008,sjouw2010,pihl2011,yusef2013}. The color of each symbol represents the velocity of each maser according to the scheme in Fig.\,3. The dashed black line indicates the observed region in this study. The red and blue boxed regions correspond to the enlarged NE and SW regions in Fig.\,5 and 6, respectively.} \end{center} \end{figure*} \subsubsection{NE Region} A high concentration of masers is found in the NE region of Sgr A East, where this SNR is interacting with the 50 km\,s$^{-1}$ MC that borders the radio continuum shell of Sgr A East, as can be seen in Fig.\,5 (a zoom of the red boxed region in Fig.\,4).
The vast majority of the 44 GHz masers in this clump are found to have velocities similar to that of the MC, around 50 km\,s$^{-1}$ or less (red and yellow symbols). Many of these masers are coincident with a known shock front \citep{pihl2011} and they seem to follow a sharp boundary along the edge of the SNR radio continuum emission, outlined by the black rectangle in Fig.\,5. The brightest 44 GHz CH$_3$OH masers detected in this region, with $I_{peak}$ between 5.87 and 16.16 Jy\,beam$^{-1}$, are significantly weaker than the brightest 36 GHz CH$_3$OH masers detected in the same region. The lowest flux density ratio between these two maser species in this region is $\sim$5. Based on modeling results from \citet{mcewen2014}, this implies a high density region ($n>10^6$ cm$^{-3}$). In addition, the high concentration of masers in this region is spatially coincident with strong SiO ($2-1$) emission (between 20 and 50 km\,s$^{-1}$), as can be seen in \citet{yusef2013}, which is also indicative of high density shocked gas. Very few masers are detected to the west (right) of this boundary, in agreement with what was seen by \citet{pihl2011}. This means that the physical conditions to the west of the shock front are not conducive to CH$_3$OH maser emission. It is possible that the abundance of CH$_3$OH is lower in this region, due to UV photodissociation of the CH$_3$OH molecule, as was speculated by \citet{yusef2013}. \begin{figure*} \begin{center} \includegraphics[scale=3.5]{f5.pdf} \caption{Positions of the 44 GHz CH$_3$OH masers (crosses) overlaid on a 1720 MHz continuum image of the NE region of the Sgr A East environment indicated by the red box in Fig.\,4. In addition, previously detected 44 GHz CH$_3$OH (triangles), 36 GHz CH$_3$OH (circles), 22 GHz H$_2$O (plus-signs), and 1720 MHz OH (squares) masers are overlaid; for details see \citet{yusef1995,yusef1996,sjouw2002,yusef2008, sjouw2010,pihl2011,yusef2013}. The color of each symbol represents the velocity of each maser according to the scheme in Fig.\,3. The black rectangle indicates a large abundance of masers in a region where the 50 km\,s$^{-1}$ MC borders the radio continuum shell. The black circle outlines a cluster of 44 GHz CH$_3$OH maser emission forming a dense ridge.} \end{center} \end{figure*} The collisionally excited 1720 MHz OH masers form under physical conditions similar to those of Class I CH$_3$OH masers, but are found offset from the CH$_3$OH maser positions, which means they most likely form in different regions of the shocked gas \citep{pihl2008, pihl2014, mcewen2014}. The majority of the 44 GHz masers in the NE region have slightly lower velocities compared to the OH masers, which have an average velocity of $\sim57$ km\,s$^{-1}$ and are located just to the SW of the group of 44 GHz CH$_3$OH masers (in the rectangle region). This implies that the OH masers are located in a region of the shock that is more turbulent and disturbed, namely in the post-shocked gas. This strengthens the conclusion drawn from the previous, sparser CH$_3$OH maser survey of \citet{pihl2011} and reinforces the idea that bright 36 GHz CH$_3$OH masers coincident with weaker 44 GHz CH$_3$OH masers in an SNR/MC interaction region trace gas closer to the actual shock front than OH maser emission does. Just to the east of this front, there appears to be another cluster of 44 GHz CH$_3$OH maser emission forming a dense ridge, outlined by the black circle in Fig.\,5.
Given the distance to the GC of 8.5 kpc, this ridge is about 1.2 pc to the east of the shock front. As suggested by \citet{mcewen2014}, these masers may be associated with a possible newly detected young SNR embedded in the 50 km\,s$^{-1}$ MC. In this region, \citet{tsuboi2011, tsuboi2012} detected a dense shocked molecular shell based on CS (J$=1-0$) observations and high SiO/H$^{13}$CO$^+$ ratios. The SiO emission ranges from about 15 to 45 km\,s$^{-1}$, which agrees with the velocities of the masers. Alternatively, they could be excited by an internal shock in the core of the cloud generated by star formation (SF). \subsubsection{SW Region} Two distinct concentrations of masers are found to the SW of Sgr A East, where this SNR is interacting with the 20 km\,s$^{-1}$ MC (Fig.\,6). Here, the vast majority of the 44 GHz masers have velocities that range from 5 to 15 km\,s$^{-1}$ (blue symbols). However, in the northern cluster (outlined by the black square in Fig.\,6) several masers are found to have velocities similar to that of the MC, closer to 20 km\,s$^{-1}$ (pink and green symbols), and these outline a known non-thermal filament in this region (SgrA-F) \citep{ho1985}. In the southern cluster (outlined by the black circle) several masers have velocities less than 5 km\,s$^{-1}$ (black symbols), indicating a different origin. The masers in the southern cluster form a rough circle centered at a declination of $-29^{o}$ $05'$ $55''$, located about $10''$ to the SW of a known HII region (SgrA-G; 17h 45m 38.21s, $-29^{o}$ $05'$ $45.5''$) \citep{ho1985}. These two distinct clusters can also be seen in SiO ($2-1$) with comparable velocities around 20 km\,s$^{-1}$ (northern) and 0 km\,s$^{-1}$ (southern), in good agreement with the velocities of other masers in these regions \citep{tsuboi2011}. The majority of the 36 GHz masers previously detected are around 17 km\,s$^{-1}$ (northern cluster) and around 0 km\,s$^{-1}$ (southern cluster). Given the distance of 8.5\,kpc to the GC, the separation between these two clusters is about 5.8 pc. The scale of both of these clusters is roughly the same, $\sim1.6$ pc, which is about double the size of a typical compact HII region \citep{kurtz2005}. No 1720 MHz OH masers are detected near these two clusters. \begin{figure*} \begin{center} \includegraphics[scale=4]{f6.pdf} \caption{Positions of the 44 GHz CH$_3$OH masers (crosses) overlaid on a 1720 MHz continuum image of the SW region near Sgr A East indicated by the blue box in Fig.\,4. In addition, previously detected 44 GHz CH$_3$OH (triangles), 36 GHz CH$_3$OH (circles), 22 GHz H$_2$O (plus-signs), and 1720 MHz OH (squares) masers are overlaid; for details see \citet{yusef1995,yusef1996,sjouw2002,yusef2008,sjouw2010,pihl2011,yusef2013}.} \end{center} \end{figure*} \subsection{Possible Star Formation Near Sgr A East} Observationally, the 44 GHz CH$_3$OH line is found to be more common and brighter near SF regions than the 36 GHz line, although both transitions have been found co-located (e.g., \citealt{voronkov2014}). It is not clear whether the Class I CH$_3$OH maser traces a specific evolutionary stage of SF. \citet{voronkov2014} suggest that Class I CH$_3$OH masers may be found at multiple epochs throughout the evolution of a massive star. They find Class I CH$_3$OH masers near regions with other maser sources that trace early evolutionary stages (e.g., 6.67 GHz CH$_3$OH), as well as older stages (e.g., OH).
Another possibility may be that Class I CH$_3$OH masers arise in different types of shocks in SF regions, including young and old outflows, cloud-cloud collisions, and expanding HII regions \citep{ruiz2016}. \citet{mcewen2014} show that in an SNR environment (low dust temperature and IR radiation), given a specific CH$_3$OH abundance, the 36 GHz maser line dominates at higher number densities and the 44 GHz line at lower densities. Both transitions are collisionally pumped, but at the lower densities where the 44 GHz line dominates, it is possible that IR pumping plays an important role. If so, it could help explain why the Class I 44 GHz line is more common in SF regions, where IR radiation and dust emission are more prevalent. In fact, modeling by \citet{nesterenok2016} shows that strong radiation fields can quench other collisionally pumped Class I CH$_3$OH lines (e.g., the 25 GHz line), while the radiatively pumped Class II lines (e.g., the 6.7 GHz line) become brighter. However, it is also found that both Class I and Class II masers can be bright and coexist in the same region with high IR radiation fields. The modeling work carried out by \citet{nesterenok2016} did not extend to the 36 and 44 GHz lines. Using the online radiative transfer modeling program RADEX (van der Tak et al. 2007), and taking into account a strong external radiation field (100 K) appropriate for SF regions, we indeed find conditions where the 44 GHz and 36 GHz lines exist simultaneously. We also find conditions where the 44 GHz line dominates over the 36 GHz line (e.g., high temperatures $\sim$200 K and densities $\sim$10$^4$ cm$^{-3}$). This supports the idea that strong IR radiation fields could influence the production of Class I CH$_3$OH lines. Based on our VLA observations, three regions that are offset from the radio SNR shell can be identified and will be discussed as possible sites of star formation. One region is to the NE of the SNR, seemingly located towards the interior of the 50 km\,s$^{-1}$ MC (Fig.\,5). The other two are to the SW of the SNR near the southern cluster of masers labeled in Fig.\,6. {\bf Region one:} Towards the far NE region outlined by the black square in Fig.\,5, there are three recently detected 36 GHz CH$_3$OH masers \citep{yusef2013}. These masers are located more towards the interior of the 50 km\,s$^{-1}$ MC. Two of the masers are spatially coincident with a few 44 GHz masers with similar velocities. In addition, these masers have slightly lower velocities compared to those towards the edge of the Sgr A East shell to the west. Despite searches, radio continuum emission and H$_2$O masers, which are often signposts of HII regions, have not been detected in this NE region. {\bf Region two:} In the SW region (Fig.\,6), towards the core of the 20 km\,s$^{-1}$ MC interaction, there is a known HII region (SgrA-G) just to the NW of the southern cluster, where H$_2$O masers with velocities similar to the CH$_3$OH masers are detected \citep{sjouw2002}. The presence of both H$_2$O masers and 44 GHz CH$_3$OH masers in this region suggests possible star formation. It is probable that some of the CH$_3$OH masers closer to the HII region are tracing outflows or shocks produced by the expanding HII region. {\bf Region three:} In the SW southern cluster (Fig.\,6), the roughly circular distribution of the CH$_3$OH masers could mean that they are tracing outflows from an undetected SF region.
In addition, no 1720 MHz OH maser emission has been detected in this region, which would be consistent with an early stage of SF. Additional observations of other SF tracers will hopefully shed light on the properties of this region. Note that the four known compact HII regions (A--D; \citealt{ho1985}) located just to the east of the region outlined by the black rectangle in Fig.\,5 lie in the foreground and are therefore not related to the shock front of Sgr A East \citep{sjouw2008}. Although spatially coincident, the few 44 GHz CH$_3$OH masers and one H$_2$O maser are not associated with these HII regions. \section{Conclusions} Over 300 masers were detected in the Sgr A East region at 44 GHz. The majority of the maser emission is found to be associated with the interaction of the SNR with the neighboring MCs to the NE and SW of the SNR. We summarize the results of this survey in three main points. First, the distribution and abundance of 44 GHz CH$_3$OH masers is very different compared to the 1720 MHz OH masers: the 44 GHz CH$_3$OH masers are much more abundant than OH and are not found co-spatial with the 1720 MHz OH masers, which suggests they are sustained in different regions of the shocked environment. Second, the brightest 44 GHz CH$_3$OH masers detected in this study are significantly weaker than the brightest 36 GHz CH$_3$OH masers detected in the same (NE shocked) region of Sgr A East, which indicates a high density environment. Third, it is possible that some of the masers are tracing sites of star formation, although conclusive evidence does not exist at this time. A more complete survey of 36 GHz maser emission is underway and will be used to complete a full analysis of this region. \acknowledgments We thank NASA for support under FERMI grant NNX12AO77G. B.C.M.\, acknowledges support from the NM Space Grant Consortium under the Graduate Research Fellowship program. The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc.
\section{Introduction} Gaussian distributed variables are widely used to represent the state of a system in many problems ranging from state estimation \cite{Simon2006b} to scheduling \cite{Palmer2013,Palmer2014a}. In practice, the state vectors in many systems are known to satisfy inequality constraints. Examples of state-constrained systems include health monitoring \cite{Simon2006}, vision systems \cite{Shimada1998}, robotics \cite{Boccadoro2010}, binary sensor networks \cite{Manes2013}, and object tracking \cite{Romero-cano2015}. This paper deals specifically with systems that are subject to inequality constraints where the constraints themselves have uncertainty described by Gaussian distributions. Constraints described by Gaussian distributions can arise from many sources in state estimation problems including discrete sensors, such as position or level switches, that have uncertainty on their activation point, obstacles whose positions are uncertain, and other physical and model-derived bounds such as maximum fuel levels based on historical fuel burn rates. Constrained Gaussian distributed variables also appear in scheduling applications where the distribution describing the time at which an event is predicted to occur is constrained by the time distributions of other events. Hard inequality constraints are well studied \cite{Simon2006b}, where the main approaches are estimate projection \cite{Simon2006}, gain projection \cite{Gupta2007}, and Probability Density Function (PDF) truncation \cite{Simon2010b}. Estimate and gain projection approaches incorporate the constraints into the derivation of the Kalman filter, resulting in a constrained optimisation problem that can be solved using quadratic programming or least squares approaches, amongst others \cite{Simon2006b, Simon2010}. Truncation methods, on the other hand, are applied directly to the PDF resulting from a Kalman filter, as outlined in Figure \ref{f:truncation}. This approach truncates the PDF at the constraints and calculates the mean and covariance of the truncated PDF, which become the constrained state estimate and its covariance. The PDF truncation approach was shown in \cite{Simon2010b} to, in general, outperform the estimate projection method. The truncation approach has been applied to probabilistic collision checking for robots \cite{Patil2012}, and has been extended to non-linear systems \cite{Teixeira2010,Straka2012}. \begin{figure} \centering \includegraphics[width = 0.8\textwidth]{truncation.pdf} \caption{The Kalman filter is run independently of the truncation method, with the truncation being applied to the state estimate that is the output of the Kalman filter. The prediction step of the Kalman filter results in a probability distribution describing the state, $x$, conditioned on the system model, $M$. The measurement update step further conditions the state estimate on the observations, $O$. Finally, the truncation step conditions the estimate on the constraints acting on the state, $C$. } \label{f:truncation} \end{figure} Soft constraints correspond to uncertain or noisy constraints, and are less studied than hard constraints. Soft equality constraints are typically incorporated as noisy measurements \cite{Simon2006b,Helor1993}. However, soft inequality constraints are significantly more difficult to deal with, and numerical filters such as a Particle Filter (PF) are typically used for these problems \cite{Shao2010}.
Several numerical methods have been examined for incorporating soft constraints into the Kalman filter. A numerical PDF truncation method was used in \cite{Boccadoro2010} for robot localisation using Radio Frequency IDentification (RFID) tags, where the noise on the inequality constraints was highly non-Gaussian. Compared with a PF approach, the numerical PDF truncation method was 2 to 3 orders of magnitude faster while, in general, providing similar results. A similar RFID problem was examined in \cite{Manes2013} where aspects of the Unscented Kalman Filter (UKF) and PF were combined---the prediction step used the standard UKF step, while the correction step was modified to weight the sigma-points of the UKF in a similar manner to the weighting process in a PF. It was shown to outperform a PF as well as the Quantised Extended Kalman Filter (QEKF) presented in \cite{DiGiampaolo2012}. The literature on soft inequality constraints has focused on constraints with non-Gaussian distributions, where the constrained state estimates are, by necessity, calculated using numerical methods. The main contribution of this paper is an analytical method for PDF truncation with soft constraints where the soft constraints are described by Gaussian distributions. This reduces the computational requirement compared to numerical methods, and it is shown to provide superior estimation performance compared to unconstrained and hard-constrained state estimation methods. The truncation approach presented in this paper is not limited to Kalman filters and can be applied to any constrained system using Gaussian distributions to represent the state and constraints. The rest of this paper is structured as follows: Section \ref{s:probdef} introduces the constrained Kalman filtering problem, Section \ref{s:transform} shows how the state and constraints can be transformed such that each state has only one constraint acting on it, Section \ref{s:constrained} presents the truncation method for a one-sided constraint, and Section \ref{s:interval} extends this to an interval constraint. The performance of the methods are evaluated in Section \ref{s:results}, and the paper is concluded in Section \ref{s:conc}. \ref{s:a_mean} and \ref{s:a_variance} provide in-depth derivations of the integrals used in this paper. \section{Problem definition}\label{s:probdef} This paper adapts the notation used in \cite{Simon2010b}. A discrete linear time-invariant system is described by: \begin{gather} \boldsymbol{x}\left(k\right) = \boldsymbol{Fx}\left(k-1\right) + \boldsymbol{Gu}\left(k\right) + \boldsymbol{w}\left(k\right) \notag \\ \boldsymbol{y}\left(k\right) = \boldsymbol{Hx}\left(k\right) + \boldsymbol{v}\left(k\right) \end{gather} where $k$ is the time index, $\boldsymbol{x}$ is the state vector with $n$ states, $\boldsymbol{u}$ is the vector of known control inputs, and $\boldsymbol{y}$ is the vector of measurements. The vectors $\boldsymbol{w}$ and $\boldsymbol{v}$ contain the process and measurement noise respectively. The process noise, $\boldsymbol{w}$, is assumed to be zero mean Gaussian white noise with a covariance matrix of $\boldsymbol{Q}$. The measurement noise, $\boldsymbol{v}$, is similarly assumed to be zero mean Gaussian white noise with a covariance matrix of $\boldsymbol{R}$. The noises at each time-step are assumed to be independent. 
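As a concrete illustration, this system model is straightforward to simulate; the following is a minimal sketch (in Python with NumPy, with the system matrices left as placeholders for a given application):

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def step(x, F, G, u, Q):
    # x(k) = F x(k-1) + G u(k) + w(k), with w ~ N(0, Q)
    w = rng.multivariate_normal(np.zeros(len(x)), Q)
    return F @ x + G @ u + w

def measure(x, H, R):
    # y(k) = H x(k) + v(k), with v ~ N(0, R)
    v = rng.multivariate_normal(np.zeros(H.shape[0]), R)
    return H @ x + v
\end{verbatim}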
For the given system, the Kalman filter prediction equations are \cite{Faragher2012}: \begin{gather} \boldsymbol{\hat{x}}(k|k-1) = \boldsymbol{F\hat{x}}(k-1|k-1) + \boldsymbol{Gu}(k) \notag \\ \boldsymbol{P}(k|k-1) = \boldsymbol{FP}(k-1|k-1)\boldsymbol{F}^{T} + \boldsymbol{Q} \end{gather} and the measurement update equations are: \begin{gather} \boldsymbol{K}= \boldsymbol{P}(k|k-1) \boldsymbol{H}^{T} \left( \boldsymbol{HP}(k|k-1)\boldsymbol{H}^{T} + \boldsymbol{R} \right)^{-1} \notag \\ \boldsymbol{\hat{x}}(k|k) = \boldsymbol{\hat{x}}(k|k-1) + \boldsymbol{K} \left( \boldsymbol{y}(k) - \boldsymbol{H\hat{x}}(k|k-1) \right) \\ \boldsymbol{P}(k|k) = \boldsymbol{P}(k|k-1) - \boldsymbol{K}\boldsymbol{H}\boldsymbol{P}(k|k-1) \notag \end{gather} where $\boldsymbol{\hat{x}}(k|k)$ is the state estimate, and $\boldsymbol{P}(k|k)$ is the covariance of the state estimate. The state estimate is initialised with $\boldsymbol{\hat{x}}(0) = E[\boldsymbol{x}(0)]$, where $E[.]$ is the expectation operator. The covariance matrix is initialised with $\boldsymbol{P}(0) = E[(\boldsymbol{x}(0) - \boldsymbol{\hat{x}}(0))(\boldsymbol{x}(0) - \boldsymbol{\hat{x}}(0))^{T}]$. Now consider the following $s$ linearly independent constraints on the system: \begin{equation}\label{eq:constraint_def} A_{m}(k) \le \boldsymbol{\phi}_{m}^{T}(k)\boldsymbol{x}(k) \le B_{m}(k) \qquad m=1,...,s \end{equation} where the constraints are uncertain and normally distributed: \begin{equation} A_{m}(k) \sim \mathcal{N}(\mu_{a,m},\sigma_{a,m}^{2}) \qquad B_{m}(k) \sim \mathcal{N}(\mu_{b,m},\sigma_{b,m}^{2}) \end{equation} Equation (\ref{eq:constraint_def}) describes a two-sided constraint on the linear function of the state described by $\boldsymbol{\phi}_{m}^{T}(k)\boldsymbol{x}(k)$. One-sided constraints can be represented by setting $\mu_{a,m} = -\infty$, or $\mu_{b,m} = \infty$, and hard constraints can be implemented by setting $\sigma_{a,m} \approx 0$ or $\sigma_{b,m} \approx 0$ as required. Given an estimate $\boldsymbol{\hat{x}}(k)$ with covariance $\boldsymbol{P}(k)$ at time $k$, the problem is to truncate the Gaussian PDF $\mathcal{N}(\boldsymbol{\hat{x}}(k),\boldsymbol{P}(k))$ using the $s$ constraints described above, and then find the mean $\boldsymbol{\tilde{x}}(k)$ and covariance $\boldsymbol{\tilde{P}}(k)$ of the truncated PDF. The calculated mean and covariance represent the constrained estimate of the state. \section{Transforming the state vector and constraints}\label{s:transform} To apply the constraints via the truncation method, the state vector must be transformed so that the constraints are decoupled. This will result in $s$ transformed constraints that each involve only one element of the transformed state, allowing the constraints to be enforced individually on each element of the transformed state. It should be noted that the order in which constraints are applied can change the final state estimate. However, if the initial constraints are decoupled, the order of constraint application does not change the result \cite{Simon2010b}. The transformation process is outlined in \cite{Simon2006b} and \cite{Simon2010b}, and is summarised here in equations (\ref{eq:transform_start})--(\ref{eq:transform_end}) and (\ref{eq:mean_and_sigma})--(\ref{eq:truncated_estimate}). For ease of notation, the $(k)$ after each variable will be dropped.
Let the vector $\boldsymbol{\tilde{x}}_{i}$ be the truncated state estimate, and the matrix $\boldsymbol{\tilde{P}}_{i}$ be the covariance of $\boldsymbol{\tilde{x}}_{i}$, after the first $i-1$ constraints have been enforced. To initialise the process: \begin{gather} i = 1 \quad \boldsymbol{\tilde{x}}_{i} = \boldsymbol{\hat{x}} \quad \boldsymbol{\tilde{P}}_{i} = \boldsymbol{{P}} \label{eq:transform_start} \end{gather} The transformed state vector is given by: \begin{equation} \label{eq:transform} \boldsymbol{z}_{i} = \boldsymbol{\rho}_{i}\boldsymbol{W}_{i}^{-1/2}\boldsymbol{T}_{i}^{T}(\boldsymbol{x}-\boldsymbol{\tilde{x}}_{i}) \end{equation} where the matrices $\boldsymbol{T}_{i}$ and $\boldsymbol{W}_{i}$ are derived from the Jordan canonical decomposition of $\boldsymbol{\tilde{P}}_{i}$: \begin{equation} \boldsymbol{T}_{i}\boldsymbol{W}_{i}\boldsymbol{T}_{i}^{T} = \boldsymbol{\tilde{P}}_{i} \end{equation} $\boldsymbol{T}_{i}$ is an orthogonal matrix, and $\boldsymbol{W}_{i}$ is a diagonal matrix. The matrix $\boldsymbol{\rho}_{i}$ is derived by the Gram--Schmidt orthogonalisation \cite{Moon2000} which finds the orthogonal $\boldsymbol{\rho}_{i}$ that satisfies: \begin{equation} \boldsymbol{\rho}_{i}\boldsymbol{W}_{i}^{1/2}\boldsymbol{T}_{i}^{T}\boldsymbol{\phi}_{i} = \left[(\boldsymbol{\phi}_{i}^{T}\boldsymbol{\tilde{P}}_{i}\boldsymbol{\phi}_{i})^{1/2} \quad 0 \quad ... \quad 0 \right]^{T} \end{equation} Now only one element of $\boldsymbol{z}_{i}$ is constrained, and the states in the transformed state vector $\boldsymbol{z}_{i}$ are independent standard normal distributions. Let $\boldsymbol{e}_{i}$ denote the first column of an $n \times n$ identity matrix, where the subscript $i$ indicates the constraint currently being applied; the transformation places the constrained linear combination in the first element of $\boldsymbol{z}_{i}$. Transforming the constraints results in: \begin{equation} C_{i} \le \boldsymbol{e}_{i}^{T}\boldsymbol{z}_{i} \le D_{i} \end{equation} where \begin{gather} C_{i} \sim \mathcal{N}(\mu_{c,i},\sigma_{c,i}^{2}) \notag \\ \mu_{c,i} = \frac{\mu_{a,i} - \boldsymbol{\phi}_{i}^{T}\boldsymbol{\tilde{x}}_{i}}{\sqrt{\boldsymbol{\phi}_{i}^{T}\boldsymbol{\tilde{P}}_{i}\boldsymbol{\phi}_{i}}} \quad \sigma_{c,i} = \frac{\sigma_{a,i}}{\sqrt{\boldsymbol{\phi}_{i}^{T}\boldsymbol{\tilde{P}}_{i}\boldsymbol{\phi}_{i}}} \end{gather} and \begin{gather} D_{i} \sim \mathcal{N}(\mu_{d,i},\sigma_{d,i}^{2}) \notag \\ \mu_{d,i} = \frac{\mu_{b,i} - \boldsymbol{\phi}_{i}^{T}\boldsymbol{\tilde{x}}_{i}}{\sqrt{\boldsymbol{\phi}_{i}^{T}\boldsymbol{\tilde{P}}_{i}\boldsymbol{\phi}_{i}}} \quad \sigma_{d,i} = \frac{\sigma_{b,i}}{\sqrt{\boldsymbol{\phi}_{i}^{T}\boldsymbol{\tilde{P}}_{i}\boldsymbol{\phi}_{i}}} \label{eq:transform_end} \end{gather} The equations for calculating the standard deviation of each constraint are not present in \cite{Simon2006b,Simon2010b}, but they are a straightforward extension of the equations provided for calculating the mean. \section{One-sided constraint}\label{s:constrained} First, consider the case where there is only one constraint on the transformed state, in this case a lower constraint: \begin{equation} C_{i} \le \boldsymbol{e}_{i}^{T}\boldsymbol{z}_{i} \label{eq:singleconstraint} \end{equation} Applying a lower constraint to the transformed state is equivalent to finding the conditional probability distribution of the transformed state given that it is higher than the constraint.
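Before deriving this conditional distribution, it is worth noting that the decoupling transformation of the previous section is mechanical to implement. A minimal sketch (Python/NumPy; the Jordan decomposition of the symmetric positive semi-definite $\boldsymbol{\tilde{P}}_{i}$ is obtained from an eigendecomposition, and one way of carrying out the Gram--Schmidt step is via a QR factorisation):

\begin{verbatim}
import numpy as np

def decouple(x_t, P_t, phi, mu_a, sigma_a):
    # Jordan (here: eigen-) decomposition P_t = T W T^T,
    # with P_t assumed symmetric positive semi-definite.
    w, T = np.linalg.eigh(P_t)
    W_half = np.diag(np.sqrt(w))
    # rho must map v = W^{1/2} T^T phi to (||v||, 0, ..., 0)^T
    v = W_half @ T.T @ phi
    n = len(v)
    Qm, _ = np.linalg.qr(np.column_stack([v, np.eye(n)]))
    if Qm[:, 0] @ v < 0:       # fix the QR sign so rho v = +||v|| e_1
        Qm[:, 0] *= -1
    rho = Qm.T
    s = np.sqrt(phi @ P_t @ phi)    # equals ||v||
    mu_c = (mu_a - phi @ x_t) / s   # transformed constraint mean
    sigma_c = sigma_a / s           # transformed constraint std. dev.
    return T, W_half, rho, mu_c, sigma_c
\end{verbatim}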
Using Bayes' theorem, the conditional probability distribution, $p_{\boldsymbol{e}_{i}^{T}\boldsymbol{z}_{i}}(\zeta | C_{i} \le \zeta)$, as a function of $\zeta$ is given by: \begin{equation} \label{eq:bayes_single} p_{\boldsymbol{e}_{i}^{T}\boldsymbol{z}_{i}}(\zeta | C_{i} \le \zeta) = \frac{p_{\boldsymbol{e}_{i}^{T}\boldsymbol{z}_{i}}(\zeta) \times P(C_{i} \le \zeta)} {P(C_{i} \le \boldsymbol{e}_{i}^{T}\boldsymbol{z}_{i})} \end{equation} where $p_{\boldsymbol{e}_{i}^{T}\boldsymbol{z}_{i}}(\zeta)$ is the PDF of $\boldsymbol{e}_{i}^{T}\boldsymbol{z}_{i}$, $P(C_{i} \le \zeta)$ is the probability that a point $\zeta$ is greater than the constraint, and $P(C_{i} \le \boldsymbol{e}_{i}^{T}\boldsymbol{z}_{i})$ is the probability that the transformed state is greater than the constraint. $P(C_{i} \le \zeta)$ is given by: \begin{align} P(C_{i} \le \zeta) &= \int\limits_{-\infty}^{\zeta} \textrm{PDF}_{C_{i}}(c) \; \textrm{d}c \notag \\ & = \textrm{CDF}_{C_{i}}(\zeta) \end{align} where $\textrm{PDF}_{C_{i}}(c)$ is the PDF of the constraint $C_{i}$ evaluated at $c$, and $\textrm{CDF}_{C_{i}}(\zeta)$ is the Cumulative Distribution Function (CDF) of $C_{i}$ evaluated at $\zeta$. $P(C_{i} \le \boldsymbol{e}_{i}^{T}\boldsymbol{z}_{i})$ is given by: \begin{align} P(C_{i} \le \boldsymbol{e}_{i}^{T}\boldsymbol{z}_{i}) & = P(C_{i} - \boldsymbol{e}_{i}^{T}\boldsymbol{z}_{i} \le 0) \notag \\ & = \int\limits_{-\infty}^{0}\textrm{PDF}_{C_{i} - \boldsymbol{e}_{i}^{T}\boldsymbol{z}_{i}}(\zeta) \; \textrm{d}\zeta \notag \\ & = \textrm{CDF}_{C_{i} - \boldsymbol{e}_{i}^{T}\boldsymbol{z}_{i}}(0) \end{align} where $C_{i} - \boldsymbol{e}_{i}^{T}\boldsymbol{z}_{i} \sim \mathcal{N}(\mu_{c,i},\sigma_{c,i}^{2} + 1)$ since $\boldsymbol{e}_{i}^{T}\boldsymbol{z}_{i}$ is a standard normal distribution. The conditional probability distribution of the transformed state given that it is higher than the constraint is then given by: \begin{align}\label{eq:bayes_single_pdfs} p_{\boldsymbol{e}_{i}^{T}\boldsymbol{z}_{i}}(\zeta | C_{i} \le \zeta) &= \frac{p_{\boldsymbol{e}_{i}^{T}\boldsymbol{z}_{i}}(\zeta) \times P(C_{i} \le \zeta)} {P(C_{i} \le \boldsymbol{e}_{i}^{T}\boldsymbol{z}_{i})} \notag \\ & = \frac{\textrm{PDF}_{\boldsymbol{e}_{i}^{T}\boldsymbol{z}_{i}}(\zeta) \times \textrm{CDF}_{C_{i}}(\zeta)}{\textrm{CDF}_{C_{i} - \boldsymbol{e}_{i}^{T}\boldsymbol{z}_{i}}(0)} \end{align} The denominator of (\ref{eq:bayes_single_pdfs}) can be thought of as a normalising factor---it is the area of the numerator and ensures that the CDF of $p_{\boldsymbol{e}_{i}^{T}\boldsymbol{z}_{i}}(\zeta | C_{i} \le \zeta)$ is bounded between 0 and 1. For states and constraints described by Gaussian distributions, $p_{\boldsymbol{e}_{i}^{T}\boldsymbol{z}_{i}}(\zeta | C_{i} \le \zeta)$ is given by: \begin{equation} \label{eq:dist_single} p_{\boldsymbol{e}_{i}^{T}\boldsymbol{z}_{i}}(\zeta | C_{i} \le \zeta) = \frac{\frac{1}{\sqrt{2\pi}}\exp\left(\frac{-\zeta^{2}}{2}\right)\frac{1}{2}\left[1 + \textrm{erf}\left( \frac{\zeta - \mu_{c,i}}{\sigma_{c,i}\sqrt{2}} \right) \right]}{\frac{1}{2}\left[1-\textrm{erf}\left(\frac{\mu_{c,i}}{\sqrt{2\left(\sigma_{c,i}^{2} + 1\right)}}\right)\right]} \end{equation} where erf(.)
is the error function, defined as: \begin{equation} \textrm{erf}(t) = \frac{2}{\sqrt{\pi}}\int\limits_{0}^{t}\exp\left(-\tau^{2}\right)\textrm{d}\tau \end{equation} Let: \begin{equation} \alpha_{i} = \frac{1}{\sqrt{2\pi}\left[1-\textrm{erf}\left(\frac{\mu_{c,i}}{\sqrt{2\left(\sigma_{c,i}^{2} + 1\right)}}\right)\right]} \end{equation} then \begin{equation} \label{eq:dist_single_simp} p_{\boldsymbol{e}_{i}^{T}\boldsymbol{z}_{i}}(\zeta | C_{i} \le \zeta) = \alpha_{i}\exp\left(-\zeta^{2}/2\right)\left[1+\textrm{erf}\left(\frac{\zeta-\mu_{c,i}}{\sigma_{c,i}\sqrt{2}} \right)\right] \end{equation} To approximate $p_{\boldsymbol{e}_{i}^{T}\boldsymbol{z}_{i}}(\zeta | C_{i} \le \zeta)$ with a Gaussian distribution, the mean and variance are calculated as follows: \begin{align} \mu_{i} &= \alpha_{i}\int\limits_{-\infty}^{\infty}\zeta \exp\left(-\zeta^{2}/2\right)\left[1+\textrm{erf}\left(\frac{\zeta-\mu_{c,i}}{\sigma_{c,i}\sqrt{2}} \right)\right] \textrm{d}\zeta \notag\\ &= \frac{2\alpha_{i}}{\sqrt{\sigma_{c,i}^2 + 1}}\exp\left(-\frac{\mu_{c,i}^{2}}{2(\sigma_{c,i}^2 + 1)}\right) \label{eq:single_mu} \end{align} \begin{equation} \label{eq:single_sigma} \begin{split} \sigma_{i}^{2} &= \alpha_{i}\int\limits_{-\infty}^{\infty}\left(\zeta - \mu_{i}\right)^{2} \exp\left(-\zeta^{2}/2\right)\left[1+\textrm{erf}\left(\frac{\zeta-\mu_{c,i}}{\sigma_{c,i}\sqrt{2}} \right)\right] \textrm{d}\zeta \\ &= \alpha_{i}\left[\sqrt{2\pi}\left(\left(1+\mu_{i}^{2}\right)\left(1-\textrm{erf}\left(\frac{\mu_{c,i}}{\sqrt{2(\sigma_{c,i}^{2}+1)}}\right)\right)\right) \right. \\ & \qquad + \left. \frac{2}{\sqrt{\sigma_{c,i}^2 + 1}}\exp\left(-\frac{\mu_{c,i}^{2}}{2(\sigma_{c,i}^2 + 1)}\right)\left(\frac{\mu_{c,i}}{\sigma_{c,i}^2 + 1} - 2\mu_{i} \right)\right] \end{split} \end{equation} The derivations of the mean and variance can be found in \ref{s:a_mean} and \ref{s:a_variance} respectively. The transformed state estimate, after the $i$th constraint has been applied, has the following mean and covariance: \begin{gather} \boldsymbol{\tilde{z}}_{i+1} = \mu_{i}\boldsymbol{e}_{i}\notag \\ \boldsymbol{\tilde{G}}_{i+1} = \boldsymbol{I}_{n} + \left(\sigma_{i}^{2}-1\right)\boldsymbol{e}_{i}\boldsymbol{e}_{i}^{T} \label{eq:mean_and_sigma} \end{gather} where $\boldsymbol{I}_{n}$ is an $n \times n$ identity matrix. Taking the inverse of the transformation in (\ref{eq:transform}) gives the mean and covariance of the state estimate after the truncation of the $i$th constraint: \begin{gather} \boldsymbol{\tilde{x}}_{i+1} = \boldsymbol{T}_{i}\boldsymbol{W}_{i}^{1/2}\boldsymbol{\rho}_{i}^{T}\boldsymbol{\tilde{z}}_{i+1}+\boldsymbol{\tilde{x}}_{i} \notag \\ \boldsymbol{\tilde{P}}_{i+1} = \boldsymbol{T}_{i}\boldsymbol{W}_{i}^{1/2}\boldsymbol{\rho}_{i}^{T}\boldsymbol{\tilde{G}}_{i+1}\boldsymbol{\rho}_{i}\boldsymbol{W}_{i}^{1/2}\boldsymbol{T}_{i}^{T} \label{eq:inverse_transform} \end{gather} This process (from (\ref{eq:transform}) to (\ref{eq:inverse_transform})) is repeated for the $s$ constraints, incrementing $i$ each time and using the constrained state estimate after constraint $i$ has been applied as the input state estimate for constraint $i+1$.
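The scalar moment computations above amount to a handful of special-function evaluations. A minimal sketch for the lower constraint (Python/SciPy; the upper-constraint case given below is analogous):

\begin{verbatim}
import numpy as np
from scipy.special import erf

def lower_truncation_moments(mu_c, sigma_c):
    # Moments of a standard normal truncated from below by a
    # soft constraint C ~ N(mu_c, sigma_c^2).
    s2 = sigma_c**2 + 1.0
    e = erf(mu_c / np.sqrt(2.0 * s2))
    g = np.exp(-mu_c**2 / (2.0 * s2)) / np.sqrt(s2)
    alpha = 1.0 / (np.sqrt(2.0 * np.pi) * (1.0 - e))
    mu = 2.0 * alpha * g
    var = alpha * (np.sqrt(2.0 * np.pi) * (1.0 + mu**2) * (1.0 - e)
                   + 2.0 * g * (mu_c / s2 - 2.0 * mu))
    return mu, var

# Hard-constraint sanity check: lower_truncation_moments(0.0, 1e-9)
# returns approximately (0.7979, 0.3634), i.e. sqrt(2/pi) and (pi-2)/pi.
\end{verbatim}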
After the $s$ constraints have been applied, the constrained state estimate is: \begin{gather} \boldsymbol{\tilde{x}} = \boldsymbol{\tilde{x}}_{s+1} \notag \\ \boldsymbol{\tilde{P}} = \boldsymbol{\tilde{P}}_{s+1} \label{eq:truncated_estimate} \end{gather} The equations for applying an upper constraint of the form: \begin{equation} \boldsymbol{e}_{i}^{T}\boldsymbol{z}_{i} \le D_{i} \end{equation} are as follows: \begin{equation} \alpha_{i} = \frac{1}{\sqrt{2\pi}\left[1+\textrm{erf}\left(\frac{\mu_{d,i}}{\sqrt{2\left(\sigma_{d,i}^{2} + 1\right)}}\right)\right]} \end{equation} \begin{equation} \mu_{i} = -\frac{2\alpha_{i}}{\sqrt{\sigma_{d,i}^2 + 1}}\exp\left(-\frac{\mu_{d,i}^{2}}{2(\sigma_{d,i}^2 + 1)}\right) \end{equation} \begin{equation} \begin{split} \sigma_{i}^{2} &= \alpha_{i}\left[\sqrt{2\pi}\left(\left(1+\mu_{i}^{2}\right)\left(1+\textrm{erf}\left(\frac{\mu_{d,i}}{\sqrt{2(\sigma_{d,i}^{2}+1)}}\right)\right)\right) \right. \\ & \qquad - \left. \frac{2}{\sqrt{\sigma_{d,i}^2 + 1}}\exp\left(-\frac{\mu_{d,i}^{2}}{2(\sigma_{d,i}^2 + 1)}\right)\left(\frac{\mu_{d,i}}{\sigma_{d,i}^2 + 1} - 2\mu_{i} \right)\right] \end{split} \end{equation} Several examples of the proposed method with a lower constraint are shown in Figure \ref{f:single_examples}. As can be seen, the Gaussian approximation is very close to the actual truncated distribution, with the approximation improving as $\sigma_{c,i}$ increases. As $\sigma_{c,i} \rightarrow \infty$, $\textrm{CDF}_{C_{i}}$ flattens towards a constant over the support of the original PDF, which means that the truncated PDF approaches the original PDF. In Figure \ref{sf:single3}, the lower constraint is higher than the original distribution, resulting in the truncated distribution moving towards the constraint. In this case, the truncated distribution is actually still below the majority of the constraint distribution. Here, the uncertainties of the original and constraint distributions are balanced against one another---the more certain one of the distributions is, the closer the truncated distribution will be to that distribution. For example, as the uncertainty of the constraint is decreased, the truncated distribution will move to the right. As the uncertainty of the constraint approaches 0, the constraint approaches a hard constraint and the majority of the truncated distribution will be above the constraint. \begin{figure} \centering \subfloat[$\mu_{c,i} = -2, \sigma_{c,i} = 0.5$]{ \includegraphics[width=0.45\textwidth]{single1}\label{sf:single1} } \hspace{0.5em} \subfloat[$\mu_{c,i} = 0, \sigma_{c,i} = 1$]{ \includegraphics[width=0.45\textwidth]{single2}\label{sf:single2} } \subfloat[$\mu_{c,i} = 3, \sigma_{c,i} = 1.5$]{ \includegraphics[width=0.45\textwidth]{single3}\label{sf:single3} } \caption{Comparison of the actual truncated distributions and Gaussian approximations of the truncated distributions for several lower constraints. In \protect\subref{sf:single3}, the lower constraint is higher than the original distribution, resulting in the truncated distribution moving towards the constraint. In this case, the Gaussian approximation is an almost perfect approximation of the truncated distribution and the two lines overlap.
} \label{f:single_examples} \end{figure} \subsection{Feedback of the truncated estimate} There is disagreement amongst authors as to whether or not the truncated state estimate should be fed back into the Kalman filter, with some suggesting using feedback \cite{Teixeira2010, Straka2012} as shown in Figure~\ref{sf:feedback_flowchart} and others stating that the truncation process should be kept independent of the unconstrained Kalman filter \cite{Simon2010b} as shown in Figure~\ref{sf:no_feedback_flowchart}. In \cite{Simon2010b}, it is argued that this feedback can lead to overconfident estimates as the information provided by the constraints is used multiple times. In reality, there are two issues to consider when deciding whether or not to use feedback. The first issue concerns the uncertainty model of the constraints---if the constraints are noisy, and if that noise is independent from one time-step to another, then the truncated estimate can be fed back into the Kalman filter. Under these conditions, the constraints are similar to independent noisy measurements. However, many physical constraints are uncertain rather than noisy---that is, the actual value of the constraint is constant and is not resampled at each time-step. Feeding the truncated estimate back into the Kalman filter in this case can result in overconfident estimates, as will be shown in Section \ref{s:results}. \begin{figure} \centering \subfloat[The truncated state estimate is fed back into the Kalman filter.]{ \includegraphics[width=0.8\textwidth]{truncation_feedback}\label{sf:feedback_flowchart} } \subfloat[The Kalman filter is run independently of the truncation, with the feedback occurring after the measurement update and before the truncation. ]{ \includegraphics[width=0.8\textwidth]{truncation_no_feedback}\label{sf:no_feedback_flowchart} } \caption{Feedback of the state estimate into the Kalman filter can occur either before or after the truncation process. Deciding which feedback approach to use depends on the uncertainty model of the constraints and the shape of the truncated distribution. } \label{f:feedback_flowcharts} \end{figure} The second issue arises when the truncated distribution is highly non-Gaussian, which is commonly the case when the uncertainty of the constraint is low in comparison to the uncertainty of the estimate. In these cases, the Gaussian approximation of the truncated distribution introduces error that can accumulate if it is fed back into the Kalman filter, leading to unrepresentative state estimates. An example of such a situation is given in \cite{Simon2010b}. Consider a scalar system with no process noise such that $x_{k+1} = x_{k}$, no measurements, and a hard constraint of $x \ge 0$. If the initial state estimate is a standard normal distribution, then the truncated distribution will have a Gaussian shape for $x \ge 0$ and be zero otherwise. Approximating this as a Gaussian distribution changes the mean from 0 to $\sqrt{2/\pi}$ and the variance from 1 to $\left(\pi-2\right)/\pi$ after the truncation has been applied once. If this were fed back into the Kalman filter, it would result in a monotonically increasing estimate mean and a monotonically decreasing estimate variance for successive applications of the truncation approach. The authors of \cite{Simon2010b} argue that this is a result of incorporating the information from the constraints multiple times. However, it is actually the Gaussian approximation of the truncated distribution that causes this behaviour.
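This behaviour is straightforward to reproduce numerically. A sketch using the standard truncated-normal moment formulas (equivalent to the hard-constraint limit $\sigma_{c,i} \rightarrow 0$ of the method above):

\begin{verbatim}
import numpy as np
from scipy.stats import norm

m, s2 = 0.0, 1.0              # initial estimate: standard normal
for k in range(5):
    s = np.sqrt(s2)
    a = -m / s                # standardised hard constraint x >= 0
    lam = norm.pdf(a) / norm.sf(a)
    m = m + s * lam           # mean of the truncated distribution
    s2 = s2 * (1.0 - lam * (lam - a))
    print(k, m, s2)
# The mean increases and the variance decreases monotonically with
# each repeated truncation of the re-approximated Gaussian.
\end{verbatim}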
If it were possible to feed the truncated distribution back into the filter without approximating it as a Gaussian distribution, then applying the constraint at the next time step would result in the exact same truncated distribution. When deciding whether or not to use feedback, the above issues should be carefully considered. Not using feedback is the conservative option---the resultant estimates will have a higher uncertainty compared to methods using feedback, but will avoid most of these issues. Provided the constraints are noisy rather than uncertain, and the noise variance is large, using feedback is valid and can yield a significant performance benefit over not using feedback. The difference between uncertain and noisy constraints is minimal when the uncertainty of the constraints is small in comparison to the uncertainty of the state estimate, and the main source of error in these cases is the approximation of the truncated distribution. One possible way of dealing with this, suggested by \cite{Simon2010b}, is to only feed back the elements of the truncated estimate where the elements of the original estimate violate the constraint. This was suggested in the context of hard constraints, however, and determining the point at which a soft constraint is violated is ambiguous and remains an open question. \section{Interval constraint}\label{s:interval} Now consider the case where there are two constraints. After transforming the state using Equations (\ref{eq:transform_start})--(\ref{eq:transform_end}), the two constraints acting on the transformed state are: \begin{equation} C_{i} \le \boldsymbol{e}_{i}^{T}\boldsymbol{z}_{i} \le D_{i} \end{equation} Using Bayes' theorem, the conditional probability distribution of the transformed state satisfying the constraints, $p_{\boldsymbol{e}_{i}^{T}\boldsymbol{z}_{i}}(\zeta | (C_{i} \le \zeta \le D_{i}))$, as a function of $\zeta$ is given by: \begin{align} p_{\boldsymbol{e}_{i}^{T}\boldsymbol{z}_{i}}(\zeta | C_{i} \le \zeta &\le D_{i}) = \frac{p_{\boldsymbol{e}_{i}^{T}\boldsymbol{z}_{i}}(\zeta) \times P(C_{i} \le \zeta \le D_{i})} {P(C_{i} \le \boldsymbol{e}_{i}^{T}\boldsymbol{z}_{i} \le D_{i})} \label{eq:bayes_interval1} \\ & = \frac{p_{\boldsymbol{e}_{i}^{T}\boldsymbol{z}_{i}}(\zeta) \times P(C_{i} \le \zeta) \times P( \zeta \le D_{i})} {P(C_{i} \le \boldsymbol{e}_{i}^{T}\boldsymbol{z}_{i} \le D_{i})} \label{eq:bayes_interval2} \end{align} Replacing $P(C_{i} \le \zeta \le D_{i})$ in (\ref{eq:bayes_interval1}) with $P(C_{i} \le \zeta) \times P( \zeta \le D_{i})$ in (\ref{eq:bayes_interval2}) is possible since $C_{i}$ and $D_{i}$ are independent random variables. Note, however, that $P(C_{i} \le \boldsymbol{e}_{i}^{T}\boldsymbol{z}_{i} \le D_{i})$ cannot be split up in this way and requires the evaluation of a multi-variate CDF, which does not have an analytical solution. This probability is a normalising factor for the numerator.
This gives the following distribution: \begin{align} p_{\boldsymbol{e}_{i}^{T}\boldsymbol{z}_{i}}(\zeta &| C_{i} \le \zeta \le D_{i}) \notag \\ & = \frac{p_{\boldsymbol{e}_{i}^{T}\boldsymbol{z}_{i}}(\zeta) \times P(C_{i} \le \zeta) \times P( \zeta \le D_{i})} {P(C_{i} \le \boldsymbol{e}_{i}^{T}\boldsymbol{z}_{i} \le D_{i})} \notag \\ & = \frac{\textrm{PDF}_{\boldsymbol{e}_{i}^{T}\boldsymbol{z}_{i}}(\zeta) \times \textrm{CDF}_{C_{i}}(\zeta) \times \textrm{CDF}_{D_{i}}(\zeta)}{P(C_{i} \le \boldsymbol{e}_{i}^{T}\boldsymbol{z}_{i} \le D_{i})} \notag \\ & = \frac{\frac{1}{\sqrt{2\pi}}\exp(-\frac{\zeta^{2}}{2})\frac{1}{2}\left[ 1 + \textrm{erf}\left( \frac{\zeta - \mu_{c,i}}{\sigma_{c,i}\sqrt{2}} \right) \right]\frac{1}{2}\left[ 1 - \textrm{erf}\left( \frac{\zeta - \mu_{d,i}}{\sigma_{d,i}\sqrt{2}} \right) \right]}{P(C_{i} \le \boldsymbol{e}_{i}^{T}\boldsymbol{z}_{i} \le D_{i})} \label{eq:dist_int} \end{align} The integrals required to calculate the mean and variance of (\ref{eq:dist_int}) contain integrals of the form: \begin{equation} \int\limits_{-\infty}^{\infty}\exp\left(-x^2\right)\textrm{erf}(a x + b)\textrm{erf}(c x + d) \textrm{d}x \end{equation} which does not have an analytical solution. The following approximation is proposed: \begin{align} &\left[ 1 + \textrm{erf}\left( \frac{\zeta - \mu_{c,i}}{\sigma_{c,i}\sqrt{2}} \right) \right]\left[ 1 - \textrm{erf}\left( \frac{\zeta - \mu_{d,i}}{\sigma_{d,i}\sqrt{2}} \right) \right] \notag \\ & \approx 2 \left[\textrm{erf}\left( \frac{\zeta - \mu_{c,i}}{\sigma_{c,i}\sqrt{2}} \right) - \textrm{erf}\left( \frac{\zeta - \mu_{d,i}}{\sigma_{d,i}\sqrt{2}} \right) \right] \label{eq:approximation} \end{align} This yields an approximation of the numerator in (\ref{eq:dist_int}) of: \begin{equation}\label{eq:approximate_pdf} \textrm{Z}(\boldsymbol{e}_{i}^{T}\boldsymbol{\tilde{z}}_{i+1}) = \frac{1}{4\sqrt{2\pi}}\exp\left(-\zeta^{2}/2\right)\left[2\left(\textrm{erf}\left(\frac{\zeta-\mu_{c,i}}{\sigma_{c,i}\sqrt{2}} \right)-\textrm{erf}\left(\frac{\zeta-\mu_{d,i}}{\sigma_{d,i}\sqrt{2}} \right)\right)\right] \end{equation} where $\textrm{Z}(\boldsymbol{e}_{i}^{T}\boldsymbol{\tilde{z}}_{i+1})$ is an unnormalised function describing the truncated distribution. This approximation relies on the condition that $\mu_{c,i} < \mu_{d,i}$, and the assumption that the constraint distributions do not significantly overlap. If these are satisfied, at least one of the erf terms will be saturated for every $\zeta$ (the first at $+1$ or the second at $-1$), giving a good approximation of the actual distribution. This is illustrated in Figure \ref{f:erf}. \begin{figure} \centering \includegraphics[width = 0.6\textwidth]{erf} \caption{Error function terms for a lower constraint with $\mu_{c,i} = -2$ and $\sigma_{c,i} = 1$ and an upper constraint with $\mu_{d,i} = 2$ and $\sigma_{d,i} = 1$. In this case, one of the error function terms is always close to 1. } \label{f:erf} \end{figure} An overlap metric, $\gamma$, is defined as: \begin{equation} \label{eq:gamma} \gamma = \frac{\mu_{d,i} - \mu_{c,i}}{\sigma_{d,i} + \sigma_{c,i}} \end{equation} and a shape metric, $\delta$, is defined as: \begin{equation} \label{eq:delta} \delta = \left| \log_{10}\left(\frac{\sigma_{c,i}}{\sigma_{d,i}}\right)\right| \end{equation} $\gamma$ is a measure of how much the probability distributions of the two constraints overlap, and $\delta$ is a measure of how different the shapes of the probability distributions of the constraints are.
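The quality of this approximation, together with the two metrics, can be checked numerically; a minimal sketch (Python/SciPy; the constraint values are those of the first example in the next figure):

\begin{verbatim}
import numpy as np
from scipy.special import erf

mu_c, sigma_c, mu_d, sigma_d = -2.0, 0.5, 2.0, 1.0
zeta = np.linspace(-8.0, 8.0, 2001)

ec = erf((zeta - mu_c) / (sigma_c * np.sqrt(2.0)))
ed = erf((zeta - mu_d) / (sigma_d * np.sqrt(2.0)))
exact = (1.0 + ec) * (1.0 - ed)
approx = 2.0 * (ec - ed)

gamma = (mu_d - mu_c) / (sigma_d + sigma_c)   # overlap metric, here 2.67
delta = abs(np.log10(sigma_c / sigma_d))      # shape metric, here 0.30
print(gamma, delta, np.max(np.abs(exact - approx)))
# For these well-separated constraints the maximum deviation is of
# order 1e-4, small relative to the peak value of 4.
\end{verbatim}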
Figure \ref{f:approx_examples} shows examples of the approximation applied to several different cases of $\gamma$ and $\delta$. As can be seen, the approximation is almost perfect in Figure \ref{sf:approx1}, degrading as $\gamma$ decreases in the other examples. \begin{figure} \centering \subfloat[$\mu_{c,i} = -2, \sigma_{c,i} = 0.5, \mu_{d,i} = 2, \sigma_{d,i} = 1$, corresponding to $\gamma = 2.67$ and $\delta = 0.30$]{ \includegraphics[width=0.45\textwidth]{approximate1}\label{sf:approx1} } \hspace{0.5em} \subfloat[$\mu_{c,i} = -3, \sigma_{c,i} = 1, \mu_{d,i} = 2, \sigma_{d,i} = 3$, corresponding to $\gamma = 1.25$ and $\delta = 0.48$]{ \includegraphics[width=0.45\textwidth]{approximate2}\label{sf:approx2} } \subfloat[$\mu_{c,i} = -1, \sigma_{c,i} = 2, \mu_{d,i} = 2, \sigma_{d,i} = 3.5$, corresponding to $\gamma = 0.55$ and $\delta = 0.24$]{ \includegraphics[width=0.45\textwidth]{approximate3}\label{sf:approx3} } \caption{Comparison of the actual and approximate functions from (\ref{eq:approximation}) for several values of $\gamma$ and $\delta$. In \protect\subref{sf:approx1}, the approximation is an almost perfect approximation of the actual distribution and the two lines overlap. } \label{f:approx_examples} \end{figure} Figure \ref{f:approx_error} shows the Kullback--Leibler (KL) divergence between the actual and approximate probability distributions for various $\delta$ and $\gamma$. Increasing $\gamma$ and decreasing $\delta$ improves the approximation. For $\gamma \ge 3$, the approximation very closely matches the actual distribution, regardless of $\delta$. \begin{figure} \centering \includegraphics[width=0.6\textwidth]{approx_error_kl} \caption{Comparison of the KL divergence between the actual and approximate probability distributions as a function of the overlap metric, $\gamma$, where the lines are constant $\delta$. For $\delta > 3$, the KL divergence is approximately the same as for $\delta = 3$. } \label{f:approx_error} \end{figure} To normalise $\textrm{Z}(\boldsymbol{e}_{i}^{T}\boldsymbol{\tilde{z}}_{i+1})$ to a PDF, the area of the function is calculated as: \begin{equation} \begin{aligned} \int\limits_{-\infty}^{\infty}\textrm{Z}&(\boldsymbol{e}_{i}^{T}\boldsymbol{\tilde{z}}_{i+1}) \textrm{d}\zeta \\ &= \int\limits_{-\infty}^{\infty}\frac{1}{4\sqrt{2\pi}}\exp\left(-\zeta^{2}/2\right)\left[2\left(\textrm{erf}\left(\frac{\zeta-\mu_{c,i}}{\sigma_{c,i}\sqrt{2}} \right)-\textrm{erf}\left(\frac{\zeta-\mu_{d,i}}{\sigma_{d,i}\sqrt{2}} \right)\right)\right] \textrm{d}\zeta \\ &= \frac{1}{2}\left[\textrm{erf}\left(\frac{\mu_{d,i}}{\sqrt{2\left(\sigma_{d,i}^{2} + 1\right)}}\right) - \textrm{erf}\left(\frac{\mu_{c,i}}{\sqrt{2\left(\sigma_{c,i}^{2} + 1\right)}}\right)\right] \end{aligned} \end{equation} The mean of the PDF is then given by: \begin{align} \mu_{i} &= \alpha_{i}\int\limits_{-\infty}^{\infty}\zeta \exp\left(-\zeta^{2}/2\right)\left[\left(\textrm{erf}\left(\frac{\zeta-\mu_{c,i}}{\sigma_{c,i}\sqrt{2}} \right)-\textrm{erf}\left(\frac{\zeta-\mu_{d,i}}{\sigma_{d,i}\sqrt{2}} \right)\right)\right] \textrm{d}\zeta \notag\\ &= 2\alpha_{i}\left(\frac{1}{\sqrt{\sigma_{c,i}^2 + 1}}\exp\left(-\frac{\mu_{c,i}^{2}}{2(\sigma_{c,i}^2 + 1)}\right) \right. \notag \\ &\qquad \left.
- \frac{1}{\sqrt{\sigma_{d,i}^2 + 1}}\exp\left(-\frac{\mu_{d,i}^{2}}{2(\sigma_{d,i}^2 + 1)}\right)\right) \end{align} where \begin{equation} \alpha_{i} = \frac{1}{\sqrt{2\pi}\left[\textrm{erf}\left(\frac{\mu_{d,i}}{\sqrt{2\left(\sigma_{d,i}^{2} + 1\right)}}\right) - \textrm{erf}\left(\frac{\mu_{c,i}}{\sqrt{2\left(\sigma_{c,i}^{2} + 1\right)}}\right)\right]} \end{equation} The variance is given by: \begin{equation} \begin{aligned} \sigma_{i}^{2} = \alpha_{i}\int\limits_{-\infty}^{\infty}\left(\zeta - \mu_{i}\right)^{2} \exp\left(-\zeta^{2}/2\right)\left[\left(\textrm{erf}\left(\frac{\zeta-\mu_{c,i}}{\sigma_{c,i}\sqrt{2}} \right)-\textrm{erf}\left(\frac{\zeta-\mu_{d,i}}{\sigma_{d,i}\sqrt{2}} \right)\right)\right] \textrm{d}\zeta \\ = \alpha_{i}\left[\sqrt{2\pi}\left(\left(1+\mu_{i}^{2}\right)\left(\textrm{erf}\left(\frac{\mu_{d,i}}{\sqrt{2(\sigma_{d,i}^{2}+1)}}\right)-\textrm{erf}\left(\frac{\mu_{c,i}}{\sqrt{2(\sigma_{c,i}^{2}+1)}}\right)\right)\right) \right. \\ \qquad + \left. \frac{2}{\sqrt{\sigma_{c,i}^2 + 1}}\exp\left(-\frac{\mu_{c,i}^{2}}{2(\sigma_{c,i}^2 + 1)}\right)\left(\frac{\mu_{c,i}}{\sigma_{c,i}^2 + 1} - 2\mu_{i} \right) \right. \\ \qquad - \left. \frac{2}{\sqrt{\sigma_{d,i}^2 + 1}}\exp\left({-\frac{\mu_{d,i}^{2}}{2(\sigma_{d,i}^2 + 1)}}\right)\left(\frac{\mu_{d,i}}{\sigma_{d,i}^2 + 1} - 2\mu_{i} \right)\right] \end{aligned} \end{equation} The derivations of these integrals are not provided, but can be easily derived using the solutions provided in \ref{s:a_mean} and \ref{s:a_variance} for the one-sided constraint. The truncated state estimate is then obtained by applying Equations (\ref{eq:mean_and_sigma}) and (\ref{eq:inverse_transform}). This process is repeated for the $s$ constraints, incrementing $i$ each time. Several examples of the proposed method for interval constraints are shown in Figure \ref{f:interval_examples}. The Gaussian approximation method produces distributions that are very similar to the actual truncated distributions. Figure \ref{sf:interval3} shows an example where soft and hard constraints are combined---the hard constraint has been modelled as a soft constraint with a very small standard deviation. \begin{figure} \centering \subfloat[$\mu_{c,i} = -2, \sigma_{c,i} = 0.5, \mu_{d,i} = 2, \sigma_{d,i} = 1$]{ \includegraphics[width=0.45\textwidth]{interval1}\label{sf:interval1} } \hspace{0.5em} \subfloat[$\mu_{c,i} = 1, \sigma_{c,i} = 1, \mu_{d,i} = 3, \sigma_{d,i} = 1$]{ \includegraphics[width=0.45\textwidth]{interval2}\label{sf:interval2} } \subfloat[$\mu_{c,i}=-2, \sigma_{c,i}=0.001, \mu_{d,i}=-1, \sigma_{d,i}=0.5$]{ \includegraphics[width=0.45\textwidth]{interval3}\label{sf:interval3} } \caption{Comparison of the actual and approximate distributions for several values of $\gamma$ and $\delta$. In \protect\subref{sf:interval2}, the Gaussian approximation is an almost perfect approximation of the truncated distribution and the two lines overlap. } \label{f:interval_examples} \end{figure} \section{Results}\label{s:results} Consider a robot moving along a corridor, as shown in Figure \ref{f:robotexample}. The corridor has a wall 10m in front of the initial position of the robot, and discrete position sensors placed at 1m intervals. The position sensors can detect which side of the set-point of the sensor the robot is on, and have uncertainty on the set-point. The robot has an initial velocity of 10cm/s and accelerates at 1cm/s$^{2}$ for 20s, then decelerates at 1cm/s$^{2}$ for 20s, before again accelerating at 1cm/s$^{2}$ until it reaches the far wall. 
Two types of robots were considered---one with standard deviations on the acceleration and initial velocity of $\sigma_{a} = 1\textrm{cm/s}^{2}$ and $\sigma_{v} = 3 \textrm{cm/s}$ respectively (Robot A), and a less uncertain one with standard deviations on the acceleration and initial velocity of $\sigma_{a} = 0.5\textrm{cm/s}^{2}$ and $\sigma_{v} = 1.5 \textrm{cm/s}$ respectively (Robot B). The standard deviation of the set-point of the sensors was also varied, with standard deviations (in cm) of $\sigma_{s} \in \{0, 5, 10, 15, 20, 25, 30\}$ tested. The position of each robot was tracked using the following Kalman filter run at 10Hz: \begin{figure} \centering \includegraphics[width = 0.5\textwidth]{robotexample.pdf} \caption{The robot (circle) is moving along the corridor. Positioned at 1m intervals are sensors with uncertain positions that can detect which side of the sensor the robot is on. These sensor readings are used as both measurements (when the sensor reading changes) and constraints. In the image, the position estimate of the robot would be constrained between the sensors at 3m and 4m (shaded area). The uncertainty of the constraints is shown by the shading. } \label{f:robotexample} \end{figure} \begin{equation} \boldsymbol{x} = \begin{bmatrix} x \\ \dot{x} \end{bmatrix} \quad \boldsymbol{u} = \begin{bmatrix} \ddot{x} \end{bmatrix}\quad \boldsymbol{F} = \begin{bmatrix} 1 & \Delta t\\ 0 & 1 \end{bmatrix} \quad \boldsymbol{G} = \begin{bmatrix} \frac{\Delta t^{2}}{2}\\ \Delta t \end{bmatrix} \end{equation} The covariance of the process noise was given by: \begin{equation} \boldsymbol{Q} = \boldsymbol{G}\boldsymbol{G}^{T}\sigma_{a}^{2} = \begin{bmatrix} \frac{\Delta t^{4}}{4} & \frac{\Delta t^{3}}{2}\\ \frac{\Delta t^{3}}{2} & \Delta t^{2} \end{bmatrix} \sigma_{a}^{2} \end{equation} The Kalman filter was initialised with: \begin{equation} \hat{\boldsymbol{x}} = \begin{bmatrix} 0\\ 0.1 \end{bmatrix} \quad \boldsymbol{P} = \begin{bmatrix} 0 & 0 \\ 0 & \sigma_{v}^{2} \end{bmatrix} \end{equation} A position sensor changing its reading was incorporated as a noisy measurement of the position: \begin{equation} \boldsymbol{H} = \begin{bmatrix} 1 & 0 \end{bmatrix} \quad \boldsymbol{R} = \begin{bmatrix} \sigma_{s}^{2} \end{bmatrix} \end{equation} At each time-step, position sensors whose reading did not change were not incorporated into the Kalman filter. The aim of the constrained Kalman filtering approach here was to use the absence of measurements to improve the state estimate. While the robot was in between sensors, the sensors were treated as constraints on the state of the system. The truncation method proposed in this paper (which will be referred to as the soft-constrained Kalman filter) was compared with an unconstrained Kalman filter, and a constrained Kalman filter using the truncation method that ignored the uncertainty of the constraints and treated them as hard constraints (referred to as the hard-constrained Kalman filter). Each combination of robot, sensor uncertainty, and Kalman filter method was tested 1000 times. The time-average Root Mean Square Error (RMSE) and percentage improvement between methods for Robot A are shown in Figure \ref{f:results1}, and the results for Robot B are shown in Figure \ref{f:results2}. For Robot A, both the hard-constrained and soft-constrained methods provided a significant benefit over the unconstrained Kalman filter, with an improvement of over 40\% in tracking performance when the sensors have no uncertainty. 
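For reference, the filter configuration just described can be written down directly; a minimal sketch (Python/NumPy; Robot A's noise parameters in SI units, with a sensor set-point standard deviation of 15 cm as one of the tested values):

\begin{verbatim}
import numpy as np

dt = 0.1                                       # 10 Hz filter
sigma_a, sigma_v, sigma_s = 0.01, 0.03, 0.15   # m/s^2, m/s, m

F = np.array([[1.0, dt], [0.0, 1.0]])
G = np.array([[dt**2 / 2.0], [dt]])
Q = G @ G.T * sigma_a**2
H = np.array([[1.0, 0.0]])
R = np.array([[sigma_s**2]])

x_hat = np.array([0.0, 0.1])        # initial position (m) and velocity (m/s)
P = np.diag([0.0, sigma_v**2])
\end{verbatim}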
As the uncertainty of the sensors was increased, the soft-constrained method slightly outperformed the hard-constrained method. The process noise for Robot B was significantly less than that of Robot A. As a result, the uncertainty of the sensors played a larger role in determining the performance of the methods. As can be seen in Figure \ref{f:results2}, the hard-constrained Kalman filter was significantly outperformed by the unconstrained Kalman filter once the sensor uncertainty was above 10cm. In these cases, the estimate produced by the hard-constrained Kalman filter was overconfident, and the proposed method outperformed the hard-constrained Kalman filter by over 17\%. An example of the overconfident estimates produced by the hard-constrained method is shown in Figure \ref{f:simulation_example}. The proposed method strikes a balance between the high uncertainty of the unconstrained Kalman filter and the overconfident estimates of the hard-constrained Kalman filter. \begin{figure} \centering \subfloat[RMSE]{ \includegraphics[width=0.6\textwidth]{kalman_rmse_uncertain}\label{sf:results1_rmse} } \subfloat[Percentage improvement between methods]{ \includegraphics[width=0.6\textwidth]{kalman_percent_uncertain}\label{sf:results1_percent} } \caption{RMSE and percentage improvement between methods for Robot A as the sensor uncertainty is varied. The soft-constrained approach is equal to or better than the unconstrained and hard-constrained approaches in all cases. } \label{f:results1} \end{figure} \begin{figure} \centering \subfloat[RMSE]{ \includegraphics[width=0.6\textwidth]{kalman_rmse_certain}\label{sf:results2_rmse} } \subfloat[Percentage improvement between methods]{ \includegraphics[width=0.6\textwidth]{kalman_percent_certain}\label{sf:results2_percent} } \caption{RMSE and percentage improvement between methods for Robot B as the sensor uncertainty is varied. The soft-constrained approach is equal to or better than the unconstrained and hard-constrained approaches in all cases. The hard-constrained approach is outperformed by the unconstrained approach when the sensor uncertainty is large. } \label{f:results2} \end{figure} \begin{figure} \centering \subfloat[Unconstrained Kalman filter]{ \includegraphics[width=0.45\textwidth]{kalman_example_unc}\label{sf:simulation_unc} } \subfloat[Hard-constrained Kalman filter]{ \includegraphics[width=0.45\textwidth]{kalman_example_hard}\label{sf:simulation_hard} } \subfloat[Soft-constrained Kalman filter]{ \includegraphics[width=0.45\textwidth]{kalman_example_soft}\label{sf:simulation_soft} } \caption{Comparison of the actual and estimated positions with uncertainty for Robot B with sensor uncertainty of 15cm. This illustrates a case where the hard-constrained approach is outperformed by the unconstrained and soft-constrained approaches. The unconstrained estimate has large uncertainty, while the hard-constrained estimate is overconfident. The soft-constrained approach has a lower uncertainty compared to the unconstrained approach without producing the overconfident estimates of the hard-constrained method. } \label{f:simulation_example} \end{figure} As discussed at the end of Section \ref{s:constrained}, under certain conditions the truncated state distribution can be fed back into the Kalman filter. For the scenario considered, the discrete position sensors are uncertain rather than noisy, and thus using feedback is not valid.
Figure \ref{sf:feedback} shows the effects of using feedback in the example scenario, with the result that the estimate is more confident. However, in some cases this estimate can become overconfident and fail to accurately represent the actual state. The method without feedback (Figure \ref{sf:no_feedback}) has less confident estimates, but they are not overconfident. \begin{figure} \centering \subfloat[Soft-constrained Kalman filter without feedback of the truncated estimate into the Kalman filter]{ \includegraphics[width=0.6\textwidth]{kalman_example_soft_no_feedback}\label{sf:no_feedback} } \subfloat[Soft-constrained Kalman filter with feedback of the truncated estimate into the Kalman filter]{ \includegraphics[width=0.6\textwidth]{kalman_example_soft_feedback}\label{sf:feedback} } \caption{Comparison of the actual and estimated positions with uncertainty for Robot A with sensor uncertainty of 15cm. In \protect\subref{sf:no_feedback}, the truncated estimate is not fed back into the Kalman filter, while in \protect\subref{sf:feedback}, the truncated estimate is used by the Kalman filter and the resultant estimate is overconfident. } \label{f:feedback} \end{figure} \section{Conclusion}\label{s:conc} This paper developed an analytical method of truncating an inequality constrained Gaussian distributed variable where the constraints themselves are described by Gaussian distributions. A key aspect of the approach was the use of moment-based Gaussian approximations of the truncated distribution. This truncation method was applied to the constrained Kalman filtering problem where it was shown to outperform unconstrained Kalman filtering and the existing constrained Kalman filter using hard constraints in a simulation example. A key benefit of the developed method compared to hard-constrained Kalman filters is that it is not overconfident near the uncertain constraints. It is an analytical version of existing numerical integration methods, thus providing a computational benefit over the existing numerical methods. \section*{Acknowledgements} This work was supported by the Rio Tinto Centre for Mine Automation and the Australian Centre for Field Robotics, University of Sydney, Australia. \section{Introduction} Gaussian distributions are widely used to represent the state of a system in many problems ranging from state estimation \cite{Simon2006b} to scheduling \cite{Palmer2013,Palmer2014a}. In practice, the state vectors in many systems are known to satisfy inequality constraints. Examples of state-constrained systems include health monitoring \cite{Simon2006}, vision systems \cite{Shimada1998}, robotics \cite{Boccadoro2010}, binary sensor networks \cite{Manes2013}, and object tracking \cite{Romero-cano2015}. This paper deals specifically with systems that are subject to inequality constraints where the constraints themselves have uncertainty described by Gaussian distributions. Constraints described by Gaussian distributions can arise from many sources in state estimation problems including discrete sensors, such as position or level switches, that have uncertainty on their activation point, obstacles whose positions are uncertain, and other physical and model-derived bounds such as maximum fuel levels based on historical fuel burn rates. Constrained Gaussian distributed variables also appear in scheduling applications where the distribution describing the time at which an event is predicted to occur is constrained by the time distributions of other events. 
Hard inequality constraints are well studied \cite{Simon2006b}, where the main approaches are estimate projection \cite{Simon2006}, gain projection \cite{Gupta2007}, and Probability Density Function (PDF) truncation \cite{Simon2010b}. Estimate and gain projection approaches incorporate the constraints into the derivation of the Kalman filter, resulting in a constrained optimisation problem that can be solved using quadratic programming, least squares approaches, and others \cite{Simon2006b, Simon2010}. Truncation methods, on the other hand, are applied directly to the PDF resulting from a Kalman filter, as outlined in Figure \ref{f:truncation}. This approach truncates the PDF at the constraints and calculates the mean and covariance of the truncated PDF, which become the constrained state estimate and its covariance. The PDF truncation approach was shown in \cite{Simon2010b} to, in general, outperform the estimate projection method. The truncation approach has been applied to probabilistic collision checking for robots \cite{Patil2012}, and has been extended to non-linear systems \cite{Teixeira2010,Straka2012}. \begin{figure} \centering \includegraphics[width = 0.8\textwidth]{truncation.pdf} \caption{The Kalman filter is run independently of the truncation method, with the truncation being applied to the state estimate that is the output of the Kalman filter. The prediction step of the Kalman filter results in a probability distribution describing the state, $x$, conditioned on the system model, $M$. The measurement update step further conditions the state estimate on the observations, $O$. Finally, the truncation step conditions the estimate on the constraints acting on the state, $C$. } \label{f:truncation} \end{figure} Soft constraints correspond to uncertain or noisy constraints, and are less studied than hard constraints. Soft equality constraints are typically incorporated as noisy measurements \cite{Simon2006b,Helor1993}. However, soft inequality constraints are significantly more difficult to deal with, and numerical filters such as a Particle Filter (PF) are typically used for these problems \cite{Shao2010}. Several numerical methods have been examined for incorporating soft constraints into the Kalman filter. A numerical PDF truncation method was used in \cite{Boccadoro2010} for robot localisation using Radio Frequency IDentification (RFID) tags, where the noise on the inequality constraints was highly non-Gaussian. Compared with a PF approach, the numerical PDF truncation method was 2 to 3 orders of magnitude faster while, in general, providing similar results. A similar RFID problem was examined in \cite{Manes2013} where aspects of the Unscented Kalman Filter (UKF) and PF were combined---the prediction step used the standard UKF step, while the correction step was modified to weight the sigma-points of the UKF in a similar manner to the weighting process in a PF. It was shown to outperform a PF as well as the Quantised Extended Kalman Filter (QEKF) presented in \cite{DiGiampaolo2012}. The literature on soft inequality constraints has focused on constraints with non-Gaussian distributions, where the constrained state estimates are, by necessity, calculated using numerical methods. The main contribution of this paper is an analytical method for PDF truncation with soft constraints where the soft constraints are described by Gaussian distributions.
This reduces the computational requirement compared to numerical methods, and it is shown to provide superior estimation performance compared to unconstrained and hard-constrained state estimation methods. The truncation approach presented in this paper is not limited to Kalman filters and can be applied to any constrained system using Gaussian distributions to represent the state and constraints. The rest of this paper is structured as follows: Section \ref{s:probdef} introduces the constrained Kalman filtering problem, Section \ref{s:transform} shows how the state and constraints can be transformed such that each state has only one constraint acting on it, Section \ref{s:constrained} presents the truncation method for a one-sided constraint, and Section \ref{s:interval} extends this to an interval constraint. The performance of the methods is evaluated in Section \ref{s:results}, and the paper is concluded in Section \ref{s:conc}. \ref{s:a_mean} and \ref{s:a_variance} provide in-depth derivations of the integrals used in this paper. \section{Problem definition}\label{s:probdef} This paper adapts the notation used in \cite{Simon2010b}. A discrete linear time-invariant system is described by: \begin{gather} \boldsymbol{x}\left(k\right) = \boldsymbol{Fx}\left(k-1\right) + \boldsymbol{Gu}\left(k\right) + \boldsymbol{w}\left(k\right) \notag \\ \boldsymbol{y}\left(k\right) = \boldsymbol{Hx}\left(k\right) + \boldsymbol{v}\left(k\right) \end{gather} where $k$ is the time index, $\boldsymbol{x}$ is the state vector with $n$ states, $\boldsymbol{u}$ is the vector of known control inputs, and $\boldsymbol{y}$ is the vector of measurements. The vectors $\boldsymbol{w}$ and $\boldsymbol{v}$ contain the process and measurement noise respectively. The process noise, $\boldsymbol{w}$, is assumed to be zero mean Gaussian white noise with a covariance matrix of $\boldsymbol{Q}$. The measurement noise, $\boldsymbol{v}$, is similarly assumed to be zero mean Gaussian white noise with a covariance matrix of $\boldsymbol{R}$. The noises at each time-step are assumed to be independent. For the given system, the Kalman filter prediction equations are \cite{Faragher2012}: \begin{gather} \boldsymbol{\hat{x}}(k|k-1) = \boldsymbol{F\hat{x}}(k-1|k-1) + \boldsymbol{Gu}(k-1) \notag \\ \boldsymbol{P}(k|k-1) = \boldsymbol{FP}(k-1|k-1)\boldsymbol{F}^{T} + \boldsymbol{Q} \end{gather} and the measurement update equations are: \begin{gather} \boldsymbol{K}= \boldsymbol{P}(k|k-1) \boldsymbol{H}^{T} \left( \boldsymbol{HP}(k|k-1)\boldsymbol{H}^{T} + \boldsymbol{R} \right)^{-1} \notag \\ \boldsymbol{\hat{x}}(k|k) = \boldsymbol{\hat{x}}(k|k-1) + \boldsymbol{K} \left( \boldsymbol{y}(k) - \boldsymbol{H\hat{x}}(k|k-1) \right) \\ \boldsymbol{P}(k|k) = \boldsymbol{P}(k|k-1) - \boldsymbol{K}\boldsymbol{H}\boldsymbol{P}(k|k-1) \notag \end{gather} where $\boldsymbol{\hat{x}}(k|k)$ is the state estimate, and $\boldsymbol{P}(k|k)$ is the covariance of the state estimate. The state estimate is initialised with $\boldsymbol{\hat{x}}(0) = E[\boldsymbol{x}(0)]$, where $E[.]$ is the expectation operator. The covariance matrix is initialised with $\boldsymbol{P}(0) = E[(\boldsymbol{x}(0) - \boldsymbol{\hat{x}}(0))(\boldsymbol{x}(0) - \boldsymbol{\hat{x}}(0))^{T}]$. 
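As a point of reference, these prediction and update steps translate directly into code. The following is a minimal Python sketch using NumPy; the function names are illustrative rather than taken from any particular library: \begin{verbatim}
import numpy as np

def kf_predict(x, P, F, G, u, Q):
    """Prediction step: propagate the state estimate and its covariance."""
    x = F @ x + G @ u
    P = F @ P @ F.T + Q
    return x, P

def kf_update(x, P, H, y, R):
    """Measurement update step: correct the estimate with measurement y."""
    S = H @ P @ H.T + R                 # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    x = x + K @ (y - H @ x)
    P = P - K @ H @ P
    return x, P
\end{verbatim} The truncation described in the following sections is applied to the pair $(\boldsymbol{\hat{x}}, \boldsymbol{P})$ returned by the update step.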
Now consider the following $s$ linearly independent constraints on the system: \begin{equation}\label{eq:constraint_def} A_{m}(k) \le \boldsymbol{\phi}_{m}^{T}(k)\boldsymbol{x}(k) \le B_{m}(k) \qquad m=1,...,s \end{equation} where the constraints are uncertain and normally distributed: \begin{equation} A_{m}(k) \sim \mathcal{N}(\mu_{a,m},\sigma_{a,m}^{2}) \qquad B_{m}(k) \sim \mathcal{N}(\mu_{b,m},\sigma_{b,m}^{2}) \end{equation} Equation (\ref{eq:constraint_def}) describes a two-sided constraint on the linear function of the state described by $\boldsymbol{\phi}_{m}^{T}(k)\boldsymbol{x}(k)$. One sided constraints can be represented by setting $\mu_{a,m} = -\infty$, or $\mu_{b,m} = \infty$, and hard constraints can be implemented by setting $\sigma_{a,m} \approx 0$ or $\sigma_{b,m} \approx 0$ as required. Given an estimate $\boldsymbol{\hat{x}}(k)$ with covariance $\boldsymbol{P}(k)$ at time $k$, the problem is to truncate the Gaussian PDF $\mathcal{N}(\boldsymbol{\hat{x}}(k),\boldsymbol{P}(k))$ using the $s$ constraints described above, and then find the mean $\boldsymbol{\tilde{x}}(k)$ and covariance $\boldsymbol{\tilde{P}}(k)$ of the truncated PDF. The calculated mean and covariance represent the constrained estimate of the state. \section{Transforming the state vector and constraints}\label{s:transform} To apply the constraints via the truncation method, the state vector must be transformed so that the constraints are decoupled. This will result in $s$ transformed constraints that each involve only one element of the transformed state, allowing the constraints to be enforced individually on each element of the transformed state. It should be noted that the order in which constraints are applied can change the final state estimate. However, if the initial constraints are decoupled, the order of constraint application does not change the result \cite{Simon2010b}. The transformation process is outlined in \cite{Simon2006b} and \cite{Simon2010b}, and is summarised here in equations (\ref{eq:transform_start})--(\ref{eq:transform_end}) and (\ref{eq:mean_and_sigma})--(\ref{eq:truncated_estimate}). For ease of notation, the $(k)$ after each variable will be dropped. Let the vector $\boldsymbol{\tilde{x}}_{i}$ be the truncated state estimate, and the matrix $\boldsymbol{\tilde{P}}_{i}$ be the covariance of $\boldsymbol{\tilde{x}}_{i}$, after the first $i-1$ constraints have been enforced. To initialise the process: \begin{gather} i = 1 \quad \boldsymbol{\tilde{x}}_{i} = \boldsymbol{\hat{x}} \quad \boldsymbol{\tilde{P}}_{i} = \boldsymbol{{P}} \label{eq:transform_start} \end{gather} The transformed state vector is given by: \begin{equation} \label{eq:transform} \boldsymbol{z}_{i} = \boldsymbol{\rho}_{i}\boldsymbol{W}_{i}^{-1/2}\boldsymbol{T}_{i}^{T}(\boldsymbol{x}-\boldsymbol{\tilde{x}}_{i}) \end{equation} where the matrices $\boldsymbol{T}_{i}$ and $\boldsymbol{W}_{i}$ are derived from the Jordan canonical decomposition of $\boldsymbol{\tilde{P}}_{i}$: \begin{equation} \boldsymbol{T}_{i}\boldsymbol{W}_{i}\boldsymbol{T}_{i}^{T} = \boldsymbol{\tilde{P}}_{i} \end{equation} $\boldsymbol{T}_{i}$ is an orthogonal matrix, and $\boldsymbol{W}_{i}$ is a diagonal matrix. 
The matrix $\boldsymbol{\rho}_{i}$ is derived by the Gram-Schmidt orthogonalisation \cite{Moon2000} which finds the orthogonal $\boldsymbol{\rho}_{i}$ that satisfies: \begin{equation} \boldsymbol{\rho}_{i}\boldsymbol{W}_{i}^{1/2}\boldsymbol{T}_{i}^{T}\boldsymbol{\phi}_{i} = \left[(\boldsymbol{\phi}_{i}^{T}\boldsymbol{\tilde{P}}_{i}\boldsymbol{\phi}_{i})^{1/2} \quad 0 \quad ... \quad 0 \right]^{T} \end{equation} Now only one element of $\boldsymbol{z}_{i}$ is constrained, and the states in the transformed state vector $\boldsymbol{z}_{i}$ are independent standard normal distributions. Let $\boldsymbol{e}_{i}$ be the $i$th column of an $n \times n$ identity matrix. Transforming the constraints results in: \begin{equation} C_{i} \le \boldsymbol{e}_{i}^{T}\boldsymbol{z}_{i} \le D_{i} \end{equation} where \begin{gather} C_{i} \sim \mathcal{N}(\mu_{c,i},\sigma_{c,i}^{2}) \notag \\ \mu_{c,i} = \frac{\mu_{a,i} - \boldsymbol{\phi}_{i}^{T}\boldsymbol{\tilde{x}}_{i}}{\sqrt{\boldsymbol{\phi}_{i}^{T}\boldsymbol{\tilde{P}}_{i}\boldsymbol{\phi}_{i}}} \quad \sigma_{c,i} = \frac{\sigma_{a,i}}{\sqrt{\boldsymbol{\phi}_{i}^{T}\boldsymbol{\tilde{P}}_{i}\boldsymbol{\phi}_{i}}} \end{gather} and \begin{gather} D_{i} \sim \mathcal{N}(\mu_{d,i},\sigma_{d,i}^{2}) \notag \\ \mu_{d,i} = \frac{\mu_{b,i} - \boldsymbol{\phi}_{i}^{T}\boldsymbol{\tilde{x}}_{i}}{\sqrt{\boldsymbol{\phi}_{i}^{T}\boldsymbol{\tilde{P}}_{i}\boldsymbol{\phi}_{i}}} \quad \sigma_{d,i} = \frac{\sigma_{b,i}}{\sqrt{\boldsymbol{\phi}_{i}^{T}\boldsymbol{\tilde{P}}_{i}\boldsymbol{\phi}_{i}}} \label{eq:transform_end} \end{gather} The equations for calculating the standard deviation of each constraint are not present in \cite{Simon2006b,Simon2010b}, but they are a trivial extension from the equations provided for calculating the mean. \section{One-sided constraint}\label{s:constrained} First, consider the case where there is only one constraint on the transformed state, in this case a lower constraint: \begin{equation} C_{i} \le \boldsymbol{e}_{i}^{T}\boldsymbol{z}_{i} \label{eq:singleconstraint} \end{equation} Applying a lower constraint to the transformed state is equivalent to finding the conditional probability distribution of the transformed state given that it is higher than the constraint. Using Bayes' theorem, the conditional probability distribution, $p_{\boldsymbol{e}_{i}^{T}\boldsymbol{z}_{i}}(\zeta | C_{i} \le \zeta)$, as a function of $\zeta$ is given by: \begin{equation} \label{eq:bayes_single} p_{\boldsymbol{e}_{i}^{T}\boldsymbol{z}_{i}}(\zeta | C_{i} \le \zeta) = \frac{p_{\boldsymbol{e}_{i}^{T}\boldsymbol{z}_{i}}(\zeta) \times P(C_{i} \le \zeta)} {P(C_{i} \le \boldsymbol{e}_{i}^{T}\boldsymbol{z}_{i})} \end{equation} where $p_{\boldsymbol{e}_{i}^{T}\boldsymbol{z}_{i}}(\zeta)$ is the PDF of $\boldsymbol{e}_{i}^{T}\boldsymbol{z}_{i}$, $P(C_{i} \le \zeta)$ is the probability that a point $\zeta$ is greater than the constraint, and $P(C_{i} \le \boldsymbol{e}_{i}^{T}\boldsymbol{z}_{i})$ is the probability that the transformed state is greater than the constraint. $P(C_{i} \le \zeta)$ is given by: \begin{align} P(C_{i} \le \zeta) &= \int\limits_{-\infty}^{\zeta} \textrm{PDF}_{C_{i}}(c) \; \textrm{d}c \notag \\ & = \textrm{CDF}_{C_{i}}(\zeta) \end{align} where $\textrm{PDF}_{C_{i}}(c)$ is the PDF of the constraint $C_{i}$ evaluated at $c$, and $\textrm{CDF}_{C_{i}}(\zeta)$ is the Cumulative Distribution Function (CDF) of $C_{i}$ evaluated at $\zeta$. 
$P(C_{i} \le \boldsymbol{e}_{i}^{T}\boldsymbol{z}_{i})$ is given by: \begin{align} P(C_{i} \le \boldsymbol{e}_{i}^{T}\boldsymbol{z}_{i}) & = P(C_{i} - \boldsymbol{e}_{i}^{T}\boldsymbol{z}_{i} \le 0) \notag \\ & = \int\limits_{-\infty}^{0}\textrm{PDF}_{C_{i} - \boldsymbol{e}_{i}^{T}\boldsymbol{z}_{i}}(\zeta) \; \textrm{d}\zeta \notag \\ & = \textrm{CDF}_{C_{i} - \boldsymbol{e}_{i}^{T}\boldsymbol{z}_{i}}(0) \end{align} where $C_{i} - \boldsymbol{e}_{i}^{T}\boldsymbol{z}_{i} \sim \mathcal{N}(\mu_{c,i},\sigma_{c,i}^{2} + 1)$ since $\boldsymbol{e}_{i}^{T}\boldsymbol{z}_{i}$ is a standard normal distribution. The conditional probability distribution of the transformed state given that it is higher than the constraint is then given by: \begin{align}\label{eq:bayes_single_pdfs} p_{\boldsymbol{e}_{i}^{T}\boldsymbol{z}_{i}}(\zeta | C_{i} \le \zeta) &= \frac{p_{\boldsymbol{e}_{i}^{T}\boldsymbol{z}_{i}}(\zeta) \times P(C_{i} \le \zeta)} {P(C_{i} \le \boldsymbol{e}_{i}^{T}\boldsymbol{z}_{i})} \notag \\ & = \frac{\textrm{PDF}_{\boldsymbol{e}_{i}^{T}\boldsymbol{z}_{i}}(\zeta) \times \textrm{CDF}_{C_{i}}(\zeta)}{\textrm{CDF}_{C_{i} - \boldsymbol{e}_{i}^{T}\boldsymbol{z}_{i}}(0)} \end{align} The denominator of (\ref{eq:bayes_single_pdfs}) can be thought of as a normalising factor---it is the area of the numerator and ensures that the CDF of $p_{\boldsymbol{e}_{i}^{T}\boldsymbol{z}_{i}}(\zeta | C_{i} \le \zeta)$ is bounded between 0 and 1. For states and constraints described by Gaussian distributions, $p_{\boldsymbol{e}_{i}^{T}\boldsymbol{z}_{i}}(\zeta | C_{i} \le \zeta)$ is given by: \begin{equation} \label{eq:dist_single} p_{\boldsymbol{e}_{i}^{T}\boldsymbol{z}_{i}}(\zeta | C_{i} \le \zeta) = \frac{\frac{1}{\sqrt{2\pi}}\exp(\frac{-\zeta^{2}}{2})\frac{1}{2}\left[1 + \textrm{erf}\left( \frac{\zeta - \mu_{c,i}}{\sigma_{c,i}\sqrt{2}} \right) \right]}{\frac{1}{2}\left[1-\textrm{erf}\left(\frac{\mu_{c,i}}{\sqrt{2\left(\sigma_{c,i}^{2} + 1\right)}}\right)\right]} \end{equation} where erf(.)
is the error function, defined as: \begin{equation} \textrm{erf}(t) = \frac{2}{\sqrt{\pi}}\int\limits_{0}^{t}\exp\left(-\tau^{2}\right)\textrm{d}\tau \end{equation} Let: \begin{equation} \alpha_{i} = \frac{1}{\sqrt{2\pi}\left[1-\textrm{erf}\left(\frac{\mu_{c,i}}{\sqrt{2\left(\sigma_{c,i}^{2} + 1\right)}}\right)\right]} \end{equation} then \begin{equation} \label{eq:dist_single_simp} p_{\boldsymbol{e}_{i}^{T}\boldsymbol{z}_{i}}(\zeta | C_{i} \le \zeta) = \alpha_{i}\exp\left(-\zeta^{2}/2\right)\left[1+\textrm{erf}\left(\frac{\zeta-\mu_{c,i}}{\sigma_{c,i}\sqrt{2}} \right)\right] \end{equation} To approximate $p_{\boldsymbol{e}_{i}^{T}\boldsymbol{z}_{i}}(\zeta | C_{i} \le \zeta)$ with a Gaussian distribution, the mean and variance are calculated as follows: \begin{align} \mu_{i} &= \alpha_{i}\int\limits_{-\infty}^{\infty}\zeta \exp\left(-\zeta^{2}/2\right)\left[1+\textrm{erf}\left(\frac{\zeta-\mu_{c,i}}{\sigma_{c,i}\sqrt{2}} \right)\right] \textrm{d}\zeta \notag\\ &= \frac{2\alpha_{i}}{\sqrt{\sigma_{c,i}^2 + 1}}\exp\left(-\frac{\mu_{c,i}^{2}}{2(\sigma_{c,i}^2 + 1)}\right) \label{eq:single_mu} \end{align} \begin{equation} \label{eq:single_sigma} \begin{split} \sigma_{i}^{2} &= \alpha_{i}\int\limits_{-\infty}^{\infty}\left(\zeta - \mu_{i}\right)^{2} \exp\left(-\zeta^{2}/2\right)\left[1+\textrm{erf}\left(\frac{\zeta-\mu_{c,i}}{\sigma_{c,i}\sqrt{2}} \right)\right] \textrm{d}\zeta \\ &= \alpha_{i}\left[\sqrt{2\pi}\left(\left(1+\mu_{i}^{2}\right)\left(1-\textrm{erf}\left(\frac{\mu_{c,i}}{\sqrt{2(\sigma_{c,i}^{2}+1)}}\right)\right)\right) \right. \\ & \qquad + \left. \frac{2}{\sqrt{\sigma_{c,i}^2 + 1}}\exp\left(-\frac{\mu_{c,i}^{2}}{2(\sigma_{c,i}^2 + 1)}\right)\left(\frac{\mu_{c,i}}{\sigma_{c,i}^2 + 1} - 2\mu_{i} \right)\right] \end{split} \end{equation} The derivations of the mean and variance can be found in \ref{s:a_mean} and \ref{s:a_variance} respectively. The transformed state estimate, after the $i$th constraint has been applied, has the following mean and covariance: \begin{gather} \boldsymbol{\tilde{z}}_{i+1} = \mu_{i}\boldsymbol{e}_{i}\notag \\ \boldsymbol{\tilde{G}}_{i+1} = \boldsymbol{I}_{n} + \left(\sigma_{i}^{2}-1\right)\boldsymbol{e}_{i}\boldsymbol{e}_{i}^{T} \label{eq:mean_and_sigma} \end{gather} where $\boldsymbol{I}_{n}$ is an $n \times n$ identity matrix. Taking the inverse of the transformation in (\ref{eq:transform}) gives the mean and variance of the state estimate after the truncation of the $i$th constraint: \begin{gather} \boldsymbol{\tilde{x}}_{i+1} = \boldsymbol{T}_{i}\boldsymbol{W}_{i}^{1/2}\boldsymbol{\rho}_{i}^{T}\boldsymbol{\tilde{z}}_{i+1}+\boldsymbol{\tilde{x}}_{i} \notag \\ \boldsymbol{\tilde{P}}_{i+1} = \boldsymbol{T}_{i}\boldsymbol{W}_{i}^{1/2}\boldsymbol{\rho}_{i}^{T}\boldsymbol{\tilde{G}}_{i+1}\boldsymbol{\rho}_{i}\boldsymbol{W}_{i}^{1/2}\boldsymbol{T}_{i}^{T} \label{eq:inverse_transform} \end{gather} This process (from (\ref{eq:transform}) to (\ref{eq:inverse_transform})) is repeated for the $s$ constraints, incrementing $i$ each time and using the constrained state estimate after constraint $i$ has been applied as the input state estimate for constraint $i+1$. 
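The complete procedure for a single soft lower constraint can be summarised in code. The following Python sketch is illustrative only: the function name is ours, the orthogonal matrix $\boldsymbol{\rho}_{i}$ is constructed with a Householder reflection instead of the Gram-Schmidt procedure referenced above (any orthogonal matrix satisfying the stated condition is acceptable), and the constrained component is taken to be the first element of the transformed state, as implied by the condition on $\boldsymbol{\rho}_{i}$. It also assumes $\boldsymbol{\phi}^{T}\boldsymbol{\tilde{P}}\boldsymbol{\phi} > 0$ and a constraint that does not lie far above the bulk of the estimate (otherwise the normalising factor underflows). \begin{verbatim}
import numpy as np
from scipy.special import erf

def truncate_lower(x, P, phi, mu_a, sigma_a):
    """Apply one soft lower constraint A <= phi^T x, A ~ N(mu_a, sigma_a^2),
    to a Gaussian estimate N(x, P); returns the truncated mean and covariance."""
    n = x.size
    w, T = np.linalg.eigh(P)                # P = T W T^T, T orthogonal, W diagonal
    W_half = np.diag(np.sqrt(np.maximum(w, 0.0)))
    s = np.sqrt(phi @ P @ phi)              # (phi^T P phi)^{1/2}
    # Orthogonal rho mapping W^{1/2} T^T phi onto [s, 0, ..., 0]^T
    # (Householder reflection in place of Gram-Schmidt)
    v = W_half @ T.T @ phi
    e1 = np.zeros(n); e1[0] = 1.0
    u = v - s * e1
    if np.linalg.norm(u) > 1e-12 * max(s, 1.0):
        u /= np.linalg.norm(u)
        rho = np.eye(n) - 2.0 * np.outer(u, u)
    else:
        rho = np.eye(n)
    # Constraint expressed in the transformed coordinates
    mu_c = (mu_a - phi @ x) / s
    sig_c = sigma_a / s
    # Moments of the truncated standard normal (the alpha_i, mu_i, sigma_i^2 above)
    tail = 1.0 - erf(mu_c / np.sqrt(2.0 * (sig_c**2 + 1.0)))
    alpha = 1.0 / (np.sqrt(2.0 * np.pi) * tail)
    g = np.exp(-mu_c**2 / (2.0 * (sig_c**2 + 1.0))) / np.sqrt(sig_c**2 + 1.0)
    mu = 2.0 * alpha * g
    var = alpha * (np.sqrt(2.0 * np.pi) * (1.0 + mu**2) * tail
                   + 2.0 * g * (mu_c / (sig_c**2 + 1.0) - 2.0 * mu))
    # Back-transform the truncated mean and covariance
    M = T @ W_half @ rho.T
    G_tilde = np.eye(n) + (var - 1.0) * np.outer(e1, e1)
    return M @ (mu * e1) + x, M @ G_tilde @ M.T
\end{verbatim}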
After the $s$ constraints have been applied, the constrained state estimate is: \begin{gather} \boldsymbol{\tilde{x}} = \boldsymbol{\tilde{x}}_{s+1} \notag \\ \boldsymbol{\tilde{P}} = \boldsymbol{\tilde{P}}_{s+1} \label{eq:truncated_estimate} \end{gather} The equations for applying an upper constraint of the form: \begin{equation} \boldsymbol{e}_{i}^{T}\boldsymbol{z}_{i} \le D_{i} \end{equation} are as follows: \begin{equation} \alpha_{i} = \frac{1}{\sqrt{2\pi}\left[1+\textrm{erf}\left(\frac{\mu_{d,i}}{\sqrt{2\left(\sigma_{d,i}^{2} + 1\right)}}\right)\right]} \end{equation} \begin{equation} \mu_{i} = -\frac{2\alpha_{i}}{\sqrt{\sigma_{d,i}^2 + 1}}\exp\left(-\frac{\mu_{d,i}^{2}}{2(\sigma_{d,i}^2 + 1)}\right) \end{equation} \begin{equation} \begin{split} \sigma_{i}^{2} &= \alpha_{i}\left[\sqrt{2\pi}\left(\left(1+\mu_{i}^{2}\right)\left(1+\textrm{erf}\left(\frac{\mu_{d,i}}{\sqrt{2(\sigma_{d,i}^{2}+1)}}\right)\right)\right) \right. \\ & \qquad - \left. \frac{2}{\sqrt{\sigma_{d,i}^2 + 1}}\exp\left(-\frac{\mu_{d,i}^{2}}{2(\sigma_{d,i}^2 + 1)}\right)\left(\frac{\mu_{d,i}}{\sigma_{d,i}^2 + 1} - 2\mu_{i} \right)\right] \end{split} \end{equation} Several examples of the proposed method with a lower constraint are shown in Figure \ref{f:single_examples}. As can be seen, the Gaussian approximation is very close to the actual truncated distribution, with the approximation improving as $\sigma_{c,i}$ increases. As $\sigma_{c,i} \rightarrow \infty$, the $\textrm{CDF}\left(C_{i}\right)$ approaches a uniform distribution, which means that the truncated PDF approaches the original PDF. In Figure \ref{sf:single3}, the lower constraint is higher than the original distribution, resulting in the truncated distribution moving towards the constraint. In this case, the truncated distribution is actually still below the majority of the constraint distribution. Here, the uncertainty of the original and constraint distributions are balanced against one another---the more certain that one of the distributions is, the closer the truncated distribution will be to that distribution. For example, as the uncertainty of the constraint is decreased, the truncated distribution will move to the right. As the uncertainty of the constraint approaches 0, the constraint approaches a hard constraint and the majority of the truncated distribution will be above the constraint. \begin{figure} \centering \subfloat[$\mu_{c,i} = -2, \sigma_{c,i} = 0.5$]{ \includegraphics[width=0.45\textwidth]{single1}\label{sf:single1} } \hspace{0.5em} \subfloat[$\mu_{c,i} = 0, \sigma_{c,i} = 1$]{ \includegraphics[width=0.45\textwidth]{single2}\label{sf:single2} } \subfloat[$\mu_{c,i} = 3, \sigma_{c,i} = 1.5$]{ \includegraphics[width=0.45\textwidth]{single3}\label{sf:single3} } \caption{Comparison of the actual truncated distributions and Gaussian approximations of the truncated distributions for several lower constraints. In \protect\subref{sf:single3}, the lower constraint is higher than the original distribution, resulting in the truncated distribution moving towards the constraint. In this case, the Gaussian approximation is an almost perfect approximation of the truncated distribution and the two lines overlap. 
} \label{f:single_examples} \end{figure} \subsection{Feedback of the truncated estimate} There is disagreement amongst authors as to whether or not the truncated state estimate should be fed back into the Kalman filter, with some suggesting using feedback \cite{Teixeira2010, Straka2012} as shown in Figure~\ref{sf:feedback_flowchart} and others stating that the truncation process should be kept independent of the unconstrained Kalman filter \cite{Simon2010b} as shown in Figure~\ref{sf:no_feedback_flowchart}. In \cite{Simon2010b}, it is argued that this feedback can lead to overconfident estimates as the information provided by the constraints is used multiple times. In reality, there are two issues to consider when deciding whether or not to use feedback. The first issue concerns the uncertainty model of the constraints---if the constraints are noisy, and if that noise is independent from one time-step to another, then the truncated estimate can be fed back into the Kalman filter. Under these conditions, the constraints are similar to independent noisy measurements. However, many physical constraints are uncertain rather than noisy---that is, the actual value of the constraint is constant and is not resampled at each time-step. Feeding the truncated estimate back into the Kalman filter in this case can result in overconfident estimates, as will be shown in Section \ref{s:results}. \begin{figure} \centering \subfloat[The truncated state estimate is fed back into the Kalman filter.]{ \includegraphics[width=0.8\textwidth]{truncation_feedback}\label{sf:feedback_flowchart} } \subfloat[The Kalman filter is run independently of the truncation, with the feedback occurring after the measurement update and before the truncation.]{ \includegraphics[width=0.8\textwidth]{truncation_no_feedback}\label{sf:no_feedback_flowchart} } \caption{Feedback of the state estimate into the Kalman filter can occur either before or after the truncation process. Deciding which feedback approach to use depends on the uncertainty model of the constraints and the shape of the truncated distribution. } \label{f:feedback_flowcharts} \end{figure} The second issue arises when the truncated distribution is highly non-Gaussian, which is commonly the case when the uncertainty of the constraint is low in comparison to the uncertainty of the estimate. In these cases, the Gaussian approximation of the truncated distribution introduces error that can accumulate if it is fed back into the Kalman filter, leading to unrepresentative state estimates. An example of such a situation is given in \cite{Simon2010b}. Consider a scalar system with no process noise such that $x_{k+1} = x_{k}$, no measurements, and a hard constraint of $x \ge 0$. If the initial state estimate is a standard normal distribution, then the truncated distribution will have a Gaussian shape for $x \ge 0$ and 0 otherwise. Approximating this as a Gaussian distribution changes the mean from 0 to $\sqrt{2/\pi}$ and the variance from 1 to $\left(\pi-2\right)/\pi$ after the truncation has been applied once. If this were fed back into the Kalman filter, it would result in a monotonically increasing estimate mean and a monotonically decreasing estimate variance for successive applications of the truncation approach. The authors of \cite{Simon2010b} argue that this is a result of incorporating the information from the constraints multiple times. However, it is actually the Gaussian approximation of the truncated distribution that causes this behaviour.
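This drift is easy to reproduce numerically. The following minimal sketch iterates the hard-constraint limit on the static scalar example above, using the standard truncated-Gaussian moment formulas (via the inverse Mills ratio); the mean increases and the variance decreases on every pass, even though no new information has arrived: \begin{verbatim}
import numpy as np
from scipy.stats import norm

m, v = 0.0, 1.0        # static scalar estimate x ~ N(m, v), hard constraint x >= 0
for step in range(5):
    s = np.sqrt(v)
    a = (0.0 - m) / s                                 # constraint in standard units
    lam = norm.pdf(a) / (1.0 - norm.cdf(a))           # inverse Mills ratio
    m, v = m + s * lam, v * (1.0 + a * lam - lam**2)  # truncated-Gaussian moments
    print(step + 1, round(m, 4), round(v, 4))
# First pass: m = sqrt(2/pi) ~ 0.798 and v = (pi - 2)/pi ~ 0.363, as quoted above;
# each further pass inflates m and deflates v.
\end{verbatim}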
If it were possible to feed the truncated distribution back into the filter without approximating it as a Gaussian distribution, then applying the constraint at the next time step would result in the exact same truncated distribution. When deciding whether or not to use feedback, the above issues should be carefully considered. Not using feedback is the conservative option---the resultant estimates will have a higher uncertainty compared to methods using feedback, but will avoid most of these issues. Provided the constraints are noisy rather than uncertain, and have a high noise uncertainty, using feedback is valid and can yield a significant performance benefit over not using feedback. The difference between uncertain and noisy constraints is minimal when the uncertainty of the constraints is small in comparison to the uncertainty of the state estimate, and the main source of error in these cases is the approximation of the truncated distribution. One possible way of dealing with this, suggested by \cite{Simon2010b}, is to only feed back the elements of the truncated estimate where the elements of the original estimate violate the constraint. This was suggested in the context of hard constraints, however, and determining the point at which a soft constraint is violated is ambiguous and is left to the reader. \section{Interval constraint}\label{s:interval} Now consider the case where there are two constraints. After transforming the state using Equations (\ref{eq:transform_start}) - (\ref{eq:transform_end}), the two constraints acting on the transformed state are: \begin{equation} C_{i} \le \boldsymbol{e}_{i}^{T}\boldsymbol{z}_{i} \le D_{i} \end{equation} Using Bayes' theorem, the conditional probability distribution of the transformed state satisfying the constraints, $p_{\boldsymbol{e}_{i}^{T}\boldsymbol{z}_{i}}(\zeta | (C_{i} \le \zeta \le D_{i}))$, as a function of $\zeta$ is given by: \begin{align} p_{\boldsymbol{e}_{i}^{T}\boldsymbol{z}_{i}}(\zeta | C_{i} \le \zeta &\le D_{i}) = \frac{p_{\boldsymbol{e}_{i}^{T}\boldsymbol{z}_{i}}(\zeta) \times P(C_{i} \le \zeta \le D_{i})} {P(C_{i} \le \boldsymbol{e}_{i}^{T}\boldsymbol{z}_{i} \le D_{i})} \label{eq:bayes_interval1} \\ & = \frac{p_{\boldsymbol{e}_{i}^{T}\boldsymbol{z}_{i}}(\zeta) \times P(C_{i} \le \zeta) \times P( \zeta \le D_{i})} {P(C_{i} \le \boldsymbol{e}_{i}^{T}\boldsymbol{z}_{i}) \times P(\boldsymbol{e}_{i}^{T}\boldsymbol{z}_{i} \le D_{i})} \label{eq:bayes_interval2} \end{align} Replacing $P(C_{i} \le \zeta \le D_{i})$ and $P(C_{i} \le \boldsymbol{e}_{i}^{T}\boldsymbol{z}_{i} \le D_{i})$ in (\ref{eq:bayes_interval1}) with $P(C_{i} \le \zeta) \times P( \zeta \le D_{i})$ and $P(C_{i} \le \boldsymbol{e}_{i}^{T}\boldsymbol{z}_{i}) \times P(\boldsymbol{e}_{i}^{T}\boldsymbol{z}_{i} \le D_{i})$ respectively in (\ref{eq:bayes_interval2}) is possible since $\boldsymbol{e}_{i}^{T}\boldsymbol{z}_{i}$, $C_{i}$, and $D_{i}$ are independent random variables. 
This gives the following distribution: \begin{align} p_{\boldsymbol{e}_{i}^{T}\boldsymbol{z}_{i}}(\zeta &| C_{i} \le \zeta \le D_{i}) \notag \\ & = \frac{p_{\boldsymbol{e}_{i}^{T}\boldsymbol{z}_{i}}(\zeta) \times P(C_{i} \le \zeta) \times P( \zeta \le D_{i})} {P(C_{i} \le \boldsymbol{e}_{i}^{T}\boldsymbol{z}_{i}) \times P(\boldsymbol{e}_{i}^{T}\boldsymbol{z}_{i} \le D_{i})} \notag \\ & = \frac{\textrm{PDF}_{\boldsymbol{e}_{i}^{T}\boldsymbol{z}_{i}}(\zeta) \times \textrm{CDF}_{C_{i}}(\zeta) \times \textrm{CDF}_{D_{i}}(\zeta)}{\textrm{CDF}_{C_{i} - \boldsymbol{e}_{i}^{T}\boldsymbol{z}_{i}}(0) \times \textrm{CDF}_{\boldsymbol{e}_{i}^{T}\boldsymbol{z}_{i} - D_{i}}(0)} \notag \\ & = \frac{\frac{1}{\sqrt{2\pi}}\exp(-\frac{\zeta^{2}}{2})\frac{1}{2}\left[ 1 + \textrm{erf}\left( \frac{\zeta - \mu_{c,i}}{\sigma_{c,i}\sqrt{2}} \right) \right]\frac{1}{2}\left[ 1 - \textrm{erf}\left( \frac{\zeta - \mu_{d,i}}{\sigma_{d,i}\sqrt{2}} \right) \right]}{\frac{1}{2}\left[1-\textrm{erf}\left(\frac{\mu_{c,i}}{\sqrt{2\left(\sigma_{c,i}^{2} + 1\right)}}\right)\right]\frac{1}{2}\left[1+\textrm{erf}\left(\frac{\mu_{d,i}}{\sqrt{2\left(\sigma_{d,i}^{2} + 1\right)}}\right)\right]} \label{eq:dist_int} \end{align} The integrals required to calculate the mean and variance of (\ref{eq:dist_int}) contain integrals of the form: \begin{equation} \int\limits_{-\infty}^{\infty}\exp\left(-x^2\right)\textrm{erf}(a x + b)\textrm{erf}(c x + d) \textrm{d}x \end{equation} which does not have an analytical solution. The following approximation is proposed: \begin{align} &\left[ 1 + \textrm{erf}\left( \frac{\zeta - \mu_{c,i}}{\sigma_{c,i}\sqrt{2}} \right) \right]\left[ 1 - \textrm{erf}\left( \frac{\zeta - \mu_{d,i}}{\sigma_{d,i}\sqrt{2}} \right) \right] \notag \\ & \approx 2 \left[\textrm{erf}\left( \frac{\zeta - \mu_{c,i}}{\sigma_{c,i}\sqrt{2}} \right) - \textrm{erf}\left( \frac{\zeta - \mu_{d,i}}{\sigma_{d,i}\sqrt{2}} \right) \right] \label{eq:approximation} \end{align} This yields an approximation of the numerator in (\ref{eq:dist_int}) of: \begin{equation}\label{eq:approximate_pdf} \textrm{Z}(\boldsymbol{e}_{i}^{T}\boldsymbol{\tilde{z}}_{i+1}) = \frac{1}{4\sqrt{2\pi}}\exp\left(-\zeta^{2}/2\right)\left[2\left(\textrm{erf}\left(\frac{\zeta-\mu_{c,i}}{\sigma_{c,i}\sqrt{2}} \right)-\textrm{erf}\left(\frac{\zeta-\mu_{d,i}}{\sigma_{d,i}\sqrt{2}} \right)\right)\right] \end{equation} where $\textrm{Z}(\boldsymbol{e}_{i}^{T}\boldsymbol{\tilde{z}}_{i+1})$ is an unnormalised function describing the truncated distribution. This approximation relies on the condition that $\mu_{c,i} < \mu_{d,i}$, and the assumption that the constraint distributions do not significantly overlap. If these are satisfied, it is highly likely that one of the erf terms will be equal to 1 when the other is not, giving a good approximation of the actual distribution. This is illustrated in Figure \ref{f:erf}. \begin{figure} \centering \includegraphics[width = 0.6\textwidth]{erf} \caption{Error function terms for a lower constraint with $\mu_{c,i} = -2$ and $\sigma_{c,i} = 1$ and an upper constraint with $\mu_{d,i} = 2$ and $\sigma_{d,i} = 1$. In this case, one of the error function terms is always close to 1. 
} \label{f:erf} \end{figure} An overlap metric, $\gamma$, is defined as: \begin{equation} \label{eq:gamma} \gamma = \frac{\mu_{d,i} - \mu_{c,i}}{\sigma_{d,i} + \sigma_{c,i}} \end{equation} and a shape metric, $\delta$, is defined as: \begin{equation} \label{eq:delta} \delta = \left| \log\left(\frac{\sigma_{c,i}}{\sigma_{d,i}}\right)\right| \end{equation} $\gamma$ is a measure of how much the probability distributions of the two constraints overlap, and $\delta$ is a measure of how different the shapes of the probability distributions of the constraints are. Figure \ref{f:approx_examples} shows examples of the approximation applied to several different cases of $\gamma$ and $\delta$. As can be seen, the approximation is an almost perfect approximation in Figure \ref{sf:approx1}, with the approximation degrading as $\gamma$ decreases in the other examples. \begin{figure} \centering \subfloat[$\mu_{c,i} = -2, \sigma_{c,i} = 0.5, \mu_{d,i} = 2, \sigma_{d,i} = 1$, corresponding to $\gamma = 2.67$ and $\delta = 0.30$]{ \includegraphics[width=0.45\textwidth]{approximate1}\label{sf:approx1} } \hspace{0.5em} \subfloat[$\mu_{c,i} = -3, \sigma_{c,i} = 1, \mu_{d,i} = 2, \sigma_{d,i} = 3$, corresponding to $\gamma = 1.25$ and $\delta = 0.48$]{ \includegraphics[width=0.45\textwidth]{approximate2}\label{sf:approx2} } \subfloat[$\mu_{c,i} = -1, \sigma_{c,i} = 2, \mu_{d,i} = 2, \sigma_{d,i} = 3.5$, corresponding to $\gamma = 0.55$ and $\delta = 0.24$]{ \includegraphics[width=0.45\textwidth]{approximate3}\label{sf:approx3} } \caption{Comparison of the actual and approximate functions from (\ref{eq:approximation}) for several values of $\gamma$ and $\delta$. In \protect\subref{sf:approx1}, the approximation is an almost perfect approximation of the actual distribution and the two lines overlap. } \label{f:approx_examples} \end{figure} Figure \ref{f:approx_error} shows the Kullback-Leibler (KL) divergence between the actual and approximate probability distributions for various $\delta$ and $\gamma$. Increasing $\gamma$ and decreasing $\delta$ improves the approximation. For $\gamma \ge 3$, the approximation very closely matches the actual distribution, regardless of $\delta$. \begin{figure} \centering \includegraphics[width=0.6\textwidth]{approx_error_kl} \caption{Comparison of the KL divergence between the actual and approximate probability distributions as a function of the overlap metric, $\gamma$, where the lines are constant $\delta$. For $\delta > 3$, the KL divergence is approximately the same as for $\delta = 3$. 
} \label{f:approx_error} \end{figure} To normalise $\textrm{Z}(\boldsymbol{e}_{i}^{T}\boldsymbol{\tilde{z}}_{i+1})$ to a PDF, the area of the function is calculated as: \begin{equation} \begin{aligned} \int\limits_{-\infty}^{\infty}\textrm{Z}&(\boldsymbol{e}_{i}^{T}\boldsymbol{\tilde{z}}_{i+1}) \textrm{d}\zeta \\ &= \int\limits_{-\infty}^{\infty}\frac{1}{4\sqrt{2\pi}}\exp\left(-\zeta^{2}/2\right)\left[2\left(\textrm{erf}\left(\frac{\zeta-\mu_{c,i}}{\sigma_{c,i}\sqrt{2}} \right)-\textrm{erf}\left(\frac{\zeta-\mu_{d,i}}{\sigma_{d,i}\sqrt{2}} \right)\right)\right] \textrm{d}\zeta \\ &= \frac{1}{2}\left[\textrm{erf}\left(\frac{\mu_{d,i}}{\sqrt{2\left(\sigma_{d,i}^{2} + 1\right)}}\right) - \textrm{erf}\left(\frac{\mu_{c,i}}{\sqrt{2\left(\sigma_{c,i}^{2} + 1\right)}}\right)\right] \end{aligned} \end{equation} The mean of the PDF is then given by: \begin{align} \mu_{i} &= \alpha_{i}\int\limits_{-\infty}^{\infty}\zeta \exp\left(-\zeta^{2}/2\right)\left[\left(\textrm{erf}\left(\frac{\zeta-\mu_{c,i}}{\sigma_{c,i}\sqrt{2}} \right)-\textrm{erf}\left(\frac{\zeta-\mu_{d,i}}{\sigma_{d,i}\sqrt{2}} \right)\right)\right] \textrm{d}\zeta \notag\\ &= 2\alpha_{i}\left(\frac{1}{\sqrt{\sigma_{c,i}^2 + 1}}\exp\left(-\frac{\mu_{c,i}^{2}}{2(\sigma_{c,i}^2 + 1)}\right) \right. \notag \\ &\qquad \left. - \frac{1}{\sqrt{\sigma_{d,i}^2 + 1}}\exp\left(-\frac{\mu_{d,i}^{2}}{2(\sigma_{d,i}^2 + 1)}\right)\right) \end{align} where \begin{equation} \alpha_{i} = \frac{1}{\sqrt{2\pi}\left[\textrm{erf}\left(\frac{\mu_{d,i}}{\sqrt{2\left(\sigma_{d,i}^{2} + 1\right)}}\right) - \textrm{erf}\left(\frac{\mu_{c,i}}{\sqrt{2\left(\sigma_{c,i}^{2} + 1\right)}}\right)\right]} \end{equation} The variance is given by: \begin{equation} \begin{aligned} \sigma_{i}^{2} = \alpha_{i}\int\limits_{-\infty}^{\infty}\left(\zeta - \mu_{i}\right)^{2} \exp\left(-\zeta^{2}/2\right)\left[\left(\textrm{erf}\left(\frac{\zeta-\mu_{c,i}}{\sigma_{c,i}\sqrt{2}} \right)-\textrm{erf}\left(\frac{\zeta-\mu_{d,i}}{\sigma_{d,i}\sqrt{2}} \right)\right)\right] \textrm{d}\zeta \\ = \alpha_{i}\left[\sqrt{2\pi}\left(\left(1+\mu_{i}^{2}\right)\left(\textrm{erf}\left(\frac{\mu_{d,i}}{\sqrt{2(\sigma_{d,i}^{2}+1)}}\right)-\textrm{erf}\left(\frac{\mu_{c,i}}{\sqrt{2(\sigma_{c,i}^{2}+1)}}\right)\right)\right) \right. \\ \qquad + \left. \frac{2}{\sqrt{\sigma_{c,i}^2 + 1}}\exp\left(-\frac{\mu_{c,i}^{2}}{2(\sigma_{c,i}^2 + 1)}\right)\left(\frac{\mu_{c,i}}{\sigma_{c,i}^2 + 1} - 2\mu_{i} \right) \right. \\ \qquad - \left. \frac{2}{\sqrt{\sigma_{d,i}^2 + 1}}\exp\left({-\frac{\mu_{d,i}^{2}}{2(\sigma_{d,i}^2 + 1)}}\right)\left(\frac{\mu_{d,i}}{\sigma_{d,i}^2 + 1} - 2\mu_{i} \right)\right] \end{aligned} \end{equation} The derivations of these integrals are not provided, but can be easily derived using the solutions provided in \ref{s:a_mean} and \ref{s:a_variance} for the one-sided constraint. The truncated state estimate is then obtained by applying Equations (\ref{eq:mean_and_sigma})-(\ref{eq:inverse_transform}). This process is repeated for the $s$ constraints, incrementing $i$ each time. Several examples of the proposed method for interval constraints are shown in Figure \ref{f:interval_examples}. The Gaussian approximation method produces distributions that are very similar to the actual truncated distributions. Figure \ref{sf:interval3} shows an example where soft and hard constraints are combined---the hard constraint has been modelled as a soft constraint with a very small standard deviation. 
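For reference, these interval-constraint moments translate into the following Python sketch (the function name is ours; the expressions assume $\mu_{c,i} < \mu_{d,i}$ and little overlap between the constraint distributions, i.e.\ a sufficiently large $\gamma$, so that the erf-difference approximation above is valid): \begin{verbatim}
import numpy as np
from scipy.special import erf

def interval_moments(mu_c, sig_c, mu_d, sig_d):
    """Mean and variance of a standard normal softly truncated to [C, D],
    with C ~ N(mu_c, sig_c^2) and D ~ N(mu_d, sig_d^2)."""
    ec = erf(mu_c / np.sqrt(2.0 * (sig_c**2 + 1.0)))
    ed = erf(mu_d / np.sqrt(2.0 * (sig_d**2 + 1.0)))
    alpha = 1.0 / (np.sqrt(2.0 * np.pi) * (ed - ec))   # normalising factor
    gc = np.exp(-mu_c**2 / (2.0 * (sig_c**2 + 1.0))) / np.sqrt(sig_c**2 + 1.0)
    gd = np.exp(-mu_d**2 / (2.0 * (sig_d**2 + 1.0))) / np.sqrt(sig_d**2 + 1.0)
    mu = 2.0 * alpha * (gc - gd)
    var = alpha * (np.sqrt(2.0 * np.pi) * (1.0 + mu**2) * (ed - ec)
                   + 2.0 * gc * (mu_c / (sig_c**2 + 1.0) - 2.0 * mu)
                   - 2.0 * gd * (mu_d / (sig_d**2 + 1.0) - 2.0 * mu))
    return mu, var
\end{verbatim} In the one-sided limit $\mu_{d,i} \rightarrow \infty$ the $D_{i}$ terms vanish and the expressions reduce to those of Section \ref{s:constrained}, which provides a convenient consistency check.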
\begin{figure} \centering \subfloat[$\mu_{c,i} = -2, \sigma_{c,i} = 0.5, \mu_{d,i} = 2, \sigma_{d,i} = 1$]{ \includegraphics[width=0.45\textwidth]{interval1}\label{sf:interval1} } \hspace{0.5em} \subfloat[$\mu_{c,i} = 1, \sigma_{c,i} = 1, \mu_{d,i} = 3, \sigma_{d,i} = 1$]{ \includegraphics[width=0.45\textwidth]{interval2}\label{sf:interval2} } \subfloat[$\mu_{c,i}=-2, \sigma_{c,i}=0.001, \mu_{d,i}=-1, \sigma_{d,i}=0.5$]{ \includegraphics[width=0.45\textwidth]{interval3}\label{sf:interval3} } \caption{Comparison of the actual and approximate distributions for several values of $\gamma$ and $\delta$. In \protect\subref{sf:interval2}, the Gaussian approximation is an almost perfect approximation of the truncated distribution and the two lines overlap. } \label{f:interval_examples} \end{figure} \section{Results}\label{s:results} Consider a robot moving along a corridor, as shown in Figure \ref{f:robotexample}. The corridor has a wall 10m in front of the initial position of the robot, and discrete position sensors placed at 1m intervals. The position sensors can detect which side of the set-point of the sensor the robot is on, and have uncertainty on the set-point. The robot has an initial velocity of 10cm/s and accelerates at 1cm/s$^{2}$ for 20 seconds, then decelerates at 1cm/s$^{2}$ for 20 seconds, before again accelerating at 1cm/s$^{2}$ until it reaches the far wall. Two types of robots were considered---one with standard deviations on the acceleration and initial velocity of $\sigma_{a} = 1\textrm{cm/s}^{2}$ and $\sigma_{v} = 3 \textrm{cm/s}$ respectively (Robot A), and a less uncertain one with standard deviations on the acceleration and initial velocity of $\sigma_{a} = 0.5\textrm{cm/s}^{2}$ and $\sigma_{v} = 1.5 \textrm{cm/s}$ respectively (Robot B). The standard deviation of the set-point of the sensors was also varied, with standard deviations (in cm) of $\sigma_{s} \in \{0, 5, 10, 15, 20, 25, 30\}$ tested. The position of each robot was tracked using the following Kalman filter run at 10Hz: \begin{figure} \centering \includegraphics[width = 0.5\textwidth]{robotexample.pdf} \caption{The robot (circle) is moving along the corridor. Positioned at 1m intervals are sensors with uncertain positions that can detect which side of the sensor the robot is on. These sensor readings are used as both measurements (when the sensor reading changes) and constraints. In the image, the position estimate of the robot would be constrained between the sensors at 3m and 4m (shaded area). The uncertainty of the constraints is shown by the shading.
} \label{f:robotexample} \end{figure} \begin{equation} \boldsymbol{x} = \begin{bmatrix} x \\ \dot{x} \end{bmatrix} \quad \boldsymbol{u} = \begin{bmatrix} \ddot{x} \end{bmatrix}\quad \boldsymbol{F} = \begin{bmatrix} 1 & \Delta t\\ 0 & 1 \end{bmatrix} \quad \boldsymbol{G} = \begin{bmatrix} \frac{\Delta t^{2}}{2}\\ \Delta t \end{bmatrix} \end{equation} The covariance of the process noise was given by: \begin{equation} \boldsymbol{Q} = \boldsymbol{G}\boldsymbol{G}^{T}\sigma_{a}^{2} = \begin{bmatrix} \frac{\Delta t^{4}}{4} & \frac{\Delta t^{3}}{2}\\ \frac{\Delta t^{3}}{2} & \Delta t^{2} \end{bmatrix} \sigma_{a}^{2} \end{equation} The Kalman filter was initialised with: \begin{equation} \hat{\boldsymbol{x}} = \begin{bmatrix} 0\\ 0.1 \end{bmatrix} \quad \boldsymbol{P} = \begin{bmatrix} 0 & 0 \\ 0 & \sigma_{v}^{2} \end{bmatrix} \end{equation} A position sensor changing its reading was incorporated as a noisy measurement of the position: \begin{equation} \boldsymbol{H} = \begin{bmatrix} 1 & 0 \end{bmatrix} \quad \boldsymbol{R} = \begin{bmatrix} \sigma_{s}^{2} \end{bmatrix} \end{equation} At each time-step, position sensors whose reading did not change were not incorporated into the Kalman filter. The aim of the constrained Kalman filtering approach here was to use the absence of measurements to improve the state estimate. While the robot was in between sensors, the sensors were treated as constraints on the state of the system. The truncation method proposed in this paper (which will be referred to as the soft-constrained Kalman filter) was compared with an unconstrained Kalman filter, and a constrained Kalman filter using the truncation method that ignored the uncertainty of the constraints and treated them as hard constraints (referred to as the hard-constrained Kalman filter). Each combination of robot, sensor uncertainty, and Kalman filter method was tested 1000 times. The time-average Root Mean Square Error (RMSE) and percentage improvement between methods for Robot A are shown in Figure \ref{f:results1}, and the results for Robot B are shown in Figure \ref{f:results2}. For Robot A, both the hard-constrained and soft-constrained methods provided a significant benefit over the unconstrained Kalman filter, with an improvement of over 40\% in tracking performance when the sensors have no uncertainty. As the uncertainty of the sensors was increased, the soft-constrained method slightly outperformed the hard-constrained method. The process noise for Robot B was significantly less than Robot A. As a result, the uncertainty of the sensors played a larger role in determining the performance of the methods. As can be seen in Figure \ref{f:results2}, the hard-constrained Kalman filter was significantly outperformed by the unconstrained Kalman filter once the sensor uncertainty was above 10cm. In these cases, the estimate produced by the hard-constrained Kalman filter was overconfident, and the proposed method outperformed the hard-constrained Kalman filter by over 17\%. An example of the overconfident estimates produced by the hard-constrained method is shown in Figure \ref{f:simulation_example}. The proposed method strikes a balance between the high uncertainty of the unconstrained Kalman filter and the overconfident estimates of the hard-constrained Kalman filter. 
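For concreteness, the simulation model above might be assembled as follows (a sketch only, assuming SI units, whereas the paper quotes quantities in cm; the 15cm sensor set-point noise is one of the tested values): \begin{verbatim}
import numpy as np

dt = 0.1                                       # 10 Hz filter rate
F = np.array([[1.0, dt], [0.0, 1.0]])          # constant-velocity transition
G = np.array([[dt**2 / 2.0], [dt]])            # acceleration input matrix
sigma_a, sigma_v, sigma_s = 0.01, 0.03, 0.15   # Robot A, 15 cm sensor noise (in m)
Q = G @ G.T * sigma_a**2                       # process noise covariance
H = np.array([[1.0, 0.0]])                     # position measured when a sensor flips
R = np.array([[sigma_s**2]])
x_hat = np.array([0.0, 0.1])                   # start at 0 m with 10 cm/s velocity
P = np.diag([0.0, sigma_v**2])
# Between sensor events, the 1 m sensor positions act as soft interval
# constraints, with bounds at the neighbouring sensors and standard
# deviation sigma_s on each bound.
\end{verbatim}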
\begin{figure} \centering \subfloat[RMSE]{ \includegraphics[width=0.6\textwidth]{kalman_rmse_uncertain}\label{sf:results1_rmse} } \subfloat[Percentage improvement between methods]{ \includegraphics[width=0.6\textwidth]{kalman_percent_uncertain}\label{sf:results1_percent} } \caption{RMSE and percentage improvement between methods for Robot A as the sensor uncertainty is varied. The soft-constrained approach is equal to or better than the unconstrained and hard-constrained approaches in all cases. } \label{f:results1} \end{figure} \begin{figure} \centering \subfloat[RMSE]{ \includegraphics[width=0.6\textwidth]{kalman_rmse_certain}\label{sf:results2_rmse} } \subfloat[Percentage improvement between methods]{ \includegraphics[width=0.6\textwidth]{kalman_percent_certain}\label{sf:results2_percent} } \caption{RMSE and percentage improvement between methods for Robot B as the sensor uncertainty is varied. The soft-constrained approach is equal to or better than the unconstrained and hard-constrained approaches in all cases. The hard-constrained approach is outperformed by the unconstrained approach when the sensor uncertainty is large. } \label{f:results2} \end{figure} \begin{figure} \centering \subfloat[Unconstrained Kalman filter]{ \includegraphics[width=0.45\textwidth]{kalman_example_unc}\label{sf:simulation_unc} } \subfloat[Hard-constrained Kalman filter]{ \includegraphics[width=0.45\textwidth]{kalman_example_hard}\label{sf:simulation_hard} } \subfloat[Soft-constrained Kalman filter]{ \includegraphics[width=0.45\textwidth]{kalman_example_soft}\label{sf:simulation_soft} } \caption{Comparison of the actual and estimated positions with uncertainty for Robot B with sensor uncertainty of 15cm. This illustrates a case where the hard-constrained approach is outperformed by the unconstrained and soft-constrained approaches. The unconstrained estimate has large uncertainty, while the hard-constrained estimate is overconfident. The soft-constrained approach has a lower uncertainty compared to the unconstrained approach without producing the overconfident estimates of the hard-constrained method. } \label{f:simulation_example} \end{figure} As discussed at the end of Section \ref{s:constrained}, under certain conditions the truncated state distribution can be fed back into the Kalman filter. For the scenario considered, the discrete position sensors are uncertain rather than noisy, and thus using feedback is not valid. Figure \ref{sf:feedback} shows the effects of using feedback in the example scenario, with the result that the estimate is more confident. However, in some cases this estimate can become overconfident and fail to accurately represent the actual state. The method without feedback (Figure \ref{sf:no_feedback}) has less confident estimates, but they are not overconfident. \begin{figure} \centering \subfloat[Soft-constrained Kalman filter without feedback of the truncated estimate into the Kalman filter]{ \includegraphics[width=0.6\textwidth]{kalman_example_soft_no_feedback}\label{sf:no_feedback} } \subfloat[Soft-constrained Kalman filter with feedback of the truncated estimate into the Kalman filter]{ \includegraphics[width=0.6\textwidth]{kalman_example_soft_feedback}\label{sf:feedback} } \caption{Comparison of the actual and estimated positions with uncertainty for Robot A with sensor uncertainty of 15cm. 
In \protect\subref{sf:no_feedback}, the truncated estimate is not fed back into the Kalman filter, while in \protect\subref{sf:feedback}, the truncated estimate is used by the Kalman filter and the resultant estimate is overconfident. } \label{f:feedback} \end{figure} \section{Conclusion}\label{s:conc} This paper developed an analytical method of truncating an inequality-constrained Gaussian distributed variable where the constraints themselves are described by Gaussian distributions. A key aspect of the approach was the use of moment-based Gaussian approximations of the truncated distribution. This truncation method was applied to the constrained Kalman filtering problem, where it was shown to outperform both unconstrained Kalman filtering and the existing constrained Kalman filter using hard constraints in a simulation example. A key benefit of the developed method compared to hard-constrained Kalman filters is that it is not overconfident near the uncertain constraints. The method is an analytical counterpart of existing numerical integration approaches, and thus provides a computational benefit over them. \section*{Acknowledgements} This work was supported by the Rio Tinto Centre for Mine Automation and the Australian Centre for Field Robotics, University of Sydney, Australia.
\section{Introduction} Ho\v{r}ava theory is a proposal for a perturbatively renormalizable and unitary quantum field theory of gravity in $3+1$ spacetime dimensions (although the general principles can be applied to other dimensions as well). The original formulation was done in Ref.~\cite{Horava:2009uw}, with related concepts previously considered in Ref.~\cite{Horava:2008ih}. The main idea is to break the relativistic symmetry (at least in the gravitational sector) by introducing a timelike direction into the spacetime, with absolute physical meaning, with the hope of obtaining a renormalizable theory. The spacetime is foliated in terms of spacelike hypersurfaces along this direction. The allowed coordinate transformations, instead of the general transformations between time and space characteristic of general relativity (GR), are the ones that preserve the given foliation. The gauge symmetry group of the theory is then the foliation-preserving diffeomorphisms group (FDiff). An FDiff-covariant Lagrangian allows the inclusion of interacting terms with higher order spatial derivatives of the metric field (which is dimensionless), without the need to increase the order in time derivatives. Thus, the central aim is that the higher spatial curvature terms that contribute to the propagators improve the renormalization properties of the theory while keeping the number of poles under control, since no higher time derivatives are added. This program is reminiscent of the relativistic higher curvature theories. However, the crucial difference is that in the latter theories, in order to preserve the relativistic symmetry, the order of the time derivatives must be increased as higher curvature terms are included. Among the added poles there arise ghosts that break the unitarity of the theory \cite{Stelle:1976gc}. Since its original formulation in \cite{Horava:2009uw}, the theory has evolved in several directions. Initially the potential was restricted by the so-called detailed balance principle, which basically postulates that the potential of the $3+1$ theory must be derived from a purely spatial three-dimensional Lagrangian. Currently many authors prefer to abandon this principle and instead consider the general, potentially renormalizable theory that includes in the potential all the terms compatible with the FDiff gauge symmetry. Besides this, the theory has two main versions, the projectable and the nonprojectable versions. These two ways of formulating the theory, already studied in \cite{Horava:2009uw}, are characterized by the lapse function being a function only of the time coordinate (projectable version) or a general function of time and space (nonprojectable version). Among other developments, the projectable version has been modified by including an extra $U(1)$ gauge symmetry \cite{Horava:2010zj}, eliminating in this way the extra degree of freedom. On the nonprojectable side, a wide class of interacting terms compatible with the FDiff symmetry was incorporated in Ref.~\cite{Blas:2009qj}. These terms make the potential dependent on the lapse function $N$ via the FDiff-covariant vector $a_i = \partial_i \ln N$. Following the spirit of renormalizable gauge theories, the Lagrangian should include all the terms, up to the order required for renormalization, compatible with the underlying gauge symmetry. We refer to the theory that includes the $a_i$ terms as the nonprojectable Ho\v{r}ava theory.
A $U(1)$ extension similar to the one of the projectable case was proposed for the nonprojectable version in \cite{Zhu:2011xe}. The truncation of the nonprojectable theory to second order in derivatives has been found \cite{Blas:2009ck,Jacobson:2010mx,Jacobson:2013xta} to be related to the Einstein-aether theory \cite{Jacobson:2000xp}; specifically, the solutions of the latter having a hypersurface-orthogonal aether vector are solutions of the former (but the converse is not true in general \cite{Jacobson:2013xta}). Recently the Ho\v{r}ava theory, both in the projectable and the nonprojectable versions, has been reproduced by gauging (making dynamical) the Newton-Cartan geometry \cite{Hartong:2015zia}. In the nonprojectable case, including the $a_i$ terms of \cite{Blas:2009qj}, the closure of the algebra of constraints of the classical Hamiltonian formulation has been shown \cite{varios:hamiltoniannokc} (see also \cite{Bellorin:2012di}). There the crucial role of the $a_i$ terms in improving the structure of the constraints was noticed. Indeed, one of the motivations of \cite{Blas:2009qj} for including these terms was to improve the mathematical structure of the field equations in the Lagrangian scheme. The Hamiltonian analyses of Refs.~\cite{varios:hamiltoniannokc}, which implicitly assume the invertibility of the Legendre transformation, corroborated the presence of an extra degree of freedom. The extra mode was previously identified in Ref.~\cite{Blas:2009qj} with a well-behaved dispersion relation (under suitable restrictions on the space of parameters). Among several features that have been studied for the extra mode, it has been found that, whenever one forces the kinetic term to adopt the relativistic form at low energies, it suffers from the so-called strong-coupling problem \cite{varios:strongcoupling}. A feasible resolution of this problem is to demand that the scale of activation of the higher order operators is low enough \cite{Blas:2009ck}. In Ref.~\cite{Bellorin:2013zbp} the case in which the invertibility of the Legendre transformation does not hold was analyzed. This happens when the independent (dimensionless) coupling constant arising in the kinetic term, $\lambda$, acquires the specific value $\lambda = 1/3$. At this value the kinetic term acquires a conformal invariance \cite{Horava:2009uw}, but \emph{the whole theory is not conformally invariant} since in general the terms in the potential break the conformal symmetry (unless only specific terms, like $(\mbox{Cotton})^2$, are included in the potential such that it is rendered conformally invariant). For this reason we call the point $\lambda = 1/3$ the kinetic-conformal (KC) point, and use the same name for the Ho\v{r}ava theory formulated at this point. At the KC point there arise two extra second-class constraints \cite{Bellorin:2013zbp}. Qualitatively, one may regard the presence of these new constraints as a consequence of the lack of invertibility of the Legendre transformation at the KC point. The two constraints eliminate precisely the extra mode. We consider this a very interesting property, since the number of degrees of freedom of the KC Ho\v{r}ava theory coincides with that of GR (the $U(1)$ extensions also eliminate the extra mode \cite{Horava:2010zj,Zhu:2011xe}). In Ref.~\cite{Bellorin:2013zbp} the closure of the algebra of constraints was shown assuming a general, unspecified potential.
In addition, a model with soft breaking of the conformal invariance was considered there, corroborating with explicit equations the consistent structure of the constraints and of the conditions for the Lagrange multipliers. Moreover, the perturbative version of the effective large-distance action of the KC theory at quadratic order in perturbations is physically equivalent to perturbative GR. We devote this paper to deepening the study of the nonprojectable Ho\v{r}ava theory at the KC point. We set ourselves two main objectives. The first one is to advance further the knowledge of the classical Hamiltonian formulation, which is fundamental for the consistency of the theory. We would like to obtain explicit expressions for all the constraints and the conditions for the Lagrange multipliers when the potential contains all the possible interaction terms up to $z=3$, the minimal order required for renormalizability in $3+1$ spacetime dimensions. To this end we adopt a perturbative approach, taking in the potential all the terms that contribute to the quadratic action. Our second objective is to enter into the process of quantization of the KC Ho\v{r}ava theory. From the results in the linearized classical theory we obtain the reduced Hamiltonian and study the conditions needed to guarantee the positiveness of its spectrum. Then we study the propagator of the physical modes. Obtaining the independent propagators explicitly is one of the first tasks to be done in the Ho\v{r}ava theory, since in this way one elucidates whether the theory really possesses the ultraviolet- (UV-) improved and ghost-free propagators heuristically proposed in the original paper of Ho\v{r}ava \cite{Horava:2009uw}. Indeed, without the KC condition there is a sector of the space of parameters where the extra mode becomes a ghost \cite{Blas:2009qj}. Another counterexample is that in the theory with detailed balance the operator with the highest derivative does not contribute to the propagator of the extra mode. On the basis of the physical propagators, we give arguments on the power-counting renormalizability of the theory, specifically by computing the superficial degree of divergence of one-particle-irreducible (1PI) diagrams. Our interest is in evaluating the power of divergences directly on the gravitational variables. This is more accurate than, for example, using toy models like scalar-field theories, since in these models precisely the constraints are not represented. Another question we address about the quantization of the theory is what happens when it is formulated in the nonreduced phase space, as is typically done in gauge theories. Here the main point is that the nonprojectable Ho\v{r}ava theory, with or without the KC condition, \emph{has second-class constraints}. Whenever these constraints are not solved, which is by definition the formulation in the nonreduced phase space, one is forced to take their second-class nature into account under any scheme of quantization. In this work we study the path-integral quantization, where the presence of second-class constraints requires a modification of the measure. We consider both the Hamiltonian and the Lagrangian (FDiff-covariant) formulations of the path integral. In particular, it is important to reconcile the Lagrangian path integral with the canonical one, since if one starts solely with the Lagrangian formulation then one does not know the correct measure associated with the second-class constraints.
Several authors have made computations in quantized Ho\v{r}ava gravity or in related toy models without the KC condition. Among them, power-counting renormalizability criteria have been proposed in \cite{Visser:2009fg,Visser:2009ys} (actually, these papers provide a general framework applicable to the KC case). The propagator for a nonprojectable model with $z=1$ and $z=3$ terms was studied in Ref.~\cite{Pospelov:2010mp}. That paper also discussed the bounds imposed by the coupling to matter, where the experimental restrictions on Lorentz violations are very strong. In Refs.~\cite{varios:stochastic} the renormalization of the projectable theory with detailed balance was considered with the methods of stochastic quantization. The one-loop renormalization of the conformal reduction of the projectable theory in $2+1$ dimensions was analyzed in \cite{Benedetti:2013pya}. Gaussian and non-Gaussian fixed points in the renormalization flow, as well as their consequences for asymptotic freedom and asymptotic safety, have been investigated in the projectable Ho\v{r}ava theory and its couplings in Refs.~\cite{varios:asymptoticfreedom}. The power-counting renormalizability of models with mixed time and spatial derivative terms has been considered in Refs.~\cite{Colombo:2014lta,Colombo:2015yha}. Recently, the authors of Ref.~\cite{Barvinsky:2015kil} showed the complete perturbative renormalizability of the projectable theory (without detailed balance). To this end they used nonlocal gauge-fixing conditions. The quantization of Ho\v{r}ava theory has also been connected to causal dynamical triangulations \cite{varios:cdt}. This paper is organized as follows: in Section 2 we study the consistency of the classical Hamiltonian formulation. We first present the general results for the Hamiltonian formulation with an unspecified potential. Then we address the solutions of the constraints in a perturbative approach. In Section 3 we perform the quantum computations. This section is divided into three parts. In the first one we study the reduced Hamiltonian and the positiveness of its spectrum. In the second one we present the propagator of the physical modes and consider power-counting renormalizability. In the last one we study the path integral in the nonreduced phase space. We devote Section 4 to highlighting the fact that the nonprojectable theory without the KC condition also has second-class constraints and that the measure is affected by them. Finally, we present some discussion and conclusions about our results. There is also some appended material relevant for the themes discussed in this paper.
\section{Consistency of the classical Hamiltonian}
\subsection{The general canonical theory}
The formulation of the theory starts with the assumption that in the spacetime there is a timelike direction, and a foliation in terms of spacelike hypersurfaces along it, with absolute physical meaning. The underlying symmetry of the theory is not the set of general coordinate transformations between time and space but the restricted set of coordinate transformations that do not change the absolute timelike direction and its associated foliation. Thus, the gauge symmetry group is the group of diffeomorphisms of the spacetime that preserve the given foliation (FDiff) \cite{Horava:2009uw}. Its action on the coordinates $(t,\vec{x})$ is
\begin{equation}
 \delta t = f(t) \,, \hspace{2em}
 \delta x^i = \zeta^i(t,\vec{x}) \,.
\end{equation}
The gravitational part of the theory is formulated in the Arnowitt-Deser-Misner (ADM) variables, $g_{ij}$, $N$ and $N_i$. Under FDiff these variables transform as
\begin{equation}
 \begin{array}{l}
 \delta N = \zeta^k \partial_k N + f \dot{N} + \dot{f} N \,, \\[1ex]
 \delta N_i = \zeta^k \partial_k N_i + N_k \partial_i \zeta^k + \dot{\zeta}^j g_{ij} + f \dot{N}_i + \dot{f} N_i \,, \\[1ex]
 \delta g_{ij} = \zeta^k \partial_k g_{ij} + 2 g_{k(i} \partial_{j)} \zeta^k + f \dot{g}_{ij} \,,
 \end{array}
 \label{fdiff}
\end{equation}
where the dot denotes time derivative, $\dot{N} = \frac{\partial N}{\partial t}$. The action of the FDiff group allows two different formulations of the theory, each one characterized by the kind of dependence the lapse function $N$ has. In one version, called the projectable version, $N$ is a function of time only, and this condition is preserved by FDiff (as can be deduced from (\ref{fdiff})). The other version, in which $N$ depends on both time and space, is called the nonprojectable case. The theory we study in this paper belongs to the nonprojectable case. In this case the Hamiltonian constraint is present as a local constraint, as in GR. On the other hand, due to the reduced symmetry group, the behavior of the Hamiltonian constraint differs from that of GR. With the aim of getting renormalizability while avoiding loss of unitarity, the theory is designed in such a way that at high energies it should naturally exhibit an anisotropic scaling between time and space,
\begin{equation}
 t \rightarrow b^z t \,, \hspace{2em}
 \vec{x} \rightarrow b \vec{x} \,.
\end{equation}
The parameter $z$ characterizes the degree of anisotropy. Power-counting arguments lead us to consider $z=3$ in $3+1$ spacetime dimensions as the minimal degree of anisotropy needed for a renormalizable theory \cite{Horava:2009uw}. Under this scenario the dimensionality (in momentum powers) of the coordinates and field variables is postulated as \cite{Horava:2009uw}
\begin{equation}
 [\,t\,] = - z \,, \hspace{1.5em}
 [\,\vec{x}\,] = - 1 \,, \hspace{1.5em}
 [\,g_{ij}\,] = [\,N\,] = 0 \,, \hspace{1.5em}
 [\,N_i\,] = z - 1
 \label{dimensions}
\end{equation}
(for the intrinsic formulation of the quantum theory it is not essential to have the structure of a four-dimensional spacetime metric, but in any case it can be recovered by a suitable rescaling of the time coordinate using an emerging light-speed constant \cite{Horava:2009uw}). The action of the complete nonprojectable theory is \cite{Horava:2009uw,Blas:2009qj}
\begin{equation}
 S = \int dt d^3x \sqrt{g} N \left( \frac{1}{2\kappa} G^{ijkl} K_{ij} K_{kl} - \mathcal{V} \right),
 \label{lagrangianaction}
\end{equation}
where
\begin{eqnarray}
 K_{ij} & = & \frac{1}{2N} ( \dot{g}_{ij} - 2 \nabla_{(i} N_{j)} ) \,, \\[1ex]
 G^{ijkl} & = & \frac{1}{2} \left( g^{ik} g^{jl} + g^{il} g^{jk} \right) - \lambda g^{ij} g^{kl}
\end{eqnarray}
and $\lambda$ is a dimensionless constant. Two comments are in order: first, if $z = d = 3$, $\kappa$ becomes a dimensionless coupling constant \cite{Horava:2009uw}. Second, in a relativistic theory we would have $\lambda = 1$, $z=1$, and $\kappa$ would be dimensionful. We do not put a constant in front of the potential $\mathcal{V}$ because we are going to include an independent coupling constant for each one of its terms.
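As a quick check of the first of these comments, one can tally momentum dimensions directly. The following minimal sketch (ours, not part of the original references; plain Python) uses the assignments (\ref{dimensions}) together with $[K_{ij}] = z$, which follows from $K_{ij} \sim \dot{g}_{ij}/N$:
\begin{verbatim}
# momentum-dimension bookkeeping for the kinetic term of the action:
# [dt d^dx] = -(z + d), [K_ij K_kl] = 2z with [g_ij] = [N] = 0,
# and demanding [S] = 0 forces [kappa] = z - d
def kappa_dim(d, z):
    return 2*z - (z + d)

for z in (1, 2, 3):
    print(z, kappa_dim(d=3, z=z))
# kappa is dimensionless precisely for z = d = 3, and dimensionful
# ([kappa] = -2) in the relativistic case z = 1
\end{verbatim}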
The potential $\mathcal{V}$ can be, in principle, any FDiff scalar built from the spatial metric $g_{ij}$, the vector
\begin{equation}
 a_i = \frac{ \partial_i N }{N}
\end{equation}
and their FDiff-covariant derivatives (curvature tensors and their derivatives in the case of $g_{ij}$). The potential contains no time derivatives and does not depend on $N_i$. In particular, the $z=1$ potential, which is the most relevant one for the large-distance physics, is
\begin{equation}
 \mathcal{V}^{(z=1)} = - \beta R - \alpha a_i a^i \,,
 \label{z1potential}
\end{equation}
where $\beta$ and $\alpha$ are coupling constants. The particular formulation of the Ho\v{r}ava theory we study in this paper is related to the behavior of the kinetic term under anisotropic conformal transformations. If the constant $\lambda$ is fixed at the KC point $\lambda = 1/3$, then under the anisotropic conformal transformations
\begin{equation}
 g_{ij} \rightarrow e^{2\Omega} g_{ij} \,, \hspace{2em}
 N \rightarrow e^{3\Omega} N \,, \hspace{2em}
 N_i \rightarrow e^{2\Omega} N_i \,, \hspace{2em}
 \Omega = \Omega(t,\vec{x}) \,,
 \label{conformaltrans}
\end{equation}
the kinetic term $\sqrt{g} N ( K_{ij} K^{ij} - \lambda K^2 )$ remains invariant \cite{Horava:2009uw}. In general the whole theory is not conformally invariant, except for the specific case in which the potential itself is conformally invariant under (\ref{conformaltrans}), a situation that we do not consider here. Our interest in studying the nonprojectable Ho\v{r}ava theory at the KC point comes from the fact that at this point the extra mode is eliminated and the theory acquires the same number of degrees of freedom as GR \cite{Bellorin:2013zbp}. As we have already commented, this is due to the emergence of two second-class constraints at the KC point. We remark that at the KC point $\lambda = 1/3$ these constraints are always present, regardless of the fact that the potential, and hence the full theory, is not conformally invariant. In the following we present the Hamiltonian formulation of the nonprojectable Ho\v{r}ava theory at the KC point \emph{for a general, unspecified potential} $\mathcal{V}$ \cite{Bellorin:2013zbp}. We denote by $\pi^{ij}$ the momentum conjugate to $g_{ij}$ and by $P_N$ that of $N$, whereas we regard the shift vector $N_i$ as a Lagrange multiplier. We study the asymptotically flat case, in which the canonical field variables behave asymptotically as
\begin{equation}
 g_{ij} - \delta_{ij} = \mathcal{O}(1/r) \,, \hspace{2em}
 \pi^{ij} = \mathcal{O}(1/r^2) \,, \hspace{2em}
 N - 1 = \mathcal{O}(1/r) \,.
 \label{asymptoticonditions}
\end{equation}
The only local constraint associated with gauge symmetries that are homotopic to the identity, and hence of first class, is the momentum constraint $\mathcal{H}^i$,
\begin{equation}
 \mathcal{H}^i \equiv - 2 \nabla_j \pi^{ij} + P_N \partial^i N = 0 \,,
 \label{momentumconstraint}
\end{equation}
which generates the purely spatial diffeomorphisms. The second-class constraints are
\begin{eqnarray}
 P_N &=& 0 \,, \\
 \pi &\equiv& g^{ij} \pi_{ij} = 0 \,, \\
 \mathcal{H} &\equiv& \frac{2\kappa}{\sqrt{g}} \pi^{ij} \pi_{ij} + \sqrt{g}\, \mathcal{U} = 0 \,,
 \label{hamiltonianconstraintgeneral} \\
 \mathcal{C} &\equiv& \frac{3\kappa}{\sqrt{g}} \pi^{ij} \pi_{ij} - \sqrt{g}\, \mathcal{W} = 0 \,.
\label{cconstraint}
\end{eqnarray}
$\mathcal{U}$ and $\mathcal{W}$ are derivatives of the potential, defined by\footnote{We have modified the original definition of $\mathcal{C}$ given in Ref.~\cite{Bellorin:2013zbp} by dividing it by $N$.}
\begin{eqnarray}
&& \mathcal{U} \equiv
 \frac{1}{\sqrt{g}} \frac{\delta}{\delta N} \int d^3y \sqrt{g} N \mathcal{V}
 = \mathcal{V} + \frac{1}{N} \sum\limits_{r=1} (-1)^r
 \nabla_{i_1 \cdots i_r} \left( N
 \frac{\partial \mathcal{V}}{\partial ( \nabla_{i_r \cdots i_2} a_{i_1} )} \right) \,,
 \nonumber \\
 \label{modifiedpotential} \\
&& \mathcal{W} \equiv g_{ij} \mathcal{W}^{ij} \,, \hspace{2em}
 {\mathcal{W}}^{ij} \equiv
 \frac{1}{\sqrt{g} N} \frac{\delta}{\delta g_{ij}} \int d^3y \sqrt{g} N \mathcal{V} \,.
 \label{vprima}
\end{eqnarray}
$\nabla_{ij\cdots k}$ stands for $\nabla_{i}\nabla_j \cdots \nabla_{k}$. Adopting the nomenclature of GR, $\mathcal{H}^i = 0$ is called the momentum constraint and $\mathcal{H} = 0$ the Hamiltonian constraint. The $\pi = 0$ constraint is the primary constraint that emerges when the theory is formulated at the KC point. Indeed, the conjugate momentum $\pi^{ij}$ obeys the general relation
\begin{equation}
 \frac{\pi^{ij}}{\sqrt{g}} = \frac{1}{2\kappa} G^{ijkl} K_{kl} \,.
\end{equation}
At $\lambda = 1/3$ the hypermatrix $G^{ijkl}$ becomes degenerate, $g_{ij} G^{ijkl} = 0$, which leads directly to the $\pi = 0$ constraint. As a consequence, the secondary constraint $\mathcal{C} = 0$ emerges when the preservation in time of $\pi = 0$ is demanded. Thus, $\pi$ and $\mathcal{C}$ are the two second-class constraints that emerge at the KC point. Unlike in GR, in the nonprojectable Ho\v{r}ava theory the Hamiltonian constraint $\mathcal{H}$ is of second class, which is associated with the fact that it loses its role as a generator of gauge symmetry. Finally, the $P_N = 0$ constraint must be added, since in this theory (with $\lambda = 1/3$ or not) we are forced to include the lapse function $N$ as part of the canonical variables.\footnote{An exception to this rule is the model considered in Ref.~\cite{Bellorin:2010je}.} Unlike in GR, the ``bulk'' part of the Hamiltonian does not arise as a sum of constraints directly from the Legendre transformation. Instead, it arises in the form
\begin{equation}
 H = \int d^3x \left( \frac{2 \kappa N}{\sqrt{g}} \pi^{ij} \pi_{ij}
 + \sqrt{g} N \mathcal{V} + N_i \mathcal{H}^i \right) \,.
 \label{hamiltonianbulk}
\end{equation}
In addition, the boundary term corresponding to the ADM energy \cite{Arnowitt:1962hi},
\begin{equation}
 E_{\mbox{\tiny ADM}} \equiv \oint d\Sigma_i ( \partial_j g_{ij} - \partial_i g_{jj} ) \,,
\end{equation}
must be incorporated because it is needed for the differentiability of the Hamiltonian under the most general asymptotic variations compatible with asymptotic flatness \cite{Regge:1974zd,Hawking:1995fd}. Specifically, this is a consequence of a contribution of the $z=1$ term $-\beta R$, which asymptotically is of order $\mathcal{O}(1/r^3)$. By incorporating the constraints $P_N$ and $\pi$, we finally cast the classical Hamiltonian in the form
\begin{equation}
 H = \int d^3x \left( \frac{2 \kappa N}{\sqrt{g}} \pi^{ij} \pi_{ij}
 + \sqrt{g} N \mathcal{V} + N_i \mathcal{H}^i
 + \sigma P_N + \mu \pi \right) + \beta E_{\mbox{\tiny ADM}} \,,
 \label{hamiltonianbulkfinal}
\end{equation}
where $N_i$, $\sigma$ and $\mu$ are Lagrange multipliers.
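The degeneracy invoked above can be checked directly. The following minimal numerical sketch (ours, not part of the original analysis; Python with numpy, using a randomly generated positive-definite metric) verifies the trace identity $g_{ij} G^{ijkl} = (1 - 3\lambda)\, g^{kl}$, which vanishes precisely at the KC point:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3))
g = A @ A.T + 3*np.eye(3)        # a generic positive-definite 3-metric
ginv = np.linalg.inv(g)

def G(lam):
    # G^{ijkl} = (g^{ik} g^{jl} + g^{il} g^{jk})/2 - lam g^{ij} g^{kl}
    return 0.5*(np.einsum('ik,jl->ijkl', ginv, ginv)
                + np.einsum('il,jk->ijkl', ginv, ginv)) \
           - lam*np.einsum('ij,kl->ijkl', ginv, ginv)

for lam in (1.0, 1/3):
    trace = np.einsum('ij,ijkl->kl', g, G(lam))
    print(lam, np.allclose(trace, (1 - 3*lam)*ginv))  # True for any lam
# at lam = 1/3 the trace vanishes identically: g_ij G^{ijkl} = 0
\end{verbatim}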
The classical Hamiltonian (\ref{hamiltonianbulkfinal}) is subject to the constraints (\ref{hamiltonianconstraintgeneral}) and (\ref{cconstraint}), which have not been added with Lagrange multipliers. In Appendix \ref{app:multipliers} we show that if one does add them, then the classical condition of preserving the second-class constraints fixes the corresponding Lagrange multipliers to zero. Therefore, (\ref{hamiltonianbulkfinal}) is the final classical Hamiltonian, and for the classical initial value problem it is enough to impose (\ref{hamiltonianconstraintgeneral}) and (\ref{cconstraint}) only initially (although in the quantum theory there are no such restrictions on the Lagrange multipliers). The form (\ref{hamiltonianbulkfinal}) of the Hamiltonian is quite suitable for quantization, since its bulk part remains nonzero on the constrained phase space. On the other hand, if one wishes to stay as close as possible to GR, then by using the constraint $\mathcal{H} = 0$ this Hamiltonian can also be brought to the form of a sum of constraints in the bulk part plus nontrivial boundary terms. This can be achieved because the difference between $\sqrt{g} N \mathcal{V}$ and $\sqrt{g} N \mathcal{U}$ is a sum of exact divergences, see (\ref{modifiedpotential}), and the only one of these that survives upon integration is the $z=1$ divergence. Thus, we have the identity
\begin{equation}
 \int d^3x \sqrt{g} N \mathcal{U} = \int d^3x \sqrt{g} N \mathcal{V} + 2 \alpha \Phi_N \,,
 \label{identityuv}
\end{equation}
where
\begin{equation}
 \Phi_N \equiv \oint d\Sigma_i \partial_i N \,.
 \label{fluxn}
\end{equation}
The version of the Hamiltonian with a sum-of-constraints bulk part is\footnote{The presence of the $\Phi_N$ term can also be regarded as a requirement for the differentiability of the Hamiltonian (\ref{hamiltonianfinal}) under general $\delta N$ variations, since $\mathcal{U}$ has a $2\alpha \nabla_i a^i$ term that asymptotically is of order $\mathcal{O}(1/r^3)$.}
\begin{equation}
 H = \int d^3x \left( N \mathcal{H} + N_i \mathcal{H}^i + \sigma P_N + \mu \pi \right)
 + \beta E_{\mbox{\tiny ADM}} - 2 \alpha \Phi_N \,.
 \label{hamiltonianfinal}
\end{equation}
In particular, this form is useful to obtain a simple expression for the energy. It is also useful to address the preservation of all the constraints. Since the momentum constraint is of first class, it is automatically preserved on the totally constrained phase space. In the classical theory, the preservation of the second-class constraints leads to conditions on their associated Lagrange multipliers. In Appendix \ref{app:multipliers} we show that the preservation of $P_N$ and $\pi$ requires the vanishing of the multipliers of $\mathcal{H}$ and $\mathcal{C}$, as we have already mentioned.
Finally, the preservation of $\mathcal{H}$ and $\mathcal{C}$ leads to the following equations for the Lagrange multipliers $\sigma$ and $\mu$:
\begin{eqnarray}
 \int d^3y \,\sigma\, \frac{\delta}{\delta N} \int d^3w \sqrt{g}\, \mathcal{U} \delta_{wx}
 + \int d^3y \,\mu\, g_{ij} \frac{\delta}{\delta g_{ij}} \int d^3w \sqrt{g}\, \mathcal{U} \delta_{wx}
 - \frac{3 \kappa \pi^{ij} \pi_{ij}}{\sqrt{g}} \mu && \nonumber \\
 + 4 \kappa \int d^3y \frac{N \pi_{ij}}{\sqrt{g}} \frac{\delta }{\delta g_{ij}} \int d^3w \sqrt{g}\, \mathcal{U} \delta_{wx}
 - 4 \kappa \pi^{ij} \mathcal{W}_{ij} = 0 \,, &&
 \label{sigmaeq} \\
 \int d^3y \,\mu\, g_{ij} \frac{\delta}{\delta g_{ij}} \int d^3w \sqrt{g}\, \mathcal{W} \delta_{wx}
 + \int d^3y \,\sigma\, \frac{\delta}{\delta N} \int d^3w \sqrt{g}\, \mathcal{W} \delta_{wx}
 + \frac{9 \kappa \pi^{ij} \pi_{ij}}{2\sqrt{g}} \mu && \nonumber \\
 + 4 \kappa \int d^3y \frac{\pi_{ij}}{\sqrt{g}} \frac{\delta}{\delta g_{ij}} \int d^3w \sqrt{g}\, \mathcal{W} \delta_{wx}
 + 6 \kappa \pi^{ij} \mathcal{W}_{ij} = 0 \,. &&
 \label{mueq}
\end{eqnarray}
In these expressions we have labeled spatial points with single letters like $w$, $\delta_{wx}$ is the Dirac delta $\delta^{(3)}(w-x)$, and the spatial point $x$ labels the point at which these equations are evaluated. When the potential is of $z=3$ order, the analysis of Eqs.~(\ref{sigmaeq}) and (\ref{mueq}) shows that they are inhomogeneous \emph{elliptic} partial differential equations of sixth order for $\sigma$ and $\mu$ \cite{Bellorin:2013zbp}. The equations of motion in the Hamiltonian formalism are
\begin{eqnarray}
 \dot{N} &=& N^k \partial_k N + \sigma \,, \label{dotn} \\
 \dot{g}_{ij} &=& \frac{4 \kappa N}{\sqrt{g}} \pi_{ij} + 2 \nabla_{(i} N_{j)} + \mu g_{ij} \,, \label{dotg} \\
 \dot{\pi}^{ij} &=& -\frac{4 \kappa N}{\sqrt{g}} ( \pi^{ik} \pi_k{}^j -\frac{1}{4} g^{ij} \pi^{kl} \pi_{kl} ) - \sqrt{g} N \mathcal{W}^{ij} \nonumber \\
 & & - 2 \nabla_k N^{(i} \pi^{j)k} + \nabla_k ( N^k \pi^{ij}) - \mu \pi^{ij} \,. \label{dotpi}
\end{eqnarray}
In the counting of the independent degrees of freedom we have 14 nonreduced canonical variables in the set $\{ ( g_{ij} , \pi^{ij} ) \,,\, ( N,P_N ) \}$, three components of the first-class constraint $\mathcal{H}^i$ and four second-class constraints in the set $\{ P_N, \pi , \mathcal{H} , \mathcal{C} \}$. The number of independent degrees of freedom is given by
\begin{equation}
 (\mbox{14 can. var.}) - \left[ 2 \times (\mbox{3 first-cls. c.}) + ( \mbox{4 second-cls. c.}) \right] = \mbox{4 indep. can. var. }
\end{equation}
Thus, there are two physical modes in the theory; that is, two modes that each propagate with a complete pair of canonical variables. This is the same number of degrees of freedom as in GR; there are no extra modes in this theory. This property naturally raises the question of whether the dynamics of this theory is able to reproduce the dynamics of GR at suitably large distances, i.e., at least in a perturbative regime for both theories. This was analyzed for the perturbatively linearized theory in Ref.~\cite{Bellorin:2013zbp}; we take up this point again in Section \ref{sec:reducedhamiltonian}.
\subsection{Perturbative approach}
\label{sec:perturbations}
In the previous section we summarized the general Hamiltonian formulation, applicable to any potential $\mathcal{V}$. In this section we formulate the constraints and the equations for the Lagrange multipliers in explicit form, with the aim of studying their solutions rigorously.
Although a complete $z=3$ potential has a huge number of terms, a perturbative approach may render the problem tractable.\footnote{A perturbative study of a $\lambda =1/3$ nonprojectable model without the $a_i$ terms was done in Ref.~\cite{Park:2009hg}. A perturbative analysis of a projectable model was done in Ref.~\cite{Bogdanos:2009uj}.} In Ref.~\cite{Colombo:2014lta} Colombo, G\"umr\"uk\c{c}uo\u{g}lu and Sotiriou found that within a $z=3$ potential the nonequivalent terms that contribute to the action quadratic in perturbations (around Minkowski spacetime) are
\begin{eqnarray}
 - \mathcal{V}^{(z=1)} &=& \beta R + \alpha a_i a^i \,, \label{v1} \\
 - \mathcal{V}^{(z=2)} &=& \alpha_1 R \nabla_i a^i + \alpha_2 \nabla_i a_j \nabla^i a^j + \beta_1 R_{ij} R^{ij} + \beta_2 R^2 \,, \label{v2} \\
 - \mathcal{V}^{(z=3)} &=& \alpha_3 \nabla^2 R \nabla_i a^i + \alpha_4 \nabla^2 a_i \nabla^2 a^i + \beta_3 \nabla_i R_{jk} \nabla^i R^{jk} + \beta_4 \nabla_i R \nabla^i R \,, \nonumber \\
 \label{v3}
\end{eqnarray}
where $\nabla^2 \equiv \nabla_i \nabla^i$ and all the alphas and betas are coupling constants.\footnote{In addition to these terms, mixed derivative terms that combine spatial with time derivatives of the spatial metric can be included \cite{Pospelov:2010mp}. They also contribute to the second-order action; actually the main focus of Ref.~\cite{Colombo:2014lta} was on them. These terms could lead to interesting extensions of the Ho\v{r}ava theory. Here we do not consider mixed derivative terms.} We start the perturbations around Minkowski spacetime by introducing the variables $h_{ij}$, $p_{ij}$ and $n$ as
\begin{equation}
 g_{ij} = \delta_{ij} + \epsilon h_{ij} \,, \hspace{2em}
 \pi^{ij} = \epsilon p_{ij} \,, \hspace{2em}
 N = 1 + \epsilon n \,.
 \label{perturbativevariables}
\end{equation}
We use the orthogonal transverse/longitudinal decomposition
\begin{equation}
 h_{ij} = h_{ij}^{TT} + \frac{1}{2} ( \delta_{ij} - \partial_{ij} \partial^{-2} ) h^T + \partial_{(i} h^L_{j)} \,,
 \label{decomposition}
\end{equation}
where $\partial_{ij\cdots k}$ stands for $\partial_i\partial_j\cdots\partial_k$, $\partial^2 = \partial_i \partial_i$ and $\partial^{-2} = 1/\partial^2$. $h_{ij}^{TT}$ is subject to $\partial_i h_{ij}^{TT} = h_{ii}^{TT} = 0$. We make an analogous decomposition of $p_{ij}$. We impose the transverse gauge,
\begin{equation}
 \partial_i h_{ij} = 0 \,,
 \label{gauge}
\end{equation}
under which the entire longitudinal sector of the metric is eliminated. We study the constraints (\ref{momentumconstraint} - \ref{cconstraint}) of the theory at linear order in perturbations, adopting the potential defined in (\ref{v1} - \ref{v3}). The momentum constraint (\ref{momentumconstraint}), simplified by using $P_N = 0$ explicitly, eliminates the longitudinal sector of $p_{ij}$,
\begin{equation}
 \partial_i p_{ij} = 0 \,,
\end{equation}
whereas the $\pi = 0$ constraint dictates that $p_{ij}$ is traceless, hence $p^T = 0$. So far we are left with the set $\{ h^{TT}_{ij}, p^{TT}_{ij}, h^T , n\}$ as the set of remaining canonical variables. Now we move to the $\mathcal{H}$ and $\mathcal{C}$ constraints; before doing so, we illustrate the transverse-traceless projection just introduced with a short numerical sketch.
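The following minimal sketch (ours, not part of the original analysis; Python with numpy, acting on a single Fourier mode with randomly generated data) builds the transverse projector $\theta_{ij} = \delta_{ij} - k_i k_j/\vec{k}^{\,2}$ and extracts the transverse-traceless part of a symmetric perturbation, which can then be checked to be transverse and traceless:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
k = rng.normal(size=3)                            # a single Fourier mode
theta = np.eye(3) - np.outer(k, k) / (k @ k)      # transverse projector
h = rng.normal(size=(3, 3)); h = (h + h.T) / 2    # symmetric perturbation

h_trans = theta @ h @ theta                       # transverse part
hTT = h_trans - 0.5 * theta * np.trace(h_trans)   # remove transverse trace
print(np.allclose(hTT @ k, 0), np.isclose(np.trace(hTT), 0))  # True True
\end{verbatim}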
To present the results in a compact form, we introduce the vector $\phi$ of scalars and the functional matrix $\mathbb{M}$ as
\begin{equation}
 \phi = \left( \begin{array}{c} h^T \\ n \end{array}\right) \,, \hspace{2em}
 \mathbb{M} = \left( \begin{array}{cc} \mathbb{D}_1 & \mathbb{D}_2 \\[1ex] \mathbb{D}_2 & \mathbb{D}_3 \end{array}\right) \,,
 \label{phiM}
\end{equation}
where
\begin{equation}
 \begin{array}{l}
 {\displaystyle \mathbb{D}_1 \equiv \frac{1}{8} \left( ( 3 \beta_3 + 8 \beta_4 ) \partial^6 - ( 3 \beta_1 + 8 \beta_2 ) \partial^4 + \beta \partial^2 \right) \,, } \\[2ex]
 {\displaystyle \mathbb{D}_2 \equiv \frac{1}{2} \left( \alpha_3 \partial^6 + \alpha_1 \partial^4 + \beta \partial^2 \right) \,, \hspace{2em}
 \mathbb{D}_3 \equiv \alpha_4 \partial^6 - \alpha_2 \partial^4 + \alpha \partial^2 } \,.
 \end{array}
 \label{operators}
\end{equation}
Thus, with the potential given in (\ref{v1} - \ref{v3}), the $\mathcal{H}$ and $\mathcal{C}$ constraints at linear order become
\begin{equation}
 \mathbb{M} \phi = 0 \,,
 \label{eqnh}
\end{equation}
where the first row of this vector equation represents the $\mathcal{C}$ constraint and the second row the $\mathcal{H}$ constraint. With (\ref{eqnh}) we confirm the consistency of the structure of constraints: (\ref{eqnh}) is a system of sixth-order elliptic partial differential equations for $h^T$ and $n$ (after imposing the appropriate positivity conditions on the matrix of coupling constants). To solve the constraints (\ref{eqnh}) we start by decoupling them; that is, we want two separate equations in which $h^T$ and $n$ are not mixed. To this end we multiply Eq.~(\ref{eqnh}) from the left by
\begin{equation}
 \left( \begin{array}{rr} \mathbb{D}_3 & -\mathbb{D}_2 \\ -\mathbb{D}_2 & \mathbb{D}_1 \end{array} \right)
\end{equation}
and get a diagonal operator acting on $\phi$, which we write as
\begin{equation}
 \mathbb{L}\phi = 0 \,, \hspace{2em}
 \mathbb{L} \equiv \mathbb{D}_1 \mathbb{D}_3 - \mathbb{D}_2^2 \,.
 \label{decoupledeq}
\end{equation}
Equation (\ref{decoupledeq}) represents two decoupled equations for $h^T$ and $n$; moreover, the equations are the same (with the same boundary conditions). Given the values of all the coupling constants, the generic case is when the operator $\mathbb{L}$ is a sixth-order polynomial in $\partial^2$. We can always factorize it; in particular, we may write it as
\begin{equation}
 \mathbb{L} = K ( \partial^2 - z_1 ) P^{(5)}(\partial^2) \,,
 \label{factorizedl}
\end{equation}
where $P^{(5)}(u)$ is a fifth-order polynomial in $u$, $z_1$ stands for any one of the roots of $\mathbb{L}$, and we first suppose that $K = (1/8) \left(\alpha_4( 3 \beta_3 + 8 \beta_4 ) - 2\alpha_3^2\right)$ is not zero. By combining (\ref{factorizedl}) with (\ref{decoupledeq}) we write the constraints in the form
\begin{equation}
 \partial^2 P^{(5)}(\partial^2) \phi = z_1 P^{(5)} (\partial^2) \phi \,.
 \label{eigenfunction}
\end{equation}
The decoupled equation (\ref{eigenfunction}) implies that $P^{(5)}(\partial^2) \phi$ is an eigenfunction of the Laplacian $\partial^2$. Since we are studying the asymptotically flat case, the spatial domain of the problem is the whole $\mathbb{R}^3$ and the boundary condition is that $\phi$ and its derivatives vanish at spatial infinity. Actually, on such a noncompact domain the flat Euclidean Laplacian $\partial^2$ has no nonzero eigenfunctions that go asymptotically to zero in all angular directions.
Thus, the only solution of (\ref{eigenfunction}) that satisfies the boundary condition is
\begin{equation}
 P^{(5)}(\partial^2) \phi = 0
 \label{zerosolution}
\end{equation}
everywhere. Let us present the same argument in another form. Consider the operator $\partial^2 - z_1$, with $z_1 \in \mathbb{C}$, acting on the space of functions $\psi$ whose domain is the whole $\mathbb{R}^3$ and that go asymptotically to zero (see (\ref{asymptoticonditions})). In this setting, Eq.~(\ref{eigenfunction}) can be cast as
\begin{equation}
 ( \partial^2 - z_1 ) \psi = 0 \,.
 \label{inverseproblem}
\end{equation}
On this space of functions, $\partial^2$ has a continuous spectrum contained in $( - \infty , 0 ]$; it has no eigenvalues. With the prescribed asymptotic behavior the inverse $( \partial^2 - z_1 )^{-1}$ exists for any value of $z_1$, but it behaves in different ways depending on whether $z_1$ belongs to the spectrum or not. If $z_1 \not\in ( - \infty , 0 ]$, the inverse $( \partial^2 - z_1 )^{-1}$ is a bounded operator. In this case Eq.~(\ref{inverseproblem}) automatically implies $\psi = 0$, as stated in (\ref{zerosolution}). If $z_1 \in ( - \infty , 0 ]$, $( \partial^2 - z_1 )^{-1}$ still exists but is an unbounded operator. However, the right-hand side of Eq.~(\ref{inverseproblem}) is zero, and $( \partial^2 - z_1 )^{-1}$ acting on it gives zero anyway. Therefore, for any value of $z_1$, Eq.~(\ref{inverseproblem}) has the function $\psi = 0$ as its only solution satisfying the prescribed asymptotic behavior. Coming back to Eq.~(\ref{zerosolution}), it turns out that it poses another eigenfunction problem for the Laplacian, since its left-hand side is again a polynomial in $\partial^2$, so we may factorize it once more,
\begin{equation}
 \left( \partial^2 - z_2 \right) P^{(4)}(\partial^2) \phi = 0 \,.
\end{equation}
Since the same arguments hold for this equation, $P^{(4)} (\partial^2) \phi = 0$ is its unique solution. We may proceed iteratively with the resulting equations to finally show that, at linear order, the variables $h^T$ and $n$ are equal to zero. We remark that it is the noncompactness of the domain and the prescribed asymptotic conditions of the problem posed in (\ref{eigenfunction}) that force the everywhere-vanishing function to be the unique eigenfunction. If $\mathbb{L}$ is a lower-order polynomial ($K=0$), an analogous eigenfunction problem for the Laplacian arises, since we may factorize the corresponding polynomial. By applying the same reasoning as above, we eventually arrive at the same zero solution. Therefore, we conclude that the unique solution of the linearized $\mathcal{H}$ and $\mathcal{C}$ constraints, which are expressed in (\ref{eqnh}), is
\begin{equation}
 h^T = n = 0 \,.
\end{equation}
A condition on the space of parameters remains: we require that the operator $\mathbb{L}$ does not vanish identically, since otherwise the number of constraints is effectively reduced and additional modes appear. In addition, we know that the perturbatively linearized version of the purely $z =1$ theory is equivalent to perturbatively linearized GR \cite{Bellorin:2013zbp}. To combine these two facts, we require that the fourth-order coefficient of $\mathbb{L}$, associated with the $z=1$ operators of the theory, be nonzero,
\begin{equation}
 \beta (2 \beta - \alpha) \neq 0 \,.
 \label{boundz1}
\end{equation}
We regard this as a condition for continuity in the number of degrees of freedom and for having a weak regime that tends to GR.
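Both the decoupling step and the coefficients just used can be verified symbolically. The following sketch (ours, not part of the original analysis; Python with sympy, treating the constant-coefficient operators as commuting symbols, with $u$ standing for the flat Laplacian) checks that the adjugate matrix diagonalizes $\mathbb{M}$, and extracts from $\mathbb{L}$ the leading coefficient $K$ and the fourth-order coefficient behind the bound (\ref{boundz1}):
\begin{verbatim}
import sympy as sp

u = sp.symbols('u')   # stands for the flat Laplacian
alpha, beta = sp.symbols('alpha beta')
a1, a2, a3, a4 = sp.symbols('alpha_1 alpha_2 alpha_3 alpha_4')
b1, b2, b3, b4 = sp.symbols('beta_1 beta_2 beta_3 beta_4')

# the operators of (\ref{operators}) as polynomials in u
D1 = sp.Rational(1, 8)*((3*b3 + 8*b4)*u**3 - (3*b1 + 8*b2)*u**2 + beta*u)
D2 = sp.Rational(1, 2)*(a3*u**3 + a1*u**2 + beta*u)
D3 = a4*u**3 - a2*u**2 + alpha*u

M = sp.Matrix([[D1, D2], [D2, D3]])
adj = sp.Matrix([[D3, -D2], [-D2, D1]])
L = sp.expand(D1*D3 - D2**2)

print(sp.simplify(adj*M - L*sp.eye(2)))  # zero matrix: the system decouples
print(sp.factor(L.coeff(u, 6)))  # K = (alpha_4(3 b_3 + 8 b_4) - 2 alpha_3^2)/8
print(sp.factor(L.coeff(u, 2)))  # beta(alpha - 2 beta)/8, cf. (\ref{boundz1})
\end{verbatim}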
The perturbative version of Eqs.~(\ref{sigmaeq}) and (\ref{mueq}) is obtained by regarding the Lagrange multipliers as variables of first order in perturbations. The linearized version of (\ref{sigmaeq} - \ref{mueq}) forms a system equivalent to (\ref{eqnh}),
\begin{equation}
 \mathbb{M} \left( \begin{array}{c} \mu \\ \sigma \end{array} \right) = 0 \,.
 \label{sigmamueq}
\end{equation}
Thus, by applying the same procedure as above, we obtain that $\sigma$ and $\mu$ are zero at linear order in perturbations. With all this information we may evaluate directly in Eq.~(\ref{dotg}) the condition of preservation in time of the transverse gauge (\ref{gauge}) (which is a canonical gauge). Considering the perturbation $N_i = \epsilon n_i$, Eq.~(\ref{dotg}) at linear order in perturbations yields
\begin{equation}
 \partial^2 n_i + \partial_i \partial_k n_k = 0 \,.
\end{equation}
This equation, combined with the boundary condition $n_i |_{\infty} = 0$, implies $n_i = 0$. We stress that this restriction and (\ref{sigmamueq}) are requirements of the classical formulation; they do not arise in the quantum theory. We finally have that, once all the constraints have been solved and the gauge has been fixed at linear order, there remains the pair $\{ h^{TT}_{ij} , p^{TT}_{ij} \}$ as the set of free canonical variables. This rigorously confirms the two propagating degrees of freedom that the generic, nonperturbative Hamiltonian analysis anticipated.
\section{Focusing the quantization}
\subsection{The reduced Hamiltonian and its spectrum}
\label{sec:reducedhamiltonian}
Once we know the solutions of all the constraints in the transverse gauge, we may compute the reduced canonical Hamiltonian of the linearized theory. Since in this theory we have the version (\ref{hamiltonianbulkfinal}) of the Hamiltonian, with a nonvanishing bulk part, the reduced Hamiltonian is obtained by direct substitution of the solutions of the constraints \emph{at linear order} into the second-order Hamiltonian density (the boundary term of (\ref{hamiltonianbulkfinal}) cancels out after the substitution). We have seen that at linear order in the transverse gauge $h_i^L = h^T = n = p_i^L = p^T = p_n = 0$. The substitution of these solutions yields
\begin{equation}
 H_{\mbox{\tiny RED}} = \int d^3x \left( 2 \kappa p^{TT}_{ij} p^{TT}_{ij} + \frac{1}{4} h^{TT}_{ij} \mathbb{V} h^{TT}_{ij} \right) \,,
 \label{reducedhamiltonian}
\end{equation}
where
\begin{equation}
 \mathbb{V} = - \beta \partial^2 - \beta_1 \partial^4 + \beta_3 \partial^6 \,.
 \label{operatorpotential}
\end{equation}
Alternatively, it is interesting to see how this reduced Hamiltonian can be obtained from the version (\ref{hamiltonianfinal}) of the exact Hamiltonian, whose bulk part is a sum of constraints but in which boundary terms remain. In Appendix \ref{app:boundary} we show that this can be effectively achieved in quite a similar fashion to the asymptotically flat reduced Hamiltonian of GR. In particular, this requires considering the solutions of the constraints at second order in perturbations. In that appendix we show that the boundary terms give the correct reduced Hamiltonian despite the fact that this is a theory with higher-order derivatives. There is a further connection between this theory and GR. The large-distance dynamics of the perturbatively linearized theory can be obtained from the reduced Hamiltonian (\ref{reducedhamiltonian}) by neglecting the higher-order derivatives against the lowest-order one.
By doing so we obtain the effective Hamiltonian for the tensorial modes
\begin{equation}
 H^{\mbox{\tiny eff}}_{\mbox{\tiny RED}} = \int d^3x \left( 2 \kappa p^{TT}_{ij} p^{TT}_{ij} - \frac{\beta}{4} h^{TT}_{ij} \partial^2 h^{TT}_{ij} \right) \,.
 \label{wavehamiltonian}
\end{equation}
This is equivalent to taking only the $z=1$ potential (\ref{z1potential}) and then linearizing it \cite{Bellorin:2013zbp}. Thus, the perturbatively linearized version of the large-distance effective action is physically equivalent to linearized GR. Here one of the key features is the vanishing of the variables $h^T$ and $n$ at linear order in perturbations. The evolution equations arising from (\ref{wavehamiltonian}) constitute the wave equation for $h^{TT}_{ij}$; thus the perturbative large-distance theory around Minkowski spacetime propagates gravitational waves exactly as linearized GR does. However, the nonperturbative dynamics of the two theories are different, even considering only the $z=1$ order on the side of the Ho\v{r}ava theory, since the nonperturbative field equations are different. The requirement of positivity of the reduced Hamiltonian imposes conditions on the coupling constants $\beta$, $\beta_1$ and $\beta_3$ (we assume that $\kappa$ is positive). We require that $\mathbb{V} \geq 0$. Consequently, from the dominant term in the low-energy range we have $\beta > 0$, and from the dominant term in the high-energy range it follows that $\beta_3 < 0$ ($\beta = 0$ is excluded by (\ref{boundz1}) and $\beta_3 = 0$ is excluded in order to have a genuine $z=3$ Hamiltonian). There is also a bound on $\beta_1$, all of whose possible values we consider in the following.
\begin{enumerate}
\item Case $\beta_1 \leq 0$. In this case $\mathbb{V} \geq 0$ automatically at all ranges of energy.
\item Case $\beta_1 > 0$. We address this case by factorizing $\mathbb{V}$,
\begin{equation}
 \mathbb{V} = \beta_3 \partial^2 ( \partial^2 - z_+ ) ( \partial^2 - z_- ) \,,
\end{equation}
where
\begin{equation}
 z_{\pm} = \frac{1}{2\beta_3} \left( \beta_1 \pm \sqrt{ \beta_1^2 + 4 \beta \beta_3 } \right)\,.
\end{equation}
\renewcommand{\labelenumii}{\theenumi.\arabic{enumii}}
\begin{enumerate}
\item If the discriminant is nonpositive, $\beta_1^2 + 4 \beta \beta_3 \leq 0$, we have $z_- = \bar{z}_+$. The potential $\mathbb{V}$ is positive since, for a test function $\psi$, its integral can be written as
\begin{equation}
 \beta_3 \int d^3x \, \bar{\psi}\, \partial^2 ( \partial^2 - \bar{z}_+ ) ( \partial^2 - z_+ ) \psi
 = - \beta_3 \int d^3x \, | ( \partial^2 - z_+ ) \partial_i \psi|^2 \,.
\end{equation}
The right-hand side is nonnegative because $\beta_3 < 0$.
\item If the discriminant is positive, $\beta_1^2 + 4 \beta \beta_3 > 0$, then $z_{\pm}$ are real and, due to the signs of the coupling constants, both are negative. The Fourier transform (FT) of $\mathbb{V}$, which is
\begin{equation}
 \tilde{\mathbb{V}}(k^2) = |\beta_3| k^2 ( k^2 - |z_+| ) ( k^2 - |z_-| ) \,,
 \label{potentialfourier}
\end{equation}
is useful for determining whether the spectrum of $\mathbb{V}$ is positive; the spectrum is given by all the values $\nu$ for which the equation
\begin{equation}
 ( \mathbb{V} - \nu ) \psi = g
 \label{eqnu}
\end{equation}
has no solution $\psi$. The function (\ref{potentialfourier}) is a real-valued third-order polynomial in $k^2$. In Fig.~\ref{fig:potential} we show a plot of $\tilde{\mathbb{V}}$ exhibiting its characteristic form in this case. It has a global minimum, which we denote by $\tilde{\mathbb{V}}_0$, and it does not have a global maximum.
For our purposes we also need to know that $\tilde{\mathbb{V}}_0$ is always negative, as indicated in the plot.
\begin{figure}[!ht]
\begin{center}
\includegraphics[scale=0.35]{./plotpotential.pdf}
\caption{\label{fig:potential} \small The Fourier transform of the operator $\mathbb{V}$ in case 2.2.}
\end{center}
\end{figure}
The solutions of (\ref{eqnu}) for all $\nu \in \mathbb{C}$ go as follows: if $\nu$ has a nonzero imaginary part, then the solution of (\ref{eqnu}) exists and its FT is given by
\begin{equation}
 \tilde{\psi} = \frac{\tilde{g}}{\tilde{\mathbb{V}} - \nu} \,.
 \label{nusolution}
\end{equation}
If $\nu$ is real and satisfies $\nu < \tilde{\mathbb{V}}_0$, then the solution of (\ref{eqnu}) is also given by (\ref{nusolution}). Finally, if $\nu$ is real and satisfies $\nu \geq \tilde{\mathbb{V}}_0$, then the expression (\ref{nusolution}) has a pole and the solution of (\ref{eqnu}) does not exist. We conclude that in this case the spectrum is formed by all the real values $\nu$ that satisfy $\nu \geq \tilde{\mathbb{V}}_0$. Since $\tilde{\mathbb{V}}_0$ is negative, the spectrum is not positive definite.
\end{enumerate}
\end{enumerate}
Case 2.1 can be cast as the range $0 < \beta_1 \leq 2\sqrt{\beta|\beta_3|}$. Therefore, the union of cases 1 and 2.1, which are the ones with a positive spectrum of $\mathbb{V}$, is $\beta_1 \leq 2\sqrt{\beta|\beta_3|}$. In summary, the restrictions on the coupling constants needed for continuity in the number of degrees of freedom, a weak regime approaching GR, and positivity and $z=3$ behavior of the Hamiltonian are
\begin{equation}
 \alpha \neq 2\beta \,, \hspace{2em}
 \beta > 0 \,, \hspace{2em}
 \beta_3 < 0 \,, \hspace{2em}
 \beta_1 \leq 2 \sqrt{\beta |\beta_3|} \,.
\end{equation}
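These bounds can be illustrated numerically. A minimal sketch (ours, not part of the original analysis; Python with numpy, with illustrative coupling values) evaluates the Fourier transform $\tilde{\mathbb{V}}(k^2) = \beta k^2 - \beta_1 k^4 - \beta_3 k^6$ on a grid and shows that its minimum dips below zero precisely when $\beta_1$ exceeds $2\sqrt{\beta|\beta_3|}$:
\begin{verbatim}
import numpy as np

beta, beta3 = 1.0, -1.0            # illustrative values: beta > 0, beta3 < 0
k2 = np.linspace(0.0, 4.0, 4001)   # grid in k^2

def V_tilde(beta1):
    # Fourier transform of V = -beta d^2 - beta1 d^4 + beta3 d^6
    return beta*k2 - beta1*k2**2 - beta3*k2**3

b1_crit = 2*np.sqrt(beta*abs(beta3))
print(V_tilde(0.9*b1_crit).min())  # ~ 0: cases 1 and 2.1, positive spectrum
print(V_tilde(1.5*b1_crit).min())  # < 0: case 2.2, spectrum not positive
\end{verbatim}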
\subsection{The propagator of the physical modes}
Building on the results of the previous sections on the linearized theory, in this section we obtain the propagators of the independent physical modes in the transverse gauge, which had not been considered previously for the full $z=3$ KC Ho\v{r}ava theory. With the propagator at hand, and with knowledge of the generic structure of the interactions, we may compute the superficial degree of divergence of 1PI diagrams and discuss power-counting renormalizability. The path integral in terms of the reduced phase space is\footnote{In formulas like (\ref{pathintegralreducedcan}) we omit product symbols like $\prod\limits_{i \leq j}{ \mathcal{D}h^{TT}_{ij} }$, etc.}
\begin{equation}
 Z_0 = \int \mathcal{D}h_{ij}^{TT} \mathcal{D}p^{TT}_{ij} \exp\left[ i\int dt d^3x \left( p^{TT}_{ij} \dot{h}_{ij}^{TT} - \mathcal{H}_{\mbox{\tiny RED}} \right) \right] \,,
 \label{pathintegralreducedcan}
\end{equation}
where the reduced Hamiltonian density $\mathcal{H}_{\mbox{\tiny RED}}$ can be read from (\ref{reducedhamiltonian}). After a Gaussian integration in $p^{TT}_{ij}$ we obtain the path integral in the noncanonical form
\begin{equation}
 Z_0 = \int \mathcal{D}h_{ij}^{TT} \exp\left[ \frac{i}{4} \int dt d^3x \left( \frac{1}{2\kappa} \dot{h}^{TT}_{ij} \dot{h}^{TT}_{ij} - h^{TT}_{ij} \mathbb{V} h^{TT}_{ij} \right) \right] \,.
 \label{pathintegralreduced}
\end{equation}
Consequently, the full propagator of the physical modes is
\begin{equation}
 \left< h_{ij}^{TT} h^{TT}_{kl} \right> = \frac{P^{TT}_{ijkl}}{\omega^2 / 2\kappa - \beta \vec{k}^{\,2} + \beta_1 \vec{k}^{\,4} + \beta_3 \vec{k}^{\,6}} \,,
 \label{propagator}
\end{equation}
where
\begin{equation}
 {\displaystyle P^{TT}_{ijkl} \equiv \frac{1}{\sqrt{2}} \left( \theta_{ik} \theta_{jl} + \theta_{il} \theta_{jk} - \theta_{ij} \theta_{kl} \right) } \,, \hspace{2em}
 {\displaystyle \theta_{ij} \equiv \delta_{ij} - \frac{k_i k_j}{\vec{k}^{\,2}} } \,.
\end{equation}
Notice that only some of the terms of the potential (\ref{v1} - \ref{v3}) contribute to the propagator of the physical modes. The independent propagator (\ref{propagator}) of this theory behaves just as intended in the original formulation of Ho\v{r}ava for a renormalizable and unitary theory of quantum gravity \cite{Horava:2009uw}: for high $\omega$ and $\vec{k}$ it is dominated by the $z=3$ mode $(\omega^2 / 2 \kappa + \beta_3 \vec{k}^{\:6})^{-1}$, and there are no independent propagators other than (\ref{propagator}). With the aim of analyzing UV divergences, we now study qualitatively the structure of the interactions. This requires us to go beyond the linear order. In particular, under the scheme of dealing with reduced variables, the constraints must be solved at higher orders in perturbations. We concentrate on the second-class constraints, since for the first-class one the standard techniques of quantization of gauge systems can, in principle, be applied. Among the set of second-class constraints of the theory, $\mathcal{H}$ and $\mathcal{C}$ possess the most involved structure, since they are partial differential equations. At higher orders in perturbations their solutions require the inverse of a nonlocal operator.\footnote{Renormalization of gravity theories with nonlocal terms has been considered in Ref.~\cite{varios:superrenormalizable}, obtaining super-renormalizable theories.} The operator is the matrix $\mathbb{M}$ given in (\ref{phiM}). To illustrate this, we may present the Hamiltonian constraint $\mathcal{H}$ at second order in perturbations, which is
\begin{equation}
 \begin{array}{rr}
 {\displaystyle 2 \epsilon \left( \mathbb{D}_2 h^T + \mathbb{D}_3 n \right) = \frac{\epsilon^2}{4} \left[ - 8 \kappa p^{TT}_{ij} p^{TT}_{ij} + \beta_1 \partial^2 h^{TT}_{ij} \partial^2 h^{TT}_{ij} + \beta_3 \partial^2 \partial_i h^{TT}_{jk} \partial^2 \partial_i h^{TT}_{jk} \right.} \\[2ex]
 \hspace{2em} {\displaystyle \left. + \left( \beta + \alpha_1 \partial^2 + \alpha_3 \partial^4 \right) \left( 4 h^{TT}_{ij} \partial^2 h^{TT}_{ij} + 3 \partial_i h^{TT}_{jk} \partial_i h^{TT}_{jk} - 2 \partial_i h^{TT}_{jk} \partial_k h^{TT}_{ij} \right) \right] } \,,
 \end{array}
 \label{hamiltonianconstraintsecondorder}
\end{equation}
where $\mathbb{D}_2$ and $\mathbb{D}_3$ were defined in (\ref{operators}). In all the terms weighted by a power of $\epsilon^2$ we have substituted the linear-order solutions for the variables that are restricted by the constraints. Note that on the left-hand side of this constraint we have the second row of the matrix $\mathbb{M}$ acting on the vector $\phi$ (\ref{phiM}).
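To make the nonlocal inversion concrete, the following sketch (ours; Python with numpy; the coupling values and the source are purely illustrative assumptions) solves the linear system mode by mode in Fourier space, where the entries of $\mathbb{M}$ become ordinary numbers:
\begin{verbatim}
import numpy as np

# illustrative couplings (our assumptions): beta = alpha = alpha_4 = 1,
# beta_3 = -1, and the remaining alphas and betas set to zero for brevity
def M(k2):
    u = -k2                      # Fourier image of the flat Laplacian
    D1 = (-3*u**3 + u) / 8       # (1/8)(3 beta_3 u^3 + beta u)
    D2 = u / 2                   # (1/2) beta u
    D3 = u**3 + u                # alpha_4 u^3 + alpha u
    return np.array([[D1, D2], [D2, D3]])

source = np.array([0.0, 1.0])    # schematic quadratic TT source at this mode
for k2 in (0.5, 1.0, 2.0):
    print(k2, np.linalg.solve(M(k2), source))   # the scalars (h^T, n)
\end{verbatim}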
As usual in a perturbative approach, at any order in perturbations the solutions for $h^T$ and $n$ corresponding to the previous orders must be substituted everywhere except in the term of lowest order in $\epsilon$, which is always the one arising on the left-hand side of (\ref{hamiltonianconstraintsecondorder}). Therefore, the $\mathcal{H}$ and $\mathcal{C}$ constraints become linear equations in these variables at any order in perturbations, and the operator acting on them is $\mathbb{M}$. Thus, we see that the solutions of the $\mathcal{H}$ and $\mathcal{C}$ constraints require the use of a nonlocal operator, which in general is difficult to represent. However, for our purposes we only need to know the distribution of momenta in the UV regime. We may then approximate the solutions by taking only the terms that contribute with the highest power of momenta in Fourier space. To achieve this we make the following observation: at any order in perturbations, the highest number of spatial derivatives that the $\mathcal{H}$ and $\mathcal{C}$ constraints contain \emph{is the same} both for the scalars $h^T$ and $n$ and for the tensorial modes $h_{ij}^{TT}$. This is a consequence of two facts: (i) in the decomposition (\ref{decomposition}) $h_{ij}^{TT}$ and $h^T$ enter with the same order in derivatives (or the same power of Fourier-space momentum, if one wishes)\footnote{Some derivatives that act on $h^T$ are absent for $h^{TT}_{ij}$, since the latter satisfies $\partial_i h^{TT}_{ij} = 0$. However, there remain other combinations that are not divergences of $h^{TT}_{ij}$. In this discussion we are interested only in the powers of momenta, regardless of their origin.}, and (ii) we are considering the presence of all the inequivalent FDiff-covariant interaction terms up to order $z=3$, which implies that the highest number of derivatives of the lapse function $N$ is equal to that of the spatial metric $g_{ij}$. As an example, Eq.~(\ref{hamiltonianconstraintsecondorder}) has a maximum of six derivatives acting on $h^T$, $n$ and $h^{TT}_{ij}$. In addition, the $\mathcal{H}$ and $\mathcal{C}$ constraints have no spatial derivatives of the conjugate momenta. Thus, for second and higher orders in perturbations, the UV-dominant part of the solutions can be modeled in the schematic form
\begin{equation}
 h^T \,, n \sim \left( \frac{1}{( \partial_m )^{2z}} ( \partial_n )^{2z} \right) \left( h_{ij}^{TT} \cdots h_{kl}^{TT} \right)\,, \frac{1}{( \partial_m )^{2z}} \left( h^{TT}_{ij} \cdots h^{TT}_{kl} p^{TT}_{pq} p^{TT}_{rs} \right) \,.
 \label{highenergysolution}
\end{equation}
At the highest order in derivatives, the matrix $\mathbb{M}$ can be expressed as the operator $\partial^{2z}$ times a matrix of dimensionless coupling constants, whose determinant is $K = (1/8) \left(\alpha_4( 3 \beta_3 + 8 \beta_4 ) - 2\alpha_3^2\right)$. We assume that $K\neq 0$. We keep the dependence on $p_{ij}^{TT}$ in quadratic form at any order in $\epsilon$, since $\mathcal{H}$ and $\mathcal{C}$ have only quadratic dependence on the exact momentum $\pi^{ij}$. Moreover, solving the constraints $\mathcal{H}^i$ and $\pi$ for $p^L_i$ and $p^T$ does not raise or lower the power in $p^{TT}_{ij}$. In Appendix \ref{app:linearmomentum} we develop this last argument.
In $d+1$ spacetime dimensions the canonically conjugate variable $p_{ij}^{TT}$ scales\footnote{We recall that the assignment of dimensions to coordinates and field variables in Ho\v{r}ava gravity is intentionally made so that the coupling constant $\kappa$ is dimensionless \cite{Horava:2009uw}.} with the UV cutoff in momenta $\Lambda$ as $\Lambda^d$. In this theory we intentionally have $z=d$. Then, from the schematic relation (\ref{highenergysolution}) we deduce that the solutions $h^T$ and $n$ do not contribute powers of momenta to the vertices at any order in perturbations. For example, in a $2z$-order cubic interaction like $h^T h^{TT}_{ij} \partial^6 h_{ij}^{TT}$, after substituting the solution for $h^T$ the vertex still contributes $2z = 6$ powers of momenta. Therefore, after taking into account the nonlocal nature of the solutions of the second-class constraints, we see that the power counting is not altered by the process of solving them. Upon these considerations, and since we have a genuine $z=3$ propagator, we may now discuss power-counting renormalizability, guided by the superficial degree of divergence of general 1PI diagrams over the reduced phase space. For this computation we follow Refs.~\cite{Visser:2009fg,Visser:2009ys}. Further developments on the renormalization of Lorentz-violating theories, in particular studies on the behavior of the subdivergences, were made in Refs.~\cite{varios:subdivergences}. From the propagator (\ref{propagator}) we deduce that if $\Lambda$ is a UV cutoff for the momenta, then $\Lambda^{z}$ is the cutoff for the energy (up to constants of proportionality that are irrelevant for our purposes), with $z=3$. Therefore, each loop contributes in the UV regime a factor
\begin{equation}
 \int d\omega d^dk \rightarrow \Lambda^{d+z} \,,
\end{equation}
while each internal propagator contributes a factor
\begin{equation}
 \Lambda^{-2z} \,.
\end{equation}
In any vertex we can have at most a contribution of $2z$ powers of loop momenta coming from the vertex itself (for vertices that are of $2z$ order in spatial derivatives). If in a 1PI Feynman diagram $L$ is the number of loops, $I$ is the number of internal lines and $V$ is the number of vertices, its superficial degree of divergence $D$ is bounded by
\begin{eqnarray}
 D &\leq& ( d + z ) L + 2 z ( V - I ) \\
 &=& ( d - z ) L + 2 z ( L + V - I ) \,.
\end{eqnarray}
Now the identity $ L - 1 = I -V $ for connected graphs is used, and in addition in this theory we have $z = d$. Therefore, the superficial degree of divergence is bounded by
\begin{equation}
 D \leq 2z \,.
\end{equation}
This is the bound (8) of Ref.~\cite{Visser:2009ys}, where Lorentz-violating theories with interactions depending on spatial derivatives were considered. This degree of divergence coincides with the highest-order operators already included in the bare action (once we extend our potential to include all the $z \leq 3$ terms, not only the operators that contribute to the quadratic action). This leads to the conclusion that the theory is power-counting renormalizable. Unitarity and the criterion of power-counting renormalizability are safe in this theory.
\subsection{The path integral in the nonreduced phase space}
\subsubsection{Canonical formulation}
If, unlike the procedure of the previous sections, one wants to avoid the problem of solving the constraints and to deal with nonreduced variables, then all of the unsolved constraints must be incorporated into the quantization procedure.
There are at least two ways to address the quantization of theories with second-class constraints in nonreduced variables: the Dirac brackets in the operator formalism and the adapted measure in the path-integral formalism \cite{Senjanovic:1976br}. Here we study the path integral. Let us introduce a common notation for the second-class constraints: $\theta_1 \equiv \pi$, $\theta_2 \equiv P_N$, $\theta_3 \equiv \mathcal{C}$ and $\theta_4 \equiv \mathcal{H}$; and let $\chi^i$ denote a gauge-fixing condition for the freedom of performing spatial diffeomorphisms. The path integral in terms of the nonreduced canonical variables is
\begin{equation}
 Z_0 = \int \mathcal{D}V \delta(\mathcal{H}^i) \delta(\chi^i) \delta(\theta_m) e^{ i S_{\mbox{\tiny CAN}} } \,,
\end{equation}
where the measure and the action are given by
\begin{eqnarray}
 \mathcal{D}V &\equiv& \mathcal{D}g_{ij} \mathcal{D}\pi^{ij} \mathcal{D}N \mathcal{D}P_N \times \det\{ \mathcal{H}^k , \chi^l \} \sqrt{ \det \{ \theta_p , \theta_q \} } \,,
 \label{measurecanonicalgeneral} \\
 S_{\mbox{\tiny CAN}} &=& \int dt \left[ \int d^3x \left( \pi^{ij} \dot{g}_{ij} + P_N \dot{N} - \frac{2 \kappa N}{\sqrt{g}} \pi^{ij} \pi_{ij} - \sqrt{g} N \mathcal{V} \right) + \beta E_{\mbox{\tiny ADM}} \right] \,. \nonumber \\
\end{eqnarray}
In the canonical formalism the shift vector $N_i$ is a Lagrange multiplier; hence it does not arise in the path integral (unless one wants to ``raise'' the $\delta(\mathcal{H}^i)$ up to the Lagrangian). There is an important simplification in the matrix of Poisson brackets between the second-class constraints that helps in implementing the path integral: all the brackets between the constraints $P_N$ and $\pi$ vanish. Thus, the matrix of brackets acquires the block form
\begin{equation}
 \{ \theta_p , \theta_q \} = \left( \begin{array}{cc} 0 & \mathcal{M} \\ - \mathcal{M}^t & \mathcal{N} \end{array} \right) \,,
\end{equation}
where $\mathcal{M}$ is the submatrix of brackets corresponding to the sector $\{ \theta_{p = 1,2} , \theta_{q = 3,4} \}$ and $\mathcal{N}$ is the submatrix of the sector $\{ \theta_{p = 3,4} , \theta_{q=3,4} \}$. Consequently, the measure for the second-class constraints simplifies,
\begin{equation}
 \sqrt{ \det \{ \theta_p , \theta_q \} } = \det \mathcal{M} \,.
 \label{determinant}
\end{equation}
On the basis of this relation we can incorporate the measure into the Lagrangian by means of fermionic ghosts. For a potential $\mathcal{V}$ the entries of $\mathcal{M}$ are the equal-time brackets
\begin{eqnarray}
 \{ P_N(x) , \mathcal{H}(y) \} &=& - \frac{\delta}{\delta N(x)} \int d^3w \sqrt{g}\, \mathcal{U} \delta_{wy} \,,
 \label{bracketph} \\
 \{ P_N(x) , \mathcal{C}(y) \} &=& \frac{\delta}{\delta N(x)} \int d^3w \sqrt{g}\, \mathcal{W} \delta_{wy} \,, \\
 \{ \pi(x) , \mathcal{H}(y) \} &=& \frac{3 \kappa}{\sqrt{g}} \pi^{ij} \pi_{ij} \delta_{xy} - \left( g_{ij} \frac{\delta}{\delta g_{ij}} \right)_{\!\!x} \int d^3w \sqrt{g}\, \mathcal{U} \delta_{wy} \,, \\
 \{ \pi(x) , \mathcal{C}(y) \} &=& \frac{9 \kappa}{2 \sqrt{g}} \pi^{ij} \pi_{ij} \delta_{xy} + \left( g_{ij} \frac{\delta}{\delta g_{ij}} \right)_{\!\!x} \int d^3w \sqrt{g}\, \mathcal{W} \delta_{wy} \,.
 \label{bracketpic}
\end{eqnarray}
The vanishing of the brackets between $P_N$ and $\pi$ suggests that perhaps this theory could be reformulated as a theory without second-class constraints and with enhanced gauge symmetries.
The vanishing of the brackets between $P_N$ and $\pi$ suggests that this theory could perhaps be reformulated as a theory without second-class constraints and with enhanced gauge symmetries. This technique consists of promoting $P_N$ and $\pi$ to first-class constraints, regarding $\mathcal{H}$ and $\mathcal{C}$ as gauge-fixing conditions for the associated gauge symmetries, and modifying the Hamiltonian without altering the physics. In Appendix \ref{app:firstclass} we study this possibility for the linearized theory, eventually finding that this procedure simply leads to the reduced theory with a trivial gauge symmetry.

With the aim of getting explicit formulas, we now consider the path integral of the linearized theory. We introduce the perturbative variables according to (\ref{perturbativevariables}), adding $P_N = \epsilon p_n$. We perform the transverse-longitudinal decomposition (\ref{decomposition}) in $h_{ij}$ and $p_{ij}$. We keep all the constraints up to linear order in $\epsilon$ in the measure and the deltas, and the action up to second order in $\epsilon$. Some variables that we are not interested in can be quickly eliminated along the same lines as in Section \ref{sec:perturbations}. The transverse gauge (\ref{gauge}) and the linearized constraints, except $\mathcal{H}$ and $\mathcal{C}$, yield $h_i^L = p_i^L = p^T = p_n = 0$. Recalling our analysis of the linearized $\mathcal{H}$ and $\mathcal{C}$ constraints of Section \ref{sec:perturbations}, we have that the delta factors in the linearized theory become
\begin{equation}
 \delta(\mathcal{H}^i) \delta(\chi^i) \delta(\theta_m) = \delta(p^L_i) \delta(h^L_i) \delta(p_n) \delta(p^T) \delta(\mathbb{M} \phi) \,,
\end{equation}
where $\phi$ and $\mathbb{M}$ were defined in (\ref{phiM}). In the passage to the variables $h_i^L$ and $p_i^L$ the factor $ \det\{ \mathcal{H}^k , \chi^l \} $ of (\ref{measurecanonicalgeneral}) is automatically canceled. Taking advantage of the first four deltas we immediately perform the integration over $p^L_i$, $h^L_i$, $p_n$ and $p^T$. This leaves us with the variables $h^T$ and $n$ as the remaining scalars, keeping in mind that the integration over $p^T$ and $p_n$ has already eliminated their propagation. Because of linearity, the submatrix $\mathcal{M}$ introduced in (\ref{determinant}) becomes equal to the matrix $\mathbb{M}$ defined in (\ref{phiM}). Thus, for the linearized theory we have
\begin{equation}
 \sqrt{ \det\{ \theta_p , \theta_q \} } = \det \mathbb{M} \,. \label{measurelinear}
\end{equation}
After these steps the path integral of the linearized theory becomes
\begin{equation}
 Z_0 = \int \mathcal{D}V \delta(\mathbb{M}\phi) \exp{ \left[ i \epsilon^2 \int dt d^3x \left( p^{TT}_{ij} \dot{h}^{TT}_{ij} - \mathcal{H}_{\mbox{\tiny RED}} - \phi^t \mathbb{M} \phi \right) \right]} \,, \label{pathintegralpreliminar}
\end{equation}
where now
\begin{equation}
 \mathcal{D}V = \mathcal{D}h^{TT}_{ij} \mathcal{D}p^{TT}_{ij} \mathcal{D}\phi \times \det\mathbb{M} \label{measurephi}
\end{equation}
and $\mathcal{H}_{\mbox{\tiny RED}}$ can be extracted from (\ref{reducedhamiltonian}). There is no time derivative for the scalars $h^T$ and $n$, as we anticipated. This reflects the fact that the only propagating degrees of freedom are the transverse-traceless tensorial modes. We also remark on the decisive role of the measure associated to the second-class constraints: since the combination $\det\mathbb{M} \times \delta(\mathbb{M} \phi)$ is equivalent to $\delta(\phi)$, in (\ref{pathintegralpreliminar}) we can perform the integration over $\phi$ directly.
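This last identity can be illustrated numerically in one dimension, representing the delta by a narrow normalized Gaussian (a toy stand-in for the functional statement $\det\mathbb{M}\,\delta(\mathbb{M}\phi) = \delta(\phi)$; all names below are our own):
\begin{verbatim}
import numpy as np

# Check that m * delta(m * phi), integrated against a test
# function f, returns f(0), i.e. m * delta(m phi) = delta(phi).
m, eps = 2.5, 1e-3
phi = np.linspace(-1.0, 1.0, 200001)
delta = np.exp(-(m * phi)**2 / (2 * eps**2)) \
        / np.sqrt(2 * np.pi * eps**2)
f = np.cos(3 * phi) + phi**2
print(np.trapz(m * delta * f, phi))   # ~ f(0) = 1
\end{verbatim}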
The resulting path integral is exactly expressed in terms of the reduced variables with weight $1$ in the measure, as it should be, coinciding with (\ref{pathintegralreducedcan}). In the linearized theory we may write the measure $\det\mathbb{M}$ in terms of ghosts. To this end we use two ghost fields $c_1,c_2$ and two antighost fields $\bar{c}_1,\bar{c}_2$. Their contribution to the action is
\begin{equation}
 \int dt d^3x \left( \bar{c}_1 \mathbb{D}_1 c_1 + \bar{c}_1 \mathbb{D}_2 c_2 + \bar{c}_2 \mathbb{D}_2 c_1 + \bar{c}_2 \mathbb{D}_3 c_2 \right) \,.
\end{equation}
The operators $\mathbb{D}_{1,2,3}$, which were defined in (\ref{operators}), are third-order polynomials of the flat Laplacian. Thus, these ghosts/antighosts acquire propagators with a $z=3$ scaling in the spatial momenta, but they do not acquire any dependence on the frequency when representing the measure. We have seen that in the linearized theory the part of the measure corresponding to the second-class constraints is the factor $\det \mathbb{M}$, which has no consequences for the dynamics because it is independent of the fields. However, at higher order in perturbations (or in the nonperturbative theory) the measure $\sqrt{\det\{\theta_p , \theta_q\}}$ depends in a highly nontrivial way on the fields, as can be deduced from (\ref{bracketph})--(\ref{bracketpic}). Thus, the second-class constraints, together with their associated measure, must be carefully considered.

\subsubsection{Recovering the quantum FDiff-covariant action}

In this section we perform an important consistency check of the quantization procedure: we ask whether the canonical path integral of the previous section reproduces the action in FDiff-covariant variables, and simultaneously we find the appropriate measure for this formalism. To this end it is convenient to avoid the delta in $\phi$ present in the canonical path integral (\ref{pathintegralpreliminar}), since we want to keep the scalars $h^T$ and $n$ as nonzero variables inside the FDiff-covariant action. By introducing a linear-order Lagrange multiplier $\epsilon b$, where $b$ is a two-component vector of scalars, the delta $\delta (\mathbb{M}\phi)$ in (\ref{pathintegralpreliminar}) can be ``raised up'' to the Lagrangian,
\begin{equation}
 Z_0 = \int \mathcal{D}V \mathcal{D}b \exp{ \left( i \epsilon^2 \int dt d^3x \left( p^{TT}_{ij} \dot{h}^{TT}_{ij} - \mathcal{H}_{\mbox{\tiny RED}} - ( \phi - b )^t \mathbb{M} \phi \right ) \right) } \,. \label{pathintegralpreliminar2}
\end{equation}
By virtue of the self-adjointness of $\mathbb{M}$, the following identity holds:
\begin{equation}
 \int d^3x ( \phi - b )^t \mathbb{M} \phi = \int d^3x \left( ( \phi - \frac{1}{2} b )^t \mathbb{M} (\phi - \frac{1}{2} b ) - \frac{1}{4} b^t \mathbb{M} b \right) \,. \label{identityM}
\end{equation}
Thus, in the path integral we may perform the change of variables
\begin{equation}
 \phi \rightarrow \phi - \frac{1}{2} b \,,
\end{equation}
which has unit Jacobian. After this change $\phi$ and $b$ are not mixed in the action. The only dependence the resulting action has on $b$ is in the last term of (\ref{identityM}). Since $b$ is a real bosonic field, the integration over it yields a factor $\left(\sqrt{\det\mathbb{M}}\right)^{-1}$ in the measure.
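Since (\ref{identityM}) is purely algebraic, it can be checked with a generic symmetric $2\times 2$ stand-in for the self-adjoint operator $\mathbb{M}$ (a sympy sketch of our own, not part of the original computation):
\begin{verbatim}
import sympy as sp

p1, p2, b1, b2, m11, m12, m22 = \
    sp.symbols('p1 p2 b1 b2 m11 m12 m22')
M = sp.Matrix([[m11, m12], [m12, m22]])    # self-adjoint -> symmetric
phi = sp.Matrix([p1, p2])
b = sp.Matrix([b1, b2])

lhs = ((phi - b).T * M * phi)[0]
rhs = ((phi - b/2).T * M * (phi - b/2))[0] - (b.T * M * b)[0] / 4
print(sp.simplify(lhs - rhs))              # 0: the identity holds
\end{verbatim}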
Therefore, the path integral with nonzero $h^T$ and $n$ fields takes the form
\begin{equation}
 Z_0 = \int \mathcal{D}h^{TT}_{ij} \mathcal{D}p^{TT}_{ij} \mathcal{D}\phi \sqrt{ \det \mathbb{M} } \exp{ \left( i \epsilon^2 \int dt d^3x \left( p^{TT}_{ij} \dot{h}^{TT}_{ij} - \mathcal{H}_{\mbox{\tiny RED}} - \phi^t \mathbb{M} \phi \right ) \right) } \,. \label{finalpathintegralcanonical}
\end{equation}
By contrasting this version with (\ref{pathintegralpreliminar}) we see that the change consists in dropping the delta in $\phi$ at the price of changing the measure. This version of the canonical path integral is also consistent with the formulation in the reduced phase space, since the integration over $\phi$ can be performed directly in (\ref{finalpathintegralcanonical}), yielding a factor of $(\sqrt{\det\mathbb{M}})^{-1}$ that cancels against the measure.

We now compare with the action written in noncanonical variables (the FDiff-covariant variables). Although those variables give a completely covariant formulation, for simplicity we do the comparison in the transverse gauge, in which (\ref{finalpathintegralcanonical}) is written. This simplification is justified by the fact that the gauge symmetry of pure spatial diffeomorphisms is present in both the Lagrangian and the canonical formulations. The FDiff-covariant variables are the ADM variables $g_{ij}$, $N$ and $N_i$, and the action is given in (\ref{lagrangianaction}). The ghosts associated with the gauge fixing should be included, but they decouple in the linearized theory, so we do not consider them in this analysis. We introduce the perturbative variables according to (\ref{perturbativevariables}), adding
\begin{equation}
 N_i = \epsilon ( u_i + \partial_i B ) \,,
\end{equation}
with $\partial_i u_i = 0$. The linearized version of the action (\ref{lagrangianaction}) in the transverse gauge is given by
\begin{equation}
\begin{array}{rcl}
 S &=& {\displaystyle \epsilon^2 \int dt d^3x \left( \frac{1}{8 \kappa} \dot{h}^{TT}_{ij} \dot{h}^{TT}_{ij} + \frac{1 - 2\lambda}{16 \kappa} (\dot{h}^T)^2 + \frac{\lambda}{2 \kappa} \dot{h}^T \partial^2 B \right. } \\
 & & {\displaystyle \left. + \frac{1 - \lambda}{2\kappa} (\partial^2 B)^2 - \frac{1}{4 \kappa} u_i \partial^2 u_i - \frac{1}{4} h^{TT}_{ij} \mathbb{V} h^{TT}_{ij} - \phi^t \mathbb{M} \phi \right) } \,,
\end{array}
\end{equation}
where $\mathbb{V}$ is defined in (\ref{operatorpotential}). To arrive at these expressions we have integrated $h_i^L$ out. According to (\ref{finalpathintegralcanonical}), in the measure of the path integral one must include the factor $\sqrt{\det\mathbb{M}}$. Next, the integration over $u_i$ can be performed, yielding an irrelevant field-independent factor in the measure. $B$ can also be easily integrated after completing the square, which yields the action
\begin{equation}
 S = \epsilon^2 \int dt d^3x \left( \frac{1}{8 \kappa}\dot{h}^{TT}_{ij} \dot{h}^{TT}_{ij} + \frac{1 - 3\lambda}{16 \kappa ( 1 - \lambda )} (\dot{h}^T)^2 - \frac{1}{4} h^{TT}_{ij} \mathbb{V} h^{TT}_{ij} - \phi^t \mathbb{M} \phi \right) \,. \label{finalcovariantaction}
\end{equation}
This action exhibits the crucial feature of the propagating degrees of freedom at the KC point in the setting of nonreduced, FDiff-covariant variables. Recalling that in this theory $\lambda = 1/3$, the action loses the time derivative of $h^T$ (the coefficient $1 - 3\lambda$ vanishes), whereas that of $n$ is absent from the very beginning.
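The coefficient of $(\dot{h}^T)^2$ in (\ref{finalcovariantaction}) can be verified symbolically. In the sketch below (our own illustration) $x$ stands in for $\partial^2 B$, and the Gaussian integration over $B$ is implemented as evaluation at the extremum:
\begin{verbatim}
import sympy as sp

lam, kappa, hdot, x = sp.symbols('lambda kappa hdot x')

# terms of the linearized action involving hdot = \dot{h}^T
# and x = \partial^2 B
S = ((1 - 2*lam)/(16*kappa))*hdot**2 \
    + (lam/(2*kappa))*hdot*x + ((1 - lam)/(2*kappa))*x**2

x_star = sp.solve(sp.diff(S, x), x)[0]   # Gaussian integration
S_eff = sp.simplify(S.subs(x, x_star))
print(sp.factor(S_eff))   # = (1 - 3*lam) hdot^2 / (16 kappa (1 - lam))
print(S_eff.subs(lam, sp.Rational(1, 3)))   # 0 at the KC point
\end{verbatim}
At $\lambda = 1/3$ the resulting coefficient vanishes, reproducing the loss of the $\dot{h}^T$ kinetic term noted above.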
The goal we pursue in this section is achieved once we compare (\ref{finalcovariantaction}) with (\ref{finalpathintegralcanonical}): with $\lambda = 1/3$ the canonical path integral reproduces the FDiff-covariant Lagrangian, since the Gaussian integration of (\ref{finalpathintegralcanonical}) over the momenta $p_{ij}^{TT}$ yields the action (\ref{finalcovariantaction}). With this procedure we have learned that the factor $\sqrt{\det\mathbb{M}}$ must be included in the measure of the path integral in the FDiff-covariant formulation (this factor is not equal to the measure of the second-class constraints in canonical variables!). Again, it is at higher orders in perturbations where this factor affects the dynamics.

\section{The non-kinetic-conformal theory}
\label{sec:nokc}

Since the nonprojectable Ho\v{r}ava theory with $\lambda \neq 1/3$ also has second-class constraints, in this section we consider it briefly with the aim of highlighting the need to incorporate the measure of these constraints into the path integral, as in the case of the KC theory. The action is of the same form as (\ref{lagrangianaction}), but now with $\lambda \neq 1/3$ (and $\lambda$ otherwise arbitrary, except for requirements of stability of the linearized theory), such that the metric $G^{ijkl}$ has the inverse given by
\begin{equation}
 \mathcal{G}_{ijkl} = \frac{1}{2} (g_{ik} g_{jl} + g_{il} g_{jk} ) - \frac{\lambda}{3\lambda - 1} g_{ij} g_{kl} \,. \label{inverseg}
\end{equation}
For our purposes it is enough to take the large-distance effective action, which has the second-order potential
\begin{equation}
 \mathcal{V} = - \beta R - \alpha\, a_i a^i \,.
\end{equation}
The theory shares with the KC theory the fact that the momentum constraint $\mathcal{H}^i$ is the only first-class constraint. On the other hand, the only second-class constraints are $P_N = 0$ and the Hamiltonian constraint
\begin{equation}
 \mathcal{H} \equiv \frac{2\kappa}{\sqrt{g}} \mathcal{G}_{ijkl} \pi^{ij} \pi^{kl} + \sqrt{g}\, \mathcal{U} = 0\,, \label{hamiltonianconstraint}
\end{equation}
where
\begin{equation}
 \mathcal{U} \equiv \frac{1}{\sqrt{g}} \frac{\delta}{\delta N} \int d^3y \sqrt{g} N \mathcal{V} = - \beta R + \alpha ( 2 \nabla_i a^i + a_i a^i ) \,.
\end{equation}
The Hamiltonian, in its version with nonzero bulk part, takes the form
\begin{equation}
 H = \int d^3x \left( \frac{2 \kappa N}{\sqrt{g}} \mathcal{G}_{ijkl} \pi^{ij} \pi^{kl} - \sqrt{g} N ( \beta R + \alpha\, a_i a^i ) + N_i \mathcal{H}^i + \sigma P_N \right) \,. \label{prehamiltonian}
\end{equation}
The preservation in time of $\mathcal{H} = 0$ yields a second-order, linear, elliptic partial differential equation for $\sigma$. With this step the Dirac procedure for analyzing the structure of constraints closes. Since the theory possesses the momentum constraint $\mathcal{H}^i$ as the only first-class constraint and the constraints $P_N$ and $\mathcal{H}$ as the second-class ones, it follows that the theory propagates three physical modes. Two of them correspond to the two tensorial modes that are also propagated in the KC theory and GR, and the other one is the extra scalar mode. Thus, in this theory there are fewer second-class constraints than in the KC theory.
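As a side check of the algebra, assuming the standard form of the kinetic metric, $G^{ijkl} = \frac{1}{2}( g^{ik} g^{jl} + g^{il} g^{jk} ) - \lambda g^{ij} g^{kl}$, the following symbolic sketch (our own illustration) verifies on a flat background that (\ref{inverseg}) inverts it on symmetric tensors:
\begin{verbatim}
import itertools
import sympy as sp

lam = sp.symbols('lambda')
d = 3
delta = lambda a, b: sp.Integer(a == b)

G = lambda i, j, k, l: sp.Rational(1, 2)*(delta(i, k)*delta(j, l)
    + delta(i, l)*delta(j, k)) - lam*delta(i, j)*delta(k, l)
Ginv = lambda i, j, k, l: sp.Rational(1, 2)*(delta(i, k)*delta(j, l)
    + delta(i, l)*delta(j, k)) \
    - lam/(3*lam - 1)*delta(i, j)*delta(k, l)

ok = True
for i, j, m, n in itertools.product(range(d), repeat=4):
    contr = sum(G(i, j, k, l)*Ginv(k, l, m, n)
                for k in range(d) for l in range(d))
    ident = sp.Rational(1, 2)*(delta(i, m)*delta(j, n)
                               + delta(i, n)*delta(j, m))
    ok &= sp.simplify(contr - ident) == 0
print(ok)   # True: G^{ijkl} Ginv_{klmn} is the symmetric identity
\end{verbatim}
The $1/(3\lambda - 1)$ pole makes explicit why the KC point $\lambda = 1/3$ is excluded here.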
However, as happened in the KC theory, the matrix of Poisson brackets acquires a block form, since the brackets of $P_N$ with itself vanish. Then the measure for the second-class constraints takes the form
\begin{equation}
 \sqrt{\det \{ \theta_p , \theta_q \} } = \det \{ P_N , \mathcal{H} \} \,.
\end{equation}
It can be incorporated directly into the Lagrangian by means of fermionic ghosts. The Poisson bracket we need for the measure (evaluated on the constrained phase space) is
\begin{equation}
 \{ P_N(x) , \mathcal{H}(y) \} = 2 \alpha \frac{\sqrt{g}}{N} \left( \nabla_i (\delta_{xy} a^i) - \nabla^2 \delta_{xy} \right) \,. \label{brackethphi}
\end{equation}
The lesson we extract from this discussion is that also in the nonprojectable Ho\v{r}ava theory with $\lambda \neq 1/3$ the measure of the second-class constraints is needed (as well as the first-class sector), and that it depends nontrivially on the fields whenever one goes beyond the linearized level, which is of course necessary for evaluating interactions. Notice also that, for simplicity, we have restricted ourselves to the large-distance effective action. The measure gets more involved once higher-order operators are considered.

\section{Discussion and conclusions}

The nonprojectable Ho\v{r}ava theory \cite{Horava:2009uw,Blas:2009qj} possesses second-class constraints. When it is formulated at the kinetic-conformal point, $\lambda = 1/3$, there are four of them, which, together with the momentum constraint, leave two propagating degrees of freedom. The presence of second-class constraints must be carefully considered in any quantization procedure, since standard techniques for gauge theories without second-class constraints may not apply.

One route to deal with the second-class constraints is to solve them. In this direction we have analyzed the perturbative linearized theory in the transverse gauge, taking all the $z=1,2,3$ terms that contribute to the quadratic action. We have found the propagator for the two transverse-traceless tensorial modes. Our perturbative approach confirms that there are no extra modes or ghosts. Moreover, the physical propagator in the UV regime effectively has the scaling in momenta for which the theory was designed. From this and from the qualitative analysis of the vertices we have shown the power-counting renormalizability of the theory. In addition, within the linearized approach we have rigorously corroborated the consistency of the Hamiltonian formulation of the classical theory. We have confirmed that all the differential-equation constraints and conditions for the Lagrange multipliers have elliptic structures and can be consistently solved. We have found conditions on the space of coupling constants needed to ensure the positivity of the spectrum of the physical Hamiltonian.

To get more insight into the renormalizability of the theory, it would be interesting to extend to this theory the analysis of Anselmi and Halat, who considered the behavior of subdivergences in Lorentz-violating scalar and fermionic field theories \cite{varios:subdivergences}. Those authors found the interesting result that subdivergences in Lorentz-violating theories can be canceled in a similar way as in relativistic theories.

There may be other ways of solving the constraints that could apply even to the nonperturbative theory. These techniques are typically noncovariant (under general spatial transformations). For example, in general relativity this has been broadly undertaken with light-front coordinates \cite{varios:lightfront}.
This approach introduces nonlocal operators in the Lagrangian as a consequence of solving the constraints. The light-front quantization of quantum chromodynamics uses similar ideas related to null coordinates; see for example \cite{Brodsky:1997de,Srivastava:2000cf}. This has also been applied to the electroweak theory \cite{Srivastava:2002mw}. Within this approach the quantization of both nonperturbative and perturbative QCD has been pursued, and even the one-loop renormalization has been obtained \cite{Srivastava:2000cf}. Thus, it would be interesting to explore the possibility of solving the second-class constraints of the nonprojectable Ho\v{r}ava theory using a special coordinate system.

The other route to deal with second-class constraints, which is by far the more popular one for gauge theories, is to work in the nonreduced phase space. In gauge theories without second-class constraints the standard techniques (the Faddeev-Popov and Becchi-Rouet-Stora-Tyutin procedures) have allowed great advances in establishing their renormalizability (whenever they are renormalizable). This has been applied even to general relativity with higher curvature terms \cite{Stelle:1976gc}. However, the point with second-class constraints is that they have no associated gauge symmetry (we have even considered the transformation to a gauge system, but with trivial results). To start from first principles, we have analyzed the formulation of the path integral with the second-class constraints. We have evaluated the prescription for the measure in the canonical theory, finding that there is a simplification since the square root disappears. We have also found the measure for the nonreduced linearized theory, which confirmed the correctness of the prescribed measure since it leads directly to the reduced canonical theory with measure $1$. The measure can, in principle, be incorporated into the Lagrangian with ghosts, but their propagation must be considered carefully since this kind of ghost is not directly connected to gauge symmetries. Indeed, we have seen that they arise with a $z=3$ UV scaling in momenta directly from the measure, but without dependence on the frequency. It would be interesting to explore whether at higher orders in perturbations, where the dependence of the constraints on the canonically conjugate momenta (and hence on time derivatives) is activated, one can obtain more information about the frequency dependence of the propagation of these ghosts. In general, extracting the consequences that the measure associated with second-class constraints has for the dynamics of a given theory is a delicate issue.\footnote{There are exceptions to this rule, for example, the massive Yang-Mills theory, whose measure is dynamically trivial (in the exact theory), such that one can ignore it \cite{Senjanovic:1976br}.}

In the nonreduced scheme we have also reproduced the path integral in terms of FDiff-covariant variables (simply, the ``Lagrangian'' approach), in this case for the linearized theory. This procedure yielded the appropriate measure for the Lagrangian formalism. This is a rather nontrivial issue, since if one starts with the pure Lagrangian formulation of the path integral in a theory with second-class constraints, then one has no general recipe for the measure.

Throughout this paper we have used the transverse gauge due to the great simplifications in computations it provides. However, other gauge-fixing conditions can be more convenient for establishing renormalization or for other quantum features.
For example, the authors of \cite{Barvinsky:2015kil} found that with a nonlocal gauge-fixing condition they could show the renormalizability of the projectable Ho\v{r}ava theory. The essence of their approach is that with the nonlocal gauge condition they could arrive at regular propagators for all the relevant (nonreduced) variables. \section*{Acknowledgments} A. R. is partially supported by Grant Fondecyt No.~1161192, Republic of Chile.
\section{Introduction}

Quantitative analysis of brain vasculature is used in a variety of fields, including vascular development \cite{connor2015anintegrated,segarra2015avascular,newberry2015testing} and physiology \cite{shih2015robustand}, neurovascular coupling \cite{tsai2009correlations,dalkara2015cerebral}, and blood-brain barrier studies \cite{nhan2013drugdelivery,burgess2014analysis}. Distinguishing blood vessels from the surrounding tissue (vessel segmentation) is often a necessary preliminary step that enables more accurate and efficient analyses of the vascular network. For example, characteristics of vasculature morphology such as tortuosity, length, and diameter can be obtained without confounding factors from the extravascular space, such as dendrites. In addition to characterizing the vasculature itself, vessel segmentation also facilitates analyses of other dynamic factors, including cortical blood flow and angiogenesis. Clinically, quantitative analysis of vessels will assist in making diagnoses and planning surgeries \cite{lesage2009areview,rudyanto2014comparing,yun2015stenosis}. For example, retinal vasculature imaging \cite{pinhas2013invivo,spaide2015retinal} allows inexpensive and fast screening of several eye-related and systemic pathologies such as glaucoma, age-related macular degeneration, diabetic retinopathy, hypertension, arteriosclerosis and Alzheimer's disease \cite{ikram2013retinal}. Differentiating blood vessels from the surrounding tissue also allows more accurate analyses of extravascular structures, such as tumor volume quantification \cite{reeves2006onmeasuring} and pulmonary lobe structural analysis \cite{lassen2013automatic}. Given that vascular diseases, such as coronary heart disease, are among the largest public health problems in developed countries \cite{worldhealthorganization2008thetop}, accurate and efficient image analysis will only become more relevant. Thus, the segmentation of vascular structures from surrounding tissue is useful for both basic research and clinical applications.

There have been various approaches for vessel segmentation (for reviews see \cite{kirbas2004areview,lesage2009areview}), but to date, no single method has been able to successfully segment vessels from every imaging modality and every organ \cite{rudyanto2014comparing}. Our group uses vessel segmentation for two purposes: 1) to analyze changes in vascular morphology after focused ultrasound mediated blood-brain barrier opening \cite{hynynen2005localand,burgess2016microbubbleassisted}, and 2) to observe tumor pathophysiology and drug kinetics following the application of focused ultrasound stimulated microbubbles (unpublished). Both of these projects use the two-photon microscope for acquiring high-resolution images. We were motivated to make our vessel segmentation pipelines more automatic and robust, improving on previous custom-written semi-automatic Matlab scripts \cite{nhan2013drugdelivery} and on labor-intensive manual approaches using the commercial Imaris platform (Bitplane AG, Zurich, Switzerland) and the open-source ImageJ/FIJI platform \cite{schindelin2015theimagej}. Delineating blood vessels from the extravascular space enables quantification of the rate and duration of dye leakage, which can be correlated with the kinetics and characteristics of blood-brain barrier integrity \cite{cho2011twophoton,nhan2013drugdelivery,burgess2014analysis}.
Other groups have used vessel segmentation as an image processing tool to analyze other factors from two-photon datasets, including neurovascular coupling \cite{tran2015acutetwophoton}, neuronal calcium imaging \cite{daniel2015optical,maeda2015weaksinusoidal}, and low-intensity focused ultrasound brain modulation paradigms \cite{tufail2011ultrasonic,bystritsky2015areview,moore2015manipulating}.

Two-photon microscopy, or more generally multiphoton microscopy, has become the workhorse of neuronal imaging \cite{helmchen2005deeptissue}. Multiphoton microscopy allows better optical sectioning and reduced photobleaching outside of the imaging plane compared to traditional confocal techniques, due to the nonlinear nature of two-photon excitation fluorescence. Traditional two-photon microscopy scans point-by-point, in contrast to the whole-field approach of confocal microscopy, which also limits the maximum frame rates achievable with scanning two-photon microscopy. Two-photon light-sheet imaging operates on a line or plane basis instead of a point, speeding up volumetric imaging by one or two orders of magnitude when faster rates are needed \cite{truong2011deepand}. Additionally, two-photon fluorescence imaging can be combined with other nonlinear processes such as third-harmonic generation (THG) for label-free vascular imaging \cite{witte2011labelfree}, and with other microscopy techniques such as electron microscopy for more detailed analysis \cite{bishop2011nearinfrared}. Silvestri \emph{et al.} \cite{silvestri2014correlative}, for example, integrate \emph{in vivo} two-photon microscopy with \emph{ex vivo} light sheet microscopy and use the major blood vessels as landmark points for registration.

Compared to the literature focused on clinical angiography with various modalities and anatomical applications, there exists very little literature devoted to processing multiphoton vasculature images. Likewise, not much work has been done on open-source software and/or code for multiphoton vasculature analysis. The work by Santamaria-Pang \emph{et al.} \cite{santamaria-pang2015automatic} on tubular 3D neuronal structures represents one of the few examples of ``morphological'' multiphoton microscopy analysis, and the Python-based VMTK (\cite{antiga2012vmtkvascular}, \href{http://www.vmtk.org/}{http://www.vmtk.org/}) one of the few examples of open-source vessel analysis. This is in stark contrast to the work devoted to calcium imaging analysis, with various freely available toolboxes (e.g. \cite{mukamel2009automated,tomek2013twophoton,muir2015focusstack,patel2015automated}).

Traditionally, vessel segmentation has been done with some combination of vascular models, image features and extraction schemes, often relying on prior knowledge about the tubularity of vessels \cite{kirbas2004areview,lesage2009areview}. Typically in the computer vision/image analysis field, algorithms and pipelines are developed using reference datasets as benchmarks for performance. In biomedical image analysis, almost all open image segmentation challenges are listed at \href{ http://grand-challenge.org/}{ http://grand-challenge.org/}, with only one challenge (VESSEL12, \cite{rudyanto2014comparing}) devoted to vessel segmentation.
Many fields suffer from a lack of annotated datasets \cite{ferguson2014bigdata}, as they are expensive to generate; this is the case, for example, in high content screening (HCS) technologies labeled at the cell level \cite{kraus2015classifying,ljosa2012annotated} and in electron microscopy \cite{arganda-carreras2015crowdsourcing}. Additional standardized datasets can be found for evaluating coronary artery centerline extraction algorithms \cite{schaap2009standardized}, and for evaluating coronary artery stenosis detection, stenosis quantification and lumen segmentation algorithms in computed tomography angiography \cite{kiricsli2013standardized}. Thus, despite the numerous papers on vessel segmentation, there has been very little effort toward creating standardized three-dimensional vascular datasets. The most similar datasets can be found, for example, for two-dimensional retinal vessels in the DRIVE dataset \cite{staal2004ridgebased}, and for three-dimensional tubular fibers in the DIADEM challenge \cite{brown2011thediadem}. Among the 23 methods submitted to the VESSEL12 challenge, only two submissions were machine-learning based, one of which ended up providing the best overall performance in terms of segmentation accuracy. Similarly, with natural images research teams compete against each other in trying to improve the performance of the classifier. One example of such a challenge is the ImageNet Large Scale Visual Recognition Challenge (ILSVRC), which takes place annually with the same database of images \cite{russakovsky2014imagenet}.

During the past few years, data-driven machine learning algorithms have replaced ``hand-crafted'' filter pipelines in many fields of image processing. The majority of the emerged approaches have relied on deep learning networks \cite{lecun2015deeplearning,schmidhuber2015deeplearning,lake2015humanlevel} as opposed to ``traditional'' shallow networks \cite{bianchini2014onthe}. Among the different deep learning architectures, convolutional neural networks (CNNs or ConvNets) have been the most used in image classification and image segmentation. While ConvNets have been around for decades (e.g. \cite{ivakhnenko1971polynomial,lecun1989backpropagation}), their recent success has been due to the combination of bigger annotated datasets, more powerful hardware, and new ideas, algorithms and improved network architectures, enabling this sort of ``paradigm shift'' in machine learning. Since 2011, graphical processing unit (GPU)-based ConvNets have dominated classification (\cite{krizhevsky2012imagenet}) and segmentation contests (\cite{ciresan2012deepneural}).

ConvNets are loosely inspired by biological networks (e.g. \cite{chen2014incremental}), allowing hierarchical feature learning that starts from low-level features such as edges and builds up to higher-level features such as faces. ConvNets possess two key properties that make them useful in image analysis: spatially shared weights and spatial pooling (\cite{pinheiro2013recurrent}). This allows feature learning that is shift-invariant, i.e. a filter that is useful across the entire image, as image statistics are stationary \cite{simoncelli2001natural}. Typical convolutional networks are composed of multiple stages (\figref{A-simple-convolutional}), and the output of each stage is made of two- or three-dimensional arrays, depending on the training data, called feature maps. Each feature map is the output of one convolutional filter (or pooling) applied over the full image.
This is typically followed by a non-linear activation function such as a sigmoid, a rectified linear unit (ReLU) or a hyperbolic tangent (\emph{$tanh$}).
\begin{figure*}
\centerline{\includegraphics[width=1.8\columnwidth]{CNN_example}}
\caption{Example of a typical deep convolutional neural network (CNN), using a two-dimensional image as an example. \textbf{(top) }A deep neural network consists of several subsequent layers (conv1, conv2, conv3, conv4, conv5 in our example), each of which can contain stacked convolutional layers (e.g. conv1a, conv1b, conv1c) followed by a non-linear activation function, which in our example is a Rectified Linear Unit (ReLU) or a hyperbolic tangent (\emph{$tanh$}). The \emph{depth} of the network is defined by the number of layers, whereas the \emph{width} of the network depends on the number of feature maps generated on each layer, which in our case is 24. The number of feature maps corresponds to the number of different learned convolutional kernels on each convolutional layer; thus conv1a, conv1b and conv1c each have 24 different learned convolution kernels that try to represent the training data. In our example the size of the convolution kernel is 3$\times3$ (see \textbf{bottom}, the 3$\times3$ grid overlaid on the input image). The output of each layer is typically downsampled via a max-pooling operator that in our example takes the maximum value of a 2$\times2$ window; the downsampling factor is thus 2 on each layer, resulting in a total downsampling factor of 16 after 5 layers. \textbf{(bottom) }The pipeline for one convolutional kernel (3$\times3$) is illustrated for one feature map with edges enhanced, which is then mapped with the $tanh$ activation function. The mapped feature map is then downsampled using the max-pooling operator (example in \textbf{top}); alternatively, max-filtering can be applied, as we will do in this work, which does not change the image resolution, allowing us to do dense image segmentation without having to upsample the segmentation back to the input resolution.\label{fig:A-simple-convolutional}}
\end{figure*}
After the final pooling layer of the network, there might be one or more fully-connected (FC) layers that aim to perform high-level reasoning. They take all neurons from the previous layer and connect them to every single neuron of the current layer (i.e. fully-connected). No spatial information is preserved in typical fully-connected layer configurations. At the end of the network there is typically a terminal (``output'') classification layer that, depending on the number of classes, produces a real-valued or binary scalar for each image in an image classification dataset, or for each pixel of each image in an image segmentation dataset. The most typical output layer uses a \emph{softmax} regression that generates a probability distribution over the outputs \cite{gu2015recentadvances}. The shortcoming of softmax is that it does not capture model uncertainty, and it is often interpreted erroneously as model confidence \cite{gal2015dropout}. If model uncertainty is needed, there have been efforts to cast deep learning models as Bayesian models \cite{gal2015dropout}.
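As a minimal illustration of the softmax output layer just described (a self-contained numpy sketch of ours, not tied to any particular framework):
\begin{verbatim}
import numpy as np

def softmax(scores):
    """Numerically stable softmax along the last axis."""
    e = np.exp(scores - scores.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# raw scores for one voxel: [vessel, non-vessel]
print(softmax(np.array([2.0, -1.0])))   # ~[0.953, 0.047], sums to 1
\end{verbatim}
The two entries can be read as vessel/non-vessel probabilities for a single voxel; as noted above, the sharpness of this distribution should not be confused with model confidence.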
The networks are typically regularized to mitigate over-fitting, for example using a technique called DropOut \cite{srivastava2014dropout}, in which each neuron has a probability of 0.5 of being reset to zero, typically applied only in the last fully-connected layers. Alternatively, one can regularize the network by injecting noise, for example just before the nonlinear activation function \cite{poole2014analyzing}. ConvNets are typically trained using the stochastic gradient descent (SGD) optimization method with \emph{mini-batches}, so that the gradient on each training iteration is computed using more than one training example (i.e. patch of an image/volume), resulting in smoother convergence and more efficient use of vectorization libraries, and thus faster computation times. ConvNets can be roughly divided into two basic types \cite{yuste2015fromthe}: feedforward networks, which are organized in layers with unidirectional connections (e.g. the approach proposed here, from Lee \emph{et al.} \cite{lee2015recursive}), and recurrent networks, in which feedback connectivity is dominant (e.g. used by Pinheiro \emph{et al.} \cite{pinheiro2013recurrent} for semantic segmentation). Feedforward networks are typically used for image classification and segmentation, whereas recurrent networks are used for sequential data such as language and sound. Surprisingly, even though ConvNets have been highly successful, their success is not well understood even by the people designing new algorithms and architectures (e.g. \cite{gu2015recentadvances}).
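A toy numpy sketch of mini-batch SGD on a linear least-squares problem (our own illustration; all variable names are arbitrary) shows the basic update rule used to train such networks:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 5))          # training examples
w_true = rng.standard_normal(5)
y = X @ w_true + 0.01 * rng.standard_normal(1000)

w, lr, batch = np.zeros(5), 0.05, 32
for _ in range(500):
    idx = rng.integers(0, len(X), size=batch)   # sample a mini-batch
    grad = 2.0 * X[idx].T @ (X[idx] @ w - y[idx]) / batch
    w -= lr * grad                              # SGD update
print(np.linalg.norm(w - w_true))               # small residual
\end{verbatim}
Averaging the gradient over a batch (here 32 examples) trades per-step noise against throughput, which is exactly the smoother-convergence argument made above.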
\section{Related work}

A typical simplified schematic of the vasculature segmentation pipeline used to process two-photon microscope stacks is shown in \figref{Typical-vessel-segmentation-pipeline}. The image stacks suffer mainly from photon noise following a Poisson distribution \cite{bertero2009imagedeblurring} (i.e. the noise intensity depends on the underlying signal), with some Gaussian noise component added. This can be denoised directly with methods developed for Poisson noise (e.g. PURE-LET \cite{luisier2011imagedenoising}). Alternatively, the signal-dependency of the Poisson noise can be removed with a suitable transform such as the Anscombe transform \cite{makitalo2011optimal}, which allows one to use denoising methods developed for Gaussian noise (e.g. BM3D/BM4D \cite{maggioni2013nonlocal,danielyan2014denoising}). Deconvolution is not done as commonly for multiphoton microscopy as for confocal microscopy \cite{mondal2014imagereconstruction}, but when it is done, it can be done jointly with other image restoration operations \cite{persch2013enhancing} or as an independent step \cite{kim2015blinddepthvariant}. This part can be seen as image restoration: an attempt to recover, as well as possible, the ``original image'' corrupted by the imaging process.
\begin{figure*}
\textbf{\includegraphics[width=2\columnwidth]{imageQualityPipeline}}\caption{Typical vessel segmentation pipeline as a simplified schematic for a single slice of a two-photon microscope image of mouse cortical vasculature. The Poisson-corrupted image is denoised (e.g. BM4D \cite{maggioni2013nonlocal}) and then deconvolved (e.g. blind Richardson-Lucy deconvolution \cite{mondal2014imagereconstruction}), or this image restoration can be done jointly. This is followed by a vesselness enhancement filter such as Frangi's filter \cite{frangi1998multiscale}, Optimal Oriented Flux (OOF) \cite{law2008threedimensional}, or some more recent method (e.g. by Moreno \emph{et al.} \cite{moreno2015gradientbased}). This is followed by a segmentation algorithm (e.g. active contours \cite{law2010anoriented}) that produces a binary mask (or real-valued probability mask) that can be used to weight the input. This weighted image stack might then be interpolated in the $z$-direction to obtain an isotropic stack with equal-sized voxel sides, for example using B-spline interpolation \cite{thevenaz2000interpolation}. If needed for analysis or visualization, this interpolated stack can be reconstructed into a three-dimensional mesh, typically obtained via Marching Cubes algorithm variants \cite{levine2012meshprocessing2}.\label{fig:Typical-vessel-segmentation-pipeline}}
\end{figure*}
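For concreteness, a minimal numpy sketch of the Anscombe stabilization step mentioned above (our own illustration, using the simple algebraic inverse; the exact unbiased inverse of \cite{makitalo2011optimal} is more involved):
\begin{verbatim}
import numpy as np

def anscombe(x):
    """Variance-stabilizing transform for Poisson-dominated data."""
    return 2.0 * np.sqrt(x + 3.0 / 8.0)

def inverse_anscombe(y):
    # simple closed-form inverse; biased when applied to the
    # denoised (Gaussian-averaged) data
    return (y / 2.0) ** 2 - 3.0 / 8.0

counts = np.random.default_rng(0).poisson(lam=30.0, size=100000)
print(anscombe(counts).std())       # close to 1 for moderate counts
print(inverse_anscombe(anscombe(counts)).mean(), counts.mean())
\end{verbatim}
After stabilization the noise is approximately Gaussian with unit variance, which is why Gaussian denoisers such as BM3D/BM4D can then be applied.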
In some cases the restored image is further simplified using some edge-aware smoothing operator such as anisotropic diffusion \cite{meijering2001evaluation,prasath2015multiscale}, or, as done by Persch \emph{et al.} \cite{persch2013enhancing}, anisotropic diffusion inpainting (an operation that attempts to replace lost or corrupted parts of the image data) is applied jointly with deconvolution and interpolation. This step is followed by some ``vesselness filter'' or ``vesselness enhancement'' filter designed to enhance tubular structures such as vessels in the image. The best known of these is Frangi's filter \cite{frangi1998multiscale}, which has become outdated as it cannot properly handle crossings or bifurcations; several filters \cite{law2008threedimensional,turetken2013detecting,smistad2013gpuaccelerated,hannink2014vesselness,moreno2015gradientbased} have been proposed to correct the shortcomings of Frangi's filter, with none of them reaching \emph{de facto} standard status. With our proposed deep learning-based network we are trying to replace the vessel enhancement and segmentation steps, while still using ``traditional'' filters for the image restoration part (see the discussion in \ref{sub:Training-everywhere} on how to upgrade these as well). There have been various ``traditional'' algorithms for vessel segmentation (for reviews see \cite{kirbas2004areview,lesage2009areview}); only the most relevant ones are analyzed below.
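As a concrete example of the classical vesselness step just discussed, a Frangi map can be computed with scikit-image. This is a hedged sketch: the parameter names follow recent scikit-image releases, the input file name is hypothetical, and the threshold is arbitrary.
\begin{verbatim}
import numpy as np
from skimage import io
from skimage.filters import frangi

slice2d = io.imread('stack_slice.tif').astype(float)  # hypothetical
slice2d /= slice2d.max()                              # normalize

# Enhance bright tubular structures over a range of scales (pixels);
# fluorescence vessels are bright, hence black_ridges=False.
vesselness = frangi(slice2d, sigmas=range(1, 8, 2), black_ridges=False)
mask = vesselness > 0.05   # crude threshold; real pipelines go further
\end{verbatim}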
In the schematic (\figref{Typical-vessel-segmentation-pipeline}) the $z$-interpolation is placed after the segmentation, but it could equally be placed before the segmentation algorithm \cite{lindvere2013cerebral,ukwatta20133dcarotid}, or performed jointly with other image restoration operators \cite{persch2013enhancing}. The exact placement of the interpolation depends on the computations before and after it, but in our case we placed it at the end to emphasize the gains of $z$-direction interpolation for mesh reconstruction, as all the stacks used in this work are anisotropic (see \tabref{Dataset-used-in-Study}). Reconstructing meshes from non-interpolated anisotropic stacks with the traditional Marching Cubes algorithm \cite{levine2012meshprocessing2} typically leads to a ``staircasing'' effect in the mesh, while interpolation gives a smoother reconstruction. Advanced mesh reconstruction methods are beyond the scope of this work, but there have been efforts to improve biomedical mesh reconstruction \cite{moriconi2015highquality,saha2015digital}, mitigating the problems of triangulation-based methods such as Marching Cubes. With the reconstructed vasculature mesh, it is then possible, for example, to do morphological analysis \cite{meyer2008altered}, calculate hemodynamic parameters \cite{keshmiri2015vascular}, or analyze functional diameter changes in response to external stimuli \cite{lindvere2013cerebral}.

To the knowledge of the authors, deep learning frameworks, including ConvNets, have not yet been applied to the segmentation of three-dimensional volumetric vasculature images. Despite the limited use of machine learning techniques in the VESSEL12 challenge for lung vessels \cite{rudyanto2014comparing}, there has been some work using machine learning techniques for vessel segmentation. Sironi \emph{et al.} \cite{sironi2015learning}, for example, used an unsupervised dictionary learning \cite{kreutz-delgado2003dictionary} approach that learned optimal separable convolutional filter banks for 2D vasculature segmentation (DRIVE dataset \cite{staal2004ridgebased}) and for 3D olfactory projection fibers (DIADEM challenge \cite{brown2011thediadem}). The filter banks were then used with the popular Random Forests classifier \cite{breiman2001randomforests}, continuing previous work from the same lab \cite{gonzalez2009learning,rigamonti2012accurate}. The authors used their separable filter banks with ConvNets for an image classification task, but did not discuss the possibility of using ConvNets for the image segmentation task. Very recently, Maji \emph{et al.} \cite{maji2016ensemble} applied ConvNets to the two-dimensional vasculature DRIVE database with promising performance. Santamaria-Pang \emph{et al.} \cite{santamaria-pang2007automatic} similarly used a dictionary learning approach to learn linear filters for the detection of tubular-like structures from multiphoton microscopy stacks. The learned filters were fed to a Support Vector Machine (SVM, \cite{suykens1999leastsquares}), which was shown to provide better segmentation accuracy than the vesselness filter introduced by Sato \emph{et al.} \cite{sato19973dmultiscale}. Recently, Schneider \emph{et al.} \cite{schneider2015joint3d} used Random Forests for classification with multivariate Hough forests to infer probabilistic votes about the vessel center, jointly segmenting the vasculature and extracting the vessel centerline. The features were learned using steerable filter templates (\cite{jacob2004designof}) at multiple scales instead of the dictionary learning approach. They showed that their learning-based approach outperformed both Optimal Oriented Flux (OOF, \cite{law2008threedimensional}) and Frangi's filter \cite{frangi1998multiscale} for vessel segmentation. Sironi \emph{et al.} \cite{sironi2015projection} take a different approach, inspired by recent work on structured learning-based edge detectors (\cite{dollar2015fastedge}). They combine structured learning with a nearest neighbor-based output refinement step designed for situations where edges or thin objects are hard to detect explicitly by the neural network (\cite{ganin2014ntextasciicircum4fields}). They were able to reduce spatial discontinuities, isolated erroneous responses and topological errors, both in initial score maps produced by other algorithms and when directly trained to segment two-dimensional blood vessels (DRIVE dataset \cite{staal2004ridgebased}).

There is relatively more work devoted to natural image processing than to biomedical image analysis. In the natural image processing literature, the application corresponding to our biomedical image segmentation is semantic segmentation \cite{long2014fullyconvolutional,papandreou2015weaklyand,chen2015attention,chen2015semantic}, also referred to as scene parsing \cite{pinheiro2013recurrent} or scene labeling \cite{farabet2013learning}. Semantic segmentation of natural images tries to answer the question ``What is where in your image?'', for example segmenting the ``driver view'' in autonomous driving into road, lanes and other vehicles \cite{kendall2015bayesian}. In typical semantic segmentation tasks there are many more possible labels than in our two-label segmentation into vessel and non-vessel voxels, further complicating the segmentation. Most existing biomedical segmentation pipelines start with slice-by-slice two-dimensional processing of volumetric stacks, and only later transition to three-dimensional processing due to the high computational cost of fully three-dimensional pipelines \cite{liu2014amodular,takemura2013avisual}. ConvNets with 3D filters had been used before, for example with block face EM images \cite{helmstaedter2013connectomic}, with most of the 3D filter use employed in video processing \cite{ji20133dconvolutional,tran2015learning,yao2015describing}, where the 2D image plus time can be viewed as an anisotropic 3D image.
Due to the ever-increasing computational performance of local GPU clusters and of cloud-based services such as Amazon AWS, IBM Softlayer, Microsoft Azure and Google Cloud Platform, we expect to see more purely three-dimensional approaches such as the one proposed by Kamnitsas \emph{et al.} \cite{kamnitsas2016efficient} for brain lesion segmentation from MRI images. Deep learning based approaches have been extensively used for volumetric electron microscopy (EM) segmentation \cite{huang2013deepand,maitin-shepard2015combinatorial,wu2015aniterative,lee2015recursive,ronneberger2015unetconvolutional}. Other biomedical image segmentation tasks tackled with deep learning frameworks include, for example, brain segmentation \cite{havaei2015braintumor,lyksborg2015anensemble,kamnitsas2015multiscale,stollenga2015parallel}, prediction of Alzheimer's disease from magnetic resonance imaging (MRI) scans \cite{payan2015predicting}, microscopic cell segmentation \cite{kraus2015classifying}, glaucoma detection \cite{chen2015automatic}, computational mammography \cite{dubrovina2015computational}, pancreas segmentation \cite{dubrovina2015computational}, bi-ventricular volume estimation \cite{zhen2015multiscale}, and carotid artery bifurcation detection \cite{zheng20153ddeep}.

The use of deep learning neural networks is not limited to image analysis; they can be employed in various fields that can benefit from data-driven analysis in an exploratory or predictive fashion. In neuroscience, datasets in general are getting increasingly large and complex, requiring more sophisticated data analysis tools \cite{rubinov2015neuralnetworks}. There have been systems capable of constructing theories automatically in a data-driven fashion \cite{ghahramani2015probabilistic}. Artificial neural networks lend themselves well to modeling complex brain functions that emerge from the activation of ensembles of neurons, in which studying a single neuron at a time is not sufficient \cite{rubinov2015neuralnetworks}. For example, the circuit architecture of the mammalian hippocampus has been modeled as a series of sequential feedforward and recurrent neural networks \cite{rolls1998neuralnetworks}. Harvey \emph{et al.} \cite{harvey2012choicespecific} used two-photon imaging to measure the calcium activity of a mouse making behavioral choices in a virtual maze. The temporal trajectory of neuron populations was shown to be predictive of the behavioral choice, and thus suitable for modeling the behavior with recurrent neural networks. In addition to basic neuroscience, deep learning ``expert systems'' have been extended to clinical settings \cite{waljee2010machine}, for example for predicting clinical outcomes of radiation therapy \cite{kang2015machine}, electroencephalographic (EEG) recording analysis \cite{stober2015deepfeature}, and future disease diagnosis and medicine prescription in routine clinical practice \cite{choi2015doctorai}.

\section{Methods}

\subsection{Dataset}

The vessel datasets described here were acquired from mouse cortex and from GFP-labelled human squamous cell carcinoma tumors xenografted onto the dorsal skin of mice with implanted dorsal window chambers (FaDu-GFP, AntiCancer Inc.); the stacks are summarized in \tabref{Dataset-used-in-Study} (see the maximum-intensity projections of the stacks in \figref{Training-dataset-visualized}).
Fluorescent dextran (70 kDa Texas Red, dissolved in PBS, Invitrogen) was used to visualize the vasculature in mouse cortex by \cite{burgess2014analysis}, and fluorescent dextran (2 MDa FITC, dissolved in PBS, Invitrogen) to label the tumor vasculature. Imaging was performed using the FV1000 MPE two-photon laser scanning microscope (Olympus) with a tunable mode-locked Ti:Sapphire laser, using several excitation wavelengths and water-immersion objective lenses. The auxiliary Matlab code for our implementation of ZNN is provided at \href{https://github.com/petteriTeikari/vesselNN}{https://github.com/petteriTeikari/vesselNN}, with the annotated dataset available from \href{https://github.com/petteriTeikari/vesselNN_dataset}{https://github.com/petteriTeikari/vesselNN\_dataset}.
\begin{table*}
\caption{Dataset used in the study.\label{tab:Dataset-used-in-Study}}
\scriptsize{%
\begin{tabular*}{2\columnwidth}{@{\extracolsep{\fill}}c|cccccc}
\# & \textbf{Voxel size ($\mu$m)} & \textbf{Dimensions (voxels)} & \textbf{\# samples} & \textbf{\% of vessel labels} & \textbf{Source} & \textbf{Usage}\tabularnewline
\hline 
1 & 0.994$\times$0.994$\times$5 & 512$\times$512$\times$15 & 3.75M & 12.4\% & Mouse cortex & Train\tabularnewline
2 & 1.59$\times$1.59$\times$5 & 320$\times$320$\times$26 & 2.54M & 29.8\% & Mouse cortex & Train\tabularnewline
3 & 0.994$\times$0.994$\times$5 & 512$\times$512$\times$10 & 2.5M & 42.1\% & Mouse cortex & Train\tabularnewline
4 & 0.994$\times$0.994$\times$5 & 512$\times$512$\times$15 & 3.75M & 36.1\% & Mouse cortex & Train\tabularnewline
5 & 0.994$\times$0.994$\times$5 & 512$\times$512$\times$25 & 6.25M & 3.2\% & Mouse cortex & Train\tabularnewline
6 & 0.994$\times$0.994$\times$5 & 512$\times$512$\times$25 & 6.25M & 3.7\% & Mouse cortex & Test\tabularnewline
7 & 0.994$\times$0.994$\times$5 & 512$\times$512$\times$23 & 5.75M & 9.5\% & Mouse cortex & Test\tabularnewline
8 & 0.994$\times$0.994$\times$5 & 512$\times$512$\times$25 & 6.25M & 9.0\% & Mouse cortex & Train\tabularnewline
9 & 2.485$\times$2.485$\times$5 & 512$\times$512$\times$14 & 3.5M & 34.0\% & Mouse cortex & Train\tabularnewline
10 & 0.621$\times$0.621$\times$5 & 512$\times$512$\times$15 & 3.75M & 10.5\% & Tumor & Train\tabularnewline
11 & 0.621$\times$0.621$\times$5 & 512$\times$512$\times$21 & 5.25M & 24.1\% & Tumor & Train\tabularnewline
12 & 0.621$\times$0.621$\times$5 & 512$\times$512$\times$27 & 6.75M & 14.2\% & Tumor & Train\tabularnewline
\end{tabular*}}
\end{table*}
\begin{figure}
\centerline{\includegraphics[width=0.9\columnwidth]{MIP_traininSet}}
\caption{Training dataset visualized as maximum-intensity projections (MIP). Stack \#5 and Stack \#6 were acquired in the same experimental session at different time points, with Stack \#5 showing fluorescent dye leakage due to focused ultrasound stimulation.
Stack \#10 turned out to be too hard for our network to segment properly; the network would need more similar training data to handle such inferior image quality as well.\textbf{\label{fig:Training-dataset-visualized}}}
\end{figure}

\subsubsection{Data import}

\label{sub:Data-import}We used the Java-based Bio-Formats library (OME - The Open Microscopy Environment, \href{https://www.openmicroscopy.org/}{https://www.openmicroscopy.org/}, \cite{linkert2010metadata,moore2015omeroand}) with Matlab \cite{li2015metadata} to open the OIB files from the Olympus FluoView two-photon microscopy setup. We selected representative substacks from each original stack to reduce the time needed for manual annotation. The substacks were converted to 16-bit OME-TIFF image files containing all the original metadata.

\subsubsection{Data annotation}

The ground truth for the vessels was manually annotated slice-by-slice using custom-written Matlab code to produce a ``seed binary'' image containing the strongest edges, which then had to be refined manually using the pencil tool of GIMP (\href{http://www.gimp.org}{http://www.gimp.org}). We used more conservative criteria for labeling vasculature than the traditional ``50\% of the voxel'' to account for the partial volume effect \cite{taha2015metrics}, and we tried to include all the vessel-like structures in the label mask.

\subsubsection{Denoising (Image Restoration)}

After converting the substacks to OME-TIFF files, we denoised the microscopy stacks using the state-of-the-art denoising algorithm BM4D (\cite{maggioni2013nonlocal}) implemented in Matlab. BM4D is a volumetric extension of the commonly used BM3D denoising algorithm \cite{dabov2007imagedenoising} for 2D images, which was for example used to denoise two-photon microscope images by Danielyan \emph{et al.} \cite{danielyan2014denoising}. They also demonstrated that two-photon microscopy noise can be modeled well using the models developed for digital cameras. BM3D/BM4D were designed for denoising images degraded by Gaussian noise, so we first applied the Anscombe transform to reduce the signal-dependency of the noise, as done with BM4D for the denoising of magnetic resonance imaging (MRI) images \cite{makitalo2011optimal}. After the BM4D denoising, an inverse Anscombe transform was applied to convert the stacks back to the original intensity domain. Two of the stacks (\texttt{burgess2014 bbbDisruption} and \texttt{burgess2014 noisySparseVessels}) were degraded by horizontal periodic ``banding'' caused by an improperly balanced microscope stage; the degradation was mitigated using spatial notch filters in the frequency domain, applying the fast Fourier transform (FFT) in Matlab. Noise components were manually identified and then removed before denoising those images. We did not apply any blind deconvolution (e.g. \cite{dupe2009aproximal}) to our microscope stacks to improve the image quality. There was no significant spectral crosstalk in any of the stacks, so no spectral unmixing or blind image separation (e.g. \cite{dao2014useof}) was done for the image stacks. Likewise, no motion compensation algorithms (e.g. \cite{soulet2013automated}) were needed for the dataset.
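The banding removal step can be sketched as follows. Our processing was done in Matlab; this numpy version (our own illustration) conveys the same idea, with the peak coordinates standing in for the manually identified noise components and therefore hypothetical:
\begin{verbatim}
import numpy as np

def notch_filter(img, peaks, radius=3):
    """Zero out small disks around given (row, col) peaks in the
    centered 2D spectrum, plus their conjugate-symmetric twins."""
    F = np.fft.fftshift(np.fft.fft2(img))
    rows, cols = np.ogrid[:img.shape[0], :img.shape[1]]
    for u0, v0 in peaks:
        twins = ((u0, v0), (img.shape[0] - u0, img.shape[1] - v0))
        for u, v in twins:
            F[(rows - u)**2 + (cols - v)**2 <= radius**2] = 0.0
    return np.real(np.fft.ifft2(np.fft.ifftshift(F)))

# usage (hypothetical peak location found by inspecting the spectrum):
# clean = notch_filter(noisy_slice, peaks=[(240, 256)])
\end{verbatim}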
\subsubsection{Error metrics}
\label{sub:Error-metrics}To analyze the segmentation quality of our proposed architecture, we used the Average Hausdorff Distance (AVD) as the error metric. The AVD between the ground truth and the output of the proposed architecture was computed using the \texttt{EvaluateSegmentation} package (\href{http://github.com/codalab/EvaluateSegmentation}{http://github.com/codalab/EvaluateSegmentation}) published by Taha and Hanbury \cite{taha2015metrics}. AVD was chosen as the metric because it is well suited for evaluating complex boundary delimitation. A disadvantage of the AVD is that it is based on calculating the distances between all pairs of voxels, making it computationally intensive and thus, for example, impractical to integrate into network training.

\subsection{Deep learning network}
We trained our 3D vessel segmentation deep ConvNet using the ZNN framework \cite{zlateski2015znn}, which uses multicore CPU parallelism for speed instead of the GPU acceleration typical of frameworks such as Theano \cite{thetheanodevelopmentteam2016theanoa}. At the time of our training, there were not many frameworks available that would take the 3D context into account: the commonly used Caffe library \cite{jia2014caffeconvolutional} had only 2D networks available, and DeepLab, built on top of Caffe, did not support efficient 3D networks either. Our approach for vessel segmentation is inspired by the success of ZNN in segmenting three-dimensional electron microscope (EM) image stacks \cite{lee2015recursive}, and we chose to start with the networks described for EM segmentation.

\subsubsection{Training with ZNN}
ZNN produces a dense output with pixel-by-pixel segmentation maps, in contrast to the image-level labels of object recognition. ConvNets have excelled in object recognition, which typically requires only a single output value for an entire input image \emph{[i.e. is there a dog in the image? yes (1), or no (0)]}. ZNN employs max-filtering, which slides a window across the image and applies the maximum operation within that window while retaining the original image resolution. Traditionally, in semantic segmentation \cite{long2014fullyconvolutional} and biomedical image segmentation \cite{ronneberger2015unetconvolutional} pipelines, max-pooling is used instead of max-filtering; max-pooling reduces the dimensions of the output map, requiring either post-processing, for example with some graphical model \cite{chen2015semantic}, or upsampling back to the original resolution \cite{ronneberger2015unetconvolutional}. The max-filtering employed by ZNN can be thought of as the dense variant of max-pooling, as it keeps the image dimensions intact while making all filtering operations sparse, both for convolution and for max-filtering. This approach is also called ``skip-kernels'' \cite{sermanet2013overfeat} or ``filter rarefaction'' \cite{long2014fullyconvolutional}, and is equivalent in its results to ``max-fragmentation-pooling'' \cite{giusti2013fastimage,masci2013afast}. In practice, ZNN lets us control the sparseness of the filters independently of the max-filtering.
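The difference between the two operations is easy to see in a toy example (a sketch using SciPy; the window sizes are illustrative):
\begin{verbatim}
import numpy as np
from scipy.ndimage import maximum_filter

img = np.random.rand(64, 64)

# Max-pooling: non-overlapping 2x2 windows halve the resolution.
pooled = img.reshape(32, 2, 32, 2).max(axis=(1, 3))  # -> (32, 32)

# Max-filtering: the same maximum operation applied densely at every
# pixel, so the output keeps the input resolution.
filtered = maximum_filter(img, size=2)               # -> (64, 64)

print(pooled.shape, filtered.shape)
\end{verbatim}
Subsequent convolutions are then made sparse (dilated), so that each output pixel still sees the same neighborhood it would have seen after pooling.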
\subsubsection{Network architecture}
We adopted the recursive architecture from Lee \emph{et al.} \cite{lee2015recursive} used to segment electron microscopy (EM) stacks.

\begin{figure}
\includegraphics[width=1\columnwidth]{proposedFramework}
\caption{An overview of our proposed framework (left) and model architectures (right). The number of trainable parameters in each model is 230K (VD2D) and 310K (VD2D3D) \cite{lee2015recursive}.\label{fig:An-overview-of-ZNN-architecture}}
\end{figure}

\begin{figure*}
\centerline{\includegraphics[width=1.8\columnwidth]{architectureSchematics}}
\caption{Network architectures of the different models used: VD2D, VD2D3D, and the extensions of the latter, VD2D3D\_v2 and VD2D3D\_v3, in which some of the two-dimensional convolution filters were converted to three-dimensional filters.\label{fig:Network-architectures-of-ZNN}}
\end{figure*}

\paragraph{VD2D}
The chosen recursive architecture first involves a two-dimensional VD2D (``Very Deep 2D'') ``pre-training'' stage, shown in \figref{Network-architectures-of-ZNN} and \figref{An-overview-of-ZNN-architecture}. All convolution filters have a size of 3$\times$3$\times$1, except that \texttt{Conv1c} uses a 2$\times$2$\times$1 filter to make the ``receptive field'' of a single output pixel have an odd-numbered size, and thus be centerable around the output pixel. Some convolution layers employ hyperbolic tangent ($tanh$) nonlinearities rather than the traditionally used rectified linear units (ReLUs), as the authors argued that this might suppress variations in the feature maps due to image quality variations; this was, however, left untested in their original paper.

\paragraph{VD2D3D}
The two-dimensional convolutional layers of the following second stage, named VD2D3D (``Very Deep 2D-3D'', see \figref{Network-architectures-of-ZNN} and \figref{An-overview-of-ZNN-architecture}), are initialized with the trained weights of VD2D, without enforcing weight sharing as done by some recurrent ConvNets \cite{pinheiro2013recurrent}. The main idea behind keeping the initial layers of VD2D3D two-dimensional is to make the network faster to run and train, while the 3D filters in the later layers enable the network to use 3D context in vessel segmentation, providing more accurate predictions. In theory, the accuracy could be further improved by transforming all the layers to 3D, but in practice this would come with increased computational cost and memory requirements. VD2D3D could be used directly on the denoised input images without the initial VD2D training, but Lee \emph{et al.} \cite{lee2015recursive} showed that providing the output of VD2D recursively as an input to VD2D3D produces a significant improvement in performance. The layers \texttt{Conv1a}, \texttt{Conv1b}, and \texttt{Conv1c} are used to process the recursive inputs along with the denoised input images, which are then combined after \texttt{Conv1c}. This parallel processing stream should allow more complex, highly nonlinear interaction between low-level features and the contextual information in the recursive input. The increase in trainable parameters due to the switch from 2D to 3D filters was compensated for by trimming the feature-map size of a later layer from 200 (\texttt{Conv5} of VD2D) to 100 (\texttt{Conv4c} of VD2D3D).

\paragraph{VD2D3D\_v2}
We changed the last two-dimensional layer (\texttt{Conv3}) into a three-dimensional layer (see VD2D3D\_v2 in \figref{Network-architectures-of-ZNN}), keeping VD2D3D otherwise the same.
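As an aside, the ``odd-sized receptive field'' point above is easy to check: for stride-1 stacks such as these (max-filtering keeps the resolution), every windowed operation with window $w$ and dilation $d$ grows the receptive field by $d(w-1)$. A minimal sketch (the layer counts in the example are hypothetical, not the exact VD2D configuration):
\begin{verbatim}
def receptive_field(window_sizes, dilations=None):
    # Receptive field of stacked stride-1 windowed operations
    # (convolutions and max-filters alike): 1 + sum of d * (w - 1).
    dilations = dilations or [1] * len(window_sizes)
    return 1 + sum(d * (w - 1) for w, d in zip(window_sizes, dilations))

# Hypothetical stack: ten 3x3 convolutions and three 2x2 max-filters
# give an even receptive field (24); one extra 2x2 convolution,
# analogous to Conv1c, makes it odd (25) and hence centerable.
print(receptive_field([3] * 10 + [2] * 3))        # 24
print(receptive_field([3] * 10 + [2] * 3 + [2]))  # 25
\end{verbatim}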
\paragraph{VD2D3D\_v3}
We wanted to see the effect of changing the first layer into a three-dimensional one. In practice, this layer corresponds to low-level features, and a three-dimensional filter should improve the detection of three-dimensional structures compared to two-dimensional filters, which could confuse ``feature-like'' two-dimensional noise with ``real'' three-dimensional vasculature.

\subsubsection{Training procedure}
\label{sub:Training-procedure}The network training procedure was similar to the one described by Lee \emph{et al.} \cite{lee2015recursive}. We trained our network using backpropagation with the cross-entropy loss function. VD2D was first trained for 60K updates using 100$\times$100$\times$1 output patches. The initial learning rate was set to 0.01, with a momentum of 0.9 and an annealing factor of 0.999 applied every 6 updates, giving a learning rate of 0.000000452 at the end of VD2D training. Each update took around 2.9 seconds on our dual Intel Xeon E5650 quad-core CPU workstation (16 hyperthreads, 24 GB RAM) running Ubuntu 14.04, with all 16 threads in use, giving a total of 2 days for the VD2D training. After completing the VD2D training, we continued with the training of VD2D3D for 90,000 updates, as in the original paper by Lee \emph{et al.} \cite{lee2015recursive}, with an initial learning rate of 0.01, a momentum of 0.9, and the same annealing factor of 0.999 applied on every update for the first 15K updates, after which the learning rate was set to 0.0001 with the same annealing factor applied on every 10th update. Each update took around 23 seconds, giving a total of 24 days for the training of VD2D3D with the same 90K updates. The modified architectures with extended 3D support (v2 and v3) required more memory, and a fully 3D pipeline was not possible with the current implementation of ZNN and just 24 GB of RAM. Each update with v2 took around 27.2 seconds (90,000 updates took slightly over 28 days), and with v3 each update took around 24.4 seconds (90,000 updates took slightly over 25 days). Like Lee \emph{et al.} \cite{lee2015recursive}, we rebalanced the classes (vessels/non-vessels) by differentially weighting the per-pixel loss to deal with the imbalance between vessel and non-vessel pixels; this imbalance was, however, lower than that between boundary and non-boundary pixels in electron microscope images. We also augmented the data by randomly rotating and flipping 2D image patches, as implemented in ZNN. Additionally, we could have introduced photometric distortions \cite{howard2013someimprovements} to further counteract possible overfitting due to the limited training data, but these were deemed unnecessary at the time of the training. We also used dropout \cite{srivastava2014dropout}, as implemented in ZNN, to further avoid overfitting; dropout was applied to the \texttt{Conv4c} layer, with each activation having a probability of 0.5 of being reset to zero.
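The learning-rate schedule and the class rebalancing are simple to state explicitly (a minimal sketch; the inverse-frequency weighting shown is one common choice and stands in for ZNN's exact rebalancing scheme):
\begin{verbatim}
import numpy as np

def learning_rate(update, lr0=0.01, anneal=0.999, every=6):
    # Learning rate after `update` updates when the annealing factor
    # is applied once per `every` updates.
    return lr0 * anneal ** (update // every)

# VD2D stage: 0.01 annealed every 6 updates for 60K updates.
print(learning_rate(60_000))  # ~4.5e-7, matching the value quoted above

def class_weights(labels):
    # Per-class loss weights inversely proportional to class frequency
    # (an assumption for illustration; ZNN's scheme may differ).
    p = np.count_nonzero(labels) / labels.size  # fraction of vessel voxels
    return {1: 0.5 / p, 0: 0.5 / (1.0 - p)}
\end{verbatim}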
\subsection{Sharing}
Our proposed segmentation pipeline is based on the ZNN framework, which is freely available online at \href{https://github.com/seung-lab/znn-release}{https://github.com/seung-lab/znn-release} from the original authors \cite{lee2015recursive,zlateski2015znn}. We have developed some Matlab helper functions for it, and all those files are available from our GitHub repository at \href{https://github.com/petteriTeikari/vesselNN}{https://github.com/petteriTeikari/vesselNN}. In the spirit of reproducible research \cite{vandewalle2009reproducible,dechaumont2012icyan,kenall2015betterreporting,leek2015opinion}, we also release our annotated dataset for other research teams to use. The dataset is available from \href{https://github.com/petteriTeikari/vesselNN_dataset}{https://github.com/petteriTeikari/vesselNN\_dataset}.

\section{Results}
\label{sec:Results}A summary of the training results is given in \tabref{Summary-of-the-results}: as expected, VD2D3D performs better than VD2D, and stack \#10 skews the statistics, as it was not segmented that well. The Average Hausdorff Distance may seem a bit abstract, but the smaller the distance the better, and it was recommended for complex boundaries such as vessels and neurons in the review by Taha and Hanbury \cite{taha2015metrics}. The detailed results for the VD2D and VD2D3D architectures with thresholding and dense CRF post-processing, quantified using the Average Hausdorff Distance (AVD), are given in \tabref{Summary-of-the-results}, and the difference in performance between the different VD2D3D variants and VD2D is shown in \tabref{Summary-of-the-Variants}, quantified using the same AVD metric. A comparison of different metrics for the baseline VD2D3D is shown in \tabref{Results-of-VD2D3D} to provide better interpretability relative to other studies, as AVD is not the most typically used metric. The Rand Index and the Area Under the Curve (AUC) were chosen because they are typically used as error metrics in medical segmentation studies \cite{taha2015metrics}. Mutual information quantifies recall (i.e. the segmentation should contain all the regions marked in the ground truth, without penalizing added regions too much) at the cost of precision. The Hausdorff distance and the Mahalanobis distance are spatial distance based metrics closely related to our metric of choice, the Average Hausdorff Distance (AVD), which is basically a more robust version of the Hausdorff distance that handles outliers better. The Mahalanobis distance would be preferred in segmentation tasks where general shape and alignment are important.
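For reference, the AVD itself is straightforward to compute from two voxel point sets (a minimal sketch using SciPy; note that implementations differ in how the two directed distances are combined, and \texttt{EvaluateSegmentation} should be consulted for the exact definition we report):
\begin{verbatim}
import numpy as np
from scipy.spatial import cKDTree

def avg_hausdorff(a_pts, b_pts):
    # Directed average distances: mean nearest-neighbor distance
    # from A to B and from B to A, symmetrized here with max.
    d_ab = cKDTree(b_pts).query(a_pts)[0].mean()
    d_ba = cKDTree(a_pts).query(b_pts)[0].mean()
    return max(d_ab, d_ba)

# Point sets from binary masks, scaled by the anisotropic voxel size
# (e.g. 5 x 0.994 x 0.994 um for most stacks in the dataset table):
# pts = np.argwhere(mask) * np.array([5.0, 0.994, 0.994])
\end{verbatim}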
\begin{table*}[t]
\caption{Summary of the results using the Average Hausdorff Distance (AVD) as the measure of segmentation quality. Thresholding is considered the worst-case scenario, and DenseCRF inference the more advanced option for binary segmentation.\label{tab:Summary-of-the-results}}
{\scriptsize{}}%
\begin{tabular*}{2\columnwidth}{@{\extracolsep{\fill}}>{\centering}b{0.15\columnwidth}>{\centering}b{0.2\columnwidth}|>{\centering}b{0.05\columnwidth}>{\centering}b{0.05\columnwidth} >{\centering}b{0.05\columnwidth}>{\centering}b{0.05\columnwidth} >{\centering}b{0.05\columnwidth}>{\centering}b{0.05\columnwidth} >{\centering}b{0.05\columnwidth}>{\centering}b{0.05\columnwidth} >{\centering}b{0.05\columnwidth}>{\centering}b{0.05\columnwidth} >{\centering}b{0.05\columnwidth}>{\centering}b{0.05\columnwidth}|>{\centering}b{0.05\columnwidth}>{\centering}b{0.05\columnwidth}}
\textbf{\scriptsize{}Network } & \multicolumn{1}{>{\centering}b{0.2\columnwidth}}{\textbf{\scriptsize{}Post-processing }} & \textbf{\scriptsize{}1 } & \textbf{\scriptsize{}2 } & \textbf{\scriptsize{}3 } & \textbf{\scriptsize{}4 } & \textbf{\scriptsize{}5 } & \textbf{\scriptsize{}6 } & \textbf{\scriptsize{}7 } & \textbf{\scriptsize{}8 } & \textbf{\scriptsize{}9 } & \textbf{\scriptsize{}10 } & \textbf{\scriptsize{}11 } & \multicolumn{1}{>{\centering}b{0.05\columnwidth}}{\textbf{\scriptsize{}12 }} & \textbf{\scriptsize{}Mean } & \textbf{\scriptsize{}SD }\tabularnewline
\hline
{\scriptsize{}VD2D3D } & {\scriptsize{}DenseCRF 2D } & {\scriptsize{}1.44 } & {\scriptsize{}0.19 } & {\scriptsize{}0.23 } & {\scriptsize{}0.29 } & {\scriptsize{}0.67 } & {\scriptsize{}0.67 } & {\scriptsize{}0.48 } & {\scriptsize{}0.49 } & {\scriptsize{}0.18 } & {\scriptsize{}0.98 } & {\scriptsize{}0.98 } & {\scriptsize{}0.31 } & {\scriptsize{}0.57 } & {\scriptsize{}0.38 }\tabularnewline
 & {\scriptsize{}Thresholding } & {\scriptsize{}2.36 } & {\scriptsize{}0.46 } & {\scriptsize{}0.49 } & {\scriptsize{}0.35 } & {\scriptsize{}1.03 } & {\scriptsize{}1.05 } & {\scriptsize{}1.19 } & {\scriptsize{}1.18 } & {\scriptsize{}0.35 } & {\scriptsize{}1.66 } & {\scriptsize{}1.79 } & {\scriptsize{}0.62 } & {\scriptsize{}1.04 } & {\scriptsize{}0.61 }\tabularnewline
{\scriptsize{}VD2D} & {\scriptsize{}DenseCRF 2D } & {\scriptsize{}1.75 } & {\scriptsize{}0.20 } & {\scriptsize{}0.25 } & {\scriptsize{}0.25 } & {\scriptsize{}0.78 } & {\scriptsize{}0.83 } & {\scriptsize{}0.87 } & {\scriptsize{}0.63 } & {\scriptsize{}0.20 } & {\scriptsize{}1.08 } & {\scriptsize{}1.11 } & {\scriptsize{}0.34 } & {\scriptsize{}0.69 } & {\scriptsize{}0.46 }\tabularnewline
 & {\scriptsize{}Thresholding } & {\scriptsize{}2.58 } & {\scriptsize{}0.53 } & {\scriptsize{}1.30 } & {\scriptsize{}0.43 } & {\scriptsize{}1.32 } & {\scriptsize{}1.31 } & {\scriptsize{}1.44 } & {\scriptsize{}1.41 } & {\scriptsize{}0.47 } & {\scriptsize{}1.80 } & {\scriptsize{}1.98 } & {\scriptsize{}0.73 } & {\scriptsize{}1.28 } & {\scriptsize{}0.63 }\tabularnewline
\end{tabular*}
\end{table*}

\begin{table*}[t]
\caption{Summary of the results between the different architecture variants, using the Average Hausdorff Distance (AVD) as the measure of segmentation quality.
The best measure (the lowest value) for each individual stack and for the summary statistics is shown in bold.\label{tab:Summary-of-the-Variants}}
{\scriptsize{}}%
\begin{tabular*}{2\columnwidth}{@{\extracolsep{\fill}}>{\centering}b{0.15\columnwidth}>{\centering}b{0.2\columnwidth}|>{\centering}b{0.05\columnwidth}>{\centering}b{0.05\columnwidth} >{\centering}b{0.05\columnwidth}>{\centering}b{0.05\columnwidth} >{\centering}b{0.05\columnwidth}>{\centering}b{0.05\columnwidth} >{\centering}b{0.05\columnwidth}>{\centering}b{0.05\columnwidth} >{\centering}b{0.05\columnwidth}>{\centering}b{0.05\columnwidth} >{\centering}b{0.05\columnwidth}>{\centering}b{0.05\columnwidth}|>{\centering}b{0.05\columnwidth}>{\centering}b{0.05\columnwidth}}
\textbf{\scriptsize{}Network } & \multicolumn{1}{>{\centering}b{0.2\columnwidth}}{\textbf{\scriptsize{}Post-processing }} & \textbf{\scriptsize{}1 } & \textbf{\scriptsize{}2 } & \textbf{\scriptsize{}3 } & \textbf{\scriptsize{}4 } & \textbf{\scriptsize{}5 } & \textbf{\scriptsize{}6 } & \textbf{\scriptsize{}7 } & \textbf{\scriptsize{}8 } & \textbf{\scriptsize{}9 } & \textbf{\scriptsize{}10 } & \textbf{\scriptsize{}11 } & \multicolumn{1}{>{\centering}b{0.05\columnwidth}}{\textbf{\scriptsize{}12 }} & \textbf{\scriptsize{}Mean } & \textbf{\scriptsize{}SD }\tabularnewline
\hline
{\scriptsize{}VD2D} & {\scriptsize{}DenseCRF 2D } & {\scriptsize{}1.75 } & {\scriptsize{}0.20 } & {\scriptsize{}0.25 } & {\scriptsize{}0.25 } & {\scriptsize{}0.78 } & {\scriptsize{}0.83 } & {\scriptsize{}0.87 } & {\scriptsize{}0.63 } & {\scriptsize{}0.20 } & {\scriptsize{}1.08 } & {\scriptsize{}1.11 } & {\scriptsize{}0.34 } & {\scriptsize{}0.69 } & {\scriptsize{}0.46 }\tabularnewline
{\scriptsize{}VD2D3D } & {\scriptsize{}DenseCRF 2D } & {\scriptsize{}1.44 } & {\scriptsize{}0.19 } & {\scriptsize{}0.23 } & {\scriptsize{}0.29 } & \textbf{\scriptsize{}0.67 } & {\scriptsize{}0.67 } & {\scriptsize{}0.48 } & {\scriptsize{}0.49 } & \textbf{\scriptsize{}0.18}{\scriptsize{} } & {\scriptsize{}0.98 } & {\scriptsize{}0.98 } & {\scriptsize{}0.31 } & {\scriptsize{}0.57 } & {\scriptsize{}0.38 }\tabularnewline
{\scriptsize{}VD2D3D\_v2} & {\scriptsize{}DenseCRF 2D } & \textbf{\scriptsize{}1.17 } & {\scriptsize{}0.20 } & {\scriptsize{}0.24 } & {\scriptsize{}0.30 } & {\scriptsize{}0.70 } & \textbf{\scriptsize{}0.65 } & \textbf{\scriptsize{}0.39 } & {\scriptsize{}0.48 } & {\scriptsize{}0.21 } & {\scriptsize{}0.95 } & \textbf{\scriptsize{}0.90 } & {\scriptsize{}0.35 } & {\scriptsize{}0.47 } & \textbf{\scriptsize{}0.33 }\tabularnewline
{\scriptsize{}VD2D3D\_v3} & {\scriptsize{}DenseCRF 2D } & {\scriptsize{}1.22 } & \textbf{\scriptsize{}0.18}{\scriptsize{} } & \textbf{\scriptsize{}0.21 } & \textbf{\scriptsize{}0.25 } & {\scriptsize{}0.68 } & {\scriptsize{}0.69 } & {\scriptsize{}0.48 } & \textbf{\scriptsize{}0.43}{\scriptsize{} } & \textbf{\scriptsize{}0.18 } & \textbf{\scriptsize{}0.94 } & {\scriptsize{}0.96 } & \textbf{\scriptsize{}0.29}{\scriptsize{} } & \textbf{\scriptsize{}0.46 } & {\scriptsize{}0.36 }\tabularnewline
\end{tabular*}
\end{table*}

\begin{table*}
\caption{Results of the VD2D3D architecture using DenseCRF 2D for segmentation, evaluated with different metrics. The row of our primary metric, the Average Hausdorff Distance (AVGDIST), is shown in bold.
\label{tab:Results-of-VD2D3D}}
{\scriptsize{}}%
\begin{tabular*}{2\columnwidth}{@{\extracolsep{\fill}}>{\centering}p{0.2\columnwidth}|>{\centering}b{0.05\columnwidth}>{\centering}b{0.05\columnwidth} >{\centering}b{0.05\columnwidth}>{\centering}b{0.05\columnwidth} >{\centering}b{0.05\columnwidth}>{\centering}b{0.05\columnwidth} >{\centering}b{0.05\columnwidth}>{\centering}b{0.05\columnwidth} >{\centering}b{0.05\columnwidth}>{\centering}b{0.05\columnwidth} >{\centering}b{0.05\columnwidth}>{\centering}b{0.05\columnwidth}|>{\centering}b{0.05\columnwidth}>{\centering}b{0.05\columnwidth}}
\textbf{\scriptsize{}Metric} & \textbf{\scriptsize{}1 } & \textbf{\scriptsize{}2 } & \textbf{\scriptsize{}3 } & \textbf{\scriptsize{}4 } & \textbf{\scriptsize{}5 } & \textbf{\scriptsize{}6 } & \textbf{\scriptsize{}7 } & \textbf{\scriptsize{}8 } & \textbf{\scriptsize{}9 } & \textbf{\scriptsize{}10 } & \textbf{\scriptsize{}11 } & \textbf{\scriptsize{}12 } & \textbf{\scriptsize{}Mean } & \textbf{\scriptsize{}SD }\tabularnewline
\hline
{\scriptsize{}AUC } & {\scriptsize{}0.92 } & {\scriptsize{}0.93 } & {\scriptsize{}0.92 } & {\scriptsize{}0.89 } & {\scriptsize{}0.95 } & {\scriptsize{}0.96 } & {\scriptsize{}0.94 } & {\scriptsize{}0.95 } & {\scriptsize{}0.91 } & {\scriptsize{}0.94 } & {\scriptsize{}0.89 } & {\scriptsize{}0.94 } & {\scriptsize{}0.93 } & {\scriptsize{}0.02 }\tabularnewline
{\scriptsize{}ADJRIND } & {\scriptsize{}0.55 } & {\scriptsize{}0.76 } & {\scriptsize{}0.74 } & {\scriptsize{}0.64 } & {\scriptsize{}0.45 } & {\scriptsize{}0.50 } & {\scriptsize{}0.69 } & {\scriptsize{}0.68 } & {\scriptsize{}0.73 } & {\scriptsize{}0.58 } & {\scriptsize{}0.54 } & {\scriptsize{}0.70 } & {\scriptsize{}0.54 } & {\scriptsize{}0.19 }\tabularnewline
{\scriptsize{}MUTINF } & {\scriptsize{}0.28 } & {\scriptsize{}0.56 } & {\scriptsize{}0.62 } & {\scriptsize{}0.48 } & {\scriptsize{}0.11 } & {\scriptsize{}0.13 } & {\scriptsize{}0.27 } & {\scriptsize{}0.27 } & {\scriptsize{}0.55 } & {\scriptsize{}0.29 } & {\scriptsize{}0.38 } & {\scriptsize{}0.36 } & {\scriptsize{}0.31 } & {\scriptsize{}0.18 }\tabularnewline
{\scriptsize{}HDRFDST } & {\scriptsize{}47.05 } & {\scriptsize{}33.38 } & {\scriptsize{}82.76 } & {\scriptsize{}24.72 } & {\scriptsize{}35.37 } & {\scriptsize{}62.51 } & {\scriptsize{}26.87 } & {\scriptsize{}29.46 } & {\scriptsize{}23.45 } & {\scriptsize{}59.92 } & {\scriptsize{}73.12 } & {\scriptsize{}27.66 } & {\scriptsize{}37.59 } & {\scriptsize{}22.35 }\tabularnewline
\textbf{\scriptsize{}AVGDIST } & \textbf{\scriptsize{}1.44 } & \textbf{\scriptsize{}0.19 } & \textbf{\scriptsize{}0.23 } & \textbf{\scriptsize{}0.29 } & \textbf{\scriptsize{}0.67 } & \textbf{\scriptsize{}0.67 } & \textbf{\scriptsize{}0.48 } & \textbf{\scriptsize{}0.49 } & \textbf{\scriptsize{}0.18 } & \textbf{\scriptsize{}0.98 } & \textbf{\scriptsize{}0.98 } & \textbf{\scriptsize{}0.31 } & \textbf{\scriptsize{}0.49 } & \textbf{\scriptsize{}0.39 }\tabularnewline
{\scriptsize{}MAHLNBS } & {\scriptsize{}0.28 } & {\scriptsize{}0.06 } & {\scriptsize{}0.07 } & {\scriptsize{}0.15 } & {\scriptsize{}0.18 } & {\scriptsize{}0.13 } & {\scriptsize{}0.16 } & {\scriptsize{}0.03 } & {\scriptsize{}0.08 } & {\scriptsize{}0.03 } & {\scriptsize{}0.17 } & {\scriptsize{}0.08 } & {\scriptsize{}0.10 } & {\scriptsize{}0.07 }\tabularnewline
\end{tabular*}{\scriptsize{}}\\
{\scriptsize \par}
\scriptsize{AUC - Area Under the Curve, ADJRIND - Adjusted Rand Index (with correction for chance), MUTINF - Mutual Information, HDRFDST - Hausdorff Distance with the 0.95 quantile method,
AVGDIST - Average Hausdorff Distance, MAHLNBS - Mahalanobis Distance.}
\end{table*}
The segmentation results are visualized for the best slice of each stack in \figref{Best-correspondence-for}, and for the worst slice of each stack in \figref{Worst-correspondence-for}. For each stack there are four columns: 1) the denoised input slice; 2) the label corresponding to the manually annotated vessels; 3) the real-valued ZNN output of the proposed architecture; and 4) the mask, a binary mask obtained with the dense two-dimensional CRF (sketched at the end of this section). It should be noted that the ground truth labels are not optimally defined, as can be seen, for example, in the worst-case slice of stack \#3 (\figref{Worst-correspondence-for}): its AVD value is high even though the segmentation looks visually quite good, and the high AVD simply reflects the difference between the suboptimal manual label and the ``real'' vasculature, which could have been annotated better. Visualized segmentation results and performance metrics for the other VD2D3D variants are shown in the Wiki of our GitHub repository at \href{https://github.com/petteriTeikari/vesselNN/wiki}{https://github.com/petteriTeikari/vesselNN/wiki}.

\begin{figure*}
\includegraphics[width=2\columnwidth]{VD2D3D_DenseCRF_2D_best}
\caption{\textbf{VD2D3D. Best} correspondence for each stack, as evaluated by the Average Hausdorff Distance. Architecture: VD2D3D; segmentation with dense CRF.\label{fig:Best-correspondence-for}}
\end{figure*}

\begin{figure*}
\includegraphics[width=2\columnwidth]{VD2D3D_DenseCRF_2D_worst}
\caption{\textbf{VD2D3D. Worst} correspondence for each stack, as evaluated by the Average Hausdorff Distance. Stack \#10 had erroneous correspondences between the ground truth and the actual image, explaining the poor performance. One could argue, though, that the results are not that bad: ZNN found some faint vessels that are not labeled in the ground truth at all. Architecture: VD2D3D; segmentation with dense CRF.\label{fig:Worst-correspondence-for}}
\end{figure*}

Visualizations of the network training behavior for VD2D (\figref{Behavior-of-training-VD2D}) and for VD2D3D (\figref{Behavior-of-training-VD2D3D}) show that, for our datasets, the training error and the test error (if the latter is high while the training error is low, the system is overfitting the training data) converged well before the hard-coded limits taken from the study of Lee \emph{et al.} \cite{lee2015recursive} for electron microscopy stacks.

\begin{figure}
\includegraphics[width=1\columnwidth]{trainingTest_zStat_VD2D}
\caption{Behavior of training and test error during training of the \textbf{VD2D architecture} (the first 60,000 iterations). ERR - cost energy. CLS - pixel classification error.\label{fig:Behavior-of-training-VD2D}}
\end{figure}

\begin{figure}
\includegraphics[width=1\columnwidth]{trainingTest_zStat_VD2D3D}
\caption{Behavior of training and test error during training of the \textbf{VD2D3D architecture} (after the initial 60,000 iterations with VD2D). ERR - cost energy. CLS - pixel classification error.\label{fig:Behavior-of-training-VD2D3D}}
\end{figure}
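For reference, the slice-by-slice dense-CRF binarization used to produce the masks above can be sketched as follows (assuming the \texttt{pydensecrf} package; the pairwise parameters are illustrative placeholders, not our tuned settings):
\begin{verbatim}
import numpy as np
import pydensecrf.densecrf as dcrf
from pydensecrf.utils import unary_from_softmax

def crf_binarize(prob_slice, intensity_slice, n_iter=5):
    # prob_slice: real-valued ZNN vessel probability map (H, W) in [0, 1]
    # intensity_slice: denoised input slice as uint8 (H, W, 3)
    h, w = prob_slice.shape
    probs = np.stack([1.0 - prob_slice, prob_slice]).astype(np.float32)
    d = dcrf.DenseCRF2D(w, h, 2)
    d.setUnaryEnergy(unary_from_softmax(probs))
    # Smoothness and appearance (intensity-dependent) pairwise terms;
    # the sxy/srgb/compat values here are placeholders.
    d.addPairwiseGaussian(sxy=3, compat=3)
    d.addPairwiseBilateral(sxy=60, srgb=10, rgbim=intensity_slice, compat=5)
    q = d.inference(n_iter)
    return np.argmax(q, axis=0).reshape(h, w)  # binary vessel mask
\end{verbatim}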
\section{Discussion}
Our proposed networks, based on the ZNN framework \cite{lee2015recursive,zlateski2015imagesegmentation}, for vasculature segmentation from volumetric two-photon microscope stacks yielded promising segmentation quality. There is still room for many improvements and optimizations to our proof-of-concept approach, which are discussed in more detail below.

\subsection{Deep learning}

\paragraph*{Refinements to network}
\label{sub:Refinements-to-networks}In this work, we chose to use the ``vanilla'' network architecture from Lee \emph{et al.} \cite{lee2015recursive}, termed VD2D3D (``Very Deep 2D-3D''), with 2D layers in the initial layers and 3D layers at the higher abstraction layers, to make the network faster to run and train. VD2D3D employs commonly used ConvNet components, with mixed nonlinear activation functions of hyperbolic tangent ($tanh$) and rectified linear units (ReLUs), and the maximum-filtering variant of max-pooling that keeps the resolution the same throughout the architecture, without the upsampling needed by some architectures (e.g. Ronneberger \emph{et al.} \cite{ronneberger2015unetconvolutional} for biomedical image segmentation). The whole field of deep learning and ConvNets is advancing rapidly (see, for example, the recent review by Gu \emph{et al.} \cite{gu2015recentadvances}). We can thus expect that, with future optimization and testing, the ``vanilla'' network can be improved for our application and for volumetric biomedical segmentation in general. For example, the convolutional layers used now can be regarded as a generalized linear model (GLM) of the underlying local image patch, with nonlinearity introduced to the network via nonlinear activation functions such as ReLUs. It has been proposed that the convolutional filter itself could be made nonlinear, either with the ``Network in Network'' (NIN) model of Lin \emph{et al.} \cite{lin2013network} or with the Inception module of Szegedy \emph{et al.} \cite{szegedy2014goingdeeper,szegedy2015rethinking}; these modifications enhance the abstraction ability of the local model compared to the current GLM convolution model. Very recently, there has been interesting work on replacing the convolutional filter with the bilateral filter \cite{kiefel2014permutohedral,gadde2015superpixel,jampani2015learning,barron2015thefast}, a very commonly used edge-preserving smoothing filter \cite{tomasi1998bilateral}. The convolutional filters were replaced both in earlier layers \cite{kiefel2014permutohedral} and in later fully-connected layers \cite{gadde2015superpixel}, offering faster runtimes especially for higher-dimensional signals. Gadde \emph{et al.} \cite{gadde2015superpixel} replaced the Inception modules with ``bilateral Inception'' modules operating on superpixels, yielding better segmentation results than strictly pixel-wise implementations; according to the authors, bilateral Inception allows long-range edge-preserving inference directly, removing the need for a dense CRF as a post-processing step \cite{gadde2015superpixel}. In contrast, Jampani \emph{et al.} \cite{jampani2015learning} trained the bilateral filter to be used within dense CRF inference, demonstrating better segmentation performance than the traditional dense CRF. In general, introducing the bilateral filter or some other image-adaptive kernel at the convolutional layer level should give the network better edge-preserving properties, which is very useful when we are interested in segmenting vessel boundaries.
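For intuition, the bilateral filter weighs neighbors both by spatial distance and by intensity difference, so smoothing stops at edges; a naive sketch for a 2D slice (for illustration only; the works cited above use optimized permutohedral-lattice implementations):
\begin{verbatim}
import numpy as np

def bilateral_filter(img, radius=3, sigma_s=2.0, sigma_r=0.1):
    # Naive edge-preserving bilateral filter for a float image in [0, 1].
    h, w = img.shape
    out = np.zeros_like(img)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs ** 2 + ys ** 2) / (2 * sigma_s ** 2))
    padded = np.pad(img, radius, mode="reflect")
    for i in range(h):
        for j in range(w):
            patch = padded[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            # Range kernel: small weight across strong intensity edges.
            wgt = spatial * np.exp(-(patch - img[i, j]) ** 2
                                   / (2 * sigma_r ** 2))
            out[i, j] = (wgt * patch).sum() / wgt.sum()
    return out
\end{verbatim}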
There have been many attempts to improve max-pooling \cite{gu2015recentadvances}, of which the maximum filtering used here is a dense variant that retains the original volume resolution. Pooling in general is used to lower the computational burden by reducing the connections between successive layers. Among the recent efforts, spectral pooling seems like a particularly interesting upgrade \cite{rippel2015spectral}, as it can be implemented with little computational cost in Fast Fourier Transform (FFT) based convolution networks such as the VD2D3D used here. In contrast to max-pooling, the information is reduced in the frequency domain in a linear low-pass-filter fashion, which retains more information for the same output dimensionality. The use of spectral pooling provided the best classification performance on the CIFAR (10 and 100) image classification datasets \cite{krizhevsky2009learning} compared to other state-of-the-art methods such as stochastic pooling \cite{zeiler2013stochastic}, Maxout \cite{goodfellow2013maxoutnetworks}, ``Network in Network'' (NIN) \cite{lin2013network}, and deeply-supervised nets \cite{lee2015recursive}. Similarly, the traditional nonlinear activation functions, such as the sigmoid, $tanh$, and ReLUs, could be improved. ReLUs are probably the most commonly used activation functions in ConvNets \cite{nair2010rectified}, their main disadvantage being a zero gradient when the unit is not active; in practice, units that are not initially active may never become active during gradient-based optimization (stochastic gradient descent, SGD). To alleviate this problem, Clevert \emph{et al.} \cite{clevert2015fastand} recently proposed exponential linear units (ELUs), which, unlike ReLUs, also take negative values; according to the authors, ELUs lead not only to faster learning but also to better generalization performance, especially when the networks have at least 5 layers, and on the CIFAR-100 dataset ELUs yielded the best published result. The use of ELUs would in theory be complementary to spectral pooling, and they could also be used together with the nonlinear modifications of the convolution layer (e.g. NIN and Inception). It should be noted that at the moment there is no nonlinear activation function for the frequency domain \cite{rippel2015spectral}, so there is a computational bottleneck in the inverse FFT and FFT transforms needed before and after the activation function. We employed dropout \cite{srivastava2014dropout} to regularize our network by applying it before the output layer. Recently, Poole \emph{et al.} \cite{poole2014analyzing} showed that injecting Gaussian noise instead of applying dropout led to improved performance, while Rasmus \emph{et al.} \cite{rasmus2015semisupervised} found no practical difference between dropout and Gaussian noise injection. Interestingly for dropout, Gal and Ghahramani \cite{gal2015dropout} and Kingma \emph{et al.} \cite{kingma2015variational} demonstrated how a deep learning network with dropout can be cast as a Bayesian model, which in practice allows uncertainty estimation based on Bayesian statistics \cite{ghahramani2015probabilistic}. An estimate of uncertainty is currently lacking in most deep learning frameworks. The advantage of dropout-based Bayesian estimation is that one can turn existing dropout networks into ones that provide model uncertainty, rather than having to re-define the whole architecture.
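In its simplest ``Monte Carlo dropout'' form, this only requires keeping dropout active at test time and repeating stochastic forward passes (a framework-agnostic sketch; \texttt{forward\_pass} is a placeholder for any predictor with dropout enabled):
\begin{verbatim}
import numpy as np

def mc_dropout_predict(forward_pass, x, n_samples=20):
    # Monte Carlo dropout (Gal & Ghahramani): average stochastic
    # forward passes; the spread approximates model uncertainty.
    samples = np.stack([forward_pass(x) for _ in range(n_samples)])
    return samples.mean(axis=0), samples.std(axis=0)
\end{verbatim}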
This dropout-based estimation was used by Kendall \emph{et al.} \cite{kendall2015bayesian} for semantic segmentation, showing performance comparable to state-of-the-art architectures by applying dropout in the central layers of their encoder-decoder architecture. In analysis pipelines where a quantitative analysis of morphological vessel behavior (e.g. \cite{lindvere2013cerebral}) follows the image processing, it is useful to propagate the uncertainties involved in the image processing pipeline to the final statistical analysis. The most obvious improvement to the VD2D3D architecture used here would be the conversion of all the convolutional layers to three-dimensional ones. However, this is not computationally feasible with the current ZNN implementation on most commonly available hardware. In the future, with increased computational power and speed optimizations, this should become feasible either by using Intel Xeon coprocessors \cite{rippel2015spectral,zlateski2015imagesegmentation}, supercomputing clusters \cite{zhang2015areliable}, or GPU-accelerated frameworks such as Theano \cite{thetheanodevelopmentteam2016theanoa}. In our current implementation, we chose to apply the dense CRF in a slice-by-slice manner owing to the available implementation; in the future, the dense CRF could be upgraded to three dimensions, as done for example by Kundu \emph{et al.} \cite{kundu2016feature}. In the architecture employed here, multi-scale representation is not explicitly included; we have tried to provide stacks with different magnifications in our dataset to help the network learn different scales, as done by Lee \emph{et al.} \cite{lee2015recursive}. Typically, in semantic segmentation networks, multi-scale representation is implemented in two main ways \cite{chen2015attention}: either using so-called \emph{skip-nets} that combine features from the intermediate layers of the network \cite{sermanet2013overfeat,chen2015semantic,long2014fullyconvolutional}, or via \emph{share-nets} that are fed inputs resized to different scales \cite{lin2015efficient,farabet2013learning}. The discussed bilateral filter modification would be able to encode scale invariance over a continuous range of image scales, without the typically used finite number of subsampled inputs, simplifying the network architecture \cite{kiefel2014permutohedral}. In addition to work concentrating on the individual components of ConvNets, there have been alternative approaches to improving computational efficiency \cite{cheng2015anexploration,zhang2015supervised,ioffe2015batchnormalization,gupta2015modelaccuracy}. Our vessel segmentation network took over 20 days to train (see \ref{sub:Training-procedure}) on a typical multicore desktop computer, which emphasizes the utility of faster computation. The Batch Normalization technique by Ioffe and Szegedy \cite{ioffe2015batchnormalization} has received a lot of attention, as the authors showed that the same classification accuracy can be obtained with 14 times fewer training steps, while an ensemble of batch-normalized networks exceeded the accuracy of human raters. By normalizing each training mini-batch, higher learning rates can be used, and training becomes less sensitive to initialization as well.
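The normalization step itself is only a few lines (a minimal sketch of the training-time forward pass; at inference, running averages of the batch statistics are used instead):
\begin{verbatim}
import numpy as np

def batch_norm(x, gamma=1.0, beta=0.0, eps=1e-5):
    # Batch normalization over a mini-batch of activations (axis 0 =
    # batch): normalize to zero mean / unit variance, then scale and
    # shift with the learnable parameters gamma and beta.
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mu) / np.sqrt(var + eps)
    return gamma * x_hat + beta
\end{verbatim}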
Another typically used speedup scheme is to use superpixels \cite{nunez-iglesias2013machine,farag2015abottomup,gadde2015superpixel} with two-dimensional images, or supervoxels \cite{lucchi2012supervoxelbased,konyushkova2015introducing} with volumetric three-dimensional images, to reduce the dimensionality of the input. Within a superpixel/supervoxel, the pixels/voxels share similarities in color, texture, intensity, etc.; the segments generally align with region edges, and their shapes are roughly circular/spherical rather than rectangular patches. The main downside of superpixels/supervoxels is that they introduce a quantization error \cite{gadde2015superpixel} whenever pixels/voxels within one segment have different ground truth label assignments (i.e. in our case, a supervoxel containing both non-vessel and vessel labels). One of the main bottlenecks currently in deep learning networks is the lack of efficient algorithms and libraries for sparse data, as the majority of libraries are optimized for dense data \cite{szegedy2014goingdeeper}. The already discussed introduction of bilateral filters, and their computation using permutohedral lattices \cite{adams2010fasthighdimensional,kiefel2014permutohedral}, is one way to speed up the computation on sparse data. In addition to the permutohedral lattice, Ghesu \emph{et al.} \cite{ghesu2015marginal} introduced a Marginal Space Deep Learning (MSDL) framework for segmenting volumetric medical images, replacing the standard, pre-determined feature sampling pattern with a sparse, adaptive, self-learned pattern and showing increased runtime efficiency.

\paragraph*{Improved annotation\label{sub:Improved-annotation}}
We manually annotated our ground truths using Matlab-created seeds and GIMP (GNU Image Manipulation Program). This was extremely time-consuming and required a person familiar with two-photon microscopy vasculature images. Recently, Mosinska \emph{et al.} \cite{mosinska2015activelearning} extended the active learning (AL) approach \cite{settles2010activelearning} to the delineation of curvilinear structures, including blood vessels. Active learning is designed to reduce the effort of the manual annotator by selecting, from the non-annotated dataset, the image stacks whose manual annotation would be most beneficial for improving the performance of the network. Surprisingly and counter-intuitively, recent work on electron microscope image segmentation \cite{konyushkova2015introducing} found that the classifier performance of their implementation was better when using only a subset of the training data instead of the whole available training data. This phenomenon had been reported before by \cite{schohn2000lessis}, suggesting that a well-chosen subset of training data can produce better generalization than the complete set.

\paragraph*{Crowdsourcing}
Kim \emph{et al.} \foreignlanguage{american}{\cite{kim2014spacetime}} demonstrated an interesting approach for acquiring annotations for electron microscopy datasets by developing a game for non-experts called EyeWire (\href{http://eyewire.org/}{http://eyewire.org/}), in which players solve spatial puzzles made out of neuronal boundaries. Crowdsourcing has traditionally been used in tasks that do not require expert-level knowledge, such as teaching autonomous cars to drive \foreignlanguage{american}{\cite{rajpurkar2015driverseat}}, but has been thought to be impractical for tasks that require expertise, such as medical segmentation \foreignlanguage{american}{\cite{mosinska2015activelearning}}.
The innovative approach used in their game shows how the biomedical ``expert'' annotation problem can be opened up to the masses. In addition to the ``gamification'' of segmentation efforts, one could create a segmentation challenge from our dataset on popular machine learning sites such as Kaggle (\href{https://www.kaggle.com/}{https://www.kaggle.com/}) and Grand Challenges in Biomedical Analysis (\href{http://grand-challenge.org/}{http://grand-challenge.org/}), to bring volumetric vascular segmentation on par with the other biomedical image analysis domains that already have established datasets.

\subsubsection*{Unsupervised pre-training}
Another way to reduce the labor-intensive ground truth annotation required by our supervised approach would be to initialize our supervised network using unsupervised pre-training on a non-annotated dataset \cite{bengio2007greedylayerwise}. In practice, we would feed the unsupervised learning network all our existing vascular image stacks without any annotation labels, and the network would learn the most representative features of that dataset, which could then be fed into the first layer of our supervised network (\texttt{Conv1a} of \figref{An-overview-of-ZNN-architecture}). Erhan \emph{et al.} \cite{erhan2010whydoes} have suggested that this pre-training initialization serves as a kind of regularization mechanism that is retained even during the supervised phase, with the classification performance not deteriorating with additional supervised training. We could, for example, use the dictionary learning approach with sparsity priors for 2D vessel images and 3D neuron dendrites proposed by \cite{sironi2015learning} as the pre-processing step, or alternatively use some stacked autoencoder variant used for medical image segmentation \cite{shin2013stacked,suk2015deeplearning}. A more elegant alternative to unsupervised pre-training is to apply unsupervised and supervised learning simultaneously, instead of having unsupervised pre-training and supervised training as separate steps \cite{rasmus2015semisupervised,maaloe2015improving}. Rasmus \emph{et al.} \cite{rasmus2015semisupervised} proposed a modified Ladder Network \cite{valpola2014fromneural}, demonstrating that adding their unsupervised Ladder Network to existing supervised learning methods, including convolutional networks, significantly improved classification performance in handwriting classification (MNIST database \cite{lecun1998gradientbased}) and in image classification (CIFAR-10 database \cite{krizhevsky2009learning}) compared to previous state-of-the-art approaches. Their approach excelled when the number of labels was small, and especially when the number of free parameters was large compared to the number of available samples, showing that the model was able to use the unsupervised learning part efficiently. A particularly attractive detail of their publicly available approach is that it can be added relatively easily to a network originally developed for supervised learning, such as ours, hopefully allowing better use of our limited annotated dataset.

\subsubsection*{Joint training of the image processing pipeline}
\label{sub:Training-everywhere}In our work, we have only focused on replacing the vessel enhancement step (see \figref{Typical-vessel-segmentation-pipeline}), traditionally handled by various parametrized filters requiring some degree of user interaction, with an automated, data-driven ConvNet.
Ideally, we would like all relevant steps, from image restoration through post-processing of the volumetric ConvNet output all the way to mesh generation, to be automated using training data, increasing robustness and minimizing user interaction. Work has already been done on each individual component; these could simply be stacked together as separate units, or all components could be trained jointly in an end-to-end fashion. For example, recent work by Vemulapalli \emph{et al.} \cite{vemulapalli2015deepgaussian} showed that their deep learning network based on a Gaussian Conditional Random Field (GCRF) model outperformed existing methods in two-dimensional image denoising, including BM3D \cite{dabov2007imagedenoising}, the two-dimensional counterpart of the BM4D algorithm \cite{maggioni2013nonlocal} that we used to denoise our vessel stacks. For other image restoration tasks, such as blind deconvolution \cite{xu2014deepconvolutional} for sharpening the stacks, blind inpainting \cite{cai2015blindinpainting} for filling possibly broken vessels, vibration artifacts, or other image quality artifacts, and motion-blur correction \cite{sun2015learning}, deep learning based solutions have been proposed with promising results. Recent work by Xu \emph{et al.} \cite{xu2015deepedgeaware} demonstrated a deep convolutional network designed to blindly learn the output of any deterministic filter or combination of different filters. The authors demonstrated this by jointly learning two different edge-preserving smoothing filters, the bilateral filter (\cite{tomasi1998bilateral,barron2015thefast}) and $L0$ gradient minimization smoothing (\cite{xu2011imagesmoothing}), without needing to know anything about the implementations of such filters, given only that input and output images can be accessed. This edge-aware smoothing could be used as a refining step on our image denoising/deconvolution output, to further suppress structures irrelevant to the vessel segmentation. Alternatively, the same framework could potentially be used to learn the behavior of commercial software, as demonstrated by the authors with their \emph{``copycat filter scheme''} using Photoshop\textsuperscript{\textregistered} filters \cite{xu2015deepedgeaware}. One could generate training data for deconvolution, for example, using some commonly used software package such as Imaris (Bitplane AG, Zurich, Switzerland) or AutoQuant (AutoQuant Imaging/Media Cybernetics), and integrate that ``knowledge'' into the same deep learning framework without having to jump between different software packages during the analysis of microscopy stacks. Lee \emph{et al.} \cite{lee2015recursive} argue that the recursive input from VD2D can be viewed as a modulatory `gate' in which feature activations for structures of interest are enhanced while activations unrelated to structures of interest are suppressed. Based on that assumption, it would be interesting to try to replace VD2D altogether, for example with a data-driven edge detection network such as N\textsuperscript{4}-fields \cite{ganin2014ntextasciicircum4fields} or holistically-nested edge detection \cite{xie2015holisticallynested}. N\textsuperscript{4}-fields \cite{ganin2014ntextasciicircum4fields} were shown to segment two-dimensional retinal vasculature from the DRIVE dataset \cite{staal2004ridgebased} better than the Structured Edge detector \cite{dollar2015fastedge}, although the performance was not compared to traditional vessel enhancement filters.
Alternatively, one could try to integrate recent vessel enhancement filters as structured layers \cite{ionescu2015matrixbackpropagation} within the ConvNet architecture, to incorporate some domain knowledge without having to resort to totally hand-crafted features. Recent vesselness filters of interest include the scale-invariant enhancement filter by Moreno \emph{et al.} \cite{moreno2015gradientbased} and the nearest neighbor-inspired detection of elongated structures by Sironi \emph{et al.} \cite{sironi2015projection}. Deep learning can be seen as a ``brute force'' method for vessel segmentation, as it does not explicitly model the geometrical relationships that exist between neighboring ``vessel pixels'', as pointed out by Sironi \emph{et al.} \cite{sironi2015projection}. The probability maps can have isolated erroneous responses, discontinuities, and topological errors, which are typically mitigated using post-processing techniques such as Conditional Random Fields (CRF, \cite{krahenbuhl2012efficient,chen2015semantic,lin2015efficient}), narrow-band level sets \cite{kohlberger2011automatic}, learned graph-cut segmentation \cite{wolz2013automated}, or Auto-Context \cite{tu2010autocontext}, among others. The authors of the ZNN framework \cite{lee2015recursive} chose to refine their segmentation of electron microscope stacks using a watershed-based algorithm developed by themselves \cite{zlateski2015imagesegmentation}, whereas recent work by Almasi \emph{et al.} \cite{almasi2015anovel} reconstructed microvascular networks from the output of active contours \cite{chan2001activecontours}, and Sironi \emph{et al.} \cite{sironi2015projection} trained an algorithm inspired by Nearest Neighbor Fields \cite{ganin2014ntextasciicircum4fields} to induce global consistency in the probability maps. Both of those recent works \cite{almasi2015anovel,sironi2015projection} can be seen as complementary, refining post-processing steps to our approach. At the moment we only train on individual stacks, but it is common in biomedical microscopy to image the same stack over multiple time points. We could extend our model to exploit the temporal dependency among multiple time points, as is done in 2D video processing, where time can be regarded as the third dimension. Huang \emph{et al.} \cite{huang2015bidirectional}, for example, employ a recurrent neural network (RNN) to model the temporal context of a video sequence for multi-frame super-resolution reconstruction. This could potentially improve the vessel segmentation, as vessels are typically not heavily deformed between successive stacks at typical acquisition intervals. The time-extended super-resolution approach should in theory improve the quality of the interpolation in the $z$-dimension when isotropic voxels are desired, compared to deep learning based single-frame super-resolution \cite{kim2015deeplyrecursive} and traditional B-spline interpolation \cite{indhumathi2012adaptiveweighted}. To the knowledge of the authors, there has been no attempt to improve the mesh reconstruction step using a deep learning framework. The closest example of deep learning in surface reconstruction was demonstrated by Xiong \emph{et al.} \cite{xiong2014robustsurface}, who used dictionary learning for surface reconstruction from a point cloud, outperforming state-of-the-art methods in terms of accuracy, robustness to noise and outliers, and geometric feature preservation, among other criteria.
Jampani \emph{et al.} \cite{jampani2015learning} demonstrated how optimal bilateral filter parameters can be learned for three-dimensional mesh denoising, which could thus be used as a post-processing step for surface reconstruction. This is an improvement over the bilateral filter mesh denoising algorithm implemented in the Computational Geometry Algorithms Library (CGAL, \href{http://www.cgal.org/}{http://www.cgal.org/}), which requires user-set parameters. A simplified schematic of the components for joint optimization is shown in \figref{Simplified-end-to-end-pipeline}. In our proposed approach we have only focused on the segmentation part, whereas in the optimal case we would have training data for all the different phases of the image processing pipeline. The schematic does not show the more sophisticated layers that could be embedded inside more general convolutional networks. For example, Ionescu \emph{et al.} \cite{ionescu2015matrixbackpropagation} demonstrated how to backpropagate through global structured matrix computations such as normalized cuts or higher-order pooling. The training of normalized cuts within a deep learning framework is similar to the approach taken by Turaga \emph{et al.} \cite{turaga2009maximin} for optimizing the Rand index with simple connected component labeling (MALIS, which is to be implemented in the ZNN framework used by us). The inclusion of such global layers was shown to increase segmentation performance compared to more generalized deep networks.

\begin{figure*}
\centerline{\includegraphics[width=1.5\columnwidth]{endToEndTraining}}
\caption{Example schematic of a fully trainable pipeline for vascular segmentation. \textbf{(top)} Segmentation pipeline for a single stack. The pipeline is divided into three sub-components: image restoration, vessel segmentation, and mesh reconstruction. The image restoration part could, for example, consist of a joint model for denoising \cite{vemulapalli2015deepgaussian}, deconvolution \cite{xu2014deepconvolutional}, interpolation (super-resolution) \cite{kim2015deeplyrecursive}, inpainting \cite{cai2015blindinpainting}, motion artifact correction, and image-based spectral unmixing if multiple dyes were used.\protect \\
\textbf{(bottom)} Segmentation pipeline for a stack with multiple time points. The added temporal support is needed to estimate motion artifacts \cite{soulet2013automated}, and it is able to exploit the temporal dependency of the vasculature (i.e. vascular diameter and position change only gradually between successive time points), which in theory should improve the estimates of all the sub-components compared to the single-stack scheme, as is for example the case for super-resolution \cite{huang2015bidirectional}. If multiple dyes are used simultaneously, there is a potential problem of the dye signals ``leaking'' into other spectral channels, which needs to be mitigated computationally using, for example, some blind image separation technique \cite{abolghasemi2012blindseparation}. The spectral crosstalk correction could be done for a single stack, but here we assumed that more input data would allow more robust estimation of the mixed image sources (e.g. with fast independent component analysis \cite{himberg2003icassosoftware}).
\label{fig:Simplified-end-to-end-pipeline}}
\end{figure*}

\subsubsection*{Other libraries}
Currently, there is not much publicly available software for dense image segmentation of volumetric 3D data, so our choice was constrained to the GPU-accelerated Theano \cite{thetheanodevelopmentteam2016theanoa} and the CPU-accelerated ZNN \cite{zlateski2015znn}; we chose the ZNN framework for our vessel segmentation pipeline. The Caffe-derived DeepLab \cite{chen2015semantic,papandreou2015weaklyand}, with both CPU and GPU acceleration options, did not support efficient 3D ConvNets, as was the case with Caffe itself \cite{jia2014caffeconvolutional}, as benchmarked by Jampani \emph{et al.} \cite{jampani2015learning}, for example. The CPU-accelerated ZNN was shown to have efficient computational performance compared to the GPU-accelerated Theano \cite{zlateski2015znn}, and considering the recent price drop of the Intel Xeon Phi\textsuperscript{TM} Knights Corner generation of accelerator cards and the introduction of the supposedly more user-friendly Knights Landing generation, our choice of implementation should be relatively easy to accelerate in the future. The recently published Python library Keras (\href{http://keras.io/}{http://keras.io/}) functions as a high-level abstraction layer for either Theano or TensorFlow \cite{rampasek2016tensorflow}, so that the researcher can focus on the ideas and flexibly switch the underlying backend as one wishes.

\subsection{Connection to other software frameworks}
Our vessel segmentation pipeline essentially replaces the previously used handcrafted vesselness filters (e.g. \cite{frangi1998multiscale,turetken2012automated,moreno2015gradientbased}), while still requiring a refining segmentation algorithm for the ZNN output, as the output is not a binary-valued mask but rather a real-valued probability map. Sumbul \emph{et al.} \cite{sumbul2014automated} used connected component clustering (\texttt{bwlabeln} of Matlab; union-find algorithm, \cite{fiorio1996twolinear}) with morphological filters to refine the ZNN output for retinal ganglion cell (RGC) arbors, while the most recent paper with ZNN \cite{lee2015recursive} compared clustering to a more sophisticated watershed-based segmentation \cite{zlateski2015imagesegmentation} for segmenting neuronal boundaries from EM stacks. Our work can also be seen as a pre-processing step for the morphological reconstruction of vessel networks in the mesh domain. The output of our pipeline could, for example, be used as an input to the mesh reconstruction pipeline of the Python-based open source Vessel Modeling Toolkit (VMTK, \href{http://www.vmtk.org/}{http://www.vmtk.org/}) and its inexpensive graphical front-end VMTKLab (\href{http://vmtklab.orobix.com/}{http://vmtklab.orobix.com/}). This would be a more robust segmentation pre-processing step than the ones provided by VMTK, which offers the following four vessel enhancing filters: 1) Frangi's method \cite{frangi1998multiscale}, 2) Sato's method \cite{sato19973dmultiscale}, 3) the Vessel Enhancing Diffusion Filter \cite{enquobahrie2007vesselenhancing}, and 4) vessel enhancing diffusion \cite{manniesing2006vesselenhancing}, with Frangi's method being the default option. The vessel enhancing filter works as a pre-processing step for VMTK's level set based vessel segmentation, before a Marching Cubes algorithm derivative \cite{lorensen1987marching} is run for mesh reconstruction.
For researchers who are most comfortable using graphical tools such as Imaris (Bitplane AG, Zurich, Switzerland) or the open-source ImageJ/FIJI platform \cite{schindelin2015theimagej}, the proposed approach can be seen as an automatic pre-processing step that improves the performance of the subsequent manual processing steps. For example, the two-class (vessel and non-vessel) segmentation in Imaris by \cite{lindvere2013cerebral} required many user-supplied intensity thresholds, which could have been automated with our ConvNet-based approach, with the remaining steps for graph reconstruction done with the existing pipeline.

\subsection{2-PM/Microscopy specific suggestions}
In addition to optimizing our algorithm, imaging parameters should also be carefully chosen to facilitate vessel segmentation. We are interested in quantifying the degree of blood-brain barrier opening (BBBO) following focused ultrasound stimulation \cite{cho2011twophoton,nhan2013drugdelivery,burgess2014analysis}. Experimentally, this is achieved by injecting a fluorescent dextran into the systemic vasculature, and then measuring the difference in fluorescence intensity between the vessels (foreground) and the surrounding tissue during BBBO \cite{nhan2013drugdelivery,burgess2014analysis,yoon2015invivo}. Thus, by the nature of the experiment, we are making the task harder for the segmentation network, as the edges between the vessels and the background become progressively blurred. One way to counter such a loss of contrast is to quantify the BBBO using two vascular dyes simultaneously: one that readily leaks out of the vessels upon blood-brain barrier disruption, and another with a high molecular weight that leaks out less. An alternative to using high-molecular-weight dextrans is to use quantum dots, which have narrower emission spectra for reduced dye crosstalk \cite{wegner2015quantum} and less leakage from vessels; quantum dots have already been used to study the tumor vasculature \cite{stroh2005quantum}. Another option is to use the Alexa Fluor 633 dye, which selectively labels the walls of arteries greater than 15 $\mu$m in diameter \cite{shen2012anarteryspecific}. This would make vessel segmentation easier, as the `leakage' channel (with the dextran) and the `vessel' channel (with the labeled vessel walls) can be analyzed separately. Recently, multiphoton fluorescent dyes with longer emission and excitation wavelengths \cite{oheim2014newredfluorescent,kim2015twophoton} have been gaining popularity due to their better transmission through biological tissue, yielding improved penetration depths and signal-to-noise ratios (SNRs) \cite{smith2009bioimaging,horton2013invivo}. Another promising, yet not commonly employed, technique is to increase the excitation laser wavelength up to 1,700 nm \cite{horton2013invivo} and switch to three-photon excitation. This not only improves depth penetration, but also allows better optical sectioning due to the higher non-linearity, with $z^{4}$ attenuation away from the focal plane instead of the $z^{2}$ attenuation of the two-photon regime, where $z$ is the distance \cite{horton2013invivo}. This reduces noise from out-of-focus planes and tissue autofluorescence \cite{blab2001twophoton}.
In future versions of our deep learning framework, we would like to simultaneously use dyes excited via both two-photon and three-photon processes, so that the crosstalk in the $z$-dimension is minimized for the three-photon dye, allowing it to serve as the ground truth for the super-resolution training (see \figref{Simplified-end-to-end-pipeline}) of the two-photon dyes. Likewise, the improved SNR obtained with a longer-wavelength dye and/or three-photon microscopy could be used as the ground truth for the denoising block when denoising shorter-wavelength fluorescent dyes. Another way to improve SNR is to correct the optical aberrations caused by brain tissue in real time using adaptive optics \cite{booth2014adaptive}. The use of adaptive optics originated in astronomy \cite{babcock1953thepossibility}, where the correction of aberrations caused by the atmosphere gave astronomers better image quality. Ji \emph{et al.} \cite{ji2012characterization} demonstrated that the increase in SNR for \emph{in vivo} calcium imaging was especially significant at greater depths. The better image quality with adaptive optics could be used as the ground truth for the deconvolution block (see \figref{Simplified-end-to-end-pipeline}), with the stack acquired without adaptive optics as the training data. Ideally, one could combine all the above methods for optimized imaging quality. \subsubsection*{Physiological refinement} In the architecture proposed here, we did not explicitly try to further refine the segmented vasculature into subclasses, but rather simply differentiated vessel and non-vessel voxels. There has been some work devoted to separating arteries from veins, either using computational techniques \cite{mehrabian2012aconstrained,estrada2015retinal} or using fluorescent labels specific to arteries, such as the Alexa Fluor 633 used by Shen \emph{et al.} \cite{shen2012anarteryspecific}. In the future, we would like to extend our network to differentiate arteries from veins by acquiring training data using such an artery-specific dye concurrently with a fluorescent dextran that would label the entire vascular network. \subsubsection*{Extension to other medical applications} In our ``vanilla network'' (see \ref{sub:Refinements-to-networks}) we did not include any vasculature-specific optimization, and instead leveraged the ability of a deep learning network to learn the relevant features itself rather than relying on handcrafted features. Thus, the same network initially proposed for electron microscope image segmentation \cite{sumbul2014automated,lee2015recursive} can be extended to other applications, as demonstrated here for volumetric two-photon vasculature image segmentation. To extend the framework to other applications, annotated training data is needed to train the network for the given task. To work with vasculature datasets such as VESSEL12 \cite{rudyanto2014comparing}, it would be sufficient to take our pre-trained network and fine-tune the model, training with a small learning rate, rather than having to learn from scratch, as is typically done in specific image classification tasks exploiting a network pre-trained on a broader dataset \cite{carneiro2015unregistered,zhang2015deepmodel}. This is known as transfer learning or domain adaptation, depending on the marginal data distribution \cite{patricia2014learning}.
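As a hedged illustration of what such fine-tuning could look like, written against the Keras~2 API mentioned earlier rather than our released ZNN/Matlab code; the weight file, layer split, and hyperparameters below are all hypothetical:

\begin{verbatim}
import numpy as np
from keras.models import load_model
from keras.optimizers import SGD

# Hypothetical file with weights pre-trained on a broader or
# tubular dataset; not part of our released code.
model = load_model('vessel_pretrained.h5')

# Freeze the early feature-extraction layers; re-train the rest.
for layer in model.layers[:-2]:
    layer.trainable = False

# A small learning rate gently adapts the pre-trained features
# instead of re-learning them from scratch.
model.compile(optimizer=SGD(lr=1e-4, momentum=0.9),
              loss='binary_crossentropy')

# Placeholder volumetric patches, (samples, z, y, x, channels);
# real training would use annotated two-photon stacks.
x_new = np.random.rand(8, 32, 32, 32, 1)
y_new = np.random.randint(0, 2, size=(8, 32, 32, 32, 1))
model.fit(x_new, y_new, batch_size=4, epochs=10)
\end{verbatim}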
In practice, for vascular segmentation, the transfer learning approach corresponds to a situation where a network trained on a tubular dataset such as DIADEM \cite{brown2011thediadem,peng2015fromdiadem} is used as a basis and fine-tuned using a limited number of multiphoton microscopy samples. Domain adaptation would correspond to a situation where we had trained our network to segment vasculature using some imaging modality other than multiphoton microscopy, in which the vasculature (foreground) itself might have a similar appearance to multiphoton microscopy, but the background from which we try to segment the vasculature would be different. Xie \emph{et al.} \cite{xie2015hybridcnn} combined a ConvNet with a traditional dictionary-learning approach for domain adaptation, which was able to exploit the local discriminative and structural information more efficiently than a ConvNet alone. This is of relevance to us, as we could take the unsupervised dictionary-based learning approach for vessel stacks proposed by Sironi \emph{et al.} \cite{sironi2015learning} and combine it with our ConvNet-based approach to exploit the large number of unlabeled vessel stacks. In medical applications, there has been some effort to circumvent the high annotation cost by exploiting auxiliary data such as textual reports \foreignlanguage{american}{\cite{schlegl2015predicting,shin2015interleaved}, or image-level labels} \cite{kraus2015classifying} (i.e. whether the whole stack/slice contains a vessel or not). This type of learning is known as weakly-supervised segmentation, and understandably it cannot reach the segmentation performance of fully supervised learning with pixel-level ``strong'' annotations. Hong \emph{et al.} \cite{hong2015learning} recently demonstrated that the gap between fully supervised and weakly-supervised learning can be reduced compared to previous approaches by exploiting a pre-trained ImageNet model for transfer learning with weak labels. In multiphoton microscopy, it is typically not possible to use whole-image labels, as the vasculature is usually so dense that there are few empty slices with no vessel labels. Sometimes in practice the dye loading is unsuccessful, or there are technical glitches, and these acquired empty stacks could be used to characterize the noise characteristics of non-vessel areas. \subsection{Open-source code, reproducibility} \label{sub:Open-source-code,-reproducabilit}We share our annotated two-photon vasculature dataset with the scientific community to address the lack of standardized datasets for multiphoton microscopy. We believe that part of the reason for the lack of published work on volumetric vessel segmentation is the lack of suitable training data, with most biomedical image segmentation efforts being directed to fields such as electron microscopy \foreignlanguage{american}{\cite{wu2015aniterative,lee2015recursive,maitin-shepard2015combinatorial,ronneberger2015unetconvolutional}} and various clinical applications \foreignlanguage{american}{\cite{havaei2015braintumor,stollenga2015parallel,schlegl2015predicting,dubrovina2015computational}} where training data is readily available. We want to be part of creating a cultural shift from the independent efforts of research groups toward open-source and collaborative neuroscience as datasets get larger and more complex \cite{freeman2015opensource,gao2015onsimplicity}, as well as ensuring that our framework can be easily reproduced and developed further \cite{piccolo2015toolsand}.
In the future, we would like to move away from the proprietary Matlab environment to fully open-source code in Python as well. \section{Conclusion} We have proposed a deep learning based framework for two-class segmentation (vessel and non-vessel) of vascular networks obtained via two-photon microscopy from mouse cortex and human squamous cell carcinoma tumors. We have made the Matlab code available, based on the open-source ZNN framework \cite{lee2015recursive,zlateski2015znn}. In contrast to GPU-accelerated frameworks such as Theano \cite{thetheanodevelopmentteam2016theanoa}, ZNN is optimized to run on CPU, while reaching performance relatively similar to GPU-accelerated approaches \cite{zlateski2015znn}. We have also made our training set freely available to address the lack of an annotated reference dataset for multiphoton microscopy vasculature segmentation. We hope that this will inspire other research groups to share their vasculature datasets, as well as to improve our proposed framework. Our future work will focus on enhancing the computational performance and accuracy of the network for multiphoton microscopy vessel segmentation. \subsection*{Acknowledgements} We would like to thank Sharan Sankar for his work as a summer student writing wrappers for various ITK C++ functions. \scriptsize{ \bibliographystyle{plainurl_PT} \addcontentsline{toc}{section}{\refname}
\section{Introduction: the glueball as a simple Yang-Mills concept} \label{intro} By ``glueballs'' it is broadly understood that we mean the eigenstates of an appropriate Hamiltonian derived from the pure Yang-Mills Lagrangian density, \begin{equation}\label{YM} {\mathcal L}_{YM} = - \frac{1}{4} F^a_{\mu\nu} F^{a\mu\nu} \ , \end{equation} with $F^{a}_{\mu\nu}=A^a_{\nu,\mu}-A^a_{\mu,\nu}+gf_{abc} A^b_{\mu} A^c_{\nu}$. If the symmetry group is Abelian, there are no interaction terms (no $f_{abc}$ group structure constants), so that neither photon-photon nor multiphoton states bind. There is no such thing as ``photonballs'' in the absence of matter. On the contrary, because the noncommutative $SU(3)$ Yang-Mills theory underlying Quantum Chromodynamics is strongly coupled and, by all evidence, confining, the (colored) one-gluon states such as $\int d^3x f(x) A^a(x)|0\rangle$ are not part of its spectrum (they are presumably removed to infinite energy). The spectrum must then be formed of color-singlet two- or multi-gluon states, or glueballs (sometimes ``gluonium'' is used for the particular case of exactly two gluons, in analogy with $q\bar{q}$ quarkonium). In conventional lattice gauge theory~\cite{Munster:2000ez}, space-time is rotated to Euclidean four-dimensional space, then discretized at intervals of size $a$, and a change of variables from the Yang-Mills $A^a_\mu$ fields to the parallel-transporter links between two lattice sites, $U(x+a,x)$, is performed. If we could lift the discretization, we could interpret this link variable as a short Wilson line in the lattice direction in which the four-vector $a$ points, \begin{equation} U(x+a,x) = \mathrm{P} \exp\left( ig\int_x^{x+a} A^b_{\mu}T^b du^\mu \right) \end{equation} with $u^\mu \in [0,a^\mu]$ and $T^b$ the $3\times 3$ color matrices. Four such links in a closed square of sides $a$ and $b$ of equal length form the gauge-invariant plaquette, $\tilde{U}_{\mu\nu}(x):= U(x,x+b) U(x+b,x+a+b) U(x+a+b,x+a) U(x+a,x)$, from which Wilson's discretized version of Eq.~(\ref{YM}) can be built, \begin{equation} \mathcal{L}_{\rm Wilson} = -\frac{2}{g^2} {\rm Re}({\rm Tr}(\tilde{U}_{\mu\nu})) \end{equation} (The action is obtained by summing over all possible plaquettes, which in the limit $a\to 0$ amounts to integrating the Euclidean continuation of Eq.~(\ref{YM}).) The mass of the eigenstates (glueballs) of this discretized theory can, in an unsophisticated analysis, be computed from expectation values of two spatial plaquettes separated by a large time interval $t$, \begin{equation} \langle {\rm Tr}(U(t=0)) {\rm Tr}(U(t)) \rangle \propto e^{-m_Gt}\ \end{equation} (which is the Euclidean version of $e^{iHt}$ projected over the lowest eigenvalue, which survives the exponential decay for the longest time). Excited states need to be obtained with smart subtraction of the fundamental one, but this is now routinely done. The resulting glueball spectrum is obtained as a function of the lattice energy scale $a^{-1}$. To evaluate this, another observable, typically the static potential between color charges, has to be computed and compared with an experimental observable (typically the quarkonium string tension pseudoobservable extracted from spectroscopy with a potential interpretation). There are numerous systematic effects that are addressed in actual lattice computations~\cite{Liu:2000ce}. An entirely different problem, open to date, is to locate these $G$ states in the physical world where gluons (radiation) are coupled to quarks (matter).
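Before turning to that problem, a back-of-the-envelope illustration of the correlator method just described (a toy sketch with a synthetic correlator, not production lattice code): the glueball mass in lattice units can be read off from the plateau of the effective mass $m_{\rm eff}(t)=\ln[C(t)/C(t+1)]$.

\begin{verbatim}
import numpy as np

def effective_mass(C):
    """m_eff(t) = ln[C(t)/C(t+1)] in lattice units; for
    C(t) ~ exp(-m*t) at large t it plateaus at a*m_G."""
    C = np.asarray(C, dtype=float)
    return np.log(C[:-1] / C[1:])

# Synthetic correlator: ground state plus one excited state,
# with illustrative (not measured) lattice masses 0.7 and 1.4.
t = np.arange(12)
C = 1.0 * np.exp(-0.7 * t) + 0.5 * np.exp(-1.4 * t)
print(effective_mass(C))  # decreases toward the 0.7 plateau
\end{verbatim}

The ``smart subtraction'' needed for excited states amounts, roughly speaking, to diagonalizing a matrix of such correlators built from several interpolating operators.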
This topical review, which does not intend to be exhaustive nor historical, focuses mostly on that problem. The interested reader can delve into the very extensive literature and standing reviews of the field~\cite{Crede:2008vw,Mathieu:2008me}. Our purpose here is to give a quick topical overview of some selected avenues for glueball identification that we find particularly interesting, promising or classic, presenting alleys of investigation that theorists have suggested. At various points of the article I use results from Effective Lagrangians for hadrons, from the Coulomb-gauge constituent picture, from QCD sum rules, from the flux tube model, or from the AdS-CFT approach. A quick search of the Inspirehep database reveals that over 1600 scholarly articles contain in their titles one of the words ``glueball'', ``gluonium'' or their plurals. I have purposefully tried to keep the reference list near 100 to contain the review. I have also chosen to focus on the more contemporary developments (basically, the latest ones come from data taking at BES-III and TOTEM) and, given the nature of this EPJST volume, to deemphasize heavier gluonia in the charmonium region in favor of the few glueballs that are lighter than the $J/\psi$. I have chosen to discuss each of the three quantum number combinations available for that lightest mass-range, $0^{++}$, $2^{++}$ and $0^{-+}$; because the status of knowledge is different for each of them, and because they may be of interest for different physics phenomena, the review treats them asymmetrically. Also, for concision, I try not to repeat material: for example, since I discuss the mixing and width of the scalar glueball, I do not cover this for the other two glueballs; because Regge theory is most important for the tensor glueball, I do not discuss the Regge trajectories that may be of interest for the other two; and the same principle applies to the rest of the review. \section{Pure gauge theory (or quenched approximation)} \label{sec:intro} \subsection{Lattice spectrum} Following the lattice computations of the late 1990s and early 2000s, most of the community became convinced that the lightest (scalar) glueball was to be searched for among the $f_0$ mesons in the 1.3-2 GeV region~\footnote{ There is a minority view that the $\sigma$-meson has a Fock-space component of the lowest scalar gluonium, as hinted by early bag-model computations and more elaborate QCD sum rules. The approach accommodates its large coupling to $\pi^+\pi^-$ and to (subthreshold) $K^+K^-$ by invoking a large violation of the OZI rule at these lowest energies. (By contrast, good satisfaction of the OZI rule in the $\sim 1.7$ GeV energy region suggests sizeable couplings to $\eta^{(')}\eta^{(')}$ pairs with large glue content.) }. Figure~\ref{fig:glueballspectra} shows the evolution of the $C$-even (two-gluon like) spectrum in the last twenty-five years. Around 1995 the lattice gauge theory prediction was quite uncertain (see the width of the boxes in the left plot; the lines come from the model approach in the next subsection~\ref{subsec:constgluons}, the NCSU Coulomb-gauge Hamiltonian), but it has become quite accurate with the years, as seen in the right plot. \begin{figure} \includegraphics[width=\columnwidth]{GlueballsJuntas.pdf} \caption{\label{fig:glueballspectra} Change of the computed glueball spectrum in 25 years.
Left (from~\cite{Szczepaniak:1995cw}, with APS permission): the boxes were the lattice computations at the time, whereas the narrow black lines stand for the NCSU Coulomb-gauge BCS+Tamm-Dancoff model calculation. Right: the most recent lattice computation~\cite{Athenodorou:2020ani} (black lines) now has much reduced uncertainties. The qualitative comparison of the spectra is reasonable. I have marked, in the right graph, the divide between charmonium and light-quark spectroscopy, as well as the two-glueball continuum of pure YM theory. } \end{figure} We can, with quite some certainty, state that the glueballs expected below 3 GeV have the $J^{PC}$ quantum numbers of the $f_0$ family ($0^{++}$), the $f_2$ family ($2^{++}$) and the $\eta$ one ($0^{-+}$). None of these states is faneroexotic (manifestly exotic), instead having conventional $q\overline{q}$ quantum numbers. In the charmonium region and above there can be exotic-quantum-number~\cite{Meyer:2010ku} glueballs, but they will compete with hybrid $q\overline{q}g$ mesons~\cite{LlanesEstrada:2000hj,Soto:2017one}, tetraquarks and others. Since this topical review is intended for a volume dedicated to light quark physics, most of the discussion will concern the $f_0$, $f_2$ and $\eta$--like glueballs. \subsection{The gluon constituent picture} \label{subsec:constgluons} It is often stated that gluons are massless particles, and must be so because of gauge symmetry. This assertion is based on the lack of gauge invariance of a Proca-like Lagrangian density, which adds a term to Eq.~(\ref{YM}), \begin{equation} \mathcal{L}_M = \frac{m_g^2}{2} A^{a\mu} A^a_\mu\ . \end{equation} In this sense, yes, classical Yang-Mills theory cannot accommodate a gluon mass. Yet it is obvious that the gluon degree of freedom is dynamically gapped, because of the partly discrete nature of the hadron spectrum. If adding a massless gluon with $J^{PC}=1^{--}$ did not cost any energy, one could construct baryons of arbitrary quantum numbers with the same $940$ MeV mass of the proton! That is obviously not the case, with the lowest proton excitation being the $\Delta(1232)$. Thus, gluons need to satisfy a gapped dispersion relation brought about by the interaction terms in the quantum theory~\cite{Cornwall:1982zn} (and this leads directly to a discrete glueball spectrum). Examples of the phenomenon are easily borrowed from electrodynamics, \begin{equation} \omega(k)^2 = k^2 + m_g^2 \end{equation} with $m_g$ stemming from a plasma cutoff frequency in a conductive medium (this, in QCD, is deployed in heavy-ion collision studies, with $m_g^2 \propto \alpha_s T^2$ at finite temperature; see for example~\cite{Alam:1996wp}); or with $m_g$ arising from boundary conditions such as in a microwave cavity, which is deployed in the bag model of hadrons, also used to compute glueball spectra~\cite{Jezabek:1982ic}. For example, the Transverse Electric modes in a bag of radius $R$ have an energy given by \begin{equation} \tan (\omega R) = \frac{\omega R}{1-\omega^2 R^2}\ , \end{equation} with lowest mode (``mass'') equal to $\omega\simeq 2.74/R$ (and $4.5/R$ for a TM mode)~\cite{Lagerkvist}. Of course, the bag model, like other approaches containing hadron-external condensates, must face the inconvenience of the cosmological constant~\cite{Brodsky:2012ku}.
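Parenthetically, the lowest transverse-electric frequency just quoted is easy to check numerically: writing $x=\omega R$, the mode condition reads $\tan x = x/(1-x^2)$, whose lowest nontrivial root a standard bracketing search locates at $x\simeq 2.74$. A small sketch, assuming only SciPy and not any bag-model code:

\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

# TE-mode condition in a spherical bag, with x = omega * R:
# tan(x) = x / (1 - x^2)
f = lambda x: np.tan(x) - x / (1.0 - x**2)

# The lowest nontrivial root lies between 2 and 3, an interval
# free of singularities of both tan(x) and the right-hand side.
x0 = brentq(f, 2.0, 3.0)
print(x0)  # ~2.744, i.e. the lowest TE "mass" is omega ~ 2.74/R
\end{verbatim}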
Returning to mass-generation mechanisms, another well-known example is the Higgs mechanism, in which an additional field is used to break a global symmetry, with the resulting Goldstone bosons providing the longitudinal modes of the electroweak $W$ and $Z$ bosons, and their mass being given by the Higgs condensate. But in the context of Quantum Chromodynamics, the most popular approaches to describe the mass gap are based on many-body approximations to the strongly coupled gauge problem itself. For example, a Coulomb-gauge gap equation based on the confining Coulomb potential between gluons~\cite{Szczepaniak:1995cw}, \begin{equation} \omega_k^2 = k^2 +\frac{N_c}{4} \int \frac{d^3q}{(2\pi)^3} V_{\rm Coulomb}({\bf k}+{\bf q}) (1+ (\hat{\bf k}\cdot \hat{\bf q})^2) \frac{\omega_k^2-\omega_q^2}{\omega_k} \end{equation} provides a running gluon energy $\omega_k \simeq \sqrt{k^2 + m_g^2 e^{-(k/\kappa)^2}}$ and a canonically transformed vacuum/ground state $\arrowvert 0\rangle \to \arrowvert \Omega \rangle$ that approximates the exact QCD one. Instead of a gluon dispersion relation approaching a masslike constant at vanishing momentum, other authors employ one where the gluon ``mass'' diverges in the infrared, as variationally estimated by Feuchter and Reinhardt~\cite{Feuchter:2004mk} in accordance with Gribov-Zwanziger's confinement scenario, \begin{equation} \label{disprel2} \omega(k) = \sqrt{k^2 + \frac{M^4}{k^2}} \end{equation} where $M\simeq 880$ MeV also reproduces lattice glueball spectroscopy. This hadron rest-frame picture has, with quite some labour, been extended to the covariant Dyson-Schwinger+Bethe-Salpeter approach in Landau gauge~\cite{Meyers:2012ka,Huber:2020ngt,Sanchis-Alepuz:2015hma,Souza:2019ylx,Kaptari:2020qlt}. Yet an advantage of the Hamiltonian Coulomb gauge formulation is that the absence of a $J=1$ glueball in the low-lying spectrum is immediate to understand: Yang's theorem~\footnote{A recent well-known application thereof was to exclude $J=1$ for the Higgs boson, as soon as its decay $h\to \gamma\gamma$ was identified.} states~\cite{Lee:1981mf} that two identical transverse bosons of spin 1 each cannot couple to total $J=1$. Thus, if the low-lying glueball spectrum is dominated by $\arrowvert gg\rangle$ states in the Coulomb gauge formulation, where by construction $\nabla\cdot {\bf A}=0$ so that transversality is guaranteed, a spin-1 glueball is forbidden. This is by no means automatic in covariant approaches, such as the Landau gauge Bethe-Salpeter formulation, in which $\partial_\mu A^\mu = 0$ is not sufficient to implement Yang's theorem. A detailed dynamical mechanism must then be responsible for removing the $J=1$ glueball. Likewise, in the AdS-CFT approach to glueballs (which are thought to arise from a supergraviton spectrum in a theory dual to QCD) a light spin-1 glueball appears~\cite{Vento:2017ice} alongside the $0^{++}$ and $2^{++}$, though strong splitting, for reasons not totally clear to me, can raise the state with spin 1 to higher mass~\cite{Brower:2000rp}. The same inconvenience is present in constituent approaches in which the constituent gluons are treated as massive Proca spin-1 bosons: it is not easy to get rid of the $J=1$ glueball~\cite{Bicudo:2004tx,Mathieu:2008me}. Thus, the Coulomb-gauge dynamical mass generation picture remains a competitive contender to explain the low-mass lattice glueball calculation.
\newpage \section{Coupling to quarks and glueball width} The lattice glueball spectrum has also been looked at with unquenched QCD that includes dynamical quarks, for example in~\cite{Gregory:2012hu}. This group finds that the effect of including quarks in the simulation is to raise the masses of all the states, by up to 30\%. The scalar glueball is only lifted by 5\%, from the $1.71-1.73$ GeV of other calculations up to $1.8(6)$ GeV. Other computations cited therein, however, see the scalar glueball mass descending. Ultimately, in a full QCD calculation, all scalar $f^i_0$ mesons give a signal when computing scalar-scalar correlators, unless the matrix element $\langle \Omega \arrowvert \mathcal{O}_s \arrowvert f_0^i\rangle$ exactly vanishes, which is not to be expected in a theory of the strong interactions. One can speculate that this would be an explanation for the instability seen in such calculations. In the end, there is no such thing as ``unquenched glueballs''; at that point one is simply computing the full scalar meson spectrum. One thing that can be done, however, is to adiabatically track the fate of the pure Yang-Mills glueball pole as the coupling to quarks is slowly turned on. To my knowledge, such a calculation has not been carried out. The most interesting quantity that would come out of it would be a nonperturbative computation of the ``glueball'' width (at the end point, that of one of the $f_0$s). Naturalness suggests that $\Gamma_G\sim \Delta M_G$ (the real and imaginary parts of the glueball mass acquire contributions of the same order upon unquenching), so that $\Gamma_G\sim O(0.1)$ GeV is conceivable. The QCD sum rules approach employs a dispersive analysis with simple model elements to extract the glueball width. A standard analysis~\cite{Shuiguo:2010ak} would proceed by modeling the spectral function of QCD in the scalar channel \begin{equation} \Pi(q^2=s)=\int d^4x e^{iq\cdot x} \langle \Omega \arrowvert T \mathcal{O}_{\rm scalar} (x) \mathcal{O}_{\rm scalar} (0) \arrowvert \Omega \rangle \end{equation} as \begin{eqnarray} {\rm Im} \Pi (s) &=& \rho^{\rm had}(s) + {\rm Im} \Pi^{\rm pQCD}(s) \theta(s-s_0) \nonumber \\ &=& \sum_i^{\rm res} \frac{f_i^2 m_i \Gamma_i}{(s-m_i^2)^2 + m_i^2 \Gamma_i^2} + {\rm Im} \Pi^{\rm pQCD}(s)\theta(s-s_0) \end{eqnarray} with $f_i=\lambda^i_0 s \ \theta(m_\pi^2-s) + (\lambda^i_0 m_\pi^2 + \lambda^i_1)\theta(s-m_\pi^2)$ carrying a couple of fittable strength constants $\lambda^i_0$ and $\lambda^i_1$ to model the coupling of the QCD current to that hadron state, $f_i= \langle \Omega \arrowvert \mathcal{O}_{\rm scalar} \arrowvert f_0^i\rangle $, and the pQCD part computed in perturbation theory. This very rough model (note the Breit-Wigner approximation to the scalar mesons!) of the physical spectral function is then related via a dispersion relation to a spacelike-$q^2$ computation carried out in pQCD together with a classical instanton background. When the dust settles, a glueball width is extracted from the corresponding parameter $\Gamma_i$, which I elevate to table~\ref{tab:width}. The coupling to quarks is computed in perturbation theory through the pQCD elements, but this is a different approximation from the others discussed here, because that part of the computation takes place at unphysical or very large $q^2$, not in the soft hadron region.
A typical constituent-like computation of the glueball width into two mesons would proceed by evaluating the coupling in second-order perturbation theory, \begin{equation} g=\sum\int \langle \psi_G^* gg \arrowvert H_{\rm int}\arrowvert q\bar{q}g\rangle \frac{1}{M_G-E_{q\bar{q}g}}\langle q\bar{q} g \arrowvert H_{\rm int} \arrowvert q\bar{q}q\bar{q}\rangle \end{equation} through all intermediate states, which are hybrid mesons (the Tamm-Dancoff approximation glueball wavefunction $\psi_G^*$, as well as all the masses, need to be calculated beforehand, before the Feynman diagrams in $H_{\rm int}$ are included). Such an approximation is supposed to work better (but yield broader glueballs) the higher the mass $M_G$, because (a) the coupling constant $\alpha_s$ becomes smaller with increasing gluon momentum, so that perturbation theory is sounder, and (b) intermediate hybrid mesons are more abundant high in the spectrum, so some will always be near the energy shell $M_G$ in the decay. This was estimated for the scalar glueball at $M_G\simeq 1.8$ GeV~\cite{Bicudo:2006sd} and found to yield a relatively narrow state with $\Gamma_G\simeq 0.1$ GeV, with a larger $\pi\pi$ than $K\bar{K}$ component as demanded by phase space, as shown in table~\ref{tab:width}. The best known lattice computation~\cite{Sexton:1996ed}, in quenched approximation, proceeded by matching a three-point function between the scalar glueball current and two pseudoscalar currents $\bar{\psi} \gamma_5 \psi$. It found a $\sim 1.7-1.8$ GeV glueball of narrow width, $\Gamma_G=0.108(29)$ GeV, and, interestingly, seemingly asymmetric couplings favoring decays through the strange quark; this topic will be picked up again in subsection~\ref{sratherthanu} below. \begin{table} \caption{\label{tab:width} Theory estimates of the $f_0$-like scalar glueball width for approaches that place it in the $1.5-1.75$ GeV mass region, and experimental estimates of the scalar meson widths in the 1-2 GeV interval. The lattice and semiperturbative Coulomb model estimates include only two-body ($\pi\pi$, $K\bar{K}$, etc.) decays, so they are lower bounds to the total width. Overall, a narrow glueball with $\Gamma_G\sim 0.2$ GeV seems a plausible theory prediction (I do not list the additional 1.81 GeV structure in $\omega\phi$ since later analysis confirmed that a new resonance should also be manifest in $K\bar{K}$, which it is not, and that it is likely the same $f_0(1710)$ seen at a higher mass due to the $\omega\phi$ threshold distortion~\cite{Wang:2011tm,MartinezTorres:2012du}).} \begin{tabular}{|c|ccccc|} \hline Method & Sum rules & Lattice (quenched) & Coulomb-$gg$ & $G$-dominance & Flux-tube \\ \hline $\Gamma$ (GeV) & ${\bf 0.23(13)}$\cite{Shuiguo:2010ak} & {\bf 0.11(3)}~\cite{Sexton:1996ed} & {\bf 0.1}~\cite{Bicudo:2006sd} & $>${\bf 0.25-0.39}\cite{Burakovsky:1998zg} & ${\bf \sim 0.18}$~\cite{Iwasaki:2003cr} \\ \hline\hline Meson~\cite{Zyla:2020zbs} & $f_0(1370)$ & $f_0(1500)$ & $f_0(1710)$ & $f_0(2020)$? & \\ \hline $\Gamma_{\rm exp}$ & {\bf 0.2-0.5} & {\bf 0.11(1)} & {\bf 0.12(2)} & $\sim{\bf 0.4}$ & \\ \hline \end{tabular} \end{table} Not all approaches yield such narrow glueballs. Among standing calculations for a broad scalar glueball, I highlight a model computation~\cite{Burakovsky:1998zg} that employs a so-called ``glueball dominance hypothesis'' to reduce the parameter space of a mixing calculation of $G$, $s\bar{s}$ and $q\bar{q}$ light quarkonium at the level of the meson mass matrices.
Their characteristic hypothesis is that the different flavors of scalar quarkonium are not connected directly, but mix only through an intermediate glueball state, as inspired by large-$N_c$ ideas. The authors are also inspired by the flux tube and $^3P_0$ decay models. They do assume flavor-blind couplings, and, uncharacteristically, find a broad scalar glueball with $\Gamma=0.25$ GeV at least, and even above $0.39$ GeV. This is driven by the decay $f_0\to a_1\pi$, which accounts for half the width and is a dominant decay mode. Though these authors place the dominantly-glueball state mass just above 1.7 GeV and the $f_0(1710)$ is their preferred candidate, neither the width of this meson as later measured matches their expectations, nor has the $a_1\pi$ decay mode been listed yet. Flux-tube breaking arguments with $\Gamma_G \propto M_G$~\cite{Iwasaki:2003cr} naturally suggest that excited glueballs will be broader, in line with other types of mesons. \subsection{Exploiting symmetry in glueball decay and mixing} \label{subsec:symmetrymixing} Glueballs are much heavier than pseudoscalar and vector mesons, entailing several possible open strong decay channels. It is obvious that their decays are important to identify them, and this section therefore addresses some of them. Several groups~\cite{Rosenzweig:1981cu,Cheng:2006hu,McNeile:2000xx,Narison:1996fm} have addressed the configuration mixing of glueballs with other ordinary or exotic mesons. It is clearly necessary to have criteria which bear on the two topics of glueball identification and mixing, but also to be able to theoretically define that mixing. The Coulomb gauge QCD formulation offers a full Fock expansion of a meson that includes only quarks and (``physical'') transverse gluons, schematically \begin{equation} \label{Fock} \arrowvert M \rangle = \sum\int \left( \alpha_1 \arrowvert q\bar{q} \rangle + \alpha_2 \arrowvert gg \rangle + \alpha_3 \arrowvert q\bar{q}g \rangle + \alpha_4 \arrowvert q\bar{q} q\bar{q} \rangle + \alpha_5 \arrowvert ggg \rangle + \dots \right) \ . \end{equation} With a well-defined canonical transformation~\cite{LlanesEstrada:1999uh} one can choose $g$ and $q$ to correspond to the current fields in the free Lagrangian, or to rotated fields whose quanta are massive-like constituents due to the interactions. The inconvenience of this intuitive expansion is the difficulty of accessing it experimentally, because of its frame (and gauge) dependence: the similar light-front gauge expansion useful in subsec.~\ref{scalartensor} below will have different $\alpha_i$ coefficients. Either of them could in principle be accessed by adequately projecting lattice correlators, but this has not been performed. What the lattice can more easily provide is a proxy to that expansion, the relative strengths with which different composite field operators couple $\arrowvert M\rangle$ and the vacuum $\arrowvert \Omega \rangle$. This has the inconvenience of including longitudinal gauge modes/scalar potentials, and components of different representations of the rotation group packed inside the representations of the Lorentz group and its lattice symmetry reduction. Because of the difficulty, other methods have been devised. One is to extract gauge-independent content from the large-$N_c$ expansion around $N_c=3$~\cite{Cohen:2014vta}.
While interesting, one issue there is that large-$N_c$ only sorts wavefunction configurations into classes: for example, both conventional $q\bar{q}$ and hybrid mesons have widths $\Gamma_{q\bar{q}} \propto \frac{1}{N_c}\propto \Gamma_{q\bar{q}g}$, so they cannot be distinguished~\footnote{A further ambiguity is in the definition of a tetraquark: how does $q\bar{q}q\bar{q}$ with $2=3-1$ pairs generalize to more than three colors, as 2 or as $N_c-1$ pairs? The ambiguity is resolved in~\cite{Cohen:2014vta}.}. In the end, glueballs are expected to be narrower, with $M_G\propto 1$ and $\Gamma_{G} \propto \frac{1}{N_c^2}$ instead, so they can be separated with lattice data for different $N_c$ values. But concerning the physical world, the only statement is that glueballs are qualitatively narrower than conventional mesons. Finally, effective hadron models such as shown in subsection~\ref{subsubsec:mix} study the mixing of an additional singlet particle to which some additional ``glueball''-like dynamics is ascribed based on underlying physics, and it is in this sense that most mixing analyses are presented. The connection of the information gained to a microscopic expansion such as Eq.~(\ref{Fock}) is contained in that dynamical statement only. \subsubsection{Flavor-blind quark-gluon vertex} The QCD Lagrangian features a flavor $SU(3)$ symmetric quark-gluon vertex: all flavors couple equally to the gluon. This has been a motivation to write flavor-symmetric chiral Lagrangians, such as those used in many mixing analyses, some examples being recalled in the next subsection~\ref{subsubsec:mix}. If glueballs decay/rehadronize via a chain $gg\to gq\bar{q}\to q\bar{q}q\bar{q} \to MM$, which is disputed~\cite{Chao:2005si}, the strong dynamics is not bound to disrupt flavor symmetry much, and, for example, the glueball coupling to $\pi\pi$ is expected to be similar to that to $K\bar{K}$. After accounting for phase space, a 1.7-1.8 GeV glueball would have a width around 0.1 GeV and $\pi\pi$ would be dominant~\cite{Bicudo:2006sd}. This flavor symmetry in the couplings is expected for most glueballs in any case, but much of the analysis in the next subsection~\ref{subsubsec:mix} assumes that it particularly applies to the scalar glueball. On the contrary, should the dominant decay mechanism be $gg$-$q\bar{q}$ mixing, chiral symmetry is more important for the scalar glueball, badly breaking flavor symmetry; this is quickly overviewed in subsection~\ref{sratherthanu}. \subsubsection{Exploiting flavor symmetry in a mixing analysis} \label{subsubsec:mix} A very well known 1995 analysis of Crystal Barrel data by Amsler and Close~\cite{Amsler:1995td}, among other works, gave support to the hypothesis that $f_0(1500)$ was largely the $0^{++}$ glueball $G$; this is therein introduced as an additional singlet state, coupling to the two-pseudoscalar meson pairs according to \begin{eqnarray} \langle G \arrowvert H_{\rm int} \arrowvert \pi\pi \rangle =1 &\phantom{multimixing} & \langle G \arrowvert H_{\rm int} \arrowvert K\bar{K} \rangle :=R \nonumber \\ \langle G \arrowvert H_{\rm int} \arrowvert \eta\eta \rangle =\frac{1+R^2}{2} &\phantom{multimixing} & \langle G \arrowvert H_{\rm int} \arrowvert \eta\eta' \rangle =\frac{1-R^2}{2} \end{eqnarray} with the limit of exact flavor $SU(3)$ symmetry reached by setting $R=1$, which, after accounting for the charge multiplicity, leads to the decay proportions \begin{equation} G\to \pi\pi: \eta\eta : \eta \eta': K\bar{K} = 3:1:0:4 \ .
\end{equation} (An accurate prediction would additionally need to account for the difference in phase space.) The authors then concluded that $f_0(1500)$ had decay features consistent with the glueball assignment, though a small proportion of this singlet would also be mixed into the $f_0(1370)$. More sophisticated analyses in the next two decades proceeded by constructing full chiral Lagrangians including the additional glueball-singlet state. Among the many studies I have selected two representative ones~\cite{Giacosa:2005zt,Janowski:2014ppa}, whose outcomes are shown in figure~\ref{fig:mixing}, including the three scalar states $f_0(1370)$, $f_0(1500)$ and $f_0(1710)$, presumed a mixture of three particles with flavor couplings characteristic of $\frac{u\bar{u}+d\bar{d}}{\sqrt{2}}$, $s\bar{s}$ and a singlet $G$ taken to be the glueball. \begin{figure}\begin{center} \includegraphics[width=0.4\columnwidth]{PieGiacosamixing1.png}\hspace{-0.4cm} \includegraphics[width=0.4\columnwidth]{PieGiacosamixing3.png} \includegraphics[width=0.4\columnwidth]{PieGiacosaDilaton.png} \end{center} \caption{\label{fig:mixing} Example computations of glueball-like and quarkonium-like mixing. From inner to outer rings, the composition of the $f_0(1370)$, $f_0(1500)$ and $f_0(1710)$ is given. Proceeding counterclockwise from the $OX$ axis, the slices correspond to $\frac{u\bar{u}+d\bar{d}}{\sqrt{2}}$, $s\bar{s}$ and the glueball. The top plots correspond to the first and third solutions, respectively, of Giacosa {\it et al.}~\cite{Giacosa:2005zt}, while the bottom plot shows the mixing resulting from a glueball-as-dilaton chiral model~\cite{Janowski:2014ppa}. } \end{figure} The top plots of figure~\ref{fig:mixing}, produced with data from~\cite{Giacosa:2005zt}, suggested that indeed most of the glueball is spanning the state $f_0(1500)$, as also suggested by Amsler and Close. The difference is that, while the left top plot assumes that the direct couplings $G\to\pi\pi,K\bar{K}$ are suppressed and $0^-0^-$ glueball decay proceeds by mixing with conventional quarkonium (exactly the opposite case will be discussed in subsection~\ref{sratherthanu} below), the right plot allows for direct decay. In the latter case, some of the glueball component is shifted to the lightest $f_0(1370)$. Other analyses with similar flavor symmetry content and experimental data offer a quite different picture, such as that of~\cite{Janowski:2014ppa}, which assigns most of the glueball to the $f_0(1710)$ (bottom plot in figure~\ref{fig:mixing}). Conventional mesons are interpreted in the context of a linear sigma model (a specific realization of chiral dynamics, less general than Chiral Perturbation Theory) to reduce parameter space, with $q\bar{q}, s\bar{s}\sim \sigma_i$, and the $U(3)_L\times U(3)_R$ chiral invariant effective Lagrangian being constructed from a field multiplet that incorporates these scalar and the pseudoscalar mesons, $\Phi =\sum(S_i+iP_i)\frac{\lambda_i^{\rm Gell-Mann}}{2}$. In that model, the extra glueball state is not only assumed to be a flavor singlet, but is also endowed with additional dynamics stemming from the assumption that it reflects the loss of dilatation symmetry of the Yang-Mills Lagrangian in Eq.~(\ref{YM}).
This is implemented by introducing an auxiliary effective dilaton field $G$ with Lagrangian \begin{equation} {\mathcal{L}}_{\rm dilaton} = \frac{1}{2} (\partial_\mu G)^2 - \frac{m_G^2}{\Lambda^2} \left( \ln \left( \frac{G}{\Lambda}\right) -\frac{1}{4}\right)\frac{G^4}{4} \end{equation} with minimum at $\langle G \rangle =\Lambda$ and a particle excitation above it with mass $m_G$. If the glueball/dilaton is further assumed to saturate the trace of the dilatation current brought about by quantum effects (trace anomaly), the authors obtain a relation between $\Lambda$ and $m_G$, which become interdependent. For a ``narrow'' particle-like glueball in the 1.5-1.7 GeV energy range, $\Lambda\sim 3$ GeV, whereas for a more reasonable $\Lambda\sim 0.4$ GeV in the hadronic regime, the glueball becomes a very broad structure. In the first case, the pattern of decays of the scalar mesons is best fit if the mixing angles (which are in these approaches independent model parameters) are as in the bottom plot of figure~\ref{fig:mixing}, with the $f_0(1710)$ predominantly the glueball. In the second case, at odds with the large-$N_c$ expectation, my interpretation is that we would think of the glueball as a background, and the glueball would not correspond to any of the experimentally studied $f_0$ mesons. \subsubsection{Flavor-symmetry breaking decay of the scalar glueball} \label{sratherthanu} Building on earlier work, Chanowitz~\cite{Chanowitz:2005du} conjectured, on the basis of an all--orders perturbative QCD computation, that the scalar glueball couples more strongly to $K\bar{K}$ than to $\pi\pi$ (as suggested by the suppression of its coupling to $q\bar{q}$ being proportional to $m_q$). The argument rests on the conservation of chirality by QCD without quark masses: then, the only appearance of the quark spinor in the Lagrangian is $\bar{\psi}_L \gamma^\mu T^a \psi_L A^a_\mu + (L\to R)$. When the two gluons annihilate into two quarks (thus, the matrix element corresponds to gluonium/quarkonium mixing), the created quark and antiquark have the same chirality at all orders of perturbation theory (since iterating the $L-L$ vertex just written never changes $L$ to $R$, for example). Chirality and helicity coincide for the quark, but are opposite for the antiquark, so they appear with opposite helicities. Now, since in the rest frame the momenta are opposite, ${\bf p}_{\bar{q}}=-{\bf p}_q$, ${\bf S}_q\cdot{\bf p}_q= -{\bf S}_{\bar{q}}\cdot{\bf p}_{\bar{q}}$ (opposite helicities) implies that the spin projections over a fixed $OZ$ axis are actually the same, so that $S^z_{q+\bar{q}}=\pm 1$. This $S=1$ is actually fine to yield a $0^{++}$ quarkonium with $S_{q\bar{q}}=1$; the problem is that the necessary $L_{q\bar{q}}=1$ cannot be reached from an $S$-wave gluon-gluon wavefunction (the angular integral vanishes). At order $m_q$, however, the scalar term $m_q \bar{\psi}\psi$ violates chiral symmetry and allows for an $L\cdot S$ coupling providing the extra orbital angular momentum. Comparing this QCD theory input with meson analyses, Albaladejo and Oller~\cite{Albaladejo:2008qa} favor the $f_0(1710)$ scalar as having a larger gluonium component. This is natural given their finding that $\Gamma_{\pi\pi}/\Gamma_{K\bar{K}}\simeq 0.32(14)$: the coupling of this meson is larger to $K\bar{K}$ than to $\pi\pi$, as can be seen comparing the first and second plots from the top in figure~\ref{fig:MMchannels}, which will be discussed later on.
These authors also find that a pole at around 1.6 GeV, somewhat influencing the $f_0(1500)$, behaves as a glueball, which is quite surprising since the first excited scalar glueball is not expected below 2.5 GeV (see figure~\ref{fig:glueballspectra}). The explanation is that this pole comes from the $\eta \eta'$ coupled channel and would never be seen in a quenched lattice calculation. What the all--orders perturbative QCD argument of~\cite{Chanowitz:2005du} really suggests is that $gg$-$q\bar{q}$ mixing is suppressed by $m_q$, which would naturally explain the small amount of $q\bar{q}$ quarkonium found in some analyses, such as in the bottom plot of figure~\ref{fig:mixing}; that this mixing dominates the decay is then on a less solid basis, since, as already mentioned, the decay might proceed by $q\bar{q}q\bar{q}$ intermediate states that easily hadronize into two mesons by ``fall-apart'' decay. As a final remark, let me note that dynamical symmetry breaking transcends an all--orders computation and requires an infinite resummation, for example in the form of a Dyson-Schwinger equation. Still, because the typical momentum of a constituent--like gluon in a glueball is of order $M/2$, the running quark mass has dropped sufficiently by that scale (many hundreds of MeV) that chiral symmetry is a reasonable approximation, with $m_u\sim m_d$ plausibly in the 10-20 MeV range or so, already small enough for Chanowitz's argument to make sense. \subsubsection{The axial anomaly and the pseudoscalar glueball} \label{subsubsec:anomaly} One sometimes reads that the glueball-quarkonium mixing in the pseudoscalar channel is responsible for raising the mass of $\eta_{\rm singlet}$ (in turn, a mixture of the physical $\eta$ and $\eta'$ mesons) with respect to a reference level in Gell-Mann's octet. This must be incorrect, since the variational principle, a simple theorem of linear algebra, guarantees that the mixing of two states \emph{lowers} the mass of the lightest one while raising that of the heaviest (``level repulsion'' in many-body jargon). Thus, the supposed mixing of the $\eta/\eta'$ system and the pseudoscalar glueball is not the cause of the excess mass in that system. That mixing is, to date, unknown. But the large difference in masses ($m_\eta =547$ MeV, $m_{\eta'} = 958$ MeV, $m_{0^{-+}G}>2$ GeV) suggests that the mixing might not be a dominant feature. What is true is that the anomalous term in the axial current of QCD, \begin{equation} \partial_\mu J^\mu_5 = \frac{3\alpha_s}{4\pi} F^{\mu\nu} \tilde{F}_{\mu\nu} \ (\equiv \partial_\mu K^\mu)\ , \end{equation} with $\tilde{F}_{\mu\nu} = \epsilon_{\mu\nu\rho\sigma} F^{\rho\sigma}$ the dual field-strength tensor, is odd under parity, and thus a pseudoscalar; in pure Yang-Mills theory, a field correlator involving this anomalous term presents a pole at the mass of the pseudoscalar glueball. Because the $\eta_{\rm singlet}$ particle should also appear there, the following approximation has been proposed~\cite{Rosenzweig:1981cu} for an effective meson Lagrangian treatment: \begin{equation} \label{anomalouscurrent} \partial_\mu K^\mu = \tilde{G}_1 + \tilde{G}_2 + \dots \end{equation} substituting the anomaly by a sum over fields associated with the creation of the singlet pseudoscalar particles, with $\eta_{\rm singlet}$ and $G_{0^{-+}}$ proportional to the $\tilde{G}_i$ in Eq.~(\ref{anomalouscurrent}) (the proportionality constants are explained in~\cite{Rosenzweig:1981cu}).
The fun observation of that work is that, if the mixing matrix between $\eta_{\rm singlet}$ and $G_{0^{-+}}$ is $a_{ij}$, we have \begin{equation} \label{currentdist} \partial_\mu K^\mu = \sqrt{3} f_\pi (a_{11}\eta_{\rm singlet} + a_{12} G_{0^{-+}} ) \end{equation} so that experimental production of the pseudoscalar glueball proceeds by the (presumably small?) mixing $a_{12}$ with $\eta$, $\eta'$, or by higher-twist operators. This is because the pseudoscalar operator of lowest dimension (smallest number of fields and derivatives) built from the gluon field-tensor is indeed this $F^{\mu\nu} \tilde{F}_{\mu\nu}$ combination~\footnote{This is a different way to show that the analysis of subsection~\ref{scalartensor} below applies to the $0^{++}$ and $2^{++}$ but not to the $0^{-+}$ glueball.}. I would imagine that Eq.~(\ref{currentdist}) will need to be extended for the additional $\eta$-like mesons that may strongly share a flavor-singlet configuration and will play a role in the analysis of the pseudoscalar spectrum in years to come. \subsubsection{Employing exotic quantum numbers} With three gluons one can form glueballs of exotic quantum numbers, which cannot be admixed with conventional $q\overline{q}$ mesons because of $J^{PC}$ conservation by the strong interactions. Because $q\overline{q}$ mesons carry, in terms of the relative orbital angular momentum $L$ and total spin $S$, an angular momentum $J\in (|L-S|,\dots, L+S)$ and discrete quantum numbers $P=(-1)^{L+1}$, $C=(-1)^{L+S}$, the following $J^{PC}$ combinations are not achievable: $0^{--}$, $(2n)^{+-}$, $(2n+1)^{-+}$. This makes them prime candidates for experimental searches, as identification of a resonance featuring them excludes it as a conventional meson; still, mixing with other configurations, saliently meson-meson molecules, remains possible. The $\eta_-$--like $0^{--}$ glueball has been a subject of contention among QCD sum rule practitioners, with Pimikov {\it et al.}~\cite{Pimikov:2017bkk} placing it at an unassailable $7\pm 1$ GeV while Qiao and Tang~\cite{Qiao:2014vva} put it at $3.8\pm 0.1$ GeV, in line with other three--gluon states~\cite{LlanesEstrada:2005jf}. A small overview of the masses of other glueballs with exotic quantum numbers, including lattice and sum rule computations~\cite{Qiao:2015iea}, suggests that a $0^{+-}$ glueball can be found in the 4.5--5 GeV region, and a $2^{+-}$ in the 4--4.3 GeV one (with the sum rule assigning it instead a much higher mass). Searches for these objects would require multiparticle, exclusive identification in the charmonium region. For example, in analogy with discoveries in the $J/\psi\pi\pi$ spectrum, which showcases salient meson states such as the $1^{++}\ \chi_{c1}'(3872)$ and $1^{--}\ \psi(4260)$ mesons, attention could be given to $J/\psi 4\pi$, which couples to $0^{--}$ quantum numbers; the glueball would be detectable below the $J/\psi f_1(1285)$ threshold if its mass is indeed as in~\cite{Qiao:2015iea}. Since none of these glueballs is expected to populate the energy region below 3 GeV, I will not discuss them any further. \newpage \subsection{Production of the light scalar glueball} Scalar mesons can be produced in multiple collision channels such as $pp$ and $p\bar{p}$, but for the glueballs expected below 2 GeV, a most interesting avenue is the radiative $J/\psi$ decay.
Because both $c\bar{c}$ quarks are annihilated in ground state charmonium decays, leaving only light quarks (which do not directly couple to charm) and radiation ({\it e.g.} gluons) behind, $J/\psi$ decays have traditionally been considered a gluon-rich environment in which to look for glueballs~\cite{Close:1996yc}. Therefore, we concentrate on this channel here, though occasional comments are found in other parts of this review. \subsubsection{$J/\psi$ radiative decays} Radiative decays $J/\psi\to \gamma+G$ are particularly interesting because the photon carries away the $1^{--}$ quantum numbers of the $J/\psi$, exposing the $PC=++$ glueballs with spin 0 or 2, computed to be the lightest, and other $f_0$, $f_2$ mesons. A typical such spectrum will be shown later in figure~\ref{fig:spectrumdistort}. Meanwhile, let us quickly review a typical analysis~\cite{Guo:2020akt}. The radiative decay widths have been computed in lattice gauge theory~\cite{Chen:2014iua}, which finds, approximately, the following branching fractions ($X_i=\Gamma_{J/\psi\to i}/\Gamma_{J/\psi \rm total}$) \begin{equation} X_{\gamma G(0^{++})} \simeq 0.004(1)\ \ \ X_{\gamma G(2^{++})} \simeq 0.011(2) \ . \end{equation} These are not negligible branchings if we compare them to $X_{\gamma \rm hadrons}=0.088(11) \simeq X_{\gamma gg} $ in the interpretation of the particle data group~\cite{Tanabashi:2018oca}. The lattice computation would entail that roughly one in six radiative $J/\psi$ decays produces a glueball, since $(0.004+0.011)/0.088 \simeq 1/6$; and it is supported by earlier sum-rule computations~\cite{Narison:1996fm} that also produced $X_{\gamma G(0^{++})} \simeq 0.004-0.005$. Because~\cite{Close:1996yc,Guo:2020akt} $X_{\gamma f_0} \propto X_{\gamma gg} \frac{m_{f_0}\Gamma_{f_0\to gg}}{m^2_{J/\psi}}$, with known proportionality factors, the $f_0$-to-glue branching fractions $b_i := \Gamma_{f_0^i\to gg}/\Gamma_{f_0^i}$ have been reconstructed by Guo {\it et al.}~\cite{Guo:2020akt} to be $b_{1370}=0.28(22)$, $b_{1500}=0.17(8)$ and $b_{1710}=0.85(16)$, in agreement with the bottom chart of figure~\ref{fig:mixing}, in which the glueball configuration is dominant in the heaviest of these three mesons and does contribute a small part of the wavefunction of the other two, particularly the lightest one. In all, this is one of the findings that drives the building consensus~\cite{Cheng:2015iaa} around most of the scalar glueball strength being found in the $f_0(1710)$: production of this meson is much stronger than that of the $f_0(1500)$ in $J/\psi$ radiative decays. \begin{figure}[h] \includegraphics[width=0.9\columnwidth]{vpp_sum.pdf} \caption{\label{fig:MMchannels} $0^{++}$/$2^{++}$ meson spectrum from $J/\psi \to V+MM$, where the vector meson is the strong-force analog of a $\gamma$. Note the several $f_0$, $f_2$ mesons produced. Reproduced from~\cite{Li:2006ni}, courtesy of the BES-II collaboration and of Elsevier under STM permissions guidelines. (\emph{I thank prof. Shuangshi Fang for providing the graph file and reference}).} \end{figure} \subsubsection{$J/\psi$ to vector + (mesons) decays} An interesting extension of the radiative-decay idea is to substitute the photon by a vector meson with equal $J^{PC}=1^{--}$ quantum numbers; recoiling against that vector is the system of interest, often two pions or two kaons, which carries $0^{++}$ or $2^{++}$ quantum numbers. The large statistics at BES-III allow such exclusive reconstruction, shown in figure~\ref{fig:MMchannels}.
Moreover, there are partial-wave analyses of various meson-meson final states that confirm the $f_J$ spins as listed. The branching fractions are not negligible: $\omega\pi\pi$ and $\omega K\bar{K}$ make up about 1\% of all $J/\psi$ decays, with $\phi \pi\pi$ and $\phi K\bar{K}$ another half a percent. Because the $\omega$ and $\phi$ are narrow and easily reconstructible, they allow access to a clean recoiling spectrum, as seen in the figure. The figure shows that the $f_0(1710)$ is produced recoiling against an $\omega$ rather than a $\phi$, and preferentially decays to $K\bar{K}$ over $\pi\pi$. The broad $f_0(1370)$ bump, however, is seen to behave in the opposite way, decaying to $\pi\pi$ but being produced with more statistics against a $\phi$ vector meson, an effect that can be somewhat puzzling. \section{Hints from and searches in high-energy scattering} \subsection{The Pomeron and the odderon puzzle} \subsubsection{The $2^{++}$ glueball in the Pomeron trajectory} Hadron scattering amplitudes at high energies (such as $pp\to pp$, as an example) for physical $s$ and $t<0$ are known to behave as power-laws, \begin{equation} \label{reggepower} \sigma \propto s^{\alpha(t)-1}\ . \end{equation} This functional dependence naturally arises in Regge theory~\cite{Regge:1959mz}, in which the two-body system's angular momentum $J$ is analytically continued to a complex variable $\alpha$. The function $\alpha(t)$ controls the cross-section for negative $t$, and if this variable is also continued to positive $t$ (which would correspond to the $s$ variable of $p\bar{p}$ annihilation, for example), resonances appear in the Chew-Frautschi plot shown in figure~\ref{fig:pomeron}. The plot illustrates the leading trajectory, entailing no exchange of electric charge, parity, or charge conjugation among the scattering particles: the so-called ``Pomeron'' Regge trajectory. Fits to $pp$ and other scattering data~\cite{Donnachie:2013xia} based on sophisticated versions of Eq.~(\ref{reggepower}) yield the discontinuous line near and to the left of the $J$ ($OY$) axis. \begin{figure} \begin{center} \includegraphics[width=\textwidth]{Pomeron.png} \end{center} \caption{\label{fig:pomeron} Donnachie-Landshoff ``soft'' Pomeron trajectory (solid lines: the higher one is the classic trajectory from the 1990s, the lower one the 2015 fit~\cite{Donnachie:2013xia}). Lattice data for $J^{++}$ glueballs from two different groups are represented by solid symbols. It seems clear, as has been known for long~\cite{GonzalezMestres:1979zu,Simonov:1990uq,LlanesEstrada:2000jw,Bicudo:2004tx}, that glueballs may offer an explanation of the Pomeron, and that the lightest glueball resonance that may fall near the Pomeron trajectory is the $2^{++}$ $f_2$-like glueball. Lattice data seems to put it at a mass somewhat too high, but it is possible that configuration mixing with a $q\bar{q}$ state moves the eigenvalue closer to the trajectory~\cite{Simonov:1990uq}. } \end{figure} Far to its left, in the deep $t<0$ region, pQCD predicts that elastic scattering will asymptotically follow a power law with the negative exponent discussed in Eq.~(\ref{differentialcounting}) below. What is of interest for the glueball discussion is the prolongation of that first straight line to the right of the plot (solid line), where $t\to M^2>0$.
There is no guarantee that a Regge trajectory stays linear far from the $J$ axis, as demonstrated for the $f_0(500)$~\cite{Pelaez:2015qfa}~\footnote{Incidentally, the result of that work shows that this meson, popularly known as $\sigma$, is a poor glueball candidate.}. However, two-gluon glueballs have been computed in many model approaches~\cite{Brisudova:1997ag,LlanesEstrada:2000jw,Buisseret:2009yv,Sharov:2008zz} to fall on linear Regge trajectories $\alpha(t)=\alpha(0)+\alpha'(0) t$. Because two-gluon glueballs are the lightest $PC=++$ glueballs, it has long been conjectured~\cite{GonzalezMestres:1979zu,Simonov:1990uq,LlanesEstrada:2000jw,Bicudo:2004tx} that they might provide the resonances that the Pomeron trajectory produces when $\alpha_P(M^2)=J$, an integer. Supporting this conjecture is the fact that the slope of the Regge trajectory of $gg$ is smaller than that of quark-antiquark states, in any approach with one-gluon-like color exchange. In the linearly confining potential field theory of~\cite{LlanesEstrada:2000jw}, $V\to \sigma R$ at large distance, with \begin{equation} \label{casimir} \frac{\sigma_{gg}}{\sigma_{q\bar{q}}} = \frac{N_c}{(N_c^2-1)/(2N_c)} \end{equation} the ratio of two Casimirs (equal to $9/4$ for $N_c=3$), yielding, in view of $\alpha' \propto \frac{1}{\sigma}$, \begin{equation} \frac{\alpha'_{\rm Pomeron}}{\alpha'_{q\bar{q}\ \rm Reggeon}} = \frac{4}{9}\ . \end{equation} Because typical Regge trajectories of conventional $q\bar{q}$ meson Reggeons have $\alpha'_{q\bar{q}\ \rm Reggeon}\simeq 0.9$ GeV$^{-2}$, if the Pomeron is identified with the $t$-channel exchange of a tower of $gg$ states with $PC=++$, its slope is predicted to be $\alpha'_{\rm Pomeron}\simeq 0.4$ GeV$^{-2}$, in reasonable agreement with the scattering data extraction of the Pomeron by Donnachie and Landshoff~\cite{Donnachie:2013xia}. While the lattice data seems to prefer a somewhat higher slope, model work in Coulomb gauge QCD~\cite{LlanesEstrada:2000jw} is closer to the empirical Pomeron slope. If the excited $0^{++}$ glueball of figure~\ref{fig:glueballspectra} is ever identified, as it naturally is a radial excitation of the ground state $G$, it will allow one to confirm or discard the Casimir string-tension scaling of Eq.~(\ref{casimir}). This should not be taken for granted, as it is a feature of Cornell-like approaches that cast much of the confinement strength into (nonperturbative) one-gluon-like exchanges with the same color factors, but there are other possibilities~\cite{Greensite:2011zz}. Finally, we can reverse the discussion and try to learn something about glueballs from high-energy Pomeron phenomenology. First of all, because the Pomeron trajectory seems to intercept the $t=0$ axis at $J=1+\epsilon$~\footnote{Technically, if $\alpha(0)=1+\epsilon$, $\sigma\propto s^\epsilon$ would violate unitarity at asymptotically high energy. While this is of no urgent concern at the LHC, where the cross section of order 100 mbarn is way smaller than the $O(20)$ barn cross section of the Froissart bound, some authors prefer setting $\alpha(0)=1$ exactly. Then a $J=1$ $f_1$ meson would be predicted to have zero mass, which is obviously not present in Nature. The Donnachie-Landshoff Pomeron fit nicely excludes this unwanted feature, but then unitarity needs to be corrected by multiple Pomeron exchange.}, no state with $J=0,1$ can lie on it. Therefore, the lightest and lowest-spin glueball on the Pomeron trajectory is the $2^{++}$ $f_2$-like one.
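To make the numerology explicit (a small sketch using only linear trajectories, with the classic Jaroszkiewicz-Landshoff Pomeron parameters quoted in the next paragraph, and an ordinary-Reggeon trajectory with typical intercept $\sim 1/2$ and slope $\sim 0.9$ GeV$^{-2}$ for comparison): imposing $\alpha_P(M^2)=J$ and inverting gives $M=\sqrt{(J-\alpha(0))/\alpha'}$, so the spin-2 state can be located directly.

\begin{verbatim}
import math

def mass_on_trajectory(J, intercept, slope):
    """Mass (GeV) of the spin-J state on a linear Regge
    trajectory alpha(t) = intercept + slope*t (slope in GeV^-2)."""
    return math.sqrt((J - intercept) / slope)

# Classic Jaroszkiewicz-Landshoff Pomeron, alpha = 1.08 + 0.25 t:
print(mass_on_trajectory(2, 1.08, 0.25))  # ~1.92 GeV for the 2++
# A typical ordinary Reggeon, intercept ~0.5, slope ~0.9 GeV^-2:
print(mass_on_trajectory(2, 0.5, 0.9))    # ~1.3 GeV, f2(1270)-like
\end{verbatim}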
While Athenodorou and Teper~\cite{Athenodorou:2020ani} place its mass at $2376\pm 32$ MeV, the Pomeron would seem to prefer a mass somewhat lighter than 2.3 GeV, perhaps as low as 1.9 GeV, as in the classic Jaroszkiewicz-Landshoff Pomeron $J=1.08+0.25t$, later also used by Donnachie and Landshoff.
\begin{table} \caption{Different computations of the $2^{++}$ glueball mass, extracted from the Pomeron Regge trajectory and from various theory approaches.} \begin{tabular}{|c|cccccc|} \hline Method & Pomeron & Coulomb-$gg$ & Lattice & Constituent & AdS-CFT & Sum rules\\ \hline $2^{++}$ mass & {\bf 1.9}~\cite{Jaroszkiewicz:1974ep} & {\bf 2.05}~\cite{Szczepaniak:1995cw,LlanesEstrada:2000jw} & {\bf 2.38(3)}\cite{Athenodorou:2020ani} & {\bf 2.59}~\cite{Simonov:1990uq} & {\bf 2.3-2.7} \cite{Vento:2017ice} & {\bf 2.0(1)}~\cite{Narison:1996fm} \\ (GeV) & {\bf 2.3}~\cite{Donnachie:2013xia} & {\bf 2.42}~\cite{Szczepaniak:2003mr} & {\bf 2.39(15)}~\cite{Chen:2005mg} & {\bf 2.53}~\cite{Buisseret:2009yv} & $\simeq${\bf 2.3}\cite{Rinaldi:2020qbm} & \\ \hline \end{tabular} \end{table}
\begin{figure}[h] \includegraphics[width=5in]{OddballsOdderon.png} \caption{\label{fig:odderon} Computations of the odd $C$-parity glueball spectrum (states with $J^{PC}=3^{--},5^{--},\dots$ represented by various symbols) from~\cite{LlanesEstrada:2005jf} and others quoted there lead to the conclusion that the intercept of the corresponding Regge trajectory would be $\alpha(0)<1$, as shown by the rough band reaching the $OY$ axis even below $1/2$, where conventional Regge trajectories intercept it. Recent fits of high-energy scattering data~\cite{Szanyi:2019kkn}, however, suggest an intercept above 1 (dotted line, red online). The controversy is ongoing. } \end{figure}
\subsubsection{The odderon puzzle}
\begin{figure}[h] \includegraphics[width=\columnwidth]{PomOdd.png} \caption{\label{fig:pomeronodderon} According to recent work~\cite{Ezhela:2020hws,Belousov:2020rzj}, the total cross-section data for $pp$ and $p\bar{p}$ can be fitted with (left: $\sigma_{p\bar{p}}\neq \sigma_{pp}$) or without (right: $\sigma_{p\bar{p}}\to \sigma_{pp}$) an odderon contribution, so its existence as a crossing-odd, asymptotically dominant Regge trajectory is not firmly established. Its confirmation would pose an important puzzle for our understanding of ``oddballs'' (negative $C$-parity glueballs). Figure courtesy of V. Petrov and collaborators~\cite{Ezhela:2020hws}. } \end{figure}
Moving on, I would like to discuss the very latest developments. Fits to high-energy data comparing the $pp$ and $p\bar{p}$ cross sections have led to a revival of the concept of the odderon, a Regge trajectory that would give a different asymptotic cross section to the two processes. Such fits~\cite{Szanyi:2019kkn,Csorgo:2020rlb} seem to suggest an odderon trajectory $\alpha(t) = 1.23+0.19\,{\rm GeV}^{-2}\, t$, with an intercept at $t=0$ of 1.23 that is clearly larger than one~\footnote{Strictly speaking, because of the known asymptotic behavior, Szanyi {\it et al.}~\cite{Szanyi:2019kkn} parameterize the odderon trajectory (I have rounded off for clarity) as $\alpha(t) = (1.23+0.19\,{\rm GeV}^{-2}\, t)/ (1+0.032(\sqrt{t_0-t}-\sqrt{t_0}))$. The denominator, for $t\sim 9\,{\rm GeV}^2$ in the region where glueballs are important, is a small $O(5\%)$ correction, so we can ignore it; its importance resides, for physical $t$, in the TeV region covered by the LHC. }.
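To see how such trajectories translate into glueball masses (a minimal sketch; the trajectory parameters are just the rounded ones quoted above), one simply inverts the linear form $J=\alpha(0)+\alpha' M^2$:
\begin{verbatim}
from math import sqrt

def regge_mass(J, alpha0, alpha_prime):
    # Mass (GeV) of the spin-J state on the linear trajectory J = alpha0 + alpha' t.
    return sqrt((J - alpha0) / alpha_prime)

# Jaroszkiewicz-Landshoff Pomeron, J = 1.08 + 0.25 t:
print(regge_mass(2, 1.08, 0.25))   # ~1.92 GeV for the 2++ glueball
# Rounded odderon trajectory of Szanyi et al., J = 1.23 + 0.19 t:
print(regge_mass(3, 1.23, 0.19))   # ~3.05 GeV, i.e. t ~ 9 GeV^2 for a 3-- oddball
\end{verbatim}
The second number illustrates why the footnote above singles out $t\sim 9\,{\rm GeV}^2$ as the region where glueballs matter for the odderon trajectory.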
Earlier expectations, based on computations of the odd $C$-parity glueball spectrum~\cite{LlanesEstrada:2005jf}~\footnote{The well-known work by Bartels, Lipatov and Vacca~\cite{Bartels:1999yt} deals with the BFKL-type odderon, whose kinematics is different, as the Bjorken limit is needed in addition to high energies; it is not relevant for the glueball discussion.} and confirmed by~\cite{Kaidalov:2005kz,Cardoso:2008sb}, suggested that the odderon Regge trajectory would not be seen, because it would fall even below the conventional $\omega$ meson Regge trajectory (see Fig.~\ref{fig:odderon}), so that $1-\sigma_{p\bar{p}}/\sigma_{pp}$ would be suppressed at high energy. Other researchers~\cite{Donnachie:2019ciz,Ezhela:2020hws,Belousov:2020rzj}, analyzing the same database, do not seem to find conclusive evidence of an odderon contribution (see figure~\ref{fig:pomeronodderon}), and it seems that more data are needed to close the discussion in this energy range. The importance of the question lies in the fact that finding the odderon would undermine our understanding of the Pomeron as a correlated two-gluon exchange with physical resonances for integer $J$ and $t=M^2>0$, and would close a window to glueballs. If, on the other hand, no odderon contribution turns out to be necessary, a prediction of the whole field stands.

\subsection{Absence of glueballs in heavy-ion collisions?}

Hadron spectroscopy in heavy-ion collisions offers interesting possibilities for identifying and classifying certain hadrons~\cite{Cho:2017dcy}. Among them, the case of Yang-Mills glueballs is, according to part of the literature, very easy: if a hadron is reconstructed in a heavy-ion collision, it is very likely not a glueball, because glueballs ``evaporate'' or disappear from the spectrum~\cite{YepezMartinez:2012rf} very quickly at the phase transition. This insight was obtained, in a truncation of Coulomb gauge Yang-Mills theory, from a variational approximation to $\Omega(k)$, a screened in-medium gluon self-energy minimizing the free energy at finite temperature, $\delta \mathcal{F} / \delta \Omega = 0$. Thermodynamic quantities can then be expressed in terms of that $\Omega$; for example, the energy density counting glueballs,
\begin{equation} \epsilon_1 = \frac{T^2}{V} \frac{\partial (-\beta \mathcal{F})}{\partial T}\ , \end{equation}
quickly overshoots the rigorous Stefan-Boltzmann thermodynamic limit above the phase transition, whereas that counting individual gluons,
\begin{equation} \epsilon_2 = 2(N_c^2-1) \int \frac{d^3q}{(2\pi)^3} \frac{\Omega(q)}{e^{\beta\Omega(q)}-1}\ , \end{equation}
can reproduce it, suggesting that glueballs have indeed melted at the phase transition indicated by lattice data. Unfortunately, ``glueball-like'' ordinary hadrons also tend to be relatively broad structures that disappear from the spectrum, unlike {\it e.g.} the $\psi$ or $\Upsilon$ $q\bar{q}$ mesons. Likewise, nonhadronic structures such as triangle singularities very likely also drop out of the spectrum~\cite{Abreu:2020jsl} in the thermal medium. Therefore, the lack of a signal in a heavy-ion collision analysis falls far short of establishing that the corresponding state is a glueball: the statement is rather that, if a signal is seen in heavy-ion collisions, (a) it is more likely a hadron~\cite{Abreu:2020jsl} than in vacuum collisions and (b) it is unlikely to be a glueball~\cite{YepezMartinez:2012rf}.
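As a consistency check on the gluon-counting form of the energy density (a minimal numerical sketch, assuming the massless dispersion $\Omega(q)=q$; the temperature value is illustrative), $\epsilon_2$ indeed reduces to the Stefan-Boltzmann result for $2(N_c^2-1)=16$ bosonic degrees of freedom:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

T, N_c = 0.3, 3                      # temperature in GeV (illustrative)
dof = 2 * (N_c**2 - 1)               # 16 transverse gluon degrees of freedom

# epsilon_2 = dof/(2 pi^2) * int_0^inf q^3 / (exp(q/T) - 1) dq for Omega(q) = q
integral, _ = quad(lambda q: q**3 / np.expm1(q / T), 0, np.inf)
eps_numeric = dof * integral / (2 * np.pi**2)

eps_SB = dof * np.pi**2 / 30 * T**4  # Stefan-Boltzmann energy density
print(eps_numeric, eps_SB)           # both ~0.0426 GeV^4: they agree
\end{verbatim}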
Other investigations, however, suggest that there is an intermediate-temperature phase below 270 MeV where glueballs are still active degrees of freedom~\cite{Stoecker:2015zea}, in which case they could contribute to RHIC/LHC phenomenology.

\section{Where to look next?}
\subsection{Counting rules and production at Belle: $0^{++}$ and $2^{++}$ glueballs} \label{scalartensor}

In a renormalizable theory like QCD, when all scattering scales in an exclusive process such as $AB\to CD$ become large and proportional to the total squared c.m.\ energy $s$, the differential cross section satisfies the Brodsky-Farrar counting rules~\cite{Brodsky:1973kr,Matveev:1973ra}, which yield a simple power-law scaling with $s$,
\begin{equation} \label{differentialcounting} \frac{d\sigma(AB\to CD)}{dt} = {f(\theta_{CM})\over {s^{n_i+n_f-2}}} . \end{equation}
The power of this observation is that a hadron-level cross section is expressed in terms of quark-gluon level constituents: $n_i$ and $n_f$ represent the minimum number of pointlike particles in the initial and final states. This idea has been exploited to predict the scaling of form factors and of various cross sections and helicity selection rules. If orbital angular momentum is included~\cite{Amati:1968kr,Ciafaloni:1968ec,Brodsky:1974vy}, one needs to take into account the short-distance suppression brought about by the centrifugal factor $r^L$ (which appears in basically any formulation of hadron structure, such as nonrelativistic Schr\"odinger wavefunctions, light-front ones where the radial-like variable is $\zeta^2 = b^2_\perp x(1-x)$, or Bethe-Salpeter bound-state amplitudes). This increases the suppression of amplitudes involving a hadron with $L$ units of internal angular momentum by a factor $\left(\sqrt{s}\right)^{-L}$~\cite{Brodsky:1981kj}, with the cross sections then dropping an additional $s^{-L}$; that is, after summing all internal orbital angular momenta,
\begin{equation} \label{counting} \frac{d\sigma}{dt} = {f(\theta_{CM})\over {s^{n_i+n_f+L -2}}} \end{equation}
(at fixed angle, so that $t\propto s$). This counting rule has recently been proposed~\cite{Brodsky:2018snc,Llanes-Estrada:2018omz} as an aid in the identification of the scalar glueball among the $f_0$ states. For glueballs with $J^{PC}=0^{++}$, the minimum Fock space component is $\arrowvert \vec{g}\cdot \vec{g} \rangle$ with antialigned gluon spins and no orbital angular momentum. Therefore $n_f+L=2$. This happens to be the \emph{slowest} falloff among all the Fock space components that can contribute to the quark-gluon Fock expansion of a scalar meson: a few are shown in table~\ref{tab:suppression}, adapted from~\cite{Brodsky:2018snc}.
\begin{table} \caption{Power of $s$ in the QCD counting rules that suppresses the production of the lowest wavefunctions in a meson Fock expansion, \emph{relative to the $s$-wave glueball one}, in large momentum transfer reactions involving an $f_0$ or $f_2$ meson. Introducing additional particles obviously further depresses the cross section. The glueball happens to be the most readily produced meson at high energy and momentum transfer. This is a good test to isolate the gluonium components in $0^{++}$ and $2^{++}$ mesons.
\label{tab:suppression}} \begin{center} \begin{tabular}{|c|cccc|} \hline Wavefunction& $gg\arrowvert_{L=0}$ & $q\bar{q}\arrowvert_{L=1}$ & $q\bar{q}g$ & $q\bar{q} q\bar{q}$ \\ $n_f+L$ & 2 & 3 & 3 & 4 \\ Suppression & 1 & $s^{-1}$ & $s^{-1}$ & $s^{-2}$ \\ \hline \end{tabular} \end{center} \end{table}
Because conventional $0^{++}$ $q\bar{q}$ mesons require a $p$-wave, their high-energy exclusive production is suppressed with respect to the gluonium $gg$. The same observation holds for $2^{++}$ quantum numbers: the $L=0$ $\arrowvert gg \rangle$ glueballs compete in production experiments with $L=1$ $\arrowvert q\bar{q} \rangle$ conventional mesons, with the glueballs dominating at high energy. On the contrary, the $\eta$-like $0^{-+}$ glueball is an $L=1$ state competing with $L=0$ $\arrowvert q\bar{q} \rangle$ conventional mesons, and therefore glueball production is suppressed with respect to conventional quarkonium in that channel. Many accelerator experiments could exploit this advantage of high-energy glueball production, but particularly so Belle-II, for example by means of the reaction $e^-e^+\to \phi f_0$. Because the $\phi$ meson can be readily identified as an $L=0$ $s\bar{s}$ state, $n_f=4$ (counting the quark-antiquark pair and the two gluons of the glueball, all with $L=0$), whereas $n_i=2$ for the $e^-e^+$, yielding $\frac{d\sigma}{dt} = f(\theta) \frac{1}{s^4}$. If all events in the Belle-II barrel detector were counted (amounting to an integration over a fixed solid angle that excludes the forward direction, so that $t$ is not suppressed with respect to $s$), all scales are large and
\begin{equation} \sigma\arrowvert_{\rm barrel} = 4\arrowvert {\bf p}_\phi\arrowvert \arrowvert {\bf p}_{f_0}\arrowvert \times \int_{0}^{\cos\theta_{\rm min}} \! d\cos\theta \ \ \frac{d\sigma}{dt} \end{equation}
brings in one more power of $s$, resulting in the asymptotic power-law behaviors
\begin{eqnarray}\label{Glueballscaling}
\sigma \left(f_{0/2}=\arrowvert {\bf{gg}} \rangle_{L=0} +\dots \right) & \sim & \frac{\rm constant}{s^3}\ , \\
\sigma \left(f_{0/2}=\arrowvert {\bf{q\bar{q}}}\rangle_{L=1} +\dots \right) & \sim & \frac{\rm constant}{s^4}\ , \nonumber\\
\sigma \left(f_{0/2}=\arrowvert {\bf{q\bar{q}q\bar{q}}}\rangle_{s-{\rm wave}} +\dots \right) & \sim & \frac{\rm constant}{s^5}\ . \nonumber
\end{eqnarray}
If Belle-II took data {\it e.g.} at 9 and 11 GeV (off-resonance), the ratio of the cross sections at the two energies, $ \frac{\sigma(9\,{\rm GeV})}{\sigma(11\,{\rm GeV})} \simeq 3.4 \ (gg)\ ; \ 5 \ (q\bar{q})_{L=1}\ ; \ 7.5\ (qq\bar{q}\bar{q})$, etc., would depend on the inner structure of the scalar (eventually, tensor) meson. The large energy of this reaction entails a fast separation of the $\phi$ and $f$ mesons, reducing final-state interactions. The well-known $C=+1$ $\pi\pi$ spectrum from radiative $J/\psi$ decays~\cite{Bennett:2014fgt} is shown in the left plot of figure~\ref{fig:spectrumdistort}. The typical scale there is the charmonium mass of 3.1 GeV.
\begin{figure}[t] \includegraphics*[width=0.48\columnwidth]{Jpsitopipiphoton.pdf}\ \ \includegraphics*[width=0.48\columnwidth]{pipispectrum.pdf} \caption{\label{fig:spectrumdistort} Left: experimental $\pi\pi$ spectrum~\cite{Bennett:2014fgt} from $J/\psi \to \gamma \pi\pi$. Right: example $\pi\pi$ spectrum resulting from $e^-e^+\to \phi f_J$ at $E= 9$ and 11 GeV, assuming that the $f_0(1710)$ is the glueball, with absolute normalization taken from~\cite{Brodsky:2018snc}. Whichever state drops least in this plot, once experimental data are at hand, would fit the role of the glueball. Reprinted from~\cite{Brodsky:2018snc} (Elsevier) under STM permissions guidelines.} \end{figure}
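To make the 9 versus 11 GeV test concrete (a minimal sketch; only the counting-rule exponents above enter), note that for $\sigma\sim s^{-n}$ the ratio of cross sections is $(11/9)^{2n}$:
\begin{verbatim}
# sigma(9 GeV)/sigma(11 GeV) = (s_11/s_9)^n = (11/9)^(2n) for sigma ~ s^-n.
# n = 1 is the pointlike-hadron baseline of Eq. (eq:hadroncross) below;
# n = 3, 4, 5 are the gg, (q qbar)_{L=1} and 4-quark cases of Eq. (Glueballscaling).
s_ratio = (11.0 / 9.0) ** 2
for n, label in [(1, "pointlike hadrons"), (3, "gg, L=0"),
                 (4, "q qbar, L=1"), (5, "q qbar q qbar")]:
    print(label, round(s_ratio ** n, 1))
# -> 1.5, 3.3, 5.0, 7.4, matching the ratios 3.4 / 5 / 7.5 quoted above
\end{verbatim}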
The right plot in fig.~\ref{fig:spectrumdistort} then assumes, as an example, that the $f_0(1710)$ is mostly the glueball and that the remaining $C=+1$ states present, saliently the $f_2(1270)$, have cross sections scaling as $q\bar{q}$ mesons. With $\sigma(9\,{\rm GeV})\sim 70$ fb, about 70000 $\phi$-recoiling $f_0(1710)$s could be obtained at Belle-II with 1 ab$^{-1}$ of integrated luminosity (several weeks of off-resonance data), and about 20000 events at 11 GeV, numbers that allow a check of the scaling law even after allowing for experimental cuts. The experimental data themselves, once collected, can inform the collaboration whether the energy reached is high enough to be in the asymptotic limit $s\sim t\to \infty$, because they can be used to test, as follows, whether the hadron is still behaving as pointlike, without its constituents being exposed. Adopting the reasonable Vector Meson Dominance picture, in which the $\gamma$ fluctuates into a vector meson (such as the $\phi(1680)$ or the $Y(2175)$), and constructing an interaction Lagrangian along the lines of~\cite{Black:2006mn},
\begin{equation} {\mathcal L_{\phi'\phi f_0}} = \frac{\beta}{2}f_0(\phi'_{\nu,\mu}-\phi'_{\mu,\nu})(\phi^{\mu,\nu}-\phi^{\nu,\mu}) + \frac{e}{3} \tilde{g} f_\pi^2 A^\mu \phi'_\mu\ , \end{equation}
hadrons behave as pointlike objects, and the prediction for the cross section is much softer than Eq.~(\ref{Glueballscaling}), since $n_i+n_f+L-2=2+2+0-2=2$ (the initial state contains $e^-e^+$ and the final state two pointlike mesons). Thus, up to logarithms, while the softest drop in $\sigma$ that QCD supports in Eq.~(\ref{Glueballscaling}) at large $s$ is $1/s^3$, with unstructured $\phi$ and $f_0$ the cross section falls as
\begin{equation} \label{eq:hadroncross} \sigma_{\rm hadron}(e^-e^+\to \phi f_0) \propto \frac{1}{s}\ . \end{equation}
This behavior provides the experimental null hypothesis (no access to the meson's internal structure): as long as the cross section drops following the $1/s$ behavior of Eq.~(\ref{eq:hadroncross}), production is still low-energy and probes the hadron as a whole. Only once $\sigma$ drops as $1/s^3$ or faster can one access the intrinsic QCD counting.

\subsection{Multibeam analysis to search for the $0^{-+}$ glueball (and other $\eta$-like mesons)}

In addition to its interest for the axial anomaly, commented on in subsection~\ref{subsubsec:anomaly}, the pseudoscalar glueball is sensitive to the three-gluon scattering kernel $V^{\mu\rho\sigma}$ that extends the three-gluon vertex of pQCD to the nonperturbative, strong-coupling regime~\cite{Souza:2019ylx}, so that finding its mass would immediately constrain the integrated strength of that function of the gluon momenta, of interest for Dyson-Schwinger studies. But the spectrum of pseudoscalar, isospin-singlet mesons in the relevant mass region, around and above 2 GeV, is much less understood than the scalar one in the one-and-a-half GeV mass range, though there are several $\eta$-like pseudoscalar mesons below 2 GeV.
The $\eta$ and $\eta'$, mixed and influenced by the anomaly as they seem to be, are clearly markers of, and presumably seeded by, the flavor-nonet (octet+singlet) representation characteristic of $q\bar{q}$ mesons. The next three possible states are the $\eta(1295)$, $\eta(1405)$ and $\eta(1475)$. The lightest, $\eta(1295)$, is almost degenerate with the $\pi(1300)$, which suggests an ideally mixed $(u\bar{u}+d\bar{d})/\sqrt{2}$ configuration, with the $s\bar{s}$ remainder at higher mass (see the minireview in \cite{Zyla:2020zbs}), all corresponding to a radially excited quark-model nonet. Which additional $\eta$ meson completes the nonet is more disputed. The proposal~\cite{Albaladejo:2010tj,Liang:2013yta} that the $\eta(1475)$ can be explained as a molecular-type state of composition $\eta K\bar{K}$, based on the strong binding found in this channel (but not in $\eta'K\bar{K}$), would leave the lighter $\eta(1405)$ as the other largely $q\bar{q}$ state. However, the dominant decays of the higher $\eta(1475)$ state, matching those of an $s\bar{s}$ configuration, suggest that it is the middle one that is supernumerary, and since the 1980s it has been studied as a possible glueball candidate (see {\it e.g.}~\cite{Masoni:2006rz}). But its mass matches neither the lattice gauge theory predictions for the pseudoscalar glueball mass, which put it in the 2 GeV region, nor those of several other approaches (such as the Coulomb-gauge computations cited, which require paying the energy cost of a $p$-wave, or the AdS-CFT conjecture, which would make it nearly degenerate with the $2^{++}$ state, among others). Moreover, some authors~\cite{Wu:2012pg} interpret the evidence as indicating only one pseudoscalar state instead of two: this would be the traditional $\eta(1440)$, and there would be no supernumerary state in this mass region. At yet higher energy, there could be two broad structures, the $\eta(1760)$ (with $\Gamma\sim O(250)$ MeV) and the $\eta(2225)$ (with $\Gamma\sim O(200)$ MeV). Whether either of these, particularly the higher one, has anything to do with the pseudoscalar glueball remains to be seen. To produce pseudoscalar mesons in this mass range, $J/\psi$ radiative decays are not a good tool, since $J^{PC}$ conservation in an $s$-wave decay, $1^{--}\to 0^{-+} + 1^{+-}$, cannot be exploited: there is no $1^{+-}$ meson below 1 GeV that would leave enough phase space for the high-mass $\eta$ spectrum. The $\psi(3686)$ decays are not promising either because, though the $h_1(1170)$ is light enough to leave the needed phase space, it is very broad, making the multibody reconstruction difficult. Belle II could profit from its higher center-of-mass energy and attempt the analysis of the decay chain
\begin{equation} \Upsilon(4S) (1^{--}) \to X(0^{-+}) + h_c(1P) \end{equation}
in which the pseudoscalar $X$, perhaps not reconstructed but with its spectrum obtained by the recoil mass technique~\cite{Pakhlov:2009nj} (measuring the rest of the reaction), would contain the glueball and any other mesons in that energy range. The narrow $1P$ charmonium state, with $\Gamma\simeq 0.7$ MeV, and the ample phase space would work in favor of the search. Reconstructing the $h_c$, however, is not so straightforward, because its dominant decay mode $\gamma \eta_c(1S)\to \gamma \eta/\eta' \pi\pi$ has to confront the 30 MeV-broad $\eta_c$. The other promising avenue is to use an $e^-e^+$ machine as a photon-photon collider,
\begin{equation} e^-e^+ \to e^-e^+ + {\gamma\gamma\to \rm hadrons}\ .
\end{equation}
The quantum number combinations that appear in this reaction with the leptons tagged are of course similar to those of the glueball spectrum: $0^{++}$, $2^{++}$ ($s$-wave), $0^{-+}$ ($p$-wave), \dots{} Belle-II could certainly dedicate some effort to the identification of pseudoscalar mesons~\cite{Shwartz:2019zle} in the 2-3 GeV energy range. In fact, the earlier Belle collaboration carried out a fruitful experimental spectroscopy program based on two-photon physics, though more focused on charmonia~\cite{Uehara:2006cj}. Their copious statistics would allow them to produce $p$-wave states of the $\gamma\gamma$ system, though not the $\eta$-glueball directly (as it is uncharged); but they could also employ the large scalar samples to search for two-pseudoscalar final states, one of them being a $\pi^0$, via $\gamma\gamma\to G(0^{-+})+\pi^0$, as proposed early on by Wakely and Carlson~\cite{Wakely:1991ej}. Finally, an additional, less immediate possibility would be to equip one of the two existing $e^-e^+$ colliders with polarized beams, an upgrade that seems to be under consideration for Belle-II~\cite{Roney:2019til}. Having polarized beams of sufficient purity would hopefully allow one to overcome the overwhelming one-photon $e^-e^+$ annihilation background, which has $J^{PC}=1^{--}$ quantum numbers, exposing the two-photon annihilation reaction $ e^- e^+\arrowvert_{S=0} \to 0^{-+} $. The combination of these three production methods at lepton machines, together with $p\bar{p}$ annihilation at the PANDA experiment~\cite{Boca:2015oza,Brambilla:2014jmp,Belias:2020zwx} or at GlueX in Jefferson Lab~\cite{Gutsche:2016wix}, which will allow the production and careful study of $\eta$-like mesons above 1.9 GeV irrespective of whether their components are charged (the proton and antiproton can annihilate via the strong force), will facilitate mixing studies such as those in subsection~\ref{subsec:symmetrymixing}. This will hopefully allow the identification of the pseudoscalar glueball in a not too distant future. As seen in table~\ref{tab:pseudoscalar}, the $\eta$-glueball will be suppressed in channels preferentially producing charged final states.
\begin{table} \caption{\label{tab:pseudoscalar} A multibeam analysis combining data from different measurements will be essential to eventually separate the pseudoscalar Yang-Mills glueball from other $\eta$-like mesons in the 2 GeV energy region. Because gluons are uncharged, direct production in lepton machines is forbidden unless another hadron populates the final state.} \begin{tabular}{|c|c|c|c|c|c|} \hline Reaction & $p\bar{p}\to 0^{-+}$ & $\gamma\gamma \to 0^{-+}$ & $\gamma \gamma \to 0^{-+}0^{-+}$ & (polarized) $e^-e^+\to 0^{-+}$ & $\Upsilon\to 0^{-+}+h_c$\\ \hline $q\bar{q}$, $q\bar{q}g$ \dots & \checkmark & \checkmark & \checkmark & \checkmark & \checkmark \\ Glueball & \checkmark & {\large $\times$} & \checkmark & {\large $\times$} & \checkmark\\ \hline \end{tabular} \end{table}

\section{Conclusions}

Gluonium states, or glueballs, are a doubtlessly attractive piece of physics: a dense, self-bound, matter-like state made of pure radiation, without fermions seeding it. Nature has so far not offered us another example of this configuration~\footnote{Graviton-graviton scattering is a very long shot~\cite{Blas:2020dyg}.}. Glueballs have been searched for, and not unmistakably identified, for over four decades.
Nevertheless, searching for them has been, and remains, an inspiring quest to understand hadrons, and it is worth carrying on: the data obtained and the analysis methods employed are among the activities keeping hadron physics fascinating. This search for Ithaca seems at least to have found a coast. Many colleagues concur that a large part of the scalar glueball configuration, in spite of mixing, is to be found in the $f_0(1710)$, as most of the experimental puzzles can then be resolved~\cite{Cheng:2015iaa}. Therefore, the $f_0(1370)$ and $f_0(1500)$ may have a small glueball component, but are presumably $q\overline{q}$ quarkonia to a large extent. Under this hypothesis, the $f_0(1710)$ meson should be a starting point for studies of the conformal anomaly, that is, of how the QCD scale arises from the Yang-Mills sector. The two excited states almost certainly below 3 GeV, and thus relevant for light quarks, are the $2^{++}$ and $0^{-+}$ glueballs. They will be hidden among a largely unknown spectrum of $f_2$ and $\eta$-like mesons, respectively. There is a window to the $2^{++}$ glueball in the Pomeron Regge trajectory, on which, the glueball-Pomeron conjecture reasonably maintains, it is the most prominent resonance. However, recent claims that an asymptotic odderon has been detected have cast doubt on this picture, because oddball (odd-$C$ glueball, in contemporary usage, not hybrid meson) computations would predict a subleading odderon-like trajectory, that is, asymptotically equal $\sigma_{pp}$ and $\sigma_{\bar{p}p}$. Both this glueball and the pseudoscalar one, which might open a window onto the axial anomaly, are expected to be broader than the ground-state scalar one because of the many open hadron channels (no wavefunction suppression in the final state), with the flux tube model predicting $\Gamma \propto M$; their widths would thus likely be in the $O(250-350)$ MeV range rather than $O(100-200)$ MeV. Finding them might amount to clarifying the entire spectrum in that mass region, just as for the $0^{++}$ one. Additionally, the pseudoscalar glueball does not have such easily identifiable decay channels as the positive-parity ones. Therefore, a promising detection strategy is the recoil mass technique, identifying, for example, a primary $\pi^0$ meson against which the glueball may recoil. Because other $\eta$-like mesons can behave in the same manner, a multibeam analysis comparing the same spectrum sourced from different initial states will help sort out which of the states have electric charge (and thus quarks) in their configurations. For the $0^{++}$ and $2^{++}$ glueballs, we can additionally exploit, in future experiments, the fact that they are produced at the lowest twist in pQCD: their leading Fock-space component contains only two particles with no relative orbital angular momentum, so they dominate over other mesons with the same quantum numbers in asymptotic production. Belle-II could study exclusive $f_0$ and $f_2$ production against, for example, a $\phi$ or similar meson, and from the cross-section falloff help identify which mesons have the largest glueball component. Gluonium will doubtlessly remain an object of study for years to come.
\section*{Funding acknowledgment} This publication is supported by EU Horizon 2020 research and innovation programme, STRONG-2020 project, under grant agreement No 824093; grants MINECO:FPA2016-75654-C2-1-P, MICINN: PID2019-108655GB-I00, PID2019-106080GB-C21 (Spain); Universidad Complutense de Madrid under research group 910309 and the IPARCOS institute.
\section{Introduction} \renewcommand{\thesection}{\arabic{section}} \renewcommand{\theequation}{\thesection.\arabic{equation}} \setcounter{equation}{0} In recent years, the study of submanifolds of constant curvature in product manifolds has attracted many geometers' attention. For instance, Hopf in 1955 discovered that the complexification of the traceless part of the second fundamental form of an immersed surface $\mathcal{U}^{2}$ with CMC $H$ in $\mathbb{R}^3$ is a holomorphic quadratic differential $Q$ on $\mathcal{U}^{2}$, and he then used this observation to obtain his well-known conclusion that any immersed CMC sphere $\mathbb{S}^{2}\hookrightarrow\mathbb{R}^3$ is a standard distance sphere with radius $1/H$. By introducing a generalized quadratic differential $\widetilde{Q}$ for immersed surfaces $\mathcal{U}^{2}$ in the product spaces $\mathbb{S}^{2}\times\mathbb{R}$ and $\mathbb{H}^{2}\times\mathbb{R}$, with $\mathbb{S}^{2}$, $\mathbb{H}^{2}$ the $2$-dimensional sphere and hyperbolic plane respectively, Abresch and Rosenberg \cite{ar} were able to extend Hopf's result to CMC spheres in these target spaces. Meeks and Rosenberg \cite{mr} successfully classified stable properly embedded orientable minimal surfaces in the product space $N\times\mathbb{R}$, where $N$ is a closed orientable Riemannian surface. In fact, they proved that such a surface must be a product of a stable embedded geodesic on $N$ with $\mathbb{R}$, a minimal graph over a region of $N$ bounded by stable geodesics, $N\times\{t\}$ for some $t\in\mathbb{R}$, or lie in a moduli space of periodic multigraphs parameterized by $P\times\mathbb{R}^{+}$, where $P$ is the set of primitive (non-multiple) homology classes in $H_{1}(N)$. Mazet, Rodr\'{\i}guez and Rosenberg \cite{lmh} analyzed properties of periodic minimal or CMC surfaces in the product manifold $\mathbb{H}^{2}\times\mathbb{R}$, and they also constructed examples of periodic minimal surfaces in $\mathbb{H}^{2}\times\mathbb{R}$. In \cite{hfj}, Rosenberg, Schulze and Spruck showed that a properly immersed minimal hypersurface in $N\times\mathbb{R}^{+}$ equals some slice $N\times\{c\}$ when $N$ is a complete, recurrent $n$-dimensional Riemannian manifold with bounded curvature. Very recently, Gao, Mao and Song \cite{gms} proved the existence and uniqueness of solutions to the CMC equation with nonzero Neumann boundary data in the product manifold $N^{n}\times\mathbb{R}$, where $N^{n}$ is an $n$-dimensional ($n\geq2$) complete Riemannian manifold with nonnegative Ricci curvature. Equivalently, this conclusion gives the existence of CMC graphic hypersurfaces defined over a compact strictly convex domain $\Omega\subset N^{n}$ and having nonvanishing contact angle. Of course, for more information, readers can consult these papers and the references therein. Hence, it is interesting and important to consider submanifolds of constant curvature in product manifolds of type $N^{n}\times\mathbb{R}$. Inspired by Shahriyari's progress on complete translating graphs in $\mathbb{R}^{3}$ (see \cite{ls} for details) and by the Jenkins-Serrin theory on minimal and CMC graphs, Zhou \cite{hyz} considered complete translating, minimal and CMC graphs in the $3$-dimensional product manifold $N^{2}\times\mathbb{R}$ over a domain $\Omega\subset N^{2}$, where $N^{2}$ is a complete Riemannian surface, and successfully described the boundary behavior of $\Omega$.
This conclusion extends some of Shahriyari's results in \cite{ls} from the Euclidean $3$-space $\mathbb{R}^{3}$ to the setting of the $3$-dimensional product space $N^{2}\times\mathbb{R}$. \emph{Stability} plays an important role in the study of minimal or CMC hypersurfaces in Euclidean space or, more generally, in product manifolds. For instance, under a stability assumption, nice curvature estimates or classification results for minimal or CMC surfaces can be obtained -- see, e.g., \cite{cm1,cm2,mr,rs,srz,hyz}. The famous Bernstein theorem (which holds only for $n\leq7$) in Euclidean space says that entire nonparametric minimal hypersurfaces in $\mathbb{R}^{n+1}$, $n\leq7$, are hyperplanes (see \cite{ssy}). Calabi \cite{ec} (for $n\leq4$) and Cheng-Yau \cite{cy} (for all $n$) proved that a complete maximal space-like hypersurface in the flat Lorentz-Minkowski $(n+1)$-space $\mathbb{L}^{n+1}\equiv\mathbb{R}^{n}_{1}$ is totally geodesic. In particular, the only entire nonparametric maximal space-like hypersurfaces in $\mathbb{R}^{n}_{1}$ are space-like hyperplanes. This interesting example shows that it is meaningful to ask whether classical results in Riemannian geometry (or, specially, in Euclidean space) can be transplanted to pseudo-Riemannian geometry (or, specially, to pseudo-Euclidean space) or not. It also shows that, in some respects, there exists an essential difference between the Euclidean space and the pseudo-Euclidean space. Motivated by this experience, we try to obtain stability conclusions in Lorentz manifolds of type $M^{n}\times\mathbb{R}$. Fortunately, we have obtained one -- see Theorem \ref{THEOREM3.1} for details. In order to state our conclusion clearly, we first need to introduce some notions. Throughout this paper, denote by $M^{n}\times\mathbb{R}$, with the metric $-ds^{2}+\sigma$, an $(n+1)$-dimensional ($n\geq2$) Lorentz manifold, where $M^{n}$ is a complete Riemannian $n$-manifold with the metric $\sigma$. For a domain $\Omega\subset M^{n}$ with piecewise smooth boundary, a \emph{translating space-like graph} in the Lorentz $(n+1)$-manifold $M^{n}\times\mathbb{R}$ is the space-like graph of $u(x)$, where $u(x):\Omega\rightarrow\mathbb{R}$ is a solution of the following mean curvature type equation \begin{eqnarray} \label{trg} \mathrm{div}\left(\frac{Du}{\sqrt{1-|Du|^{2}}}\right)=\frac{c}{\sqrt{1-|Du|^{2}}}, \end{eqnarray} where $D$ is the covariant derivative operator on $M^{n}$, $\mathrm{div}(\cdot)$ denotes the divergence operator, and $c$ is a constant. Translating space-like graphs under the mean curvature flow (MCF for short) in the Lorentz manifold $M^{n}\times\mathbb{R}$ are translating surfaces that can be viewed as the space-like graph of a function over a domain. In fact, let $\{x,u(x)\}$ be a space-like graphic surface defined over $\Omega\subset M^{n}$ in the Lorentz manifold $M^{n}\times\mathbb{R}$; then, since the mean curvature of the space-like surface is (see \cite[Sect. 1]{chmx} and Section \ref{sect3} here for this calculation) \begin{eqnarray*} H=\mathrm{div}\left(\frac{Du}{\sqrt{1-|Du|^{2}}}\right), \end{eqnarray*} the graph of $u$ is a space-like surface translating vertically with constant speed $c$ if and only if $u$ is a solution to equation (\ref{trg}).
Recently, Mao and his collaborators \cite{chmx} showed that, along the nonparametric MCF with prescribed contact angle boundary condition in the Lorentz $3$-manifold $M^{2}\times\mathbb{R}$, if $M^{2}$ has nonnegative Gaussian curvature, then the evolution of space-like graphs over compact strictly convex domains in $M^{2}$ exists for all time and solutions of the flow converge to ones moving only by translation. Translating solutions play an important role in the study of type-II singularities of the MCF. For instance, Angenent and Vel\'{a}zquez \cite{av1,av2} gave some examples of convergence which imply that type-II singularities of the MCF there are modeled by translating surfaces. Denote by $\widetilde{M^{n}\times\mathbb{R}}$ the $(n+1)$-dimensional pseudo-Riemannian manifold \begin{eqnarray*} \{(x,s)|x\in M^{n},s\in\mathbb{R}\} \end{eqnarray*} equipped with the weighted metric $e^{cs}(-ds^{2}+\sigma_{ij}dx^{i}dx^{j})$. Clearly, $\widetilde{M^{n}\times\mathbb{R}}$ can be obtained from the Lorentz $(n+1)$-manifold $M^{n}\times\mathbb{R}$ by a conformal transformation of its Lorentzian metric. Here, we have used the Einstein summation convention, that is, summation is carried out over repeated subscripts and superscripts. In the sequel, without further mention, the Einstein summation convention will always be used. We can prove a stability result for translating space-like graphs as follows: \begin{theorem} \label{THEOREM3.1} Assume that $u(x)$ is a solution to (\ref{trg}). Then $\Sigma=\{(x,u(x))|x\in\Omega\}$ is a stable, maximal space-like graph in $\widetilde{M^{n}\times\mathbb{R}}$. \end{theorem} The paper is organized as follows. In Section \ref{sect2}, some useful formulas for space-like hypersurfaces in a Lorentz manifold are recalled. The proof of Theorem \ref{THEOREM3.1} is given in Section \ref{sect3}. Meanwhile, as a byproduct, a convergence result related to maximal, CMC or translating space-like graphs in Lorentz manifolds is also shown. In Section \ref{sect4}, examples related to the existence of translating space-like graphs in the Lorentz $(n+1)$-manifold $M^{n}\times\mathbb{R}$ are presented. \section{Geometry of space-like hypersurfaces in a Lorentz manifold} \renewcommand{\thesection}{\arabic{section}} \renewcommand{\theequation}{\thesection.\arabic{equation}} \setcounter{equation}{0} \label{sect2} Given an $(n+1)$-dimensional Lorentz manifold $\left(\overline{M}^{n+1},\overline{g}\right)$, with the metric $\overline{g}$, and a space-like hypersurface $M^{n}$ in it, for any $p\in M^{n}$ one can choose a local Lorentzian orthonormal frame field $\{e_{0},e_{1},e_{2},\ldots,e_{n}\}$ around $p$ such that, restricted to $M^{n}$, the vectors $e_{1},e_{2},\ldots,e_{n}$ form orthonormal frames tangent to $M^{n}$. Take the dual coframe field $\{w_{0},w_{1},w_{2},\ldots,w_{n}\}$, so that the Lorentzian metric $\overline{g}$ can be written as $\overline{g}=-w_{0}^{2}+\sum_{i=1}^{n}w_{i}^{2}$.
Making the convention on the range of indices \begin{eqnarray*} 0\leq\alpha,\beta,\gamma,\ldots\leq n; \qquad\qquad 1\leq i,j,k\ldots\leq n, \end{eqnarray*} and taking exterior differentials of the forms $w_{\alpha}$, one can easily get the following structure equations \begin{eqnarray} &&(\mathrm{Gauss~ equation})\qquad \qquad R_{ijkl}=\overline{R}_{ijkl}-(h_{ik}h_{jl}-h_{il}h_{jk}), \label{Gauss}\\ &&(\mathrm{Codazzi~ equation})\qquad \qquad h_{ij,k}-h_{ik,j}=\overline{R}_{0ijk}, \label{Codazzi}\\ &&(\mathrm{Ricci~ identity})\qquad \qquad h_{ij,kl}-h_{ij,lk}=\sum\limits_{m=1}^{n}h_{mj}R_{mikl}+\sum\limits_{m=1}^{n}h_{im}R_{mjkl}, \label{Ricci} \end{eqnarray} and the Laplacian of the second fundamental form $h_{ij}$ of $M^{n}$ as follows \begin{eqnarray} \label{LF} &&\Delta h_{ij}=\sum\limits_{k=1}^{n}\left(h_{kk,ij}+\overline{R}_{0kik,j}+\overline{R}_{0ijk,k}\right)+ \sum\limits_{k=1}^{n}\left(h_{kk}\overline{R}_{0ij0}+h_{ij}\overline{R}_{0k0k}\right)+\nonumber\\ &&\qquad\qquad \sum\limits_{m,k=1}^{n}\left(h_{mj}\overline{R}_{mkik}+2h_{mk}\overline{R}_{mijk}+h_{mi}\overline{R}_{mkjk}\right)\nonumber\\ &&\qquad\quad -\sum\limits_{m,k=1}^{n}\left(h_{mi}h_{mj}h_{kk}+h_{km}h_{mj}h_{ik}-h_{km}h_{mk}h_{ij}-h_{mi}h_{mk}h_{kj}\right), \end{eqnarray} where $R$ and $\overline{R}$ are the curvature tensors of $M^{n}$ and $\overline{M}^{n+1}$ respectively, $A:=h_{ij}w_{i}w_{j}$ is the second fundamental form with $h_{ij}$ the coefficient components of the tensor $A$, $\Delta$ is the Laplacian on the hypersurface $M^{n}$, and, as usual, the comma ``,'' in the subscript of a given tensor denotes covariant differentiation -- this convention will also be used in the sequel. For a detailed derivation of the above formulae, we refer readers to, e.g., \cite[Section 2]{hzl}. Clearly, in our setting, all formulas mentioned in this section can be used directly with $\overline{M}^{n+1}=M^{n}\times\mathbb{R}$. \section{Stability} \renewcommand{\thesection}{\arabic{section}} \renewcommand{\theequation}{\thesection.\arabic{equation}} \setcounter{equation}{0} \label{sect3} Similarly to the calculation in \cite[Sect. 1]{chmx}, for the space-like graph $\Sigma=\{(x,u(x))|x\in\Omega\}$, defined over $\Omega\subset M^{n}$, in the Lorentz $(n+1)$-manifold $M^{n}\times\mathbb{R}$ with the metric $\overline{g}:=\sigma_{ij}dw^{i}\otimes dw^{j}-ds\otimes ds$, the tangent vectors are given by \begin{eqnarray*} X_{i}=\partial_{i}+D_{i}u\partial_{s}, \qquad i=1,2,\ldots,n, \end{eqnarray*} and the corresponding upward unit normal vector is given by \begin{eqnarray*} \vec{v}=\frac{1}{\sqrt{1-|Du|^2}}\left(\partial_s+D^{j} u\partial_{j}\right), \end{eqnarray*} where $D^{j}u=\sigma^{ij}D_{i}u$. Denote by $\overline{\nabla}$ the gradient operator on $M^{n}\times\mathbb{R}$; then the second fundamental form $h_{ij}dw^{i}\otimes dw^{j}$ of $\Sigma$ is given by \begin{eqnarray*} h_{ij}=-\langle\overline{\nabla}_{X_i} X_j,\vec{v}\rangle=\frac{1}{\sqrt{1-|Du|^2}}D_{i}D_{j}u. \end{eqnarray*} Moreover, the scalar mean curvature of $\Sigma$ is \begin{eqnarray} \label{hf} \qquad H=\sum_{i=1}^{n}h^i_i=\frac{1}{\sqrt{1-|Du|^2}}\left(\sum_{i,k=1}^{n}g^{ik}D_{k}D_{i}u\right)&=& \frac{\sum_{i,k=1}^{n}\left(\sigma^{ik}+\frac{D^{i}uD^{k}u}{1-|Du|^{2}}\right)D_{k}D_{i}u}{\sqrt{1-|Du|^2}}\nonumber\\ &=&\mathrm{div}\left(\frac{Du}{\sqrt{1-|Du|^{2}}}\right), \end{eqnarray} where $g^{ik}$ is the inverse of the induced Riemannian metric $g$ on the space-like graph $\Sigma$.
Denote by $\Theta$ the angle function of $\Sigma$; then, using (\ref{trg}), the above equality can be written equivalently as \begin{eqnarray} \label{c-1} H=-c\Theta=-c\langle\vec{v},\partial_{s}\rangle. \end{eqnarray} \begin{proof} [Proof of Theorem \ref{THEOREM3.1}] The area functional of $\widetilde{M^{n}\times\mathbb{R}}$ is given by \begin{eqnarray*} F(\Sigma)=\int_{\Sigma}e^{cs}d\mu, \end{eqnarray*} where $d\mu$ is the volume element of $\Sigma$ with respect to the metric $g$ induced from the Lorentz $(n+1)$-manifold $M^{n}\times\mathbb{R}$. Let $\Sigma_{r}$ be a family of surfaces satisfying \begin{eqnarray} \label{cf} \frac{\partial\Sigma_{r}}{\partial r}\Bigg{|}_{r=0}=\phi\vec{v}~~~~\mathrm{with}~~~~\Sigma_{0}=\Sigma, \end{eqnarray} where $\phi(x)$ is a smooth function defined on $\Sigma$ with compact support. Treating $\Sigma_{r}$ as a flow of $\Sigma$ in the Lorentz $(n+1)$-manifold $M^{n}\times\mathbb{R}$, one obtains by direct calculation: \begin{lemma} \label{lemma3.1} Along the flow (\ref{cf}), we have \begin{eqnarray} \label{c-2} \begin{split} &\frac{\partial\vec{v}}{\partial r}\Bigg{|}_{r=0}=\nabla\phi,\\ &\frac{\partial H}{\partial r}\Bigg{|}_{r=0}=\Delta\phi-\left(|A|^{2}+\overline{\mathrm{Ric}}(\vec{v},\vec{v})\right)\phi, \end{split} \end{eqnarray} where, following the convention used in Section \ref{sect2}, $\nabla$ and $\Delta$ denote the covariant derivative and the Laplacian of $\Sigma$ respectively, and $\overline{\mathrm{Ric}}(\cdot,\cdot)$ stands for the Ricci tensor of the ambient space $M^{n}\times\mathbb{R}$. \end{lemma} \begin{proof} First, we have \begin{eqnarray*} \begin{split} \frac{\partial\vec{v}}{\partial r}\Bigg{|}_{r=0}&=\left\langle\frac{\partial\vec{v}}{\partial{r}},(\Sigma_{r})_{,i}\right\rangle g^{ik}(\Sigma_{r})_{,k}\Bigg{|}_{r=0}\\ &= -\left\langle\vec{v},(\phi\vec{v})_{,i}\right\rangle g^{ik}(\Sigma_{r})_{,k}\Bigg{|}_{r=0}\\ &=\phi_{,i}g^{ik}(\Sigma_{r})_{,k}\Bigg{|}_{r=0}=\nabla\phi, \end{split} \end{eqnarray*} where, following the convention used in Section \ref{sect2}, $(\cdot)_{,k}$ means taking the covariant derivative with respect to the tangent vector $X_{k}$ on the translating space-like graph $\Sigma$. Second, we have \begin{eqnarray*} \begin{split} \frac{\partial g_{lm}}{\partial r}\Bigg{|}_{r=0}&=\frac{\partial}{\partial r}\left\langle(\Sigma_{r})_{,l},(\Sigma_{r})_{,m}\right\rangle\Bigg{|}_{r=0}\\ &= 2\left\langle(\phi\vec{v})_{,l},(\Sigma_{r})_{,m}\right\rangle\Bigg{|}_{r=0}\\ &= -2\phi\left\langle\vec{v},(\Sigma_{r})_{,lm}\right\rangle\Bigg{|}_{r=0}=2\phi h_{lm}, \end{split} \end{eqnarray*} and \begin{eqnarray*} \begin{split} \frac{\partial h_{ij}}{\partial r}\Bigg{|}_{r=0}&=-\frac{\partial}{\partial r}\left\langle\vec{v},(\Sigma_{r})_{,ij}\right\rangle\Bigg{|}_{r=0}\\ &= -\left\langle\phi_{,l}(\Sigma_{r})_{,m}g^{ml},(\Sigma_{r})_{,ij}\right\rangle\Bigg{|}_{r=0} -\left\langle\vec{v},(\phi\vec{v})_{,ij}\right\rangle\Bigg{|}_{r=0}\\ &= -\left\langle\phi_{,l}(\Sigma_{r})_{,m}g^{ml},\Gamma_{ij}^{k}(\Sigma_{r})_{,k}+h_{ij}\vec{v}\right\rangle\Bigg{|}_{r=0}-\left\langle\vec{v},\left(\phi_{,j}\vec{v}+\phi h_{jl}g^{lm}(\Sigma_{r})_{,m}\right)_{,i}\right\rangle\Bigg{|}_{r=0}\\ &= -\Gamma_{ij}^{k}\phi_{,k}+\phi_{,ij}+\phi h_{jl}g^{lm}h_{im}\\ &= \nabla_{i}\nabla_{j}\phi+\phi h_{il}g^{lm}h_{mj}, \end{split} \end{eqnarray*} where, as usual, $\Gamma^{k}_{ij}$ denote the Christoffel symbols determined by the metric $g$.
By \eqref{Gauss}, \eqref{Codazzi}, \eqref{LF} and Simons' identity applied to $\phi$, we have \begin{eqnarray*} g^{ij}\frac{\partial h_{ij}}{\partial r}\Bigg{|}_{r=0}=\Delta\phi+|A|^{2}\phi-\overline{\mathrm{Ric}}\left(\vec{v},\vec{v}\right)\phi, \end{eqnarray*} and then \begin{eqnarray*} \begin{split} \frac{\partial H}{\partial r}\Bigg{|}_{r=0}&=\frac{\partial}{\partial r}\left(g^{ij}h_{ij}\right)\Bigg{|}_{r=0}\\ &= -g^{il}\frac{\partial g_{lm}}{\partial r}|_{r=0}g^{mj}h_{ij}+g^{ij}\frac{\partial h_{ij}}{\partial r}|_{r=0}\\ &= -2\phi|A|^{2}+\Delta\phi+|A|^{2}\phi-\overline{\mathrm{Ric}}\left(\vec{v},\vec{v}\right)\phi\\ &= \Delta\phi-\left(|A|^{2}+\overline{\mathrm{Ric}}\left(\vec{v},\vec{v}\right)\right)\phi. \end{split} \end{eqnarray*} This completes the proof of Lemma \ref{lemma3.1}. \end{proof} By \eqref{c-1} and \eqref{c-2}, it is not hard to obtain \begin{eqnarray} \label{c-3} \begin{split} &\frac{\partial F(\Sigma_{r})}{\partial r}\Bigg{|}_{r=0}=\int_{\Sigma}\phi\left(H+c\left\langle\vec{v},\partial_{s}\right\rangle\right)e^{cs}d\mu=0,\\ &\frac{\partial^{2}F(\Sigma_{r})}{\partial^{2}r}\Bigg{|}_{r=0}=\int_{\Sigma}\phi\left[\Delta\phi-\left(|A|^{2}+\overline{\mathrm{Ric}}(\vec{v},\vec{v})\right)\phi+c\langle\nabla\phi,\partial_{s}\rangle\right]e^{cs}d\mu. \end{split} \end{eqnarray} Define an elliptic operator $L$ as follows: \begin{eqnarray} \label{c-4} L\phi=\Delta\phi-\left(|A|^{2}+\overline{\mathrm{Ric}}(\vec{v},\vec{v})\right)\phi+c\left\langle\nabla\phi,\partial_{s}\right\rangle. \end{eqnarray} Therefore, substituting (\ref{c-4}) into the second equality of (\ref{c-3}) yields \begin{eqnarray} \label{c-5} \frac{\partial^{2}F(\Sigma_{r})}{\partial^{2}r}\Bigg{|}_{r=0}=\int_{\Sigma}\phi L\phi e^{cs}d\mu. \end{eqnarray} Now we only need to show that the right-hand side of (\ref{c-5}) is non-positive. Since $\Sigma$ is a space-like graph, its angle function satisfies $\Theta=\langle\vec{v},\partial_{s}\rangle<0$. Thus we can write $\phi=\eta\Theta$, where $\eta$ is another compactly supported function on $\Sigma$. Then it follows that \begin{eqnarray} \label{c-6} \phi L\phi=\eta\Theta\left(\eta L\Theta+\Theta\Delta\eta+2\langle\nabla\eta,\nabla\Theta\rangle+c\Theta\langle\nabla\eta,\partial_{s}\rangle\right). \end{eqnarray} The reason we adopt this form is the following general formula for $\Delta\Theta$. \begin{lemma}\label{LEMMA3.2} For any $C^{2}$ space-like hypersurface $S$ in the Lorentz $(n+1)$-manifold $M^{n}\times\mathbb{R}$, it holds that \begin{eqnarray} \label{c-7} \Delta\Theta-\left(|A|^{2}+\overline{\mathrm{Ric}}(\vec{v},\vec{v})\right)\Theta-\langle\nabla H,\partial_{s}\rangle=0, \end{eqnarray} where $A$ is the second fundamental form of $S$. \end{lemma} \begin{proof} Fix a point $p\in S$. Choose an orthonormal frame field $\{e_{1},e_{2},\ldots,e_{n}\}$ on $S$ such that $\nabla_{e_{i}}e_{j}(p)=0$ and $\langle e_{i},e_{j}\rangle=\delta_{ij}$. Then $\overline{\nabla}_{e_{i}}e_{j}(p)=h_{ij}\vec{v}$, where, following the convention used in Section \ref{sect2}, $\overline{\nabla}$ denotes the covariant derivative of the ambient space $M^{n}\times\mathbb{R}$ and $\vec{v}$ is the unit normal vector of $S$. It is easy to see that for any smooth vector field $X$, $\overline{\nabla}_{X}\partial_{s}=0$.
By direct calculation, one has \begin{eqnarray} \label{c-8} \Delta\Theta(p)&=\nabla_{e_{i}}\nabla_{e_{i}}\langle\partial_{s},\vec{v}\rangle-\nabla_{\nabla_{e_{i}}e_{i}}\Theta(p)\nonumber\\ &= e_{i}\langle\partial_{s},h_{ik}e_{k}\rangle(p)\nonumber\\ &= h_{ik,i}\langle\partial_{s},e_{k}\rangle+|A|^{2}\Theta. \end{eqnarray} Using the Codazzi equation (\ref{Codazzi}) directly yields \begin{eqnarray*} \label{c-9} h_{ik,i}=h_{ii,k}+\overline{R}_{0iki}. \end{eqnarray*} Hence, it gives \begin{eqnarray} \label{c-10} h_{ik,i}\langle\partial_{s},e_{k}\rangle=\langle\nabla H,\partial_{s}\rangle+\overline{\mathrm{Ric}}\left(\vec{v},\langle\partial_{s},e_{k}\rangle e_{k}\right). \end{eqnarray} Since $\overline{\nabla}_{X}\partial_{s}=0$ for any vector $X$, we know \begin{eqnarray*} \label{c-11} \langle\partial_{s},e_{k}\rangle e_{k}=\partial_{s}+\Theta\vec{v} \end{eqnarray*} and $\overline{\mathrm{Ric}}(\vec{v},\partial_{s})=0$. Putting these two facts into (\ref{c-10}) implies \begin{eqnarray*} h_{ik,i}\langle\partial_{s},e_{k}\rangle=\langle\nabla H,\partial_{s}\rangle+\overline{\mathrm{Ric}}(\vec{v},\vec{v})\Theta. \end{eqnarray*} The assertion of the lemma follows by combining the above equality with \eqref{c-8} directly. \end{proof} Let us go back to the proof of Theorem \ref{THEOREM3.1}. Since $\Sigma$ is a translating space-like graph in the Lorentz $(n+1)$-manifold $M^{n}\times\mathbb{R}$, one has $H=-c\Theta$, and then \eqref{c-7} can be rewritten as \begin{eqnarray*} \label{c-12} L\Theta=0. \end{eqnarray*} Therefore \eqref{c-6} becomes \begin{eqnarray*} \label{c-13} \phi L\phi=\eta\Theta\left(\Theta\Delta\eta+2\langle\nabla\eta,\nabla\Theta\rangle+c\Theta\langle\nabla\eta,\partial_{s}\rangle\right). \end{eqnarray*} On the other hand, the divergence of $\eta\Theta^{2}\nabla\eta\, e^{cs}$ is \begin{eqnarray} \label{c-14} \begin{split} \mathrm{div}\left(\eta\Theta^{2}\nabla\eta e^{cs}\right)&=\eta\Theta e^{cs}\left(\Theta\Delta\eta+2\langle\nabla\eta,\nabla\Theta\rangle+c\Theta\langle\nabla\eta,\partial_{s}\rangle\right)+\Theta^{2}|\nabla\eta|^{2}e^{cs}\\ &= \phi e^{cs}L\phi+\Theta^{2}|\nabla\eta|^{2}e^{cs}. \end{split} \end{eqnarray} Combining (\ref{c-14}) with \eqref{c-5} and applying the divergence theorem results in \begin{eqnarray*} \frac{\partial^{2}F(\Sigma_{r})}{\partial^{2}r}\Bigg{|}_{r=0}=-\int_{\Sigma}\Theta^{2}|\nabla\eta|^{2}e^{cs}d\mu\leq0. \end{eqnarray*} We then conclude that the translating space-like graph $\Sigma$ is stable and maximal in $\widetilde{M^{n}\times\mathbb{R}}$. \end{proof} Applying Lemma \ref{LEMMA3.2}, we can obtain the following interesting rigidity result. \begin{theorem}\label{THEOREM3.3} Let $\{\Sigma_{n}\}_{n=1}^{\infty}$ be a sequence of smooth connected space-like graphs in the Lorentz $(n+1)$-manifold $M^{n}\times\mathbb{R}$ with diameter $\varrho$, converging uniformly in the $C^{2}$ sense to a connected space-like hypersurface $\Sigma$. If all the $\Sigma_{n}$ are translating space-like graphs, then in the interior of $\Sigma$ the angle function $\Theta$ satisfies either $\Theta<0$ or $\Theta\equiv0$. The conclusion also holds in the case of maximal or CMC space-like graphs. \end{theorem} \begin{proof} Without loss of generality, we may assume $\Theta<0$ on all the $\Sigma_{n}$. By continuity, we know that in the interior of all the $\Sigma_{n}$, $|A|^{2}<\beta_{1}$ holds for some positive constant $\beta_{1}$ depending only on $M^{n}$. First, assume that the $\Sigma_{n}$ are maximal or CMC space-like graphs. Then $\nabla H\equiv0$.
By Lemma \ref{LEMMA3.2}, we have \begin{eqnarray} \label{c-15} \Delta\Theta-\left(|A|^{2}+\overline{\mathrm{Ric}}(\vec{v},\vec{v})\right)\Theta=0 \end{eqnarray} on all the $\Sigma_{n}$. Since \begin{eqnarray*} \overline{\mathrm{Ric}}(\vec{v},\vec{v})=\frac{u_{,k}^{2}(\Gamma_{kk,i}^{i}+\Gamma_{kk}^{l}\Gamma_{il}^{i}-\Gamma_{ik,k}^{i}-\Gamma_{ik}^{l}\Gamma_{kl}^{i})}{1-|Du|^{2}}, \quad i,k,l=1,2,\ldots,n, \end{eqnarray*} there exists a positive constant $\beta_{2}$ depending only on $M^{n}$ such that $\overline{\mathrm{Ric}}(\vec{v},\vec{v})\leq\beta_{2}$ in the interior of all the $\Sigma_{n}$. By (\ref{c-15}) we have $\Delta\Theta\geq\left(\beta_{1}+\beta_{2}\right)\Theta$ on all the $\Sigma_{n}$. Because $\Sigma$ is the $C^{2}$ uniform limit of the $\Sigma_{n}$ as $n\rightarrow\infty$, it follows that $\Theta\leq0$ and $\Delta\Theta\geq(\beta_{1}+\beta_{2})\Theta$ on $\Sigma$. By the strong maximum principle for second-order elliptic equations, we obtain that $\Theta\equiv0$ or $\Theta<0$ on $\Sigma$. Second, assume that the $\Sigma_{n}$ are translating space-like graphs. Then $H\equiv-c\Theta$ by \eqref{c-1}. A similar argument gives \begin{eqnarray*} \Delta\Theta\geq(\beta_{1}+\beta_{2})\Theta-c\left\langle\nabla\Theta,\partial_{s}\right\rangle \end{eqnarray*} on all the $\Sigma_{n}$. Based on the strong maximum principle and the fact that $\Theta\leq0$ on $\Sigma$, we again have $\Theta\equiv0$ or $\Theta<0$ on $\Sigma$. \end{proof} \section{Examples of translating space-like graphs} \renewcommand{\thesection}{\arabic{section}} \renewcommand{\theequation}{\thesection.\arabic{equation}} \setcounter{equation}{0} \label{sect4} In this section, we construct some examples of translating space-like graphs to the MCF when the manifold $M^{n}$ contains a domain with a certain warped product structure. Suppose that $M^{n}$ is an $n$-dimensional ($n\geq2$) complete Riemannian manifold with a metric $\sigma$ containing a domain $M_{0}^{n}$ equipped with the following coordinate system: \begin{eqnarray} \label{f-1} \left\{\theta=(\theta_{2},\theta_{3},\ldots,\theta_{n})\in \mathbb{S}^{n-1}, r\in[0,r_{0})\right\}~~~~\mathrm{with}~~~~\sigma=dr^{2}+h^{2}(r)d\theta^{2}, \end{eqnarray} where $d\theta^{2}$ is the round metric on the unit $(n-1)$-sphere $\mathbb{S}^{n-1}$, and $h(r)$ is a positive function satisfying $h(0)=0$, $h'(0)=1$ and $h'(r)\neq0$ for all $r\in(0,r_{0})$. Now, with the help of the examples constructed below, we can demonstrate the existence of translating space-like graphs in the Lorentz $(n+1)$-manifold $M^{n}\times\mathbb{R}$ with the structure \eqref{f-1} and the metric $\overline{g}$. \begin{theorem}\label{THEOREM6.1} Let $M^{n}$ be a complete Riemannian $n$-manifold as above. Let $u(r):[0,r_{0})\rightarrow\mathbb{R}$ be a $C^{2}$ solution of the following ordinary differential equation (ODE for short) \begin{eqnarray} \label{f-2} \frac{u_{rr}}{1-u_{r}^{2}}+(n-1)\frac{h'(r)}{h(r)}u_{r}=c, \end{eqnarray} with $u_{r}(0)=0$ and $|u_{r}|<1$ for $r\in[0,r_{0})$. Then $\Sigma=(x,u(r))$, $r\in[0,r_{0})$, is a translating space-like graph in the Lorentz $(n+1)$-manifold $M^{n}\times\mathbb{R}$, where $x=(r,\theta)\in M_{0}^{n}$ is given by \eqref{f-1}. If $r_{0}=\infty$, then $\Sigma$ is complete. \end{theorem} \begin{remark}\label{LEMARK6.2} \rm{Clearly, \eqref{f-2} is a second-order ODE whose second-order coefficient does not degenerate under the assumption $|u_{r}|<1$; the existence of its solution is obvious.} \end{remark}
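Although we do not pursue it here, the solution of \eqref{f-2} is also easy to visualize numerically. The following minimal sketch (the choices $h(r)=r$, $n=3$, $c=1$ and the integration range are purely illustrative and not part of the statement above) integrates the first-order system for $v=u_{r}$, starting slightly off $r=0$ with the series behavior $v(r)\approx cr/n$ that regularizes the $(n-1)h'v/h$ term at the origin; the space-like condition $|u_{r}|<1$ is seen to persist:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

n, c = 3, 1.0                                  # illustrative choices

def rhs(r, v):
    # v = u_r; for h(r) = r the ODE (f-2) reads
    # v' = (1 - v^2) * (c - (n - 1) * v / r)
    return [(1.0 - v[0]**2) * (c - (n - 1) * v[0] / r)]

eps = 1e-6                                     # start off r = 0 with v ~ c r / n
sol = solve_ivp(rhs, (eps, 10.0), [c * eps / n], rtol=1e-10, atol=1e-12)

v = sol.y[0]
u = np.concatenate(([0.0],                     # recover u by trapezoidal sums
        np.cumsum(np.diff(sol.t) * 0.5 * (v[1:] + v[:-1]))))
print(v.max() < 1.0, v.max())                  # |u_r| < 1 holds; v tends to 1-
\end{verbatim}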
\begin{proof} If $r_{0}=\infty$, then $M_{0}^{n}$ is simply connected and must be the whole of $M^{n}$, so $\Sigma$ is complete. In the rest of the proof, we show that $\Sigma$ is a translating space-like graph. By \eqref{c-1}, it suffices to derive the identity \begin{eqnarray} \label{f-3} H=-c\Theta, \end{eqnarray} where $H$ is the mean curvature of $\Sigma$ and $\vec{v}$ is its upward unit normal vector. Fix a point $(x,u(x))$ on $\Sigma$, where $x\in M_{0}^{n}$ and the polar coordinate of $x$ in $M_{0}^{n}$ is not $(0,0,\ldots,0)$. Clearly, the polar coordinate system on $M_{0}^{n}$ given by \eqref{f-1} naturally determines a frame field $\{\partial_{r},\partial_{\theta_{2}},\ldots,\partial_{\theta_{n}}\}$. For the space-like graph $\Sigma$ determined by $u(x)=u(r)$ in the Lorentz $(n+1)$-manifold $M^{n}\times\mathbb{R}$, denote by $u_{r}$ and $u_{\theta_{i}}$, $i=2,3,\ldots,n$, the partial derivatives of $u$. Since $u(r)$ is a radial function, $u_{\theta_{i}}\equiv0$, $i=2,3,\ldots,n$. Therefore, on $\Sigma$, a natural frame $\{e_{1}=\partial_{r}+u_{r}\partial_{s},\ e_{i}=\partial_{\theta_{i}}\}$, $i=2,\ldots,n$, can be chosen, where, as before, $\partial_{s}$ denotes the vector field tangent to $\mathbb{R}$. Then the Riemannian metric on $\Sigma$ and the upward unit normal vector of $\Sigma$ are given by \begin{eqnarray*} g_{11}=\left\langle e_{1},e_{1}\right\rangle=1-u_{r}^{2},~~~~g_{kl}=g_{lk}=\left\langle e_{l},e_{k}\right\rangle=0,~~~~k\neq l, \end{eqnarray*} \begin{eqnarray*} g_{ii}=\left\langle e_{i},e_{i}\right\rangle=h^{2}(r),~~~~i=2,\ldots,n, \end{eqnarray*} and \begin{eqnarray*} \vec{v}=\frac{\partial_{s}+u_{r}\partial_{r}}{\sqrt{1-u_{r}^{2}}}. \end{eqnarray*} By direct calculation, the components of its second fundamental form are \begin{eqnarray*} h_{11}=-\left\langle\overline{\nabla}_{e_{1}}e_{1},\vec{v}\right\rangle=\frac{u_{rr}}{\sqrt{1-u_{r}^{2}}} \end{eqnarray*} and \begin{eqnarray*} h_{ii}=-\left\langle\overline{\nabla}_{e_{i}}e_{i},\vec{v}\right\rangle=-\left\langle-h(r)h'(r)\partial_{r},\vec{v}\right\rangle=\frac{h'(r)h(r)u_{r}}{\sqrt{1-u_{r}^{2}}},~~~~i=2,\ldots,n, \end{eqnarray*} where we have used the fact that \begin{eqnarray*} \left\langle\overline{\nabla}_{e_{i}}e_{i},\partial_{r}\right\rangle=-h'(r)h(r),~~~~i=2,\ldots,n. \end{eqnarray*} Then, by \eqref{f-2}, the mean curvature of $\Sigma$ with respect to $\vec{v}$ is \begin{eqnarray*} H=g^{11}h_{11}+g^{22}h_{22}+\ldots+g^{nn}h_{nn}=\frac{1}{\sqrt{1-u_{r}^{2}}}\left(\frac{u_{rr}}{1-u_{r}^{2}}+(n-1)\frac{h'(r)}{h(r)}u_{r}\right) =\frac{c}{\sqrt{1-u_{r}^{2}}}. \end{eqnarray*} On the other hand, we have \begin{eqnarray*} \Theta=\left\langle\vec{v},\partial_{s}\right\rangle= \left\langle\frac{\partial_{s}+u_{r}\partial_{r}}{\sqrt{1-u_{r}^{2}}},\partial_{s}\right\rangle=-\frac{1}{\sqrt{1-u_{r}^{2}}}. \end{eqnarray*} Hence, in our case, $H=-c\Theta$, which implies that $\Sigma$ is a translating space-like graph. The proof is finished. \end{proof} \section*{Acknowledgments} \renewcommand{\thesection}{\arabic{section}} \renewcommand{\theequation}{\thesection.\arabic{equation}} \setcounter{equation}{0} \setcounter{maintheorem}{0} This research was supported in part by the NSF of China (Grant Nos. 11801496 and 11926352), the Fok Ying-Tung Education Foundation (China), and the Hubei Key Laboratory of Applied Mathematics (Hubei University).
\section{Introduction} There has been a steady development of graph-theoretic approaches to compute dynamic functional connectivity (FC) that is fueled by an increasing agreement that the brain network does not remain constant across time and instead undergoes temporal changes resulting from endogenous and exogenous factors (Filippi et al., 2019). For example, task-related imaging studies have shown that the brain networks will re-organize when the subjects undergo different modulations of the experimental tasks during the scanning session (Chang and Glover, 2010; Lukemire et al., 2020). Similarly, dynamic FC has also been observed during resting-state experiments (Bullmore and Sporns, 2009). These, and other recent studies, have found increasing evidence of underlying neuronal bases for temporal variations in FC, which are linked with changes in cognitive and disease states (Hutchinson et al., 2013). Dynamic connectivity approaches involve time-varying correlations derived via graph-theoretic methods, and may be broadly classified into the following categories: (i) change point methods (Cribben et al., 2013; Kundu et al., 2018) that assume stable phases of connectivity interspersed with connectivity jumps at unknown locations, which results in piecewise constant connectivity; (ii) Hidden Markov Models (HMMs) involving fast transient networks that are reinforced or revisited over time, which have been applied to electrophysiological data (Quinn et al., 2018) and more recently to fMRI data (Warnick et al., 2018); and (iii) sliding window approaches that enforce temporally smooth correlations (Chang and Glover, 2010; Monti et al., 2014) based on the biologically plausible assumption of slowly varying temporal correlations with gradual changes in connectivity. Although sliding window methods are arguably the most widely used, these approaches may be limited by practical issues such as the choice of the window length (Lindquist et al., 2014). On the other hand, change point models and HMMs have the advantage of model parsimony by limiting the distinct number of parameters. However, the performance of these methods often depends on modeling assumptions, and temporal smoothness of connectivity estimates typically cannot be ensured. More importantly, since most of these existing approaches typically rely on single-subject data, they often face challenges in terms of detecting rapid changes in connectivity and may result in inaccurate estimates due to the limited information from a single individual. Essentially, almost the entirety of the existing dynamic connectivity literature has focused on data from single individuals, because temporal changes in connectivity are expected to be subject-specific and may not be replicated across individuals. However, recent evidence suggests that combining information across individuals in a group provides more accurate estimates for connectivity (Hindriks et al., 2016), which adheres to the commonly used statistical principle of data aggregation using multiple samples to obtain more robust estimates. Kundu et al. (2018) proposed a sub-sampling approach to compute time varying dynamic connectivity networks using multi-subject fMRI data, which resulted in considerable gains in dynamic network estimation under limited heterogeneity across samples, compared to single-subject analyses. Unfortunately, the variation across samples may not be limited in many practical settings.
To our knowledge, there is a scarcity of carefully calibrated approaches for pooling information across heterogeneous samples in order to accurately estimate a population of (single-subject) dynamic networks. This is perhaps not surprising, given that there are considerable challenges involved in developing such methods. From a methodological perspective, it is not immediately clear how to effectively borrow information across individuals in a data-adaptive manner that also respects the inherent connectivity differences between heterogeneous samples. Similarly, when estimating dynamic networks with $V$ brain regions for $N$ individuals each having $T$ time scans, one encounters computational challenges in terms of computing $NT$ distinct $V\times V$ connectivity matrices, which is not straightforward for high-dimensional fMRI applications. In this article, our goal is to develop a fundamentally novel hierarchical Bayesian product mixture modeling (BPMM) approach incorporating covariates (MacEachern, 1999) for estimating a population of dynamic networks corresponding to heterogeneous multi-subject fMRI data. The importance of using covariates to model {\it known} stationary networks has already been illustrated in recent literature (Zhang et al., 2019; Sun and Li, 2017), where the networks are specified in advance. These methods suggest a strong justification for incorporating demographic, clinical, and behavioral covariates when modeling dynamic networks in order to obtain more accurate and reliable estimates (Shi and Guo, 2016). Motivated by these existing studies, the proposed BPMM framework estimates {\it unknown dynamic} networks by leveraging covariate information to inform the clustering mechanism under the mixture model, which is better designed to tackle heterogeneity across samples and ultimately results in more accurate network estimation. Under the proposed model, subgroups of individuals with similar dynamic connectivity profiles are identified, where the subgroup memberships are also influenced by covariate profiles and change over time in an unsupervised manner that is designed to pool information in order to estimate the dynamic networks. Another appealing feature of the proposed BPMM approach is the ability to report cluster level network summaries that are more robust to noise and heterogeneity in the data. Since the proposed approach clusters samples independently at each time scan guided by covariate information, it is clearly distinct from HMM approaches that instead cluster transient brain states across time scans. To our knowledge, the proposed approach is one of the first to estimate a population of dynamic networks incorporating covariate knowledge by integrating heterogeneous multi-subject fMRI data, which represents a considerable advance. In order to tackle the daunting task of estimating $NT$ connectivity matrices, each of dimension $V\times V$, the proposed approach employs dimension reduction by clustering samples under the mixture modeling framework, which translates to considerable computational gains. In particular, the BPMM approach induces model parsimony by reducing the number of unique model parameters from $\frac{NT\times V(V-1)}{2}$ to $\frac{(\sum_{t=1}^T k_t)\times V(V-1)}{2}$, where $k_t~(\ll N)$ denotes the number of clusters at the $t$-th time point, determined in an unsupervised manner.
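As a concrete illustration of this reduction, the following snippet compares the two parameter counts for hypothetical values of $N$, $T$, $V$ and $k_t$ (chosen purely for arithmetic, not taken from our experiments).
\begin{verbatim}
# Parameter-count comparison illustrating the BPMM dimension reduction;
# the values of N, T, V and k_t are hypothetical.
N, T, V = 40, 300, 40
k = [4] * T                        # k_t clusters at each time scan
full = N * T * V * (V - 1) // 2    # one network per subject per scan
bpmm = sum(k) * V * (V - 1) // 2   # one network per cluster per scan
print(full, bpmm, full / bpmm)     # 9360000 936000 10.0
\end{verbatim}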
Temporal smoothness in connectivity for each network is also ensured via additional hierarchical fused lasso priors on the mixture atoms in the BPMM, which results in gradual changes in connectivity that are biologically meaningful. In scenarios where sharp connectivity changes are anticipated in certain localized time windows (due to changes in experimental design in a block task experiment, or other exogenous or endogenous factors), one may estimate these connectivity change points via a post-processing step that involves applying the total variation penalty (Bleakley and Vert, 2010) to the dynamic connectivity estimates under the proposed approach. Additional post-processing steps involving a K-means algorithm are also proposed to identify subgroups of individuals with similar dynamic connectivity patterns consolidated across time, which is particularly useful in terms of obtaining insights related to heterogeneity. Figure \ref{fig:toy} provides a visual illustration of the proposed approach. The proposed BPMM is developed for dynamic pairwise correlations as well as dynamic precision matrices, which provide complementary interpretations of dynamic connectivity. In particular, pairwise correlations encode connections between pairs of nodes without accounting for the effects of third party nodes, whereas partial correlations report associations between nodes conditional on the effects of the remaining network nodes. While our goals do not involve assessing the merits of one approach over the other (see Smith et al., 2013 for a review), the proposed development is designed to provide users with an option to implement either approach as desired and suitable for respective applications. We develop an efficient Expectation-Maximization (EM) algorithm to implement the dynamic pairwise correlation method separately for each edge, and another EM algorithm for dynamic precision matrix estimation that simultaneously involves all network nodes. We perform extensive simulations involving a variety of dynamic network structures to evaluate the performance of the proposed method in contrast to existing approaches. The proposed methods were also used to investigate dynamic functional connectivity changes due to a high-intensity aerobic exercise ('spin') intervention compared to a non-aerobic exercise control intervention, which were administered to a heterogeneous group of sedentary adults who performed an fMRI block task experiment. Our goals are to provide connectivity insights that are complementary to previous activation-based findings from the same study (McGregor et al., 2018), but this involves analytic challenges due to the short duration of the fixation and task blocks, which induce rapid connectivity changes that are usually difficult to detect via existing methods. The rest of the article is structured as follows. Section 2 develops the proposed approach for dynamic pairwise connectivity (denoted as integrative dynamic pairwise connectivity with covariates, or idPAC) and dynamic precision matrices (denoted as integrative dynamic precision matrix with covariates, or idPMAC), and outlines post-processing strategies for estimating network change points as well as identifying clusters of samples with similar dynamic connectivity profiles. Section 3 develops a computationally efficient EM algorithm to implement the proposed approaches, and describes choices for tuning parameters.
Section 4 reports results from extensive simulation studies, and Section 5 reports our analysis and results from the block-task fMRI experiment. Additional discussions are provided in Section 6. Throughout the article, we will use BPMM to denote the overall Bayesian product mixture modeling framework used for developing the idPAC and idPMAC approaches, as appropriate. \begin{figure} \centering \includegraphics[width=\linewidth,height=5.5 in]{toy_scheme.jpg} \caption{\small A schematic diagram illustrating the proposed dynamic pairwise correlation method. A mixture prior with $H=3$ components is used to model dynamic correlations, where the mixture weights are modeled using covariates. The resulting networks at each time scan for each sample are allocated to one of the $H$ clusters representing distinct network states, which are represented by red, orange and blue cubes. Although the proposed method does not cluster transient states across time, the simplified representation in the Figure illustrates the similarity of brain states contained in identically colored cubes across the experimental session. Such temporal smoothness of the network is imposed via hierarchical fused lasso priors on the mixture atoms. Once the dynamic FC is estimated, a post-processing step using K-means (Section 2.3) is applied to compute sub-groups of samples that exhibit similar dynamic connectivity patterns summarized across all time scans. The subgroups are represented by the circle, pyramid, triangle and inverted triangle shapes in the Figure, and correspond to different modes of dynamic connectivity with different numbers of brain states represented by different patterns within each shape. The connectivity change points for each individual, as well as at a cluster level, are computed via another post-processing step that employs a group fused lasso penalty (Section 2.4). The method reports both individual and cluster-level network features.} \label{fig:toy} \end{figure} \section{Methods} In this section, we propose a novel approach for estimating a population of dynamic networks using heterogeneous multi-subject fMRI data with the same number of brain volumes across all individuals. For modeling purposes, we will assume that the demeaned fMRI measurements are normally distributed with zero mean (Kundu et al., 2018) at each time scan, and that pre-whitening steps have been performed to minimize temporal autocorrelations. We begin by fixing some notation. Suppose fMRI data is collected for $T$ scans and $V$ nodes (voxels or regions of interest) for $N$ individuals. Denote the fMRI measurements across all the nodes at time point $t$ as ${\bf y}^{(i)}_t= (y^{(i)}_{1,t},\ldots,y^{(i)}_{V,t})'$, and denote the $V\times T$ matrix of fMRI measurements for the $i$-th individual as $Y^{(i)}$, whose $t$-th column is ${\bf y}^{(i)}_t$, $i=1,\ldots,N$. Further, denote the $q\times 1$ vector of covariates as ${\bf x}_i$ for the $i$-th sample, and represent the collection of fMRI data matrices across all individuals as ${\bm Y}$. In what follows, we develop the idPAC method for pairwise correlations (Section 2.1) and the idPMAC method involving partial correlations (Section 2.2), both of which involve a combination of likelihood terms and priors on the model parameters that are combined into a posterior distribution, which is used to estimate model parameters.
The posterior distribution for parameter $\theta$ given data $Y$ is defined via Bayes' theorem as $P(\theta\mid Y)=\frac{L(Y\mid \theta)\times \pi(\theta)}{P(Y)}$, where $L(Y\mid \theta)$ denotes the data likelihood given the parameter value $\theta$, $\pi(\theta)$ represents the prior on $\theta$ under the Bayesian model, and $P(Y)=\int L(Y\mid \theta)\pi(\theta)d\theta$ is the marginal likelihood after integrating out all possible values of $\theta$. Full details of the posterior distributions for the idPAC models in (\ref{eq:base})-(\ref{eq:base_cov}) and the idPMAC model in (\ref{eq:base2})-(\ref{eq:base_cov2}) are provided in the Appendix. \subsection{Dynamic Connectivity via Pair-wise Correlations} Let the unknown dynamic functional connectivity (pairwise correlation) of individual $i$ be denoted as $\bm{\rho^{(i)}}: = \{\rho^{(i)}_{jl,t}, j<l, j,l = 1, \ldots, V, t=1,\ldots, T \}$, and the corresponding Fisher-transformed pairwise correlations be denoted as $\gamma^{(i)}_{jl,t}=\mathrm{arctanh}(\rho^{(i)}_{jl,t})$. We propose a Bayesian hierarchical approach that models the dynamic correlations for one edge at a time, using data from multiple individuals. We propose the following model for edge $(j,l)$, and $t=1,\ldots,T,$ \begin{eqnarray} &&\begin{bmatrix} y^{(i)}_{jt} \\ y^{(i)}_{lt} \end{bmatrix} \sim N_2 \Bigg( \begin{bmatrix} 0 \\ 0 \end{bmatrix} , \sigma_y^2 \begin{bmatrix} 1 & \rho^{(i)}_{jl,t} \\ \rho^{(i)}_{jl,t} & 1 \end{bmatrix} \Bigg) , \gamma^{(i)}_{jl,t} \sim \sum_{h=1}^H \xi_{h,jlt}({\bf x}_i) N(\gamma_{h,jlt}^*, \sigma^2_{\gamma,h}),\mbox{ } i=1,\ldots,N, \nonumber \\ && \pi(\gamma_{h,jl1}^*, \ldots, \gamma_{h,jlT}^*) \propto \exp(-\lambda \sum_{t=1}^{T-1}|\gamma^*_{h,jl,t+1}-\gamma^*_{h,jlt}|), \mbox{ } \sigma^{-2}_{\gamma,h}\sim Ga(a_\sigma,b_\sigma),\mbox{ } h=1,\ldots, H, \label{eq:base} \end{eqnarray} where the Fisher-transformed correlations $\gamma^{(i)}_{jl,t}$ are modeled under a mixture of Gaussians prior having $H$ components denoted as $\gamma_{h,jlt}^*,h=1,\ldots,H,$ with the prior probability for the $h$-th mixture component denoted as $\xi_{h,jlt}({\bf x}_i)$, which depends on covariates, such that $\sum_{h=1}^H \xi_{h,jlt}({\bf x}_i)=1$ for all $t=1,\ldots,T$; $\sigma_y^2$ denotes the residual variance in the likelihood term, $\sigma^2_{\gamma,h}$ captures the (unknown) variability of the pairwise correlations under the mixture prior specification, $|\cdot|$ denotes the $L_1$ norm, and $N_{v}(\mu,\Sigma)$ denotes a $v$-dimensional Gaussian distribution with mean $\mu$ and covariance matrix $\Sigma$. Under a hierarchical Bayesian specification, $\sigma^{-2}_{\gamma,h}$ is estimated under the conjugate Gamma prior with shape and scale parameters $a_\sigma,b_\sigma,$ respectively. The mixture prior specifies that for any given time scan $t$, the functional connectivity for each individual can take values revolving around any one of the $H$ mixture atoms denoted by $(\gamma_{1,jlt}^*,\ldots,\gamma_{H,jlt}^*)$ with respective prior probabilities $(\xi_{1,jlt}({\bf x}_i),\ldots,\xi_{H,jlt}({\bf x}_i))$. These mixture probabilities and atoms are unknown and learnt adaptively from the data via posterior distributions under the proposed idPAC approach.
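For intuition, the following minimal sketch (illustrative only; the atoms, weights and variances are arbitrary placeholders rather than fitted values) generates data from (\ref{eq:base}) for a single edge and time scan.
\begin{verbatim}
# Generative sketch of the idPAC model for one edge (j,l) at one scan:
# draw a mixture component, a Fisher-transformed correlation around the
# corresponding atom, and a bivariate observation. Settings are arbitrary.
import numpy as np
rng = np.random.default_rng(0)

H, sigma_y = 3, 1.0
atoms = np.array([-0.5, 0.0, 0.8])        # gamma*_{h,jlt}
sd_gamma = np.array([0.1, 0.1, 0.1])      # sigma_{gamma,h}
xi = np.array([0.2, 0.3, 0.5])            # xi_{h,jlt}(x_i)

h = rng.choice(H, p=xi)
gamma = rng.normal(atoms[h], sd_gamma[h])
rho = np.tanh(gamma)                      # invert the Fisher transform
cov = sigma_y**2 * np.array([[1.0, rho], [rho, 1.0]])
y = rng.multivariate_normal([0.0, 0.0], cov)
\end{verbatim}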
{\noindent \emph{Modeling mixture atoms via fused lasso}}: The mixture atoms are modeled under a fused lasso prior in (\ref{eq:base}), which encourages temporal smoothness of pairwise correlations by assigning small prior probabilities to large changes between consecutive time scans. Although temporal smoothness in correlations is encouraged, the Bayesian approach is still equipped to accommodate sharp jumps in connectivity that may arise due to changes in experimental design or other factors. Such connectivity jumps are detected using a post-processing step (see Section 2.4) applied to the estimated dynamic connectivity under the proposed model. {\noindent \uline{Modeling mixture weights via covariates:}} In order to effectively tackle heterogeneity, we incorporate supplementary covariate information when modeling the mixture weights under our mixture modeling framework in (\ref{eq:base}). By incorporating covariate information, the model is designed to achieve more accurate identification of clusters, which then naturally translates to improved estimates for dynamic FC at the level of each individual. In particular, we model $(\xi^{(i)}_{1,jlt},\ldots,\xi^{(i)}_{H,jlt})$ via a Multinomial Logistic regression (Engel, 1988) as \begin{eqnarray} \xi^{(i)}_{h,jlt}({\bf x}_i)= \frac{\exp(\bm{x_i}^T \bfb_{h,jlt})}{1 + \sum_{h'=1}^{H-1}\exp(\bm{x_i}^T\bfb_{h',jlt})}, \mbox{ } \bfb_{h,jlt}\sim N(\bm{0},\Sigma_{\bfb}), \mbox{ } t=1,\ldots,T, h=1,\ldots,H-1, \label{eq:base_cov} \end{eqnarray} where $\bfb_{H,jlt}=0,t=1,\ldots,T,$ is fixed as the reference group, and $\bfb_{h,jlt}$ represents the vector of unknown regression coefficients that control the contribution of the covariates to the mixture probabilities for the $h$-th component ($h=1,\ldots,H-1$), in contrast to the $H$-th component. These regression coefficients are assigned a Gaussian prior with mean zero and prior covariance $\Sigma_\beta$ under a hierarchical Bayesian specification. A large value of these regression coefficients implies increased importance of the corresponding covariate with respect to modeling a particular edge under consideration, whereas $\small \bfb_{1,jlt}\approx \ldots \approx \bfb_{H-1,jlt}\approx 0$ for all $t=1,\ldots,T,$ indicates spurious covariates unrelated to the dynamic pairwise correlations. Model (\ref{eq:base_cov}) implies that the log-odds $\log\{\xi^{(i)}_{h,jlt}({\bf x}_i)/\xi^{(i)}_{H,jlt}({\bf x}_i)\}$, $h=1,\ldots,H-1,$ can be expressed as a linear combination of covariates. When two or more samples have similar covariate information, the prior specification in (\ref{eq:base_cov}) will encourage similar mixture components to characterize the dynamic connectivity for all these samples, resulting in analogous connectivity patterns. However, the posterior distribution (which is used to derive parameter estimates) is flexible enough to accurately estimate varying connectivity patterns between individuals even when they share similar covariate values, by leveraging information present in the data (as evident from extensive numerical studies in Section 4).
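Computationally, the weights in (\ref{eq:base_cov}) amount to a softmax over $H-1$ linear predictors with the $H$-th component as the reference; the following self-contained sketch (with arbitrary covariates and coefficients) makes this explicit.
\begin{verbatim}
# Mixture weights from the multinomial logistic specification: a softmax
# over H-1 linear predictors with the H-th component as the reference
# (its coefficient vector is fixed at zero). Inputs are illustrative.
import numpy as np

def mixture_weights(x, B):
    """x: (q,) covariates; B: (H-1, q) rows of beta_{h,jlt}."""
    eta = np.concatenate([B @ x, [0.0]])   # reference: eta_H = 0
    w = np.exp(eta - eta.max())            # numerically stable softmax
    return w / w.sum()

x = np.array([1.0, 0.0, 1.0])                       # q = 3 covariates
B = np.array([[0.5, -1.0, 0.2], [0.1, 0.3, -0.4]])  # H = 3 components
print(mixture_weights(x, B))                        # sums to one
\end{verbatim}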
{\noindent \uline{Role of clustering in tackling heterogeneity and pooling information:}} Under model (\ref{eq:base}), each sample is assigned to one of the $H$ clusters at each time scan in an unsupervised manner, guided by its covariate profile, in order to model the edge-level dynamic connectivity. Due to independent clustering at each time scan, these cluster configurations change over the experimental session in a data-adaptive manner to characterize connectivity fluctuations across individuals. Such time scan specific clusters represent subgroups of individuals with similar connectivity profiles over a subset of time scans, which are learnt by pooling information across all samples within a cluster. Here, it is important to note that model (\ref{eq:base}) does not impose identical dynamic connectivity across all time scans between multiple individuals (which would be biologically unrealistic), but instead encourages common connectivity patterns within subgroups of samples for a subset of time points that are learnt in a data-adaptive manner. Hence, the proposed method is designed to result in more accurate estimation compared to a single subject analysis, which is not equipped to pool information across samples, or a group level analysis, which does not account for within sample heterogeneity. We note that although the estimation is performed separately for each edge, the connectivity estimates across all edges are consolidated to obtain connectivity change point estimates (Section 2.4) or identify subgroups with common dynamic connectivity profiles (Section 2.3). \subsection{Dynamic Precision Matrix Estimation} We now propose a mixture model for dynamic precision matrix estimation that models all nodes in the network jointly, in contrast to the edge-wise analysis in Section 2.1. While the proposed approach also uses a mixture modeling framework as in Section 2.1, the two methods are fundamentally distinct in the manner in which the mixture prior is specified and in terms of how the network edges are constructed and interpreted. The proposed approach estimates the network by computing the $V\times V$ precision matrix involving $V(V-1)/2$ distinct partial correlations that are learnt by borrowing information across $V$ nodes at each time scan. The partial correlations measure interactions between pairs of regions after removing the influence of third party nodes, which filters out spurious correlations. Hence, a zero partial correlation between two nodes implies conditional independence. The proposed idPMAC approach enables one to report graph-theoretic network summary measures that capture important patterns of network information transmission (Lukemire et al., 2020), which are otherwise difficult to report using pairwise correlations (Smith et al., 2012). Denote the $V\times V$ precision matrix over all nodes for the $i$-th individual at the $t$-th time point as $\small\Omega^{(i)}_t= \begin{bmatrix} \omega^{(i)}_{t,11} &{\bm\omega}^{(i)}_{1,t} \\ {\bm\omega}^{(i)'}_{1,t} &\Omega^{(i)}_{11,t} \end{bmatrix} $, and note that the partial correlation between nodes $k$ and $l$ is given directly as $-\omega_{kl}/\sqrt{\omega_{kk}\omega_{ll}}$ (suppressing the subject-specific and time-scan specific notation).
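The conversion from a precision matrix to partial correlations is easily vectorized; the following sketch (with a toy positive definite matrix) computes $-\omega_{kl}/\sqrt{\omega_{kk}\omega_{ll}}$ for all pairs at once.
\begin{verbatim}
# Vectorized conversion of a precision matrix Omega to partial
# correlations: pcor_{kl} = -omega_{kl} / sqrt(omega_{kk} omega_{ll}).
import numpy as np

def partial_correlations(Omega):
    d = 1.0 / np.sqrt(np.diag(Omega))
    P = -d[:, None] * Omega * d[None, :]
    np.fill_diagonal(P, 1.0)     # unit diagonal by convention
    return P

Omega = np.array([[2.0, -0.5, 0.0],
                  [-0.5, 2.0, -0.3],
                  [0.0, -0.3, 2.0]])   # toy positive definite precision
print(partial_correlations(Omega))
\end{verbatim}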
We propose a Gaussian graphical model involving product mixture priors as: \begin{eqnarray} {\bf y}^{(i)}_t&\sim& N\Big({\bf 0},(\Omega^{(i)}_t)^{-1} \Big), \mbox{ } {\bm\omega}^{(i)}_{v,t} \sim \sum_{h=1}^H \xi_{h,t}({\bf x}_i)N_{V-1}({\bm\omega}_{h,t}^*, \sigma^2_{\omega,h}I_{V-1}), \mbox{ } t=1,\ldots,T,i=1,\ldots,N, \nonumber \\ \omega^{(i)}_{t,vv} &\sim& E(\frac{\alpha}{2}), \mbox{ } \pi({\bm\omega}_{h,1}^*, \ldots, {\bm\omega}_{h,T}^*)\propto \exp(-\lambda \sum_{t=1}^{T-1}|{\bm\omega}^*_{h,t+1}-{\bm\omega}^*_{h,t}|), \mbox{ } h=1,\ldots, H,\label{eq:base2} \end{eqnarray} where $\Omega^{(i)}_t\in M^{+}_V$, the space of positive definite matrices, $E(\alpha)$ denotes the Exponential distribution with scale parameter $\alpha$, and ${\bm\omega}^{(i)}_{v,t}$ denotes the vector of $(V-1)$ off-diagonal elements corresponding to the $v$-th row of $\Omega^{(i)}_t$, which is modeled using a mixture of multivariate Gaussians prior. Specifically, the dynamic connectivity at time scan $t$ is likely to be characterized via the $h$th mixture component with prior probability $\xi_{h,t}({\bf x}_i)$ depending on covariates, where the prior mean and variance for this unknown mixture component are given by ${\bm\omega}^*_{h,t}$ and $\sigma^2_{\omega,h}$, respectively. The idPMAC approach in (\ref{eq:base2})-(\ref{eq:base_cov2}) specifies independent mixture priors on the set of all edges related to each node at each time scan, which, together with the computational scheme in Section 3, ensures symmetric and positive definite precision matrices that are necessary for obtaining valid partial correlation estimates. Full details for the computational steps are presented in Section 3. {\noindent \emph{Modeling mixture atoms:}} Under a hierarchical Bayesian specification, the mixture atoms or component-specific means ${\bm\omega}^*_{h,t}$ are themselves unknown and modeled via a fused lasso prior, which encourages temporal homogeneity of partial correlations by assigning small prior probabilities to large changes in their values. We note that although the fused lasso prior encourages temporal smoothness in partial correlations, systematic changes in connectivity reflected by sharp jumps may still be identified via the post-processing step in Section 2.4. {\noindent \underline{Modeling mixture weights via covariates:}} The node level mixture weights incorporating covariates are modeled via a Multinomial Logistic regression that is defined as: \begin{eqnarray} \xi^{(i)}_{h,t}({\bf x}_i)= \frac{e^{\bm{x_i}^T \bfb_{h,t}}}{1 + \sum_{h'=1}^{H-1}e^{\bm{x_i}^T\bfb_{h',t}}}, \mbox{ } \bfb_{h,t}\sim N(\bm{0},\Sigma_{\bfb}),\mbox{ } i=1,\ldots,N, \mbox{ } h=1,\ldots,H-1, \label{eq:base_cov2} \end{eqnarray} where $\bfb_{h,t}$ refers to the unknown regression coefficients corresponding to time scan $t$ and mixture component $h$, which are assigned a Gaussian prior, and $\bfb_{H,t}=0$, $t=1,\ldots,T,$ is set as the reference group. The prior in (\ref{eq:base2})-(\ref{eq:base_cov2}) encourages similar clustering configurations, resulting in analogous time-varying partial correlations for individuals with similar covariate profiles. However, in the presence of heterogeneity, the posterior distribution under the idPMAC method is able to identify divergent dynamic connectivity patterns even among individuals with similar covariate profiles (as evident from extensive numerical studies in Section 4).
{\noindent \uline{Role of clustering in tackling heterogeneity and pooling information:}} Under model (\ref{eq:base2}), each column of the precision matrix is assigned to one of the $H$ clusters at each time scan in an unsupervised manner. Hence, the mixture modeling framework allows subsets of rows/columns of $\Omega^{(i)}_t$ to have the same values depending on their clustering allocation at each given time scan, which is a unique feature of the idPMAC approach that is not shared by the idPAC method. This feature results in robust estimates by pooling information across nodes and samples to estimate common partial correlations, and is a necessary dimension reduction step for scenarios involving large networks. For example, all weak or absent edges can be subsumed into one cluster, which yields model parsimony. In addition, divergent connectivity patterns are captured via distinct time-varying clustering configurations across individuals as derived from the posterior distribution, which accommodates heterogeneity. Hence, the clustering mechanism under the idPMAC method not only enables dimension reduction, but also provides a desirable balance between leveraging common connectivity patterns within and across networks and addressing inherent network differences across individuals. \subsection{Post-processing steps for sub-group detection} In practical neuroimaging applications, it is often of interest to detect dissimilar modes of dynamic connectivity patterns that are embodied by distinct subgroups of individuals who also differ in terms of demographic or clinical characteristics, or other factors. For example, in our fMRI task study, one of the objectives is to assess variations in dynamic connectivity with respect to subgroups of samples that were assigned different interventions, and who also had varying demographic characteristics. Instead of comparing network differences between pre-specified subgroups that are likely to contain individuals with heterogeneous connectivity patterns, it is more appealing to develop a data-adaptive approach to identify subgroups that comprise individuals with homologous dynamic connectivity, and then examine connectivity variations across such subgroups and how these variations are related to intervention and other factors of interest. When estimating these subgroups, we do not require identical dynamic connectivity patterns for all individuals within subgroups, but rather expect them to have limited network differences in terms of edge strengths and connectivity change points. An inherently appealing feature of subgroup detection is that it allows one to compute cluster level change points and other aggregate network features (see Section 2.4) which are more reproducible in the presence of noise and heterogeneity, compared to a single-subject analysis. Subgroup level network summaries may be particularly beneficial in certain scenarios such as fMRI block task experiments, where it may be challenging for single-subject analyses to detect rapidly evolving network features induced via quick transitions between rest and task blocks within the experimental design. We propose an approach that consolidates the time-varying clusters of samples under the BPMM approach to detect subgroups comprising samples with similar network-level dynamic connectivity patterns, as detailed below.
In order to identify these subgroups, we first create an $N\times N$ similarity matrix that measures the propensity of each pair of samples to belong to the same cluster over the experimental session. This matrix is created by computing the proportion of time scans during which a pair of samples belonged to the same cluster, averaged across all edges. Once this similarity matrix has been computed, a K-means algorithm is applied to identify clusters of samples that exhibit similar dynamic connectivity patterns across the experimental session. The number of clusters $K$ is determined using a goodness of fit score such as the elbow method (Thorndike, 1953), or it is fixed as the maximum number of mixture components ($H$) under the BPMM approach. Finally, we note that the subgroup identification step is not strictly needed under the proposed BPMM framework for dynamic network estimation, but it is an optional analysis that can be used to identify cluster-level network features in certain scenarios of interest.
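As one concrete realization of this step, the following sketch builds the similarity matrix from per-scan cluster labels and applies K-means to its rows; the labels are random stand-ins for the fitted BPMM output, and for brevity a single label per sample per scan is used rather than additionally averaging over edges.
\begin{verbatim}
# Sketch of the subgroup detection step: build the N x N co-clustering
# similarity matrix from per-scan cluster labels (labels[t, i] is the
# cluster of sample i at scan t) and apply K-means to its rows. The
# labels are random stand-ins for the fitted BPMM output.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
N, T, H, K = 40, 300, 3, 4
labels = rng.integers(0, H, size=(T, N))

S = np.zeros((N, N))
for t in range(T):               # proportion of scans in which each
    S += labels[t][:, None] == labels[t][None, :]   # pair co-clusters
S /= T

subgroups = KMeans(n_clusters=K, n_init=10, random_state=0).fit_predict(S)
print(np.bincount(subgroups))    # subgroup sizes
\end{verbatim}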
\subsection{Post-processing steps for connectivity change point estimation} The estimated dynamic correlations in Sections 2.1-2.2 can be used to detect connectivity change points in scenarios involving sharp changes in the network during the session, such as in fMRI task experiments. Our strategy involves computing change points for each individual network (a) at the edge level, capturing localized changes; and (b) at the global level, capturing major disruptions in connectivity over the entire network. We compute the change points using the total variation penalty (Bleakley and Vert, 2010) that was also used in the CCPD approach of Kundu et al. (2018). However, the proposed idPAC and idPMAC methods are distinct from the two-stage CCPD approach; the latter estimates connectivity change points based on empirical time-varying connectivity measures in the first stage, and then, in the second stage, computes piecewise constant networks conditional on the estimated change points that represent connectivity jumps. In contrast, the proposed idPAC and idPMAC methods pool information across samples in order to first estimate dynamic correlations that do not depend on change points and can vary continuously over time, and subsequently use a post-processing step to compute connectivity change points without requiring piecewise constant connectivity assumptions. An appealing feature of the proposed mixture modeling framework guided by covariates is that it is more suitable for tackling divergent dynamic connectivity across samples, in contrast to the empirical correlations used under the CCPD approach. Denote the vector of estimated (pairwise or partial) correlations over all edges for the $i$-th individual at time scan $t$ as $\widehat{\bf r}^{(i)}_t \in \Re^{V(V-1)/2}, t=1,\ldots,T, i=1,\ldots,N$. The functional connectivity change points for the $i$-th individual may then be estimated across all edges via a total variation norm penalty, where $\small ||{\bf u}^{(i)}_{t+1} - {\bf u}^{(i)}_{t} ||=\frac{1}{V(V-1)/2}\sqrt{\sum_{m=1}^{V(V-1)/2}(u^{(i)}_{t+1,m} - u^{(i)}_{t,m})^2}$. In particular, the following penalized criterion is used, as in Kundu et al. (2018), for detecting network level connectivity change points: \begin{eqnarray} \min_{{\bf u}^{(i)}_t\in \Re^{V(V-1)/2}} \sum_{t=1}^T|| {\widehat{\bf r}}^{(i)}_{t} - {\bf u}^{(i)}_t ||^2 + \lambda_u \sum_{t=1}^{T-1}||{\bf u}^{(i)}_{t+1} - {\bf u}^{(i)}_{t} ||, \label{eq:multiTV} \end{eqnarray} where $\lambda_u$ represents the penalty parameter and ${\bf u}^{(i)}_{t}\in \Re^{V(V-1)/2}$ represents the piecewise constant approximation to the time series of correlations at time point $t$ for the $i$-th individual, allowing for an unknown number of connectivity jumps. The first term in (\ref{eq:multiTV}) measures the error between the observed correlations and the piecewise constant connectivity, while the second term controls the temporal smoothness of correlations across the $V(V-1)/2$ edges. The increment $||{\bf u}^{(i)}_{t+1} - {\bf u}^{(i)}_{t} ||$ in the second term becomes negligible when the multivariate time series does not change significantly between times $t$ and $t+1$, but it takes large values corresponding to significant connectivity changes. The network change points computed via (\ref{eq:multiTV}) represent global changes in functional connectivity resulting from a subset of edges that exhibit large connectivity changes; it is important to note that not all edges are expected to exhibit changes at these estimated change points. When it is of interest to compute edge-level connectivity change points, one can simply apply criterion (\ref{eq:multiTV}) separately to each edge, so that the total variation term reduces to the $L_1$ penalty. However, edge-level connectivity changes represent granular fluctuations that are typically more challenging to detect in the presence of noise in fMRI. The number of change points is determined by the penalty parameter $\lambda_u$, with a smaller value yielding a greater number of change points and vice-versa. Tibshirani and Wang (2007) proposed an estimate of $\lambda_u$ based on a pre-smoothed fit of a univariate time series using a lowess estimator (Becker et al., 1988). We adapt this approach to the multivariate setting to obtain an initial estimate for $\lambda_u$, and then propose post-processing steps to tune this estimate in order to obtain change points, as in the CCPD approach in Kundu et al. (2018). Full details for these steps are provided in the Supplementary Materials.
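Well-developed solvers exist for problems of the form (\ref{eq:multiTV}); as a lightweight stand-in, the following sketch detects network-level change points using the PELT algorithm with an $\ell_2$ cost from the Python package ruptures, which solves a related penalized change point formulation rather than the group fused lasso itself, on a simulated correlation series.
\begin{verbatim}
# Change-point sketch for a multivariate series of estimated correlations.
# PELT with an l2 cost (ruptures package) is used as a stand-in for the
# group fused lasso criterion; the simulated series has true
# network-level change points at t = 100 and t = 200.
import numpy as np
import ruptures as rpt

rng = np.random.default_rng(2)
T, E = 300, 50                        # scans and edges V(V-1)/2
r = rng.normal(0.0, 0.05, size=(T, E))
r[100:200] += 0.3                     # piecewise shift on all edges

algo = rpt.Pelt(model="l2", min_size=10).fit(r)
print(algo.predict(pen=5.0))          # expected: [100, 200, 300]
\end{verbatim}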
{\noindent \uline{Cluster-level connectivity change point estimation:}} For fMRI task experiments involving multiple subjects, subgroups of individuals are expected to share analogous dynamic connectivity patterns with limited variation across samples, as discussed in Section 2.3. The total variation penalty in (\ref{eq:multiTV}) is equipped to leverage information across samples within a cluster for identifying cluster level change points, which reflect aggregated dynamic connectivity changes across all samples within a cluster at the global network level. These cluster level connectivity changes are obtained by aggregating the change points obtained via (\ref{eq:multiTV}) applied separately to each sample within the cluster, and then choosing those change points that show up repeatedly within the cluster. One can define a threshold such that all change points that appear with a high frequency (above the chosen threshold) across samples within the cluster are determined to represent cluster level change points (Kundu et al., 2018). We note that under the proposed method, it is entirely possible for individuals within a cluster to have unique connectivity changes in addition to the common cluster level change points, which reflects within sample heterogeneity. In our experience, this method typically works well in accurately recovering aggregated cluster-level connectivity changes in certain scenarios such as block task experiments, or more generally in the presence of subgroups of individuals with similar dynamic connectivity patterns. \section{Computational Details for Parameter Estimation}\label{sec:EM} Although one can use Markov chain Monte Carlo (MCMC) sampling to draw the parameters from the posterior distribution, we instead use {\it maximum-a-posteriori} (MAP) estimators in this article, which bypasses the computational burden of an MCMC implementation. The MAP estimators are obtained by maximizing the posterior distribution for the model parameters and are derived via the Expectation-Maximization (EM) algorithm. The EM algorithm is scalable to the high-dimensional fMRI applications of interest, which require computing $N\times T$ distinct dynamic networks, each involving a $V\times V$ connectivity matrix. \subsection{EM Algorithm for Pair-wise Dynamic Connectivity} {\noindent \bf \uline{EM Algorithm:}} Denote the matrix containing the fMRI time series for the $l$th node as $Y_l=({\bf y}_{1,l},\ldots,{\bf y}_{T,l})$, where ${\bf y}_{t,l}=(y^{(1)}_{l,t},\ldots,y^{(N)}_{l,t})'$ represents the fMRI observations across all samples for node $l$ and time scan $t$. Further, denote $\Delta_h$ as a latent indicator variable for the $h$th mixture component (which is not observed and is imputed in the proposed EM algorithm), and finally denote by $\bm\Theta^{jl}$ the collection of all model parameters under the specification (\ref{eq:base})-(\ref{eq:base_cov}) corresponding to edge $(j,l)$. Note that under the proposed model (\ref{eq:base_cov}), one has an equivalent specification under binary latent variables distributed as $\small (\Delta^{(i)}_{1,jlt},\ldots,\Delta^{(i)}_{H,jlt})\sim MN\big(1,(\xi_{1,jlt}({\bf x}_i;B_{jlt}),\ldots,\xi_{H,jlt}({\bf x}_i;B_{jlt})) \big), $ where $MN(1;{\bf p}_0)$ denotes a multinomial distribution with probability vector ${\bf p}_0$ and $B_{jlt}=({\bfb}_{1,jlt},\ldots,{\bfb}_{H-1,jlt} )$, and one can marginalize out $(\Delta^{(i)}_{1,jlt},\ldots,\Delta^{(i)}_{H,jlt})$ to recover the prior in (\ref{eq:base_cov}). The EM algorithm uses the augmented log-posterior derived in the Appendix involving the above latent mixture indicators to compute MAP estimates for the model parameters by iteratively applying the Expectation (E) and Maximization (M) steps.
The latent indicators $\small \{\Delta^{(i)}_{h,jlt},h=1,\ldots,H, t=1,\ldots,T,i=1,\ldots,N \}$ are imputed via the E-step using the posterior probability of $\gamma_{jl,t}^{(i)}$ taking values from the $h$-th mixture component, which is denoted by $\psi_{h,jlt}^{(i)}=Pr(\Delta^{(i)}_{h,jlt} = 1\mid-)$ and updated as follows. {\noindent \bf E-step}: Compute the posterior expectation for the latent cluster membership indicators as $\hat{\psi}_{h,jlt}^{(i)} = \big\{\sum_{r=1}^H \xi_{r,jlt}({\bf x}_i;B_{jlt}) \phi(\gamma_{jl,t}^{(i)} \mid \gamma_{r,jlt}^*, \sigma^2_{\gamma,r})\big\}^{-1}\big\{ \xi_{h,jlt}({\bf x}_i;B_{jlt}) \times \phi(\gamma_{jl,t}^{(i)} \mid \gamma_{h,jlt}^*, \sigma^2_{\gamma,h})\big\}$, where $\phi({\gamma_{jl,t}^{(i)}}\mid \gamma^*,\sigma^2_\gamma)$ denotes the normal density with mean $\gamma^*$ and variance $\sigma_{\gamma}^2$. The remaining parameters are updated via M-steps using closed form solutions, except $\gamma^{(i)}_{jl,t}$, which is updated using Newton-Raphson steps. These M-steps involve several lengthy derivations and are detailed in the Appendix. The E and M steps are repeated until convergence, which occurs when the absolute change in the log-posterior between successive iterations falls below a certain threshold (we use $10^{-4}$ in our implementation).
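The E-step above is straightforward to vectorize; the following sketch computes the responsibilities $\hat{\psi}^{(i)}_{h,jlt}$ for a single edge and time scan, with illustrative placeholder inputs rather than fitted quantities.
\begin{verbatim}
# E-step sketch for idPAC: posterior responsibilities psi_h for a single
# edge and time scan given the current atoms, variances and weights.
# All numerical inputs are illustrative placeholders.
import numpy as np
from scipy.stats import norm

def responsibilities(gamma, atoms, sig2, xi):
    """gamma: scalar gamma^{(i)}_{jl,t}; atoms, sig2, xi: length H."""
    w = xi * norm.pdf(gamma, loc=atoms, scale=np.sqrt(sig2))
    return w / w.sum()

atoms = np.array([-0.5, 0.0, 0.8])
sig2 = np.array([0.01, 0.01, 0.01])
xi = np.array([0.2, 0.3, 0.5])        # xi_{h,jlt}(x_i)
print(responsibilities(0.75, atoms, sig2, xi))  # mass on the third atom
\end{verbatim}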
\subsection{EM Algorithm for Dynamic Precision Matrix Estimation} Let us denote the collection of all the precision matrices as $\bm{\Theta}$, and ${\bf y}^{(i)'}_{t,-v}$ as the $(V-1)$-dimensional vector of fMRI measurements at time scan $t$ over all nodes except node $v$. The prior on the precision matrix can be expressed as $\pi(\Omega^{(i)}_t)=\prod_{v=1}^V\pi(\omega^{(i)}_{t,vv})\pi({\bm\omega}^{(i)}_{vt})$, with the corresponding prior distributions $\pi(\cdot)$ defined in (\ref{eq:base2}). Denote by $|\cdot|_1$ the element-wise $L_1$ norm, let $\kappa^{(i)}_{v,t}=\omega^{(i)}_{t,vv}-{\bm\omega}^{(i)'}_{v,t}\Omega^{(i)-1}_{vv,t}{\bm\omega}^{(i)}_{v,t}$ denote the Schur complement corresponding to the $v$th node, and let $\omega^{(i)}_{t,vv}$ and ${\bm\omega}^{(i)}_{v,t}$ respectively denote the diagonal element and the vector of off-diagonal elements of the $v$th row of $\Omega^{(i)}_t$. Moreover, use $det(A)$ to denote the determinant of the matrix $A$, and write $\small S^{(i)}_t={\bf y}^{(i)}_t{\bf y}^{(i)'}_t= \begin{bmatrix} s^{(i)}_{t,11} &{\bf s}^{(i)}_{1,t} \\ {\bf s}^{(i)'}_{1,t} &S^{(i)}_{11,t} \end{bmatrix}$ as the matrix of cross-products of the response variable, where $s^{(i)}_{vv,t}$ and ${\bf s}^{(i)}_{v,t}$ denote the $v$-th diagonal element and the off-diagonal elements of the $v$-th row, respectively. Introduce latent indicator variables $(\Delta^{(i)}_{1,vt},\ldots,\Delta^{(i)}_{H,vt})$ that follow a multinomial distribution with probability vector $(\xi_{1,t}({\bf x}_i),\ldots,\xi_{H,t}({\bf x}_i))$ such that $\sum_{h=1}^H \xi_{h,t}({\bf x}_i)=1$, and denote by $\Omega^{(i)}_{vv,t}$ the $(V-1)\times(V-1)$ matrix obtained by deleting the $v$-th row and column from $\Omega^{(i)}_t$. The EM algorithm uses an E step for the latent mixture indicators, as well as a Monte Carlo E step that samples from the posterior distribution in order to obtain estimates for the precision matrix. These steps are described below. {\noindent \bf E-step for mixture component indicator}: For $v=1,\ldots,V,$ use the expression $\hat{\psi}_{h,vt}^{(i)} =\big\{\sum_{r=1}^H\xi_{r,t}\big({\bf x}_i\big) \phi_{V-1}\big({\bm\omega}_{v,t}^{(i)} \mid {\bm\omega}_{r,t}^*, \sigma^2_{\omega,r}I_{V-1}\big)\big\}^{-1}\times \big\{\xi_{h,t}\big({\bf x}_i\big) \phi_{V-1}\big({\bm\omega}_{v,t}^{(i)} \mid {\bm\omega}_{h,t}^*, \sigma^2_{\omega,h}I_{V-1}\big) \big\} $, where $\phi_{V-1}(\cdot\mid {\bm\omega}^*,\Sigma)$ denotes the probability density function of the $(V-1)$-dimensional normal distribution with mean and covariance $({\bm\omega}^*,\Sigma)$, respectively. {\noindent \bf Monte Carlo E-step for precision matrix:} We use an E-step to update the precision matrix that computes the posterior mean by averaging MCMC samples drawn from the posterior distribution, which is equivalent to a Monte Carlo EM method (Wei and Tanner, 1990). We use this Monte Carlo approximation for the conditional expectation since it provides a computationally efficient approach to sample positive definite precision matrices via closed form posterior distributions. The posterior distribution for the off-diagonal precision elements is given as $\pi(\hat{\bm\omega}^{(i)}_{vt}\mid -) \sim N\Bigg[V_{\omega_{vt}}\bigg ( \sum_{h=1}^H\frac{\Delta^{(i)}_{h,vt} {\bm\omega}_{h,t}^*}{\sigma_{\omega,h}^2} + 2({\bf s}^{(i)}_{v,t})\bigg ), V_{\omega_{v,t}}\Bigg]$, where $V_{\omega_{vt}}=\bigg(\sigma^2_{\omega,h}I_{V-1}+(s^{(i)}_{vv,t}+\alpha)(\Omega^{(i)-1}_{vv,t}) +\sum_{h=1}^H \frac{\Delta^{(i)}_{h,vt}}{\sigma_{\omega,h}^2}\bigg )^{-1}$ is the posterior covariance. Moreover, writing $\omega^{(i)}_{t,vv}=\kappa^{(i)}_{v,t}+{\bm\omega}^{(i)'}_{v,t}\Omega^{(i)-1}_{vv,t}{\bm\omega}^{(i)}_{v,t}$, the diagonal precision matrix elements are updated via the posterior $\kappa_{vt}^{(i)} \sim Ga(\frac{1}{2}+1,\frac{s_{vv,t}^{(i)}+\alpha}{2})$, where $\alpha$ is pre-specified. The above steps can be alternated to sample positive definite precision matrices as in Wang (2012), and we draw several MCMC samples and average over them to approximate the conditional expectation. The remaining parameters are updated via closed form expressions under the M step, which involve lengthy derivations detailed in the Appendix. The algorithm iterates through the E and M steps until convergence. \subsection{Tuning Parameter Selection} Certain tuning parameters in the BPMM need to be selected properly or pre-specified in order to ensure optimal performance. For both dynamic pair-wise correlation and precision matrix estimation, $\lambda$ is the tuning parameter in the fused lasso penalty for the mixture atoms that controls the temporal smoothness of the dynamic connectivity. We choose an optimal value for $\lambda$ over a pre-specified grid, as the value of the tuning parameter that minimizes the BIC score. In model (\ref{eq:base}) for the dynamic pairwise correlations, $\sigma_y^2$ is also pre-specified as the initial mean variance over all edges and across all samples. Moreover, when updating covariate effects, $\Sigma_{\beta}$ is pre-fixed as a diagonal matrix with diagonal entries equal to $1$, although it is possible to impose a hierarchical prior on $\Sigma_\beta$ and update it using the posterior distribution. Extensive simulation studies revealed that the proposed approach is not sensitive to the choice of $\Sigma_\beta$ as long as the variances are not chosen to be exceedingly small.
Other hyper-parameters in the hierarchical Bayesian specification include $\alpha$ in the prior on the precision matrices (chosen as in Wang (2012)), and $a_\sigma=0.1,b_\sigma=1,$ which result in an uninformative prior on the mixture variance. The number of mixture components $H$ also needs to be chosen appropriately. On the one hand, a large value of $H$ may be used to address inherent heterogeneity, but it will also increase the running time and may generate redundant clusters that overcompensate for the variations across samples. On the other hand, a small value of $H$ may cause the approach to overlook connectivity variations across individuals, resulting in inaccurate estimates. One may use a data adaptive approach to select $H$ in certain scenarios where it is reasonable to assume that the dynamic connectivity can be approximated by piecewise constant connectivity. In such cases, which potentially involve block task experiments (Kundu et al., 2018), one can evaluate criterion (\ref{eq:multiTV}) separately for each individual under different values of $H$, and fix the optimal choice as that which minimizes the average value of the criterion (\ref{eq:multiTV}) across all individuals. Based on extensive empirical studies, we noticed the need for larger values of $H$ when fitting the model in cases involving large numbers of nodes and samples. \section{Numerical Experiments} \subsection{Simulation set-up} {\noindent \underline{Data generation:}} We generate observations from Gaussian distributions with sparse and piecewise constant precision matrices that change at a finite set of change points. Moreover, the network change points are generated based on covariate information, where individuals with identical covariates have partially overlapping connectivity change points. Broadly, we use the following steps to generate the data, each of which is described in greater detail in the sequel: (i) generate a given number of change points for each subject using the corresponding covariate information; (ii) conditional on the generated change points, piecewise constant networks are simulated such that the connectivity changes occur only at the given change points; (iii) conditional on the network for a given state phase, a corresponding positive definite precision matrix is generated for each time scan, where non-zero off-diagonal elements represent edge strengths and zero off-diagonals represent absent edges; and (iv) the response variable for a given time point is generated from a Gaussian distribution having zero mean and the precision matrix in step (iii). Four clusters are created with 10 samples each, where the samples within each cluster have the same number of connectivity change points, common state phase specific networks, and identical covariate values. However, within each cluster there are differences in the locations of connectivity change points, and the network edge strengths are free to vary across individuals even when they share the same network structure. All samples in the first two clusters have 3 connectivity change points each, whereas the samples in the other two clusters have 4 change points, out of a total of $T=300$ time scans.
Conditional on the change points in step (i), several types of networks are constructed for each state phase in step (ii), including: (a) an Erdos-Renyi network, where each edge appears randomly with a fixed probability; (b) a small-world network, where the mean geodesic distance between nodes is relatively small compared with the number of nodes, which mimics several practical brain network configurations; and (c) a scale-free network resembling a hub network, where the degree distribution follows a power law. Given these networks, the corresponding precision matrix was generated in step (iii) by assigning zeros to off-diagonals for absent edges, and randomly generating edge weights from a uniform distribution on $[-1,1]$ for all important edges. To ensure positive definiteness, each diagonal value of the precision matrix was set to one plus the sum of the absolute values of the off-diagonal elements in the corresponding row. Finally, the response variables were generated either (a) independently at each time point via a Gaussian graphical model, or (b) via a vector autoregressive (VAR) model where the response variables are autocorrelated across time. In both cases, sparse time-varying precision matrices of dimensions $V=40,100,$ were used. We generated two binary features that resulted in four distinct covariate configurations, i.e. (0,0), (0,1), (1,0), (1,1), and all samples with identical covariates were allocated to the same cluster. In addition, we also evaluated the performance of the proposed method in the presence of spurious covariates that are not related to dynamic connectivity patterns. Specifically, we introduced between 1 and 8 spurious covariates for each sample (in addition to the two true covariates described earlier), which were randomly generated from uniform as well as normal distributions. We then investigated the performance of the proposed approach over varying numbers of spurious covariates. While the proposed approach is expected to work best in practical experiments involving a carefully selected set of covariates that influence dynamic connectivity patterns, our goal was also to investigate the change in performance as the number of spurious covariates increases.
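A sketch of data-generation steps (ii)-(iii) for the Erdos-Renyi case is given below (illustrative only; the edge probability and dimensions are arbitrary, and the small-world and scale-free cases would substitute a different edge set).
\begin{verbatim}
# Sketch of data-generation steps (ii)-(iii): an Erdos-Renyi edge set,
# uniform edge weights, and diagonal dominance enforcing positive
# definiteness; one observation is then drawn as in step (iv).
import numpy as np

rng = np.random.default_rng(3)
V, p_edge = 40, 0.05
A = np.triu(rng.random((V, V)) < p_edge, k=1)     # random upper edges
W = np.where(A, rng.uniform(-1, 1, (V, V)), 0.0)  # edge weights
Omega = W + W.T                                   # symmetric, zero diag
np.fill_diagonal(Omega, np.abs(Omega).sum(axis=1) + 1.0)
y = rng.multivariate_normal(np.zeros(V), np.linalg.inv(Omega))
\end{verbatim}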
{\noindent \underline{Competing methods}:} We perform extensive simulation studies to evaluate the performance of the proposed approach, and compare the performance with (a) change point estimation approaches such as CCPD (Kundu et al., 2018), which can estimate single subject connectivity using multi-subject data in the presence of limited heterogeneity, and the dynamic connectivity regression (DCR) approach for single subjects proposed in Cribben et al. (2013); (b) an empirical sliding window based approach (SD) and the model-based SINGLE method (Monti et al., 2014) that uses sliding window correlations; and (c) a covariate-naive version of the proposed approach using the methods in Sections 2.1 and 2.2 (denoted as BPMM-PC and BPMM-PR, respectively), which employs a multinomial distribution to model the mixture weights without covariates. While the methods in (a) and (c) are designed to report connectivity change points, we augmented the sliding window approaches in (b) with a post-processing step similar to (\ref{eq:multiTV}) to compute change points based on the estimated sliding window correlations. Moreover, for the proposed approach, the data under the VAR case was prewhitened via an autoregressive integrated moving average (ARIMA) model before fitting the proposed models. In particular, the `auto.arima' function in $R$ was used to prewhiten the raw data, which yielded residuals that were subsequently used for analysis. We note that it was not possible to report results under SINGLE for $V=100$ due to an infeasible computational burden. {\noindent \underline{Performance metrics:}} We evaluate the performance of the different approaches in terms of several metrics. First, we investigated the accuracy in recovering true connectivity change points at the network and edge level for each sample, using sensitivity (defined as the proportion of truly detected change points, or true positives), as well as the number of falsely detected change points, or false positives. In addition, the performance of the network connectivity change points at the cluster level was also evaluated by comparing the true connectivity change points for each sample within the cluster with the aggregated cluster level change points. We note that since there were variations in connectivity change points within each cluster, false positive change points are to be expected under any estimation approach; however, our goal is to evaluate how well these false positives are controlled and the sensitivity in detecting true change points under different methods. In addition, we also evaluated the accuracy in estimating the strength of connections, computed as the mean squared error (MSE) between the estimated and the true edge-level pairwise correlations. The pairwise correlations corresponding to dynamic precision matrix approaches for computing the MSE were obtained by inverting the respective precision matrices. In order to evaluate the accuracy in dynamic network estimation, we computed the F-1 score defined as $\small 2 (\mbox{Precision}\times \mbox{Recall})/(\mbox{Precision + Recall})$, where Precision=$\small TP/(TP+FP)$ is the positive predictive value, and Recall=$\small TP/(TP+FN)$ represents the sensitivity in estimating the edges in the network. Here, $TP, FP, FN,$ refer to the number of true positive, false positive, and false negative edges obtained via binary adjacency matrices derived by thresholding the estimated absolute partial correlations. We employed reasonable thresholds (0.05) that are commonly used in the literature (Kundu et al., 2018). In contrast, it was not immediately clear how to choose such thresholds for pairwise correlations, given that they are typically larger in magnitude and more variable. Hence, we did not report F-1 scores corresponding to pairwise correlations, although one could do so in principle by choosing suitable thresholds to obtain binary adjacency matrices. Finally, we also evaluated the clustering performance in terms of the clustering error (CE) and Variation of Information (VI). CE (Patrikainen and Meila, 2006) is defined as the maximum overlap between the estimated clustering and the true clustering, whereas VI (Meil\u{a}, 2007) calculates the entropy associated with different clustering configurations.
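For concreteness, the F-1 computation described above can be carried out as follows; this is a sketch with the 0.05 threshold from the text, and the helper function name is ours.
\begin{verbatim}
# Sketch of the F-1 computation: threshold absolute partial correlations
# at 0.05 to obtain binary adjacency matrices over the unique edges, then
# compare estimated and true edge sets.
import numpy as np

def f1_score_edges(pcor_est, pcor_true, thresh=0.05):
    iu = np.triu_indices_from(pcor_est, k=1)
    est = np.abs(pcor_est[iu]) > thresh
    tru = np.abs(pcor_true[iu]) > thresh
    tp = np.sum(est & tru)
    prec = tp / max(est.sum(), 1)     # TP / (TP + FP)
    rec = tp / max(tru.sum(), 1)      # TP / (TP + FN)
    return 2 * prec * rec / max(prec + rec, 1e-12)
\end{verbatim}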
\subsection{Results} The performance in terms of recovering the true clusters of subjects is provided in Table \ref{tab:clus_acc}, in the presence of two covariates that are both related to the true connectivity changes. It is clear from the results that incorporating covariate information results in near perfect recovery of the clusters, in contrast to the covariate-naive version of the method. For $V=100$, the dynamic pairwise correlation approach appears to have slightly higher accuracy in terms of cluster recovery compared to the dynamic precision matrix approach when data is generated from a VAR model. Table \ref{tab:cluster_cp} reports the accuracy in recovering the true network-level change points under the proposed approaches at the level of the estimated clusters, as per the discussion in Section 2.4. In this case, both the idPAC and idPMAC methods are shown to have near perfect recovery of the true network connectivity change points when data is generated under GGM, and high sensitivity when data is generated under VAR. Moreover, when using data from a VAR model, the idPAC method has a comparable or higher sensitivity but also higher false positives for $V=100$ in terms of detecting connectivity change points at the cluster level, compared to the idPMAC method. We note that although all samples within a cluster had identical covariate information, the proposed approach was able to accommodate within cluster connectivity differences, as is evident from the low false positives and high sensitivity when estimating cluster level change points. Moreover, as seen from Tables \ref{tab:dypc}-\ref{tab:dypm}, the accuracy in recovering cluster level connectivity change points is considerably higher than the corresponding results at the level of individual networks. These results indicate the usefulness of aggregating information when it is reasonable to assume the existence of subgroups of individuals who share some similar facets of dynamic connectivity. \begin{table}[] \centering \begin{tabular}{l|cccc|cccc} \hline &\multicolumn{4}{c|}{idPAC} &\multicolumn{4}{c}{BPMM-PC}\\ &\multicolumn{2}{c}{V=40} &\multicolumn{2}{c|}{V=100} &\multicolumn{2}{c}{V=40} &\multicolumn{2}{c}{V=100} \\ &CE&VI&CE&VI&CE&VI&CE&VI \\ \hline GGM+Erdos-Renyi&0&0&0&0&0.64&1.93&0.62&2.19\\ GGM+Small-world&0&0&0&0&0.57&1.92&0.71&2.23\\ GGM+Scale-free&0&0&0&0&0.63&2.01&0.66&2.19\\ VAR+Erdos-Renyi &0&0&0&0&0.61&1.93&0.67&1.97 \\ VAR+Small-World &0&0&0&0&0.59&1.88&0.61&1.90\\ VAR+Scale-Free &0&0&0&0&0.61&1.78&0.61&1.93\\ \hline &\multicolumn{4}{c|}{idPMAC} &\multicolumn{4}{c}{BPMM-PR}\\ &\multicolumn{2}{c}{V=40} &\multicolumn{2}{c|}{V=100} &\multicolumn{2}{c}{V=40} &\multicolumn{2}{c}{V=100} \\ \hline GGM+Erdos-Renyi &0&0&0&0&0.43&1.41&0.54&1.59\\ GGM+Small-world &0&0&0&0&0.41&1.41&0.51&1.68\\ GGM+Scale-free &0&0&0&0&0.43&1.49&0.60&1.78\\ VAR+Erdos-Renyi &0.08&0.25&0.04&0.17 & 0.54 &1.51 & 0.66 &1.88\\ VAR+Small-World &0&0&0.03&0.14 &0.48 &1.47 &0.58&1.91 \\ VAR+Scale-Free &0&0&0.04 &0.11 &0.49 &1.42 & 0.63 &1.75 \\ \hline \end{tabular} \caption{ Clustering performance under different network types. GGM implies that the Gaussian graphical model was used to generate temporally uncorrelated observations; VAR implies a vector autoregressive model that was used to generate temporally dependent observations. For the VAR case, the observations were pre-whitened before fitting the model.
} \label{tab:clus_acc} \end{table} \begin{table}[] \centering \begin{tabular}{l|cccc|cccc} \hline &\multicolumn{4}{c|}{idPAC}&\multicolumn{4}{c}{idPMAC} \\ &\multicolumn{2}{c}{V=40} &\multicolumn{2}{c|}{V=100} &\multicolumn{2}{c}{V=40} &\multicolumn{2}{c}{V=100} \\ &sens&FP&sens&FP&sens&FP&sens&FP\\ \hline GGM+Erdos-Renyi &1 &2.15 &0.99& 1.58 & 0.97& 3.94 &0.99 &3.18\\ GGM+Small-world & 0.97 & 2.11 &1& 1.59 & 0.99 &4.18 & 0.98 &3.17\\ GGM+Scale-free & 0.99 &2.09&1&1.37 & 1 & 3.91 & 0.97 & 3.09\\ \hline VAR+Erdos-Renyi & 0.91 &3.71 &0.88& 3.66 & 0.87& 3.47 &0.87 &2.89\\ VAR+Small-world & 0.84 & 3.44 &0.8& 3.09 & 0.82 &3.45 & 0.81 &2.98\\ VAR+Scale-free & 0.88 &3.29&0.84&3.68 & 0.85 &3.3 & 0.81 & 3.01\\ \end{tabular} \caption{Cluster-based network change point estimation under the proposed approaches, assuming that all samples within a particular cluster have the same number and similar location of change points, with a limited degree of heterogeneity in functional connectivity. If this assumption holds, then the cluster level network change point estimation provides greater accuracy compared to the estimated change points at the level of individuals as reported in subsequent Tables.} \label{tab:cluster_cp} \end{table} \begin{table}[] \centering \begin{tabular}{l|ccccc|ccccc} \hline {\bf Results for V=40} & \multicolumn{2}{c}{Network CP} &\multicolumn{2}{c}{Edge CP}& MSE & \multicolumn{2}{c}{Network CP} &\multicolumn{2}{c}{Edge CP}& MSE \\ &sens &FP &sens&FP &MSE &sens &FP &sens&FP &MSE\\ \hline &\multicolumn{5}{c}{BPMM-PC} &\multicolumn{5}{c}{idPAC}\\ \hline GGM+Erdos-Renyi&0.91&7.31&0.50&1.12&{\bf 0.1}&{\bf 1}& 2.75&{\bf 0.92}&1.08&{\bf 0.09}\\ GGM+Small-world&0.92&5.99&0.47&1.03&0.12&{\bf 0.98}& 2.77&{\bf 0.92}&1.01&{\bf 0.08}\\ GGM+Scale-free&0.91&7.29&0.49&1.19&0.12&{\bf 1}& 2.81&{\bf 0.92}&1.1&{\bf 0.09}\\ \hline &\multicolumn{5}{c}{SD+GFL}&\multicolumn{5}{c}{CCPD}\\ \hline GGM+Erdos-Renyi&0.3&3.13&0.09&2.97&0.29 &0.92&$\bm{2.15}$&0.31&4.1&0.16\\ GGM+Small-world&0.29&3.31&0.09&3.08&0.27&0.92&$\bm{2.18}$&0.29&4.17&0.21\\ GGM+Scale-free&0.29&3.08&0.09&2.99&0.24&0.91&$\bm{2.33}$&0.29&4.09&0.19\\ \hline \hline \hline &\multicolumn{5}{|c}{BPMM-PC} &\multicolumn{5}{c}{idPAC}\\ \hline VAR+Erdos-Renyi &0.68&6.55&0.43&1.08&0.2&{\bf 0.84}&5.57&{\bf 0.80}&1.06&{\bf 0.12}\\ VAR+Small-world &0.66 &5.97&0.47 &1.14 &0.19 &{\bf 0.77} &5.54 &{\bf 0.74} &1.12 & {\bf 0.09} \\ VAR+Scale-free & 0.59 &5.51&0.39&1.02&0.17&{\bf 0.78}&5.29&{\bf 0.73} &1.06&{\bf 0.09}\\ \hline &\multicolumn{5}{c}{SD+GFL}&\multicolumn{5}{c}{CCPD}\\ \hline VAR+Erdos-Renyi&0.41&7.72&0.13&3.06&0.26&0.55&{\bf 1.12}&0.18&4.33&0.21\\ VAR+Small-world& 0.56 & 6.29 &0.14 &2.98 &0.19 & 0.64&{\bf 1.36} &0.17 &3.47 &0.23\\ VAR+Scale-free&0.42 &6.99 &0.17 &3.13 &0.22 & 0.58 &{\bf 1.27} &0.19 &3.29 &0.2 \\ \toprule \toprule {\bf Results for V=100} & \multicolumn{2}{c}{Network CP} &\multicolumn{2}{c}{Edge CP}& MSE & \multicolumn{2}{c}{Network CP} &\multicolumn{2}{c}{Edge CP}& MSE \\ &sens &FP &sens&FP &MSE &sens &FP &sens&FP &MSE\\ \hline &\multicolumn{5}{c}{BPMM-PC} &\multicolumn{5}{c}{idPAC}\\ \hline GGM+Erdos-Renyi&0.92&4.77&0.51&1.31&0.11&{\bf 1}&2.31&{\bf 0.83}&{\bf 1.16}&0.09\\ GGM+Small-world&0.91&4.69&0.49&1.33&0.1&{\bf 1}&2.37&{\bf 0.82}&{\bf 1.17}&0.09\\ GGM+Scale-free&0.91&4.71&0.50&1.31&0.11&{\bf 1}&2.29&{\bf 0.83}&{\bf 1.16}&0.09\\ \hline &\multicolumn{5}{c}{SD+GFL}&\multicolumn{5}{c}{CCPD}\\ \hline GGM+Erdos-Renyi&0.3&3.13&0.09&2.97&0.29&0.9&{\bf 1.12}&0.29&4.6&0.18\\ GGM+Small-world&0.29&3.31&0.09&3.08&0.27&0.91&{\bf 
1.18}&0.25&4.2&0.17\\ GGM+Scale-free&0.29&3.08&0.09&2.99&0.27&0.91&{\bf 1.02}&0.27&4.4&0.17\\ \hline \hline \hline &\multicolumn{5}{|c}{BPMM-PC} &\multicolumn{5}{c}{idPAC}\\ \hline VAR+Erdos-Renyi&0.66&5.97&0.51&1.07&0.14&{\bf 0.82}&5.88&{\bf 0.81}&1.04&{\bf 0.11}\\ VAR+Small-world&0.59&6.03&0.41&1.02&0.14&{\bf 0.75}&5.44&{\bf 0.74}&1.05&{\bf 0.12} \\ VAR+Scale-free& 0.62&5.49&0.44&0.99&0.15&{\bf 0.77}&5.51&{\bf 0.71}&1.11&{\bf 0.13}\\ \hline &\multicolumn{5}{c}{SD+GFL}&\multicolumn{5}{c}{CCPD}\\ \hline VAR+Erdos-Renyi&0.37&8.03&0.1&3.14&0.15&0.55&{\bf 1.09}&0.17&3.75&0.22\\ VAR+Small-world& 0.44&7.51&0.16&2.71&0.16&0.66&{\bf 1.44}&0.19&3.41&0.19\\ VAR+Scale-free& 0.36&7.72&0.18&2.88&0.18&0.59&{\bf 1.31}&0.17&3.44&0.19\\ \end{tabular} \caption{Results under the dynamic pairwise correlation approaches: network-level change point (Network CP) and edge-level change point (Edge CP) estimation accuracy, along with the MSE for estimating the pairwise correlations, for $V=40,100$. GGM and VAR correspond to data generated from Gaussian graphical models and vector autoregressive models, respectively. Significantly improved metrics among the four approaches, computed separately for the GGM data and for the VAR data, are highlighted in bold font.} \label{tab:dypc} \end{table} \begin{table}[] \centering \setlength\tabcolsep{2.5pt} \begin{tabular}{l|cccccc|cccccc} \hline {\bf Results for V=40} & \multicolumn{2}{c}{Network CP} &\multicolumn{2}{c}{Edge CP}& MSE&F1 & \multicolumn{2}{c}{Network CP} &\multicolumn{2}{c}{Edge CP}& MSE&F1 \\ &sens &FP &sens&FP &MSE&F1 &sens &FP &sens&FP &MSE&F1\\ \hline &\multicolumn{6}{|c}{BPMM-PM} &\multicolumn{6}{c}{idPMAC}\\ \hline GGM+Erdos-Renyi&0.85&6.99&0.32&1.04&0.1& 0.79 &${\bf 1}$&{\bf 5.2}&{\bf 0.79}&{\bf 0.89}&0.08&{\bf 0.88}\\ GGM+Small-world&0.88&7.14&0.33&1.16&0.08&0.77 &${\bf 1}$&{\bf 5.11}&{\bf 0.81}&{\bf 0.91}&0.08&{\bf 0.9}\\ GGM+Scale-free&0.87&7.36&0.33&1.19&0.08& 0.71 &${\bf 0.97}$&{\bf 5.6}&{\bf 0.77}&{\bf 0.92}&0.07&{\bf 0.89}\\ \hline &\multicolumn{6}{c}{DCR} &\multicolumn{6}{c}{SINGLE}\\ \hline GGM+Erdos-Renyi &0.22&16.15&0.41&9.39&0.27&0.59 &0.35&6.49&0.1&2.84&0.08&0.71\\ GGM+Small-world &0.19&11.83&0.49&9.66&0.22&0.61 &0.32&6.55&0.09&2.88&0.07&0.77\\ GGM+Scale-free &0.21&10.92&0.49&9.058&0.23&0.62 &0.33&6.01&0.09&2.94&0.07&0.69\\ \hline \hline \hline &\multicolumn{6}{|c}{BPMM-PM} &\multicolumn{6}{c}{idPMAC}\\ \hline VAR+Erdos-Renyi& 0.66&{\bf 4.45} &0.29&{\bf 1.16}&0.10&0.77&{\bf 0.79}&4.81&{\bf 0.68}&1.22&0.09&{\bf 0.81}\\ VAR+Small-world & 0.59&5.12&0.27&{\bf 1.03}&0.1&0.74&{\bf 0.78}&{\bf 4.99}&{\bf 0.69}&{\bf 1.04}&0.09&{\bf 0.79}\\ VAR+Scale-free & 0.61&4.77&0.31&{\bf 1.04}&0.12&0.77&{\bf 0.76}&{\bf 4.64}&{\bf 0.71}&{\bf 0.99}&{\bf 0.09}&{\bf 0.82}\\ \hline &\multicolumn{6}{c}{DCR} &\multicolumn{6}{c}{SINGLE}\\ \hline VAR+Erdos-Renyi &0.22&9.83&0.4&3.35&0.24 &0.64 &0.42&7.35&0.13&3.11&0.27 &0.66\\ VAR+Small-world &0.24&10.14&0.33&3.61&0.23&0.63 & 0.44&7.12 &0.17 &3.04 &0.26&0.62\\ VAR+Scale-free &0.21 &9.98&0.32&3.61&0.22&0.59 & 0.38 &6.77 &0.21 &3.36 &0.23&0.6\\ \toprule \toprule {\bf Results for V=100} & \multicolumn{2}{c}{Network CP} &\multicolumn{2}{c}{Edge CP}& MSE&F1 & \multicolumn{2}{c}{Network CP} &\multicolumn{2}{c}{Edge CP}& MSE &F1\\ &sens &FP &sens&FP &MSE&F1 &sens &FP &sens&FP &MSE&F1\\ \hline &\multicolumn{6}{|c}{BPMM-PM} &\multicolumn{6}{c}{idPMAC}\\ \hline GGM+Erdos-Renyi&{\bf 0.92}&6.83&0.28&1.09&0.08&0.83&{\bf 0.97}&{\bf 5.1}&{\bf 0.82}&{\bf 0.89}&0.08&{\bf 0.89}\\ GGM+Small-world&{\bf 0.91}&6.98&0.31&1.19&0.09&0.81&{\bf 0.97}&{\bf 5.44}&{\bf 0.81}&{\bf 
0.99}&0.07&{\bf 0.87}\\ GGM+Scale-free&{\bf 0.92}&7.44&0.32&1.25&0.08&0.81&{\bf 0.96}&{\bf 5.6}&{\bf 0.79}&{\bf 0.94}&0.07&{\bf 0.87}\\ \hline &\multicolumn{6}{c}{DCR} &\multicolumn{6}{c}{SINGLE}\\ \hline GGM+Erdos-Renyi &0.33&16.14&0.41&9.39&0.22&0.63 &&&&&& \\ GGM+Small-world &0.31&15.88&0.4&9.66&0.27&0.59&&&NA&&&\\ GGM+Scale-free &0.34&16.82&0.39&10.08&0.27&0.64&&&&&&\\ \hline \hline \hline &\multicolumn{6}{|c}{BPMM-PM} &\multicolumn{6}{c}{idPMAC}\\ \hline VAR+Erdos-Renyi&0.73&4.41&0.29&1.18&0.14&0.77&{\bf 0.88}&4.22&{\bf 0.63}&{\bf 1.09}&0.13&{\bf 0.82}\\ VAR+Small-world &0.56& 5.22&0.22&{\bf 0.91}&0.11&0.78&{\bf 0.72}&{\bf 4.87}&{\bf 0.61}&1.09&0.1&{\bf 0.81}\\ VAR+Scale-free & 0.59&5.13&0.29&1.03&0.11&0.78&{\bf 0.77}&{\bf 4.49}&{\bf 0.65}&1.08&0.09&{\bf 0.81}\\ \hline &\multicolumn{6}{c}{DCR} &\multicolumn{6}{c}{SINGLE}\\ \hline VAR+Erdos-Renyi&0.23&9.92&0.43&3.19&0.16&0.64&&&&&&\\ VAR+Small-world&0.31&10.23&0.37&3.37&0.19&0.67 &&&NA&&&\\ VAR+Scale-free& 0.25&10.23&0.38&3.61&0.18&0.65&&&&&&\\ \end{tabular} \caption{Results under the dynamic precision matrix estimation approaches: network-level change point (Network CP) and edge-level change point (Edge CP) estimation accuracy, along with the MSE for estimating edge strengths and the F-1 score for network estimation, for $V=40,100$. GGM and VAR correspond to data generated from Gaussian graphical models and vector autoregressive models, respectively. Significantly improved metrics among the four approaches, computed separately for the GGM data and for the VAR data, are highlighted in bold font. Results under SINGLE are not reported (NA) for $V=100$ due to its infeasible computational burden.} \label{tab:dypm} \end{table} \begin{figure}[h] \centering \includegraphics[width=\linewidth]{FscorePM3.jpg} \caption{F-1 score over time for a single subject under the dynamic partial correlation method. The vertical green lines are the true change points. The red line represents the proposed method with dynamic partial correlations (idPMAC), the cyan line represents the covariate-naive version (BPMM-PM), the blue line represents DCR, and the pink line represents the SINGLE method.} \label{fig:f1score} \end{figure} Table \ref{tab:dypc} reports the performance under the pairwise correlation based approaches, i.e., idPAC, BPMM-PC, SD+GFL, and CCPD. It is clear from the results that the proposed idPAC method has near perfect sensitivity when data is generated under GGM, and suitably high sensitivity under the VAR model, when estimating connectivity change points. The sensitivity for network and edge change point estimation, as well as the MSE in estimating the pairwise correlations, are significantly improved under idPAC compared to the competing approaches in Table \ref{tab:dypc}. The CCPD method is shown to have the lowest false positives when estimating the network-level change points, but otherwise has poor sensitivity for change point estimation and high MSE, which is potentially due to its assumption of piecewise constant connectivity. The approach based on sliding window correlations has the poorest performance across all the reported metrics, which illustrates its drawbacks in estimating dynamic connectivity. Table \ref{tab:dypm} reports the performance under the precision matrix based approaches, i.e., idPMAC, BPMM-PM, SINGLE, and DCR. The results under the SINGLE method are not reported for $V=100$ due to its infeasible computational burden. It is evident that the proposed idPMAC method has near-perfect or high sensitivity for detecting network-level change points, corresponding to data generated under the GGM and VAR models, respectively. 
It also has a suitably high sensitivity for detecting edge-level connectivity change points under both cases. Similarly, the MSE for edge strength estimation and the F-1 scores for network estimation accuracy are significantly improved under the proposed method in contrast to competing approaches. Figure \ref{fig:f1score} illustrates that the F-1 score over time under the proposed dynamic precision matrix method with covariates is higher than that of competing methods across almost all time scans. Moreover, the DCR and SINGLE methods have the least impressive performance in terms of connectivity change point estimation, which also translates to poor dynamic network estimation (low F-1 scores). Our results clearly illustrate the advantages of the proposed methods over existing approaches that are not effective in leveraging information across samples. In addition, Tables \ref{tab:dypc}-\ref{tab:dypm} also illustrate the gains from incorporating covariate information under the proposed idPAC and idPMAC approaches over the covariate-naive BPMM counterparts. It is interesting to note that the covariate-naive BPMM still fares better than existing dynamic connectivity methods that fail to pool information across samples in a systematic manner. We also note that while false positive (FP) connectivity change points are expected due to the heterogeneity across samples, the proposed approaches provide desirable control of FP even while pooling information across samples with varying networks. In fact, the FP under the proposed methods are lower than under all competing methods except CCPD, whose performance is otherwise less impressive, with significantly lower sensitivity for change point detection and inferior network estimation as reflected by poor MSE and F-1 scores. When comparing the relative performance between the idPAC and idPMAC methods, it is evident that the former has comparable or higher sensitivity but lower false positives in terms of estimating connectivity change points at the network level when data is generated under a GGM. When data is generated under a VAR model, the idPAC method has higher sensitivity but also higher false positives compared to idPMAC for estimating network connectivity change points. This is also true when estimating edge-level connectivity change points. In addition, since the idPMAC method estimates all edges simultaneously, the mean squared error for estimating edge strengths is often lower than under the idPAC method. Moreover, when the number of spurious covariates is increased, both approaches experience a drop in performance, as expected. However, while the rate of deterioration in terms of estimating connectivity change points is similar between the two methods (see the second and third rows in Figure \ref{fig:spur_cov}), the dynamic precision matrix approach is more resilient to the presence of spurious covariates in terms of recovering the true clusters. This is evident from the top panels in Figure \ref{fig:spur_cov}, which show a slower increase in the clustering error under the idPMAC method. \begin{figure}[] \centering \includegraphics[width=1.05\linewidth, height=6.7in]{clus_spurious.JPG} \caption{Performance of the dynamic pairwise correlation (columns 1 and 2) and dynamic precision matrix (columns 3 and 4) methods under different numbers of spurious covariates, represented on the X-axis. Lines of different colors represent different network structures: green (Erdos-Renyi), red (Small-World), blue (Scale-Free). 
The top row shows the clustering performance (clustering error and variation of information), the middle row shows the performance of network-level change point estimation (sensitivity and number of false positives), and the bottom row shows the performance of edge-level change point estimation.} \label{fig:spur_cov} \end{figure} The computation times for the proposed approaches are much shorter than those of existing dynamic connectivity methods such as SINGLE, and comparable to the DCR approach proposed by Cribben et al. (2013). For example, with 40 individuals, the pairwise dynamic connectivity method without covariates took about 20 minutes to run for $V=20$, and around 26 minutes with two covariates. Similarly, when $V=40$ and $T=300$, the average computation time was around 80 minutes with 40 subjects without covariates. The proposed method was scalable to $V=100$ and $T=300$, unlike the SINGLE approach, whose average computation time was around 6 hours. The total computation time under BPMM is expected to increase with $V$, $T$, and $N$, which is true for any method that computes dynamic connectivity at the level of each individual. \section{Analysis of Task fMRI Data} \subsection{Description of the study} We analyze block task fMRI data involving a semantic verbal fluency task, collected at the Veterans Affairs Center for Visual and Neurocognitive Rehabilitation, Atlanta. In a 12-week randomized controlled trial, 33 elderly individuals (aged 60-80, 11 males, 22 females) were assigned to two intervention groups: the spin aerobic exercise group (14 participants) and the non-aerobic exercise control group (19 participants). During the intervention, individuals belonging to the aerobic spin group were required to do 20-45 minutes of spin aerobic exercise three times a week, led by a qualified instructor. For the control group, participants were asked to do the same amount of non-aerobic exercise per week, such as group balance and light muscle toning exercises. A more detailed description of the data is available in Nocera et al. (2017). For each participant, fMRI scans were conducted with 6 blocks of the semantic verbal fluency (task) condition with 8 scans each, both pre- and post-intervention. In the semantic verbal fluency task, participants viewed a category (e.g., ``colors'') at the center of a video screen and were asked to generate and speak 8 different objects associated with that category (e.g., ``blue''). After each task block, a rest block of 3-5 TRs followed, during which participants were required to read the word ``rest'' out loud. A total of 74 brain scans were acquired using a 3T Siemens Trio scanner with a whole-brain, 1-shot gradient EPI sequence (240 $\times$ 240 mm FOV, 3.75 $\times$ 3.75 mm in-plane resolution, TR=5830 ms, TA=1830 ms, TE=25 ms, flip angle (FA)=70$^{\circ}$). The Analysis of Functional NeuroImages (AFNI) software and the FMRIB Software Library (FSL) were used for pre-processing, as in Nocera et al. (2017). Slice-time correction, linear trend removal, echo planar image alignment, and motion correction were performed as part of the pre-processing pipeline. We used 18 brain regions for the analysis that were shown to be differentially activated between the two intervention groups, as described in Nocera et al. (2017). These regions are listed in Table \ref{tab:ROI} and comprise more regions in the right hemisphere, due to decreased activity in that hemisphere in the spin group following the intervention, as compared to the control group. 
We note that since these regions corresponded to group differences due to the spin exercise, they cannot be described as ``canonical'' regions associated with semantic language function, which would also comprise some additional homologous regions in the left hemisphere. Since the purpose of the study was to investigate dynamic connectivity changes between brain regions due to the intervention, an analysis based on the selected 18 regions was undertaken instead of using canonical regions. \begin{table}[] \centering \begin{tabular}{|l|l|c|l|} \hline ROI Number & Region name &Brodmann area & MNI coordinate\\ \hline 1 &R Cerebellum 1 & NA & (5,-62,-57)\\ 2 &R Inferior Temporal Gyrus &20 &(41,-27,-30)\\ 3 &R Angular Gyrus &39 &(44,-56,12)\\ 4 &R Middle Frontal Gyrus &10 &(23,56,-6)\\ 5 &R Middle Temporal Gyrus 1 &22 &(53,-12,-9)\\ 6 &L Precuneus 1 &7 &(-9,-74,57)\\ 7 &L Cingulate Gyrus &NA &(-9,-33,39)\\ 8 &R Precuneus &7 &(6,-80,48)\\ 9 &R Cerebellum 2 &NA &(35,-53,-27)\\ 10 &R Middle Temporal Gyrus 2 &21 &(60,-45,-6)\\ 11 &R Inferior Frontal Gyrus/Precentral Gyrus &44 &(59,9,9)\\ 12 &R Retrosplenial Area &30 &(9,-47,18)\\ 13 &R Supramarginal Gyrus &40 &(41,-36,33)\\ 14 &R Pars Triangularis/MFG &45 &(47,47,-9)\\ 15 &L Precuneus 2 &7 &(-6,-71,45)\\ 16 &L Cuneus &19 &(-15,-80,27)\\ 17 &L Superior Frontal Gyrus &6 &(-17,-18,69)\\ 18 &R Middle Temporal Gyrus 3 &22 &(60,-36,0)\\ \hline \end{tabular} \caption{Summary of brain regions used for the analysis. R and L are abbreviations for right and left, respectively.} \label{tab:ROI} \end{table} \subsection{Analysis Outline} We performed the analysis separately for the pre-intervention and post-intervention data, under both the dynamic pairwise correlation and dynamic precision matrix estimation methods. We used age and gender as covariates for the pre-intervention dataset, while also using the type of intervention (spin or non-aerobic control) as an additional covariate for the post-intervention analysis. Our analysis is designed to: (i) investigate the clustering behavior and inspect how these clusters differ with respect to demographics and the intervention type; (ii) investigate the cluster-level network differences using network summary measures; (iii) estimate the connectivity change points and examine how well they align with the changes dictated by the block task experiment; and (iv) infer nodes and edges in the network with significantly different connectivity patterns between pre- and post-intervention. Aim (i) enables us to characterize homogeneous dynamic connectivity patterns corresponding to clusters of samples in terms of their demographic and clinical characteristics; aim (ii) will be instrumental in interpreting the cluster-level network differences, which will shed light on network variations across transient network states; aim (iii) will provide insights regarding the effectiveness of the proposed approaches in terms of recovering connectivity jumps, where these changes are influenced by, but often not fully aligned with, the changes in the block task experimental design (Hindriks et al., 2016; Kundu et al., 2018); and aim (iv) will inform investigators regarding dynamic connectivity differences that are associated with the type of intervention. For aim (ii), we were only able to report results under dynamic precision matrix estimation, since a graph theoretic framework is necessary to compute the network summary measures, which may not be feasible under a pairwise correlation analysis. 
\subsection{Results} {\noindent \underline{Cluster analysis:}} As seen from Table \ref{tab:clus_va}, the analysis under both the idPAC and idPMAC methods yielded 5 clusters consolidated over all time scans (using the K-means algorithm described in Section 2.3), although the cluster sizes were more balanced under the idPAC method. The pre-intervention analysis yielded clusters that were largely homogeneous with respect to gender. These clusters were also reasonably well-separated with respect to age under the idPAC analysis, whereas the ages of the participants within clusters were more diverse under the idPMAC analysis. The post-intervention analysis yielded more heterogeneous clusters with respect to both age and gender, with only one cluster comprising all males under both the idPAC and idPMAC analyses. This suggests a realignment of the dynamic connectivity after the intervention is administered, such that individuals of similar gender and age group have synchronous dynamic connectivity patterns pre-intervention, as identified via subgroups, whereas the subgroups and their composition with respect to age and gender change post-intervention. Our post-intervention analysis also suggests that the variability across clusters under the idPAC method can be largely explained via the intervention type. \begin{table}[h] \centering \begin{tabular}{l|ccccc|ccccc} \hline Method & \multicolumn{5}{c}{idPAC} &\multicolumn{5}{c}{idPMAC}\\ Cluster index &1 &2 &3 &4 &5 &1 &2 &3 &4 &5\\ \hline Cluster features & \multicolumn{5}{c}{Pre-intervention} & \multicolumn{5}{c}{Pre-intervention}\\ \hline Size &8 &6 &8 &7 &4 &3 &5 &17 &6 &2\\ \% of females &0 &100 &0 &14 &100 &0 &100 &0 &100 &0\\ Age (mean) &72.2 &65.8 &64.7 &76.7 &67.7 &71.7 &69 &70.4 &66.8 &67\\ Age (range) &69-73 &60-72 &60-68 &74-80 &66-69 &63-78 &62-80 &60-80 &60-72 &66-68\\ CP (Task-Rest) &6 &3 &4 &4 &4 &4 &5 &5 &4 &3 \\ CP (Rest-Task) &3 &5 &2 &3 &4 &4 &4 &2 &4 &3 \\ \hline &\multicolumn{5}{c}{Post-intervention} &\multicolumn{5}{c}{Post-intervention}\\ \hline Size &8 &4 &7 &11 &3 &3 &4 &9 &11 &6\\ \% of females &63 &75 &0 &18 &33 &67 &100 &0 &9 &67\\ Age (mean) &67.3 &65 &65.1 &74.5 &73.7 &73.7 &69.3 &68.6 &73 &62.7 \\ Age (range) &62-70 &60-71 &60-68 &71-80 &68-78 &67-80 &68-72 &63-78 &68-80 &60-66\\ CP (Task-Rest) &5 &6 &4 &3 & 6 &3 &3 &5 &5 &5 \\ CP (Rest-Task) &3 &5 &2 &2 &4 &2 &5 &2 &4 &2 \\ Spin (\%) &0 &100 &100 &0 &100 &33 &0 &100 &9 &50\\ \hline \end{tabular} \caption{Results for the analysis of the block task fMRI experiments. Size refers to the number of participants in each cluster; `CP (Task-Rest)' and `CP (Rest-Task)' denote the cluster-level connectivity change points that were detected within +/- 2 time scans of the change in experimental design from task to fixation, and from fixation to task, respectively. `Spin' refers to the percentage of individuals assigned to the Spin intervention belonging to each cluster.} \label{tab:clus_va} \end{table} {\noindent \emph{Connectivity change point estimation:}} Table \ref{tab:clus_va} reports the cluster-level connectivity change point estimates. We observed that under both the idPAC and idPMAC methods, the estimated change points were consistent with 4 or more (out of 6) changes in experimental design when transitioning from task to rest, except for one cluster, where 3 of the connectivity change points aligned with the experimental design. 
These patterns were consistent in both the pre- and post-intervention analyses; however, the number of connectivity change points that were strongly aligned with changes in the experimental design was (on average) greater in the post-intervention analysis than in the pre-intervention analysis. This suggests a learning effect of the task, reflected in a higher concordance between the connectivity change points and the experimental design post-intervention. On the other hand, the cluster-level estimation of change points when transitioning from fixation to task was (on average) less aligned with the experimental design compared to the change points when transitioning from task to fixation, as seen in Table \ref{tab:clus_va}. This is somewhat expected, since there were only 3-5 time scans in each fixation block, which made it extremely challenging to detect connectivity changes when transitioning from fixation to task. However, the proposed approach was still able to detect at least two, and often three or more, connectivity change points (out of 6) aligned with the experimental design, which suggests a reasonable concordance between connectivity jumps and experimental transitions from fixation to task. In contrast, the CCPD approach detected at most one or two connectivity change points, while the DCR method was not able to detect connectivity change points at all, which makes these results appear biologically implausible given the nature of the block task experiment. Although the changes in connectivity are not expected to be fully aligned with changes in the experimental design (Hindriks et al., 2016), one expects a certain degree of synchronicity between the two. Our results indicate that this is not captured at all via existing change point methods, especially when there are rapidly occurring transitions in the experimental design, which highlights their limitations. Hence, our analysis clearly illustrates the advantages of pooling information across heterogeneous samples and incorporating covariate knowledge via a mixture modeling framework, which is simply not possible using existing approaches that rely on information from single subjects, as in DCR, or that use empirical methods to pool information across individuals, as in CCPD. {\noindent \emph{Cluster level network differences:}} In order to investigate the differences between the networks corresponding to the different clusters, we examined variations in dynamic network metrics that capture modes of information transmission in the brain. These network metrics include the characteristic path length (CPL), which measures the average shortest path length between pairs of nodes, and the mean clustering coefficient (MCC), which measures the clustering tendency averaged over all network nodes. Using permutation testing, we computed p-values to evaluate which pairs of clusters exhibited significantly different network summary measures. None of the clusters had significantly different CPL values in the pre-intervention analysis, but several pairs of clusters exhibited significant CPL differences post-intervention. The CPL differences were particularly pronounced between the first and the remaining clusters, as well as between the last and the remaining clusters, in the post-intervention analysis. These two clusters also demonstrated the highest within-cluster variability in CPL values amongst all clusters. 
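For illustration, the following minimal R sketch shows how such summary measures and permutation p-values can be computed from binary adjacency matrices using the igraph package; the helper names are illustrative, and the test shown is a generic two-sample permutation test rather than the exact procedure from our implementation.
\begin{verbatim}
## Sketch (R): network summary measures and a permutation test
library(igraph)

graph_of <- function(A) graph_from_adjacency_matrix(A, mode = "undirected")
cpl <- function(A) mean_distance(graph_of(A))     # characteristic path length
mcc <- function(A) transitivity(graph_of(A),      # mean clustering coefficient
                                type = "average")

## Permutation p-value for a difference in a summary measure between two
## clusters, given vectors x and y of subject-level values
perm_pval <- function(x, y, B = 5000) {
  obs <- abs(mean(x) - mean(y)); z <- c(x, y); nx <- length(x)
  null <- replicate(B, {
    idx <- sample(length(z), nx)
    abs(mean(z[idx]) - mean(z[-idx]))
  })
  mean(null >= obs)
}
\end{verbatim}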
Moreover, the number of pairs of clusters with significantly different MCC values increased from the pre-intervention to the post-intervention analysis, with 8 out of 10 pairs of post-intervention clusters exhibiting significantly different MCC values. Hence, our results suggest greater variability in network organization between clusters in the post-intervention analysis compared to pre-intervention, which potentially reflects greater network heterogeneity after the 12-week intervention was administered. {\noindent \emph{Network differences pre- and post-intervention:}} We applied paired t-tests with multiplicity adjustment in order to infer which edges were significantly different between pre- and post-intervention at the 5\% level of significance, along with identifying which network nodes contained the greatest number of differential edges. Since the magnitudes of the pairwise correlations and the corresponding edge strength differences were larger, we discovered a higher number of edges with differential edge strengths under the idPAC analysis. For both the idPAC and idPMAC methods, the bulk of the pre- vs post-intervention connectivity differences were concentrated exclusively in individuals in the spin group and were not present in the control group. We obtained 57 significantly different edges under the idPAC analysis and 38 significantly different edges under the idPMAC analysis that were exclusive to the spin group (see Figure \ref{fig:pre-post}). In contrast, the number of significantly different edges between the pre- and post-intervention networks under the idPAC analysis was 20 corresponding to both the spin and control groups, and 7 corresponding to the control group only. Moreover, the idPMAC analysis did not produce any significant edge-level differences between the pre- and post-intervention networks corresponding to both intervention groups, or for the control group only. Our results suggest a strong realignment in dynamic connectivity after the 12-week intervention that was exclusive to the spin group, compared to negligible changes in the control group. \begin{figure} \centering \includegraphics[width=0.8\linewidth, height=3.7in]{pre_pair.jpg} \includegraphics[width=0.8\linewidth,height=3.7in]{pre_postnew2.jpg} \caption{Circle plots for the edges that are significantly different pre- and post-intervention in the spin group but not in the control group. The top and bottom panels correspond to the results under dynamic pairwise correlation and dynamic precision matrix estimation incorporating covariates, respectively. Red and blue lines correspond to lower or higher edge strengths in the pre-intervention network compared to post-intervention. RC1 and RC2 refer to the two brain regions in the right cerebellum; RMTG1-RMTG3 refer to the three brain regions in the right middle temporal gyrus; and LP1-LP2 refer to the two regions in the left precuneus. The MNI coordinates for these regions are provided in the figure legend.} \label{fig:pre-post} \end{figure} The changes between the pre- vs post-intervention networks that occurred exclusively in the spin group under the idPAC analysis were concentrated in the following brain regions: Right Angular Gyrus (8 edges), Left Precuneus (10 edges), Right Cerebellum (9 edges), Right Middle Temporal Gyrus (11 edges), and Right Middle Temporal Gyrus (8 edges). 
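A minimal sketch of the edge-level inference just described is given below, assuming $N \times E$ matrices \texttt{pre} and \texttt{post} of subject-by-edge strengths; the Benjamini-Hochberg correction shown is one possible choice of multiplicity adjustment and is an assumption here, as are the variable names.
\begin{verbatim}
## Sketch (R): paired t-tests across edges with multiplicity adjustment.
## pre and post are N x E matrices of edge strengths (illustrative names);
## the Benjamini-Hochberg correction is an assumed choice of adjustment.
pvals <- sapply(seq_len(ncol(pre)), function(j)
                t.test(post[, j], pre[, j], paired = TRUE)$p.value)
sig_edges <- which(p.adjust(pvals, method = "BH") < 0.05)

## Count differential edges incident to each node, given an E x 2 matrix
## edge_nodes mapping each edge to its two node indices (illustrative)
node_counts <- table(c(edge_nodes[sig_edges, 1], edge_nodes[sig_edges, 2]))
\end{verbatim}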
Similarly, the following brain regions had the highest number of differential edges pre- vs post-intervention under the idPMAC analysis: Right Middle Frontal Gyrus (16 edges), Right Cerebellum (6 edges), Right Pars Triangularis/MFG (8 edges), and Right Middle Temporal Gyrus (7 edges). Two nodes, the Right Cerebellum and the Right Middle Temporal Gyrus, had a large number of significantly differential edges under both the idPAC and idPMAC analyses, while the right middle frontal gyrus had, by far, the largest number of differential edges (16) under the dynamic precision matrix analysis. In addition, we also observe that more nodes in the right hemisphere of the brain have significantly differential connectivity, which is to be expected since the majority of the 18 brain regions being investigated lie in the right hemisphere. The large number of differential connections involving the right cerebellum is believed to be attributable to the generation of internal models or context-specific properties of an object (Moberget et al., 2014), and to preferential activation during a semantic challenge (D'Mello et al., 2017). The connectivity between the right cerebellum and inferior frontal regions has been noted in earlier studies, with the inferior frontal regions being responsible for ordering language and codifying the motor output for syntax (Balsters et al., 2013). Moreover, the differential connectivity in the right middle temporal gyrus is in line with earlier findings that illustrated the role of the left temporal gyrus as a hub for integrating sensory input and transforming it into semantic forms (Davey et al., 2016), and the corresponding connectivity differences in the right middle temporal gyrus may be attributable to a shift in laterality of involvement due to aging (Lacombe et al., 2015). Finally, the large number of differential edges corresponding to the right middle frontal gyrus is potentially associated with semantic priming in older adults (Laufer et al., 2011). Given that this region is associated with executive function (Wang et al., 2019; Jolles et al., 2013) and is well characterized as being involved in working memory tasks, connectivity differences are likely to be focused on this region, since the semantic task requires continuous reference to working memory. \section{Discussion} In this article, we developed a novel approach that accurately estimates a population of subject-level dynamic networks by pooling information across multiple subjects in an unsupervised manner under a mixture modeling framework using covariates. The proposed approach, which is one of the first of its kind in the dynamic connectivity literature, results in significant gains in dynamic network estimation accuracy, as illustrated via extensive numerical studies. The gains under the proposed method are particularly appealing compared to existing approaches in the presence of rapid transitions in connectivity, as evident from our fMRI block task analysis. The proposed approach works best in fMRI task experiments involving a group of heterogeneous individuals executing the same task protocols, and in the presence of a carefully chosen set of covariates that are related to the dynamic network. We also illustrate the robust performance of the proposed approach in the presence of a limited number of covariates that are not related to changes in connectivity, although the performance deteriorates as the number of spurious covariates increases. 
In the presence of a large number of features that may not necessarily be related to dynamic connectivity, one can perform a screening step to exclude unimportant predictors from the analysis. This step involves examining the associations between each covariate and the dynamic connectivity estimates obtained from the covariate-naive BPMM approach, and subsequently retaining only the covariates with significant associations for analysis using the full model. This approach is expected to work well as long as the screening step does not exclude any important covariates and manages to largely filter out spurious covariates that are unrelated to the network. In future work, we plan to extend the proposed approach to incorporate feature selection that automatically identifies significant covariates related to the dynamic networks, and down-weights the contribution of unimportant covariates using Bayesian shrinkage priors. In addition to identifying important connectivity changes during the fMRI block task experiment, our analysis conclusively established major changes between the pre- and post-intervention networks that were exclusive to the spin group. We note that existing literature has established the role of cardiovascular fitness in regulating aging-related declines in both language and motor control (McGregor et al., 2011, 2013). However, much less is known about the effect of exercise intervention on dynamic connectivity, particularly in older adults. Because connectivity is a fundamental aspect of neuronal communication required for high-level cognitive processes, it is important to understand the potential impact of aging and/or aerobic exercise interventions in aging on changes in brain connectivity. Further, our analysis also discovered subgroups of individuals with homogeneous dynamic connectivity, where the heterogeneity within these subgroups with respect to intervention was higher under the idPMAC method compared to the idPAC analysis. This indicates that dynamic pairwise correlations were more accurate in classifying participants in terms of the intervention administered. It is important to note that the separation of clusters with respect to intervention reflects the distinct patterns of dynamic connectivity between the 18 brain regions specified in our study, which are known to be differentially activated in the spin and control groups (Nocera et al., 2017). However, if additional regions are included that are not necessarily associated with the intervention type, it is entirely possible to obtain more heterogeneous clusters that have a more equitable composition with respect to intervention group. This is due to the presence of network edges between regions that are not necessarily associated with the intervention and hence behave similarly in both the spin and control groups. Future work will focus on a more general analysis involving a larger number of canonical regions known to be associated with semantic language function. \section*{Supplementary Materials} The Supplementary Materials contain additional details corresponding to the M-steps for dynamic pairwise correlations and partial correlations, as well as details for selecting the tuning parameter in (\ref{eq:multiTV}) for change point estimation corresponding to Section 2.4. \section*{Acknowledgements} The views expressed in this work do not necessarily reflect those of the National Institutes of Health, the Department of Veterans Affairs, or the United States Government. 
The work was supported by NIMH award number R01MH120299 (SK), and VA research awards IK2RX000956 (KMM) and IK2RX000744 (JN). \section*{Data and Code Availability} A portion of the data presented in this work is the property of the United States Department of Veterans Affairs. Copies of the de-identified data can be made available upon written request to the corresponding author and the Department of Veterans Affairs. The code for implementing the proposed approaches is available at: {\it https://github.com/Emory-CBIS/BPMM} \section*{Ethics Statement} Study procedures were approved by the institutional review board of Emory University, informed consent was obtained for experimentation with human subjects, and procedures were consistent with the Declaration of Helsinki. \section*{Appendix} \subsection*{Posterior Distribution for Dynamic Pairwise Correlations} Here, we derive the log-posterior distribution that is used in the EM algorithm to obtain parameter estimates. The augmented log-posterior distribution for $\bm\Theta^{jl}$ under (\ref{eq:base})-(\ref{eq:base_cov}) is: \begin{align} &\log(\pi(\bm{\Theta}^{jl}\mid \bm{Y} ) ) \propto \log \bigg(P(\bm{\Theta}^{jl})P(\bm{Y|\Theta}^{jl}) \bigg) =\sum_{h=1}^H\sum_{t=1}^T\log(\pi(\gamma^*_{h,jlt})) + \sum_{h=1}^H\log(\pi(\sigma^2_{\gamma,h})) \nonumber \\ &+\sum_{h=1}^{H-1}\sum_{t=1}^T\log(\pi(\bfb_{h,jlt})) +\sum_{i=1}^{N} \sum_{t=1}^{T}\log \bigg(P(y^{(i)}_{jt},y^{(i)}_{lt}|\gamma_{jl,t}^{(i)},\sigma_{y}^2) \times \pi(\gamma_{jl,t}^{(i)}) \bigg) \nonumber \\ &\propto \sum_{i=1}^N \sum_{t=1}^T \bigg[ -\frac{1}{2}\log\bigg\{1-\bigg(\frac{\exp(2\gamma_{jl,t}^{(i)})-1}{\exp(2\gamma_{jl,t}^{(i)})+1}\bigg)^2\bigg\} - \bigg\{\frac{(y^{(i)}_{jt})^2 + (y^{(i)}_{lt})^2 - 2\big(\frac{\exp(2\gamma_{jl,t}^{(i)})-1}{\exp(2\gamma_{jl,t}^{(i)})+1}\big)y^{(i)}_{jt}y^{(i)}_{lt}}{2\sigma_y^2 \bigg(1-(\frac{\exp(2\gamma_{jl,t}^{(i)})-1}{\exp(2\gamma_{jl,t}^{(i)})+1})^2\bigg)}\bigg\} \nonumber\\ & -\frac{1}{2}\sum_{h=1}^H\bigg\{\frac{1}{\sigma_{\gamma,h}^2}\Delta^{(i)}_{h,jlt} (\gamma_{jl,t}^{(i)}-\gamma_{h,jlt}^*)^2 + \Delta^{(i)}_{h,jlt}\log(\sigma^2_{\gamma,h})\bigg\} +\sum_{h=1}^{H-1} \Delta_{h,jlt}^{(i)} \bm{x_i}^T \bfb_{h,jlt} - \log \big(1+\sum_{r=1}^{H-1}e^{\bm{x_i}^T \bfb_{r,jlt}}\big) \bigg] \nonumber \\ &+\sum_{h=1}^{H-1}\sum_{t=1}^T\log(\pi(\bfb_{h,jlt}))+ \sum_{h=1}^H \bigg\{\sum_{t=2}^{T}\bigg(-\lambda|\gamma^*_{h,jl,t}-\gamma^*_{h,jl,t-1}|\bigg) - (a_{\sigma}+1)\log(\sigma_{\gamma,h}^2) - \frac{b_{\sigma}}{\sigma_{\gamma,h}^2} \bigg\},\label{eq:lpost1} \end{align} where $\small \log(\pi(\bfb_{h,jlt}))=-\frac{\bfb_{h,jlt}^T\Sigma_{\beta}^{-1}\bfb_{h,jlt}}{2} - \frac{1}{2} \log (\det(\Sigma_{\beta})) $ represents the logarithm of the prior distribution on the covariate effects. The detailed computational steps for deriving the MAP estimates corresponding to the above posterior distribution are discussed in Section 3. 
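For readability, we note that the recurring ratio in (\ref{eq:lpost1}) is simply the inverse Fisher transformation,
\[ \frac{\exp(2\gamma_{jl,t}^{(i)})-1}{\exp(2\gamma_{jl,t}^{(i)})+1} \;=\; \tanh\big(\gamma_{jl,t}^{(i)}\big) \in (-1,1), \]
so that $\gamma_{jl,t}^{(i)}$ is the unconstrained Fisher $z$-transform of the pairwise correlation: the correlation remains in $(-1,1)$ while the parameter being optimized ranges over the whole real line.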
\subsection*{Posterior Distribution for Dynamic Precision Matrices} The augmented log-posterior distribution for the model parameters can be written as $\log(\pi(\bm{\Theta}\mid Y^{(1)},\ldots,Y^{(N)}))$ \begin{align}\small &\propto \sum_{i=1}^{N}\sum_{t=1}^{T}\log\bigg(P({\bf y}^{(i)}_t\mid \Omega^{(i)}_t) \prod_{v=1}^{V}\pi(\omega^{(i)}_{t,vv}\mid\alpha)\pi({\bm\omega}^{(i)}_{vt}\mid {\bm\omega}_{1,vt}^{*},\ldots,{\bm\omega}_{H,vt}^{*},\sigma^{2}_{\omega,1},\ldots,\sigma^{2}_{\omega,H})\bigg) \nonumber \\ &+\sum_{h=1}^H \sum_{v=1}^V \sum_{t=1}^T \log (\pi({\bm\omega}_{h,vt}^*)) + \sum_{h=1}^H \log(\pi(\sigma_{\omega,h}^2)) \propto \sum_{i=1}^N\sum_{t=1}^T \frac{1}{2} \bigg[ \log \det (\Omega_{11,t}^{(i)}) - {\bf{y}}_{t,-1}^{(i)'} \Omega_{11,t}^{(i)} {\bf{y}}_{t,-1}^{(i)} \bigg ] \nonumber \\ & +\sum_{i=1}^N \sum_{t=1}^T \frac{1}{2} \bigg [ -\log {\kappa}_{1,t}^{(i)} -\big ( s_{11,t}^{(i)}+\alpha \big )\kappa_{1,t}^{(i)} - {\bm{\omega}}_{1t}^{(i)'} \bigg ( \sigma_{\omega,h}^2 I_{V-1} + (s_{11,t}^{(i)}+\alpha) \Omega_{11,t}^{(i)-1} \bigg ) {\bm{\omega}}_{1t}^{(i)} + 2 {\bf s}_{1,t}^{(i)'} {\bm \omega}_{1t}^{(i)} \bigg ] \nonumber \\ & -\frac{1}{2}\bigg\{\sum_{h=1}^{H}\sum_{v=1}^{V}\frac{1}{\sigma_{\omega,h}^{2}}\Delta^{(i)}_{h,vt} ({\bm\omega}^{(i)}_{v,t}-{\bm\omega}_{h,t}^{*})'({\bm\omega}^{(i)}_{v,t}-{\bm\omega}_{h,t}^{*}) \bigg\} -\frac{V(V-1)}{2}\bigg\{\sum_{h=1}^{H} \Delta^{(i)}_{h,vt}\log(\sigma^{2}_{\omega,h})\bigg\} \nonumber \\ &+ \sum_{v=1}^{V}\bigg\{ \sum_{h=1}^{H-1}\Delta^{(i)}_{h,vt} (\bm{x_i}^{T} \bfb_{h,t}) - \log \bigg(1+\sum_{r=1}^{H-1}\exp({\bm{x_i}^{T} \bfb_{r,t}}) \bigg) \bigg\} \bigg] + \sum_{t=1}^{T}\sum_{v=1}^{V}\sum_{h=1}^{H-1}\log(\pi(\bfb_{h,t})) \nonumber \\ &+\sum_{h=1}^{H} \bigg\{\sum_{t=2}^{T}\bigg(-\lambda|{\bm\omega}^{*}_{h,t}-{\bm\omega}^{*}_{h,t-1}|_1\bigg) - (a_{\sigma}+1)\log(\sigma_{\omega,h}^{2}) - \frac{b_{\sigma}}{\sigma_{\omega,h}^{2}} \bigg\}, \label{eq:lpost2} \end{align} where $\small \log(\pi(\bfb_{h,t}))=-\frac{\bfb_{h,t}^T\Sigma_{\beta}^{-1}\bfb_{h,t}}{2} - \frac{1}{2} \log (\det(\Sigma_{\beta})) $ represents the logarithm of the prior distribution on the covariate effects. The EM algorithm to derive the MAP estimators for the model parameters is based on the expression for the above log-posterior (see Section 3). \subsection*{M-steps for dynamic pairwise correlations} {\noindent \bf M-step for mixture atoms:} Denote $\bm{\gamma}^*_{h,jl}=(\gamma^*_{h,jl,1},\ldots,\gamma^*_{h,jl,T})$, $\bar{\gamma}_{h,jl,t}= \frac{1}{\sum_{i=1}^N \hat{\psi}_{h,jlt}^{(i)}}\sum_{i=1}^N \hat{\psi}_{h,jlt}^{(i)}\gamma_{jl,t}^{(i)}$, $w_{h,jlt}=\frac{\sum_{i=1}^N\hat{\psi}_{h,jlt}^{(i)}}{2\sigma^2_{\gamma,h}}$, and $\bar{\gamma}^{(w)}_{h,jl,t}=\sqrt{w_{h,jlt}}\bar{\gamma}_{h,jl,t}$. Further denote $|\cdot|$ as the element-wise $L_1$ norm, and denote $\bm{\tilde{\eta}}_{h,jl}=(\tilde{\eta}_{h,jl,0},\tilde{\eta}_{h,jl,1},\ldots, \tilde{\eta}_{h,jl,T-1})$, with $\tilde{\eta}_{h,jl,0}=\gamma^*_{h,jl,1}$ and $\tilde{\eta}_{h,jl,t-1}=\gamma^*_{h,jl,t}-\gamma^*_{h,jl,t-1}$ for $t=2,\ldots,T$. 
Then, using the derivations presented in the Supplementary Materials, $ \small \widehat{\bm{\gamma}}^*_{h,jl} = \arg\min\sum_{t=1}^T (\sqrt{w_{h,jlt}}\bar{\gamma}_{h,jl,t} - \sqrt{w_{h,jlt}}\gamma^*_{h,jl,t})^2 + \lambda\sum_{t=2}^{T}|\gamma^*_{h,jl,t}-\gamma^*_{h,jl,t-1}| = \arg\min|| \bar{\bm{\gamma}}^{(w)}_{h,jl} - \tilde{M}_{h,jl}\bm{\tilde{\eta}}_{h,jl}||^2 + \lambda\sum_{t=0}^{T-1}|\tilde{\eta}_{h,jl,t}|, $ where the $T\times T$ matrix $\tilde{M}_{h,jl}$ has the following form \begin{eqnarray*} \tilde{M}_{h,jl}= \begin{bmatrix} \sqrt{w_{h,jl,1}} &0 &0 \ldots &0\\ \sqrt{w_{h,jl,2}} &\sqrt{w_{h,jl,2}} &0 \ldots &0 \\ \sqrt{w_{h,jl,3}} &\sqrt{w_{h,jl,3}} &\sqrt{w_{h,jl,3}} \ldots &0\\ \vdots &\vdots &\vdots &\vdots\\ \sqrt{w_{h,jl,T}} &\sqrt{w_{h,jl,T}} &\ldots &\sqrt{w_{h,jl,T}} \end{bmatrix}. \end{eqnarray*} The solution can be obtained using a Lasso algorithm, with the penalty parameter $\lambda$ chosen using BIC. The solutions for $\bm{\tilde{\eta}}_{h,jl}$ can be directly used to recover the estimates for ${\bm\gamma}^*_{h,jl}=(\gamma^*_{h,jl,1},\ldots,\gamma^*_{h,jl,T} )$, which in turn yield the dynamic connectivity estimates. {\noindent \bf M-step for mixture variance:} Use the closed form solution to estimate ($h=1,\ldots,H$):\\ $\hat{\sigma}_{\gamma,h}^2 = \bigg(a_{\sigma}+0.5\sum_{t=1}^T\sum_{i=1}^N \hat{\psi}_{h,jlt}^{(i)}-1\bigg)^{-1}\bigg(b_{\sigma}+ 0.5\sum_{t=1}^T\sum_{i=1}^N \hat{\psi}_{h,jlt}^{(i)}(\gamma_{jl,t}^{(i)}-\hat{\gamma}_{h,jlt}^*)^2\bigg)$. {\noindent \bf M-step for pairwise correlations:} The update of $\gamma_{jl,t}^{(i)}$ is performed via a Newton-Raphson step. Denote the parameter estimate at the $f$-th Newton-Raphson iteration as $\gamma_{jl,t}^{(i)[f]}$, and use the update $\gamma_{jl,t}^{(i)[f+1]} = \gamma_{jl,t}^{(i)[f]} - \frac{a_1(\gamma_{jl,t}^{(i)[f]})}{a_2(\gamma_{jl,t}^{(i)[f]})}$ for the $(f+1)$-th iteration, where $a_1(\cdot)$ and $a_2(\cdot)$ denote the first and second derivatives of the log-posterior $l(\bm\Theta)$ with respect to $\gamma^{(i)}_{jl,t}$, expressed as: \begin{align*} a_1(\gamma_{jl,t}^{(i)[f]}) &= \frac{d\, l(\bm\Theta^{[f]})}{d\gamma^{(i)[f]}_{jl,t}} = \frac{\exp(2\gamma_{jl,t}^{(i)[f]})-1}{\exp(2\gamma_{jl,t}^{(i)[f]})+1} - \sum_{h=1}^H\frac{\hat{\psi}_{h,jlt}^{(i)}(\gamma_{jl,t}^{(i)[f]}-\gamma_{h,jlt}^*)}{\sigma_{\gamma,h,jlt}^2 }\\ &- \frac{(\exp(2\gamma_{jl,t}^{(i)[f]})^2-1)(y_{jt}^2+y_{lt}^2)-2y_{jt}y_{lt}(\exp(2\gamma_{jl,t}^{(i)[f]})^2+1)}{4\sigma_y^2 \exp(2\gamma_{jl,t}^{(i)[f]})}, \mbox{ and } \\ a_2(\gamma_{jl,t}^{(i)[f]}) &= \frac{d^2 l(\bm\Theta^{[f]})}{d(\gamma^{(i)[f]}_{jl,t})^2}= \frac{4\exp(2\gamma^{(i)[f]}_{jl,t})}{(\exp(2\gamma^{(i)[f]}_{jl,t})+1)^2} -\sum_{h=1}^H \frac{\hat{\psi}_{h,jlt}^{(i)}}{\sigma_{\gamma,h,jlt}^2} \\ & -\frac{\exp(2\gamma^{(i)[f]}_{jl,t})^2(y_{jt}^2+y_{lt}^2-2y_{jt}y_{lt})+(y_{jt}^2+y_{lt}^2+y_{jt}y_{lt})}{2\sigma_y^2\exp(2\gamma^{(i)[f]}_{jl,t})} \end{align*} The above iterative steps are repeated until convergence, i.e., when $|\gamma_{jl,t}^{(i)[f+1]} - \gamma_{jl,t}^{(i)[f]}| < 10^{-3}$. 
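As an illustration of the mixture-atom M-step above, the following R sketch solves the reparametrized problem with a standard lasso solver; \texttt{Mtilde} (the matrix $\tilde{M}_{h,jl}$) and \texttt{gbar\_w} (the vector $\bar{\bm{\gamma}}^{(w)}_{h,jl}$) are assumed to be precomputed, and glmnet's internal scaling of the squared-error loss means its $\lambda$ grid differs from the penalty above by a constant factor.
\begin{verbatim}
## Sketch (R): the mixture-atom M-step as a plain lasso fit.
## Mtilde = T x T lower-triangular design built from sqrt(w_{h,jl,t});
## gbar_w = weighted means sqrt(w_{h,jl,t}) * gammabar_{h,jl,t}.
library(glmnet)
fit <- glmnet(Mtilde, gbar_w, alpha = 1, intercept = FALSE,
              standardize = FALSE)
## pick lambda minimizing BIC over the computed path
n   <- length(gbar_w)
rss <- colSums((gbar_w - predict(fit, Mtilde))^2)
bic <- n * log(rss / n) + fit$df * log(n)
eta <- as.numeric(coef(fit, s = fit$lambda[which.min(bic)]))[-1]
## eta_0 = gamma*_1 and eta_{t-1} = gamma*_t - gamma*_{t-1}, so partial
## sums recover the atom trajectory gamma*_{h,jl,1:T}
gamma_star <- cumsum(eta)
\end{verbatim}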
{\noindent \bf M-step for covariate effects:} The log-posterior $\log(\pi(\bfb_{h,jlt}\mid -))\propto$ \begin{eqnarray*} \label{eq:logbeta1}\small && \bigg\{-\frac{\bfb_{h,jlt}^T\Sigma_{\beta}^{-1}\bfb_{h,jlt}}{2} - \frac{1}{2} \log (\det(\Sigma_{\beta})) \bigg\} + \sum_{i=1}^N \bigg\{ \Delta_{h,jlt}^{(i)} \bm{x_i}^T \bfb_{h,jlt} - \log \big(1+\sum_{r=1}^{H-1}\exp(\bm{x_i}^T \bfb_{r,jlt})\big) \bigg\} \\ &&\approx -\frac{1}{2}\sum_{i=1}^N w_{h,jlt}(z_{h,jlt} - \bm{x_i}^T \bfb_{h,jlt})^2 -\frac{\bfb_{h,jlt}^T\Sigma_{\beta}^{-1}\bfb_{h,jlt}}{2}, \end{eqnarray*} using the expression in (\ref{eq:lpost1}), and a quadratic approximation as in Friedman et al. (2010) for the last step, in order to facilitate closed form updates. In the above expression, $ \small z_{h,jlt} = \bm{x_i}^T \tilde{\bfb}_{h,jlt} + \frac{\widehat{\Delta}_{h,jlt}^{(i)}-\tilde{p}_{h,jlt}({\bm x_i})}{\tilde{p}_{h,jlt}({\bm x_i})(1-\tilde{p}_{h,jlt}({\bm x_i}))}$, $ w_{h,jlt}=\tilde{p}_{h,jlt}({\bm x_i})(1-\tilde{p}_{h,jlt}({\bm x_i}))$, and $\tilde{p}_{h,jlt} = \tilde{P}(\Delta^{(i)}_{h,jlt} = 1 \mid {{\bm x}_i})=\frac{\exp(\bm{x_i}^T \check{\bfb}_{h,jlt})}{1 + \sum_{r=1}^{H-1}\exp(\bm{x_i}^T\check{\bfb}_{r,jlt})}$ represents the approximated probability under the quadratic approximation, where $\check{\bfb}_{h,jlt}$ represents the estimate of $\bfb_{h,jlt}$ at the previous step, and $\widehat{\Delta}_{h,jlt}^{(i)}$ represents the expected probability for the $i$-th subject as in the E-step. The above approximate log-posterior can be optimized to obtain the closed form expression $\widehat{\bfb}_{h,jlt}=\arg\max_{\bfb} \log(\pi(\bfb_{h,jlt}\mid - )) = (\Sigma_{\beta}^{-1}+\sum_{i=1}^N w_{h,jlt}\bm{x_i}\bm{x_i}^T)^{-1}(\sum_{i=1}^N w_{h,jlt}z_{h,jlt}\bm{x_i}),$ where the notations in the expression for $\widehat{\bfb}_{h,jlt}$ have been defined previously. \subsection*{M-steps for dynamic precision matrix estimation} {\noindent \bf M-step for mixture atoms:} Define ${\bf e}^*_{h,t}=({\bm\omega}^*_{h,t+1}-{\bm\omega}^*_{h,t})'=(e^*_{h,1t},\ldots,e^*_{h,Vt})$ for $t=1,\ldots,T-1$, and $E^*_{h}=({\bm\omega}^*_{h,1}, {\bf e}^*_{h,1},\ldots,{\bf e}^*_{h,T-1})'$, where $\{e^*_{h,v't} \}$ represents the elements in ${\bf e}^*_{h,t}$; $\bar{W}_{h,v}$ is a $T\times (V-1)$ matrix with the $t$-th row given by $\frac{\sum_{i=1}^{N} \Delta_{h,vt}^{(i)} \bm{\omega}_{v,t}^{(i)}}{2\sigma_{\omega,h}^2}$; $\bar{W}_{h,v}(\bullet, v')$ and $E^*_{h}(\bullet, v')$ represent the $v'$-th columns of $\bar{W}_{h,v}$ and $E^*_{h}$ respectively; and $|\cdot|_1$ represents the element-wise $L_1$ norm. Similar to the steps for dynamic pairwise correlations, the estimate for the mixture atom ${\bm\omega}^*_{h,t}, h=1,\ldots,H, t=1,\ldots,T,$ can be obtained by minimizing the following objective function: \begin{eqnarray*} \small \sum_{v=1}^V \{|| \bar{W}_{h,v} - M^*_{h,v}E^*_{h}||^2 &+& \lambda\sum_{t=1}^{T-1}|{\bf e}^*_{h,t}|_1 \} = \sum_{v=1}^V \sum_{v'=1}^{V-1} \{|| \bar{W}_{h,v}(\bullet, v') - M^*_{h,v}E^*_{h}(\bullet, v')||^2 + \lambda\sum_{t=1}^{T-1}|e^*_{h,v't}|_1 \}, \\ \mbox{ where } M^*_{h,v} &=& \begin{bmatrix} \sqrt{w_{h,v,1}} &0 &0 \ldots &0\\ \sqrt{w_{h,v,2}} &\sqrt{w_{h,v,2}} &0 \ldots &0 \\ \sqrt{w_{h,v,3}} &\sqrt{w_{h,v,3}} &\sqrt{w_{h,v,3}} \ldots &0\\ \vdots &\vdots &\vdots &\vdots\\ \sqrt{w_{h,v,T}} &\sqrt{w_{h,v,T}} &\sqrt{w_{h,v,T}} \ldots &\sqrt{w_{h,v,T}} \end{bmatrix}. \end{eqnarray*} The above equation can be solved using a Lasso algorithm, with the penalty parameter $\lambda$ chosen using BIC. 
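In practice, this reduces to $V-1$ stacked lasso fits sharing a common design across nodes; a minimal R sketch under this reading is given below, where \texttt{Mstar\_h} and \texttt{Wbar\_h} are lists over the nodes $v$ and \texttt{lasso\_bic} is the illustrative BIC-tuned lasso helper from the sketch following the pairwise M-step.
\begin{verbatim}
## Sketch (R): the precision-matrix atom M-step as (V-1) stacked lasso fits,
## reusing a BIC-tuned lasso routine (lasso_bic, illustrative helper).
X <- do.call(rbind, Mstar_h)            # stack M*_{h,v} over v = 1..V
E_hat <- sapply(seq_len(V - 1), function(vp) {
  y <- unlist(lapply(Wbar_h, function(W) W[, vp]))  # stack Wbar_{h,v}(., vp)
  lasso_bic(X, y)
})
omega_star <- apply(E_hat, 2, cumsum)   # partial sums recover omega*_{h,t}
\end{verbatim}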
The solutions for $E^*_{h}$ are then used to recover the estimates for ${\bm\omega}^*_{h,t}$. {\noindent \bf M-step for mixture variance:} Use $\hat{\sigma}^2_{\omega,h}= \frac{b_\sigma + 0.5\sum_{i=1}^N\sum_{t=1}^T\sum_{v=1}^V \Delta^{(i)}_{h,vt} ({\bm\omega}^{(i)}_{v,t}-{\bm\omega}_{h,t}^*)'({\bm\omega}^{(i)}_{v,t}-{\bm\omega}_{h,t}^*)}{a_\sigma+1+0.5V(V-1)\sum_{t=1}^T\sum_{i=1}^N \Delta^{(i)}_{h,vt}}$. {\noindent \bf M-step for covariate effects:} Using similar arguments as in Section 3.1, one can approximate the posterior as: \begin{align*} \small \log(\pi(\bfb_{h,t}\mid - )) \approx -\frac{1}{2}\sum_{i=1}^N\sum_{v=1}^V w_{h,t}(z_{h,vt} - \bm{x_i}^T \bfb_{h,t})^2 -\frac{\bfb_{h,t}^T\Sigma_{\beta}^{-1}\bfb_{h,t}}{2}, \end{align*} where $ z_{h,vt} = \bm{x_i}^T \tilde{\bfb}_{h,t} + \frac{\widehat{\psi}_{h,vt}^{(i)}-\tilde{p}_{h,t}({\bm x_i})}{\tilde{p}_{h,t}({\bm x_i})(1-\tilde{p}_{h,t}({\bm x_i}))}$, $ w_{h,t}=\tilde{p}_{h,t}({\bm x}_i)(1-\tilde{p}_{h,t}({\bm x}_i))$, and $\tilde{p}_{h,t} = \tilde{P}(\Delta^{(i)}_{h,t} = 1 \mid {{\bm x}_i})=\frac{\exp(\bm{x_i}^T \tilde{\bfb}_{h,t})}{1 + \sum_{r=1}^{H-1}\exp(\bm{x_i}^T\tilde{\bfb}_{r,t})}$ represents the approximated probability under the quadratic approximation, where $\tilde{\bfb}_{h,t}$ denotes the estimate of $\bfb_{h,t}$ at the previous step, and $\widehat{\psi}_{h,vt}^{(i)}$ represents the expected probability for subject $i$ as calculated in the E-step. The above approximate log-posterior can be optimized to obtain the closed form expression $ \widehat{\bfb}_{h,t}= (\Sigma_{\beta}^{-1}+ V\sum_{i=1}^N w_{h,t}\bm{x_i}\bm{x_i}^T)^{-1}(\sum_{i=1}^N\sum_{v=1}^V w_{h,t}z_{h,vt}\bm{x_i}),$ where the notations in the expression for $\widehat{\bfb}_{h,t}$ have been defined previously. \section*{References} \begin{enumerate} \item Allen, E. A., Damaraju, E., Plis, S. M., Erhardt, E. B., Eichele, T., and Calhoun, V. D. (2014). Tracking whole-brain connectivity dynamics in the resting state. Cerebral Cortex, 24(3), 663-676. \item Balsters, J. H., Whelan, C. D., Robertson, I. H., and Ramnani, N. (2013). Cerebellum and cognition: evidence for the encoding of higher order rules. Cerebral Cortex, 23(6), 1433-1443. \item Becker, R. A., Chambers, J. M., and Wilks, A. R. (1988). The New S Language. Wadsworth \& Brooks/Cole. \item Bullmore, E., and Sporns, O. (2009). Complex brain networks: graph theoretical analysis of structural and functional systems. Nature Reviews Neuroscience, 10(3), 186-198. \item Chang, C., and Glover, G. H. (2010). Time-frequency dynamics of resting-state brain connectivity measured with fMRI. NeuroImage, 50(1), 81-98. \item Cribben, I., Wager, T., and Lindquist, M. (2013). Detecting functional connectivity change points for single-subject fMRI data. Frontiers in Computational Neuroscience, 7, 143. \item Davey, J., Thompson, H. E., Hallam, G., Karapanagiotidis, T., Murphy, C., De Caso, I., ... and Jefferies, E. (2016). Exploring the role of the posterior middle temporal gyrus in semantic cognition: Integration of anterior temporal lobe with executive processes. NeuroImage, 137, 165-177. \item D'Mello, A. M., Turkeltaub, P. E., and Stoodley, C. J. (2017). Cerebellar tDCS modulates neural circuits during semantic prediction: A combined tDCS-fMRI study. Journal of Neuroscience, 37(6), 1604-1613. \item Durante, D., Dunson, D. B., and Vogelstein, J. T. (2017). Nonparametric Bayes modeling of populations of networks. Journal of the American Statistical Association, 112, 1516-1530. \item Engel, J. (1988). Polytomous logistic regression. 
Statistica Neerlandica, 42, 233-252. \item Filippi, M., Spinelli, E. G., Cividini, C., and Agosta, F. (2019). Resting state dynamic functional connectivity in neurodegenerative conditions: a review of magnetic resonance imaging findings. Frontiers in Neuroscience, 13, 657. \item Friedman, J., Hastie, T., and Tibshirani, R. (2010). Regularization paths for generalized linear models via coordinate descent. Journal of Statistical Software, 33(1), 1-22. \item Hidot, S., and Saint-Jean, C. (2010). An Expectation-Maximization algorithm for the Wishart mixture model: Application to movement clustering. Pattern Recognition Letters, 31(14), 2318-2324. \item Hindriks, R., Adhikari, M. H., Murayama, Y., Ganzetti, M., Mantini, D., Logothetis, N. K., and Deco, G. (2016). Can sliding-window correlations reveal dynamic functional connectivity in resting-state fMRI? NeuroImage, 127, 242-256. \item Hutchison, R. M., Womelsdorf, T., Allen, E. A., Bandettini, P. A., Calhoun, V. D., Corbetta, M., ... and Handwerker, D. A. (2013). Dynamic functional connectivity: promise, issues, and interpretations. NeuroImage, 80, 360-378. \item Jolles, D. D., van Buchem, M. A., Crone, E. A., and Rombouts, S. A. (2013). Functional brain connectivity at rest changes after working memory training. Human Brain Mapping, 34(2), 396-406. \item Kundu, S., Ming, J., Pierce, J., McDowell, J., and Guo, Y. (2018). Estimating dynamic brain functional networks using multi-subject fMRI data. NeuroImage, 183, 635-649. \item Lacombe, J., Jolicoeur, P., Grimault, S., Pineault, J., and Joubert, S. (2015). Neural changes associated with semantic processing in healthy aging despite intact behavioral performance. Brain and Language, 149, 118-127. \item Laufer, I., Negishi, M., Lacadie, C. M., Papademetris, X., and Constable, R. T. (2011). Dissociation between the activity of the right middle frontal gyrus and the middle temporal gyrus in processing semantic priming. PLoS ONE, 6(8), e22368. \item Lindquist, M. A., Xu, Y., Nebel, M. B., and Caffo, B. S. (2014). Evaluating dynamic bivariate correlations in resting-state fMRI: a comparison study and a new approach. NeuroImage, 101, 531-546. \item Lukemire, J., Kundu, S., Pagnoni, G., and Guo, Y. (2020). Bayesian joint modeling of multiple brain functional networks. Journal of the American Statistical Association, 1-13. \item MacEachern, S. N. (1999). Dependent nonparametric processes. In ASA Proceedings of the Section on Bayesian Statistical Science, Alexandria, VA. American Statistical Association. \item McGregor, K. M., Zlatar, Z., Kleim, E., Sudhyadhom, A., Bauer, A., Phan, S., et al. (2011). Physical activity and neural correlates of aging: a combined TMS/fMRI study. Behavioural Brain Research, 222, 158-168. \item Meil\u{a}, M. (2007). Comparing clusterings---an information based distance. Journal of Multivariate Analysis, 98(5), 873-895. \item Moberget, T., Gullesen, E. H., Andersson, S., Ivry, R. B., and Endestad, T. (2014). Generalized role for the cerebellum in encoding internal models: evidence from semantic processing. Journal of Neuroscience, 34(8), 2871-2878. \item Monti, R. P., Hellyer, P., Sharp, D., Leech, R., Anagnostopoulos, C., and Montana, G. (2014). Estimating time-varying brain connectivity networks from functional MRI time series. NeuroImage, 103, 427-443. \item Nielsen, S. F. V., Madsen, K. H., Schmidt, M. N., and Mørup, M. (2017). Modeling dynamic functional connectivity using a Wishart mixture model. In Proceedings of the 2017 International Workshop on Pattern Recognition in Neuroimaging (pp. 1-4). IEEE. 
2017 International Workshop on Pattern Recognition in Neuroimaging (prni) https://doi.org/10.1109/PRNI.2017.7981505 \item Nocera, J., Crosson, B., Mammino, K., and McGregor, K. M. (2017). Changes in cortical activation patterns in language areas following an aerobic exercise intervention in older adults. Neural Plasticity, 2017. \item Patrikainen A. and Meila M. (2006). Comparing subspace clusterings. IEEE Transactions on Knowledge and Data Engineering 18, 902–916. \item Quinn, A. J., Vidaurre, D., Abeysuriya, R., Becker, R., Nobre, A. C., and Woolrich, M. W. (2018). Task-evoked dynamic network analysis through hidden markov modeling. Frontiers in neuroscience, 12, 603. \item Shi, R., and Guo, Y. (2016). Investigating differences in brain functional networks using hierarchical covariate-adjusted independent component analysis. The annals of applied statistics, 10(4), 1930. \item Smith, S. M., Beckmann, C. F., Andersson, J., Auerbach, E. J., Bijsterbosch, J., Douaud, G., ... \& Kelly, M. (2013). Resting-state fMRI in the human connectome project. Neuroimage, 80, 144-168. \item Sun, W. W., and Li, L. (2017). STORE: sparse tensor response regression and neuroimaging analysis. The Journal of Machine Learning Research, 18(1), 4908-4944. \item Thorndike, R. L. (1953). Who belongs in the family?. Psychometrika, 18(4), 267-276. \item Tibshirani, R., \& Wang, P. (2008). Spatial smoothing and hot spot detection for CGH data using the fused lasso. Biostatistics, 9(1), 18-29. \item Vert, J. P., \& Bleakley, K. (2010). Fast detection of multiple change-points shared by many signals using group LARS. In Advances in neural information processing systems (pp. 2343-2351). \item Wang, H. (2012). Bayesian graphical lasso models and efficient posterior computation. Bayesian Analysis, 7(4), 867-886. \item Wang H, He W, Wu J, Zhang J, Jin Z, and Li L. A coordinate-based meta-analysis of the n-back working memory paradigm using activation likelihood estimation. Brain Cogn. 2019 Jun;132:1-12. \item Wang, L., Zhang, Z., and Dunson, D. (2019). Common and individual structure of brain networks. The Annals of Applied Statistics, 13(1), 85-112. \item Warnick, R., Guindani, M., Erhardt, E., Allen, E., Calhoun, V., and Vannucci, M. (2018). A Bayesian approach for estimating dynamic functional network connectivity in fMRI data. Journal of the American Statistical Association, 113(521), 134-151. \item Wei, G. C. G. and Tanner, M. A. (1990). A Monte Carlo implementation of the EM algorithm and the poor man’s data augmentation algorithms. Journal of the American Statistical Association 85 699–704. \item Zhang, J., Sun, W. W., and Li, L. (2018). Network response regression for modeling population of networks with covariates. arXiv preprint arXiv:1810.03192. \item Zhang, Z., Allen, G. I., Zhu, H., and Dunson, D. (2019). Tensor network factorizations: Relationships between brain structural connectomes and traits. Neuroimage, 197, 330-343. \end{enumerate} \end{document}
\section{Introduction} The problem of the interaction of vortices in anisotropic superconductors was studied extensively in the early 1990s, both theoretically \cite{Grishin,Buzdin,NakThiem} and experimentally \cite{Bolle}. For vortices parallel to one of the principal crystal directions the problem is solved simply by rescaling the isotropic results. In particular, the interaction is repulsive for any position of the second vortex relative to the first. However, the force direction is in general not along the vector $\bm R$ connecting the vortices; in other words, for arbitrary positions of the pair there is a torque, unless $\bm R$ is directed along one of the principal directions \cite{forces}. The situation is different if parallel vortices are tilted out of the principal directions \cite{Grishin,Buzdin,NakThiem}. Then, at distances of the order of the London penetration depth $\lambda$, the magnetic field $\bm h(\bm R)$ of a single tilted vortex may change sign, approaching zero from negative values as $R\to\infty$. In other words, the vortex-vortex interaction, repulsive at short distances, may turn attractive at large distances. This leads to the formation of chains of vortices in tilted fields \cite{Bolle}. In this paper we consider the magnetic field and current distributions of {\it moving} anisotropic vortices. Commonly, moving vortices are treated as static structures displaced as a {\it whole}. It was argued, however, that the out-of-core structure of a moving vortex differs from the static case due to out-of-core dissipation \cite{leo,TDL}. The magnetic field $h(r,t)$ of a moving vortex generates an electric field and currents of normal excitations, which in turn distort the field $h$. We show that at large distances the distortion is not small and is even able to change the sign of the field. Unexpectedly, this distortion attenuates with distance as a power law, $1/R^3$, i.e. much more slowly than the standard exponential decay $\sim e^{-R/\lambda}$ of the undistorted field. At the distances of interest in this work, which are large in comparison with the core size, one can use the time-dependent London approach based on the assumption that the current consists of normal and superconducting parts: \begin{equation} {\bm J}= \sigma {\bm E} -\frac{2e^2 |\Psi|^2}{mc}\, \left( {\bm A}+\frac{\phi_0}{2\pi}{\bm \nabla}\chi\right) \,,\label{current} \end{equation} where $\bm A$ is the vector potential, $\Psi$ is the order parameter, $\chi$ is the phase, $\phi_0$ is the flux quantum, ${\bm E}$ is the electric field, and $\sigma$ is the conductivity associated with normal excitations. The conductivity $\sigma$ approaches the normal-state value $\sigma_n$ when the temperature $T$ approaches $T_c$; in s-wave superconductors it vanishes with decreasing temperature along with the density of normal excitations. This is not the case, however, for strong pair-breaking, when superconductivity is gapless while the density of states approaches the normal-state value at all temperatures. Unfortunately, not much experimental information about the $T$ dependence of $\sigma$ is available. Theoretically, this question is still debated; e.g., Ref.\,\cite{Andreev} discusses a possible enhancement of $\sigma$ due to inelastic scattering. Experimentally, the interpretation of the microwave absorption data is not yet settled either \cite{Maeda}.
At distances large in comparison with the vortex core size, $|\Psi|$ is a constant $ \Psi_0 $ and Eq.\,(\ref{current}) becomes: \begin{equation} \frac{4\pi}{c}{\bm J}= \frac{4\pi\sigma}{c} {\bm E} -\frac{1}{\lambda^2}\, \left( {\bm A}+\frac{\phi_0}{2\pi}{\bm \nabla}\chi\right) \,, \label{current1} \end{equation} where $\lambda^2=mc^2/8\pi e^2|\Psi_0|^2 $ is the London penetration depth. Applying curl to both sides, one obtains: \begin{equation} {\bm h}- \lambda^2\nabla^2{\bm h} +\tau\,\frac{\partial {\bm h}}{\partial t}= \phi_0 {\bm z}\sum_{\nu}\delta({\bm r}-{\bm r_\nu})\,,\label{TDL} \end{equation} where ${\bm r_\nu}(t) $ is the position of the $\nu$-th vortex, which may depend on time $t$, $\bm z$ is the direction of the vortices, and the relaxation time is \begin{equation} \tau= 4\pi\sigma\lambda^2/c^2 \,. \label{tau} \end{equation} Equation (\ref{TDL}) can be considered as a general form of the time-dependent London equation (TDL). The anisotropic generalization of this equation was given in \cite{anisTDL} and is reproduced here in Section III. \section{Vortex at rest in anisotropic case} For an arbitrarily oriented vortex in an anisotropic material this problem has been considered in \cite{K81,Grishin}. In general, the results are cumbersome, so here we consider the simple situation of an orthorhombic superconductor in a field along the $c$ axis. The London equation in this case is \begin{eqnarray} h_z(x,y)- \lambda^2_1 \, \frac{\partial^2h_z}{\partial y^2}- \lambda^2_2 \, \frac{\partial^2h_z}{\partial x^2 } = \phi_0\delta(\bm r)\,. \label{hz} \end{eqnarray} Here, the frame $x,y,z$ is chosen to coincide with $a,b,c$ of the crystal, $\bm r=(x,y)$, and $\lambda^2_{xx}=\lambda^2_1$ and $\lambda^2_{yy}=\lambda^2_2$ are the diagonal components of the tensor $(\lambda^2)_{ik} $. The solution of this equation is \begin{eqnarray} h_z(x,y)= \frac{ \phi_0 }{2\pi\lambda_1\lambda_ 2} K_0\left(\rho \right)\,,\quad \rho^2=\frac{x^2}{\lambda_2^2} + \frac{y^2}{\lambda_1^2}\,. \label{h0stat} \end{eqnarray} The current densities follow: \begin{eqnarray} J_x = - \frac{c \phi_0 }{8\pi^2\lambda_1^3\lambda_2 } \frac{y\,K_1(\rho)}{\rho}\,,\quad J_y = \frac{ c\phi_0 }{8\pi^2\lambda_1\lambda_2^3 } \frac{x\,K_1(\rho)}{\rho}\,,\qquad \label{Jab} \end{eqnarray} where $K_{0}$ and $K_1$ are modified Bessel functions of the second kind. It is easy to see that the contours $ h_z(x,y)=\,$const coincide with the streamlines of the current; an example is shown in Fig.\,\ref{f1a}. \begin{figure}[h ] \includegraphics[width=7cm] {Fig1.pdf} \caption{The stream lines of the current for $\gamma=\lambda_2/\lambda_1=3$ or, which is the same, contours of constant $h_z(x,y)$. $\lambda_1$ is taken as unit length. } \label{f1a} \end{figure} The current lines have the expected ellipse-like shape. \begin{figure}[h ] \includegraphics[width=7cm] {Fig2.pdf} \caption{The contours of constant current {\it values} $J(x,y)=\sqrt{J_x^2+J_y^2}$ for $\lambda_2/\lambda_1= 3$. $x$ and $y$ are in units of $\lambda_1$. } \label{f2a} \end{figure} This is, however, not the case for the distribution of the current {\it values} $J(x,y)=\sqrt{J_x^2+J_y^2}$; an example is shown in Fig.\,\ref{f2a}. Hence, the geometry of the streamlines of the vector $\bm J$ differs from that of the contours $J(x,y)=\,$const, unlike the isotropic case where they are in fact the same.
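As a simple illustration, the field \eqref{h0stat} and the currents \eqref{Jab} are straightforward to visualize numerically. The following minimal Python sketch (our own illustration; the grid, contour levels and the use of SciPy/Matplotlib are assumptions, not part of the original computation) reproduces the qualitative content of Figs.\,\ref{f1a} and \ref{f2a} for $\gamma=3$:
\begin{verbatim}
import numpy as np
import matplotlib.pyplot as plt
from scipy.special import k0, k1

# Static vortex field h_z and currents J_x, J_y of Sec. II.
# Lengths in units of lambda_1; h_z in units of phi_0/(2*pi*lambda_1*lambda_2);
# currents up to their common prefactor c*phi_0/(8*pi^2*lambda_1*lambda_2).
gamma = 3.0
lam1, lam2 = 1.0, gamma

x = np.linspace(-8.0, 8.0, 401)
X, Y = np.meshgrid(x, x)
rho = np.maximum(np.sqrt((X / lam2)**2 + (Y / lam1)**2), 1e-9)

hz = k0(rho)                          # contours = current streamlines (Fig. 1)
Jx = -(Y / lam1**2) * k1(rho) / rho   # J_x up to the common prefactor
Jy = (X / lam2**2) * k1(rho) / rho
J = np.hypot(Jx, Jy)                  # current magnitude (Fig. 2)

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
ax1.contour(X, Y, hz, levels=np.geomspace(0.02, 2.0, 12))
ax1.set_title(r'$h_z=$ const')
ax2.contour(X, Y, J, levels=np.geomspace(0.02, 1.0, 12))
ax2.set_title(r'$J(x,y)=$ const')
plt.show()
\end{verbatim}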
\section{Moving vortex} The anisotropic generalization of the isotropic Eq.\,(\ref{current1}) for the current is straightforward: \begin{equation} J_k= \sigma_{kl} E_l -\frac{c}{4\pi}\left(\lambda^{-2}\right)_{kl}\left (A_l + \frac{\phi_0}{2\pi} \frac{\partial\chi}{\partial x_l}\right)\,. \label{current-a} \end{equation} Here, $\sigma_{kl}$ and $\left(\lambda^{-2}\right)_{kl}$ are the tensors of the conductivity due to normal excitations and of the inverse square of the penetration depth. Aiming to derive an equation for the magnetic field $\bm h$, we first have to eliminate the vector potential. To this end, multiply both sides by $4\pi \left(\lambda^{2}\right)_{k\mu}/c$, where $\left(\lambda^{2}\right)_{k\mu}$ is the tensor inverse to $\left(\lambda^{-2}\right)_{k\mu}$, and sum over $k$. Then apply ${\rm curl}$ to both sides and use the relation \begin{equation} {\rm curl} (\bm A +\phi_0 \bm \nabla \chi/2\pi)= \bm h-\phi_0\hat{\bm z}\delta(\bm r-\bm r_\nu)\,, \end{equation} where $\bm r_\nu$ is the vortex core position. It is convenient to use in the following the notation curl$_\nu\bm V= \epsilon_{\nu s\mu}\partial V_\mu/\partial x_s$, where $\epsilon_{\nu s\mu }$ is the Levi-Civita unit antisymmetric tensor: $\epsilon_{xyz}=1$, as are all components obtained by an even number of transpositions of indices; it is $-1$ for an odd number, and zero otherwise. Hence, applying $\epsilon_{\nu s\mu}\partial /\partial x_s$ to Eq.\,(\ref{current-a}), one obtains the anisotropic version of the TDL \cite{anisTDL}: \begin{eqnarray} h_\nu &+&\frac{4\pi}{c}\epsilon_{\nu s\mu} \lambda^2_{k\mu} \frac{\partial J_k}{\partial x_s} - \frac{4\pi}{c}\epsilon_{\nu s\mu} \lambda^2_{k\mu}\sigma_{kl} \frac{\partial E_l}{\partial x_s} \nonumber\\ &=&\phi_0 \hat{\bm z}_\nu\delta(\bm r-\bm v t).\qquad \label{anis-TDL} \end{eqnarray} In this form, the equation is valid for an arbitrarily oriented vortex and any crystal anisotropy. For an orthorhombic crystal in which the vortex and its field are along one of the principal directions (call it $z$), this cumbersome equation takes the form: \begin{eqnarray} h_z &-&\frac{4\pi}{c}\left( \lambda^2_{xx} \frac{\partial J_x}{\partial y} - \lambda^2_{yy} \frac{\partial J_y}{\partial x}\right) \nonumber\\ &+&\frac{4\pi\sigma}{c}\left( \lambda^2_{xx} \frac{\partial E_x}{\partial y} - \lambda^2_{yy} \frac{\partial E_y}{\partial x}\right)=\phi_0 \delta(\bm r-\bm v t).\qquad \label{ortho-TDL} \end{eqnarray} Here we have further simplified the problem by assuming an isotropic conductivity of normal excitations, $\sigma_{xx}=\sigma_{yy}=\sigma $. This should be solved together with the quasi-stationary Maxwell equations curl$\bm E =-\partial_t\bm h/c$ and div$\bm E =0$ \cite{LL,Gorkov}, which can be done in 2D Fourier space: \begin{equation} E_{\bm k x}=-\frac{k_y}{k_x}E_{\bm k y}= -\frac{ik_y}{ck^2} \,\frac{\partial h_{\bm k z}}{\partial t} \,, \label{Es} \end{equation} so that we obtain the 2D Fourier transform of Eq.\,(\ref{ortho-TDL}): \begin{eqnarray} h_{\bm k} && \left(1+k_x^2 \lambda^2_{yy}+ k_y^2 \lambda^2_{xx}\right) \nonumber\\ && +\frac{4\pi\sigma}{c^2}\,\frac{ \lambda^2_{yy}k_x^2+ \lambda^2_{xx}k_y^2}{k^2} \frac{\partial h_{\bm k} }{\partial t} =\phi_0e^{-i\bm k\bm v t}\,, \label{FTortho-TDL} \end{eqnarray} where $h_{\bm k} $ is the Fourier transform of $h_z (\bm r)$. In the isotropic case we recover the equation studied in \cite{TDL}. We further denote $\lambda^2_{yy}=\lambda_2^2,\quad \lambda^2_{xx}=\lambda_1^2$ and $\lambda=\sqrt{\lambda_1\lambda_2}$.
The anisotropy parameter is defined as $\gamma=\lambda_2/\lambda_1$. Then we obtain \begin{eqnarray} h_{\bm k} \left(1+k_x^2 \lambda^2_2+ k_y^2 \lambda^2_1\right) +\tau \,\frac{ \lambda^2_2k_x^2+ \lambda^2_1k_y^2}{\lambda^2k^2} \frac{\partial h_{\bm k} }{\partial t} =\phi_0e^{-i\bm k\bm v t}, \nonumber\\ \label{ h(k,t)} \end{eqnarray} with $\tau=4\pi\sigma\lambda^2/c^2$. This is a linear differential equation for $h_{\bm k}(t)$ with the solution \begin{eqnarray} &&h_{\bm k} =\frac{\phi_0e^{-i\bm k\bm v t}}{ C-iD\bm k\cdot\bm s}\,,\quad \bm s= \bm v\tau \,, \nonumber\\ && C=1+k_x^2 \lambda^2_2+ k_y^2 \lambda^2_1\,,\quad D=\frac{ \lambda^2_2k_x^2+ \lambda^2_1k_y^2}{\lambda^2k^2}\,. \qquad \label{h(k,t)} \end{eqnarray} Since we are interested in stationary motion with a constant velocity, we can set here $t=0$. The dimensionless parameter \begin{eqnarray} S=\frac{s}{\lambda}=\frac{4\pi v\sigma\lambda}{c^2} \label{ S} \end{eqnarray} is small even for the highest presently attainable vortex velocities, which exceed the speed of sound \cite{Eli,Denis}. Although in principle $S$ can take larger values, we restrict this discussion to small $S$ and call this case ``slow motion''. \section{Slow motion} For $s\to 0$ one can expand $h(\bm k, \bm s)$ in powers of $s$ up to ${\cal O}(s)$: \begin{eqnarray} h_{\bm k}=\frac{\phi_0 }{ C}+i\frac{\phi_0D}{C^2}\bm k\cdot\bm s\,. \label{ expand} \end{eqnarray} The first term corresponds to the static solution discussed above: \begin{eqnarray} h_0(x,y)= \frac{ \phi_0 }{2\pi\lambda^2} K_0\left(\rho \right)\,,\quad \rho^2=\frac{x^2}{\lambda_2^2} + \frac{y^2}{\lambda_1^2}\,. \label{h0} \end{eqnarray} The correction due to motion is given by \begin{eqnarray} \frac{\delta h_{\bm k}\lambda^2}{\phi_0}= i\frac{(\lambda^2_2k_x^2+ \lambda^2_1k_y^2)\bm k\cdot\bm s}{k^2(1+\lambda^2_2k_x^2+ \lambda^2_1k_y^2)^2}\,. \label{corr} \end{eqnarray} To separate the part that does not disappear when $\lambda_1= \lambda_2$, one can use the identity \begin{eqnarray} \frac{ \lambda^2_2k_x^2+ \lambda^2_1k_y^2 }{ k_x^2+k_y^2 }=\lambda_2^2+ \frac{ k_y^2(\lambda^2_1 - \lambda^2_2) }{ k_x^2+k_y^2 } \label{ident} \end{eqnarray} to obtain: \begin{eqnarray} &&\frac{4\pi^2\lambda^2\delta h(\bm r)}{i\phi_0}= \lambda_2^2\int \frac{d^2\bm k(\bm k\cdot\bm s)e^{i\bm k \bm r}} {(1+\lambda^2_2k_x^2+ \lambda^2_1k_y^2)^2}\nonumber\\ &&+(\lambda^2_1 - \lambda^2_2)\int \frac{d^2\bm kk_y^2(\bm k\cdot\bm s) e^{i\bm k \bm r}} {k^2(1+\lambda^2_2k_x^2+ \lambda^2_1k_y^2)^2} \,. \label{corr1} \end{eqnarray} Evaluation of the first contribution is outlined in Appendix A: \begin{eqnarray} h_1 = - \frac{\phi_0}{2\pi\lambda^2} \frac{ S_xX +S_yY\gamma^2}{2} K_0\left(\sqrt{\frac{X^2}{\gamma}+ Y^2\gamma} \right)\,, \qquad \label{h1} \end{eqnarray} where \begin{eqnarray} \bm S =\frac{\bm s }{\lambda},\,\,\, X=\frac{x}{\lambda},\,\,\, Y=\frac{y}{\lambda},\,\,\, \lambda=\sqrt{\lambda_1\lambda_2},\,\,\, \gamma=\frac{\lambda_2}{\lambda_1}.\qquad \label{eq23} \end{eqnarray} It is shown in \cite{Norio2} that in the isotropic case, for a vortex moving along $x$, \begin{eqnarray} h (\bm r) = \frac{\phi_0}{2\pi\lambda^2} e^{-sx/2\lambda^2} K_0\left(\frac{r}{2\lambda}\sqrt{4+s^2/\lambda^2}\right) \qquad \label{hz(r)1b} \end{eqnarray} in common units. Expanding this in small $s$, one obtains for slow motion: \begin{eqnarray} \delta h (\bm r) =- \frac{\phi_0}{4\pi\lambda^4} s x K_0\left(\frac{r}{\lambda}\right)\,. \label{hz(r)1c} \end{eqnarray} Hence, $h_1$ of Eq.\,(\ref{h1}) has the correct isotropic limit.
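Before evaluating the remaining contribution $h_2$, we note that the full field can also be checked independently by a direct numerical inverse Fourier transform of the solution \eqref{h(k,t)}. A minimal Python sketch of such a check (the grid parameters are our own assumptions; the periodic box must be large because of the slowly decaying tails found below) reads:
\begin{verbatim}
import numpy as np

# Moving-vortex field h(x, y) at t = 0 by inverse 2D FFT of h_k.
# Lengths in units of lambda = sqrt(lambda_1*lambda_2); h is in units of
# phi_0/lambda^2 (the static isotropic limit reproduces K_0(r)/(2*pi)).
gamma = 3.0                # lambda_2/lambda_1
Sx, Sy = 0.1, 0.0          # dimensionless velocity S = v*tau/lambda

L, n = 80.0, 2048          # large box: the distortion decays only as 1/R^3
k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)
KX, KY = np.meshgrid(k, k, indexing="ij")
k2 = np.maximum(KX**2 + KY**2, 1e-12)       # regularize the k = 0 point

C = 1.0 + gamma * KX**2 + KY**2 / gamma
D = (gamma * KX**2 + KY**2 / gamma) / k2
h_k = 1.0 / (C - 1j * D * (KX * Sx + KY * Sy))

# h(r) = (1/4 pi^2) int d^2k h_k e^{i k r}  ->  scaled inverse FFT
h = np.real(np.fft.ifft2(h_k)) * (n / L) ** 2

xs = np.fft.fftfreq(n, d=1.0 / L)           # real-space grid, FFT ordering
for X0 in (3.0, -3.0, 15.0, -15.0):         # probe ahead of / behind the vortex
    j = int(np.argmin(np.abs(xs - X0)))
    print(f"h({X0:+.0f}, 0) = {h[j, 0]:+.3e}")
\end{verbatim}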
The second integral over the two components of $\bm k$ in Eq.\,(\ref{corr1}) can be reduced to integrals over a single variable, which are easy to deal with numerically; see Appendix B: \begin{widetext} \begin{eqnarray} &&\frac{2\pi\lambda^2}{\phi_0} h_2= \frac{ ( \gamma^2-1)}{4\gamma}\Big\{S_xX\int_0^\infty \frac{d\zeta}{(\zeta+\gamma)^{3/2}(\zeta+1/\gamma)^{3/2}} \left[ K_0\left( {\cal R}_\zeta\right)-\frac{Y^2}{(\zeta+1/\gamma) {\cal R}_\zeta} K_1\left( {\cal R}_\zeta\right) \right] \nonumber\\ &&+S_yY\int_0^\infty \frac{d\zeta}{(\zeta+\gamma)^{1/2}(\zeta+1/\gamma)^{5/2}} \left[ 3K_0\left( {\cal R}_\zeta\right)-\frac{Y^2}{(\zeta+1/\gamma) {\cal R}_\zeta} K_1\left( {\cal R}_\zeta\right) \right]\Big\}, \qquad {\cal R}_\zeta=\sqrt{\frac{X^2}{\zeta+\gamma}+\frac{Y^2}{\zeta+1/\gamma}}\,.\qquad\qquad\qquad\qquad \label{dh2} \end{eqnarray} \end{widetext} Thus, the vortex field can be calculated as $h=h_0+h_1+h_2$ with $h_0$ given in Eq.\,(\ref{h0}), $h_1$ in Eq.\,(\ref{h1}), and $h_2$ in Eq.\,(\ref{dh2}). The results, obtained with the help of the Wolfram Mathematica package, are shown below. \begin{figure}[h ] \includegraphics[width=7cm] {Fig3.pdf} \caption{Contours $h(x,y)=$ const for the vortex moving along $x$ axis ($S_x=0.1, \,\,S_y=0$) and $\lambda_2/\lambda_1= 3$. The motion is directed to $+x$. $x$ and $y$ are in units of $\lambda=\sqrt{\lambda_1\lambda_2} $. } \label{f3} \end{figure} One can see in Fig.\,\ref{f3} that the current streamlines (or, what is the same, the contours $h(x,y)=$ const) in the vicinity of the moving vortex core are only weakly distorted relative to the static elliptic shape. The most interesting feature of this distribution is that at large distances $h(x,y)$ changes sign in some parts of the $(x,y)$ plane. Since the interaction energy of the vortex at the origin with another one at $(x,y)$ is proportional to $h(x,y)$, the presence of domains with $h<0$ means that for a second vortex located in these domains the intervortex interaction is attractive. The field distribution is different for the motion along the $y$ axis shown in Fig.\,\ref{f4}. \begin{figure}[h ] \includegraphics[width=7cm] {Fig4.pdf} \caption{Contours $h(x,y)=$ const for the vortex moving along $y$ axis ($S_x=0, S_y=0.1$) and $\lambda_2/\lambda_1= 3$. The motion is directed to $+y$. $x$ and $y$ are in units of $\lambda=\sqrt{\lambda_1\lambda_2} $. } \label{f4} \end{figure} \begin{figure}[htb ] \includegraphics[width=7cm] {Fig5.pdf} \caption{Contours $h(x,y)=$ const for the vortex moving along the diagonal $x=y$ ($S_x=S_y=0.1$) and $\lambda_2/\lambda_1= 3$. $x$ and $y$ are in units of $\lambda=\sqrt{\lambda_1\lambda_2} $. } \label{f5} \end{figure} It is seen that the flux in front of the moving vortex is depleted whereas behind it the flux is enhanced, a feature first discussed in \cite{norio1} for the isotropic case. This feature remains for a general direction of motion as well; an example of motion along the line $x=y $ is shown in Fig.\,\ref{f5}. Moreover, Figs.\,\ref{f3}--\ref{f5} show that this depletion may even change the sign of the field. It is worth mentioning that the London theory is reliable in the region $r\gg \xi$, $\xi$ being the core size, and so are our predictions of a non-trivial behavior of $h(x,y)$ at large distances. \begin{figure}[h] \includegraphics[width=7cm] {Fig6.pdf} \caption{The field $h(0,y)$ for the vortex moving along $y$ ($S_x=0, S_y=0.1 $); $\lambda_2/\lambda_1= 3$. $y$ is in units of $\lambda=\sqrt{\lambda_1\lambda_2} $.
} \label{f6} \end{figure} It is instructive to see how the interaction changes along certain directions. E.g., for $S_x=0, \,\,\, S_y=0.1$, i.e. motion along the $y$ axis, $h(0,Y)$ is positive if $0<Y\lesssim 2.5$ (so that a second vortex at $(0,Y)$ in this region is repelled by the vortex at the origin). If the second vortex is at $2.5\lesssim Y<\infty$, the interaction is attractive. This is illustrated in Fig.\,\ref{f6}. \begin{figure}[htb ] \includegraphics[width=7cm] {Fig7.pdf} \caption{The integrand of Eq.\,(\ref{int R}) for $Y=10$ and $\gamma=3$. } \label{f7} \end{figure} \subsection{Asymptotic behavior of $\bm {h (0,Y) }$ for $\bm{Y\to\infty}$} For $X=0$, Eq.\,(\ref{dh2}) yields \begin{eqnarray} \frac{2\pi\lambda^2}{\phi_0} h_2= \frac{ \gamma^2-1}{4\gamma} S_y Y \int_0^\infty \frac{d\zeta\left[3K_0( \eta )- \eta K_1 (\eta)\right]}{(\zeta+\gamma)^{1/2}(\zeta+1/\gamma)^{5/2}}, \nonumber\\ \eta=\frac{|Y|}{ \sqrt{\zeta+1/\gamma}}\,.\qquad\qquad\qquad\qquad \label{x=0} \end{eqnarray} Changing to the integration variable $\eta$, we get \begin{eqnarray} \frac{2\pi\lambda^2}{\phi_0} h_2= \frac{ \gamma^2-1}{2\gamma} \frac{S_y}{Y^2} \int_0^{Y\sqrt{\gamma}} \frac{d\eta\, \eta^3\left[3K_0(\eta)- \eta K_1 (\eta)\right]}{\sqrt{Y^2+\eta^2(\gamma-1/\gamma) }}.\nonumber\\ \label{int R} \end{eqnarray} Fig.\,\ref{f7} shows that the integrand here is substantial only in the finite region $ 0< \eta\lesssim 10$. Therefore, being interested in the asymptotic behavior for $|Y|\to\infty$, one can replace the denominator by $|Y |$ and the upper limit of integration by $\infty$: \begin{eqnarray} \frac{2\pi\lambda^2}{\phi_0} h_2(0,Y)&=& \frac{ \gamma^2-1}{2\gamma} \frac{ S_y}{Y^3} \int_0^{\infty} d\eta\, \eta^3\left[3K_0 - \eta K_1 \right]_\eta \qquad \nonumber\\ & =& - \frac{\gamma^2-1}{ \gamma} \frac{2S_y}{Y^3} \,. \label{ass} \end{eqnarray} Thus, $h_2(0,Y)$ is negative when $Y\to\infty$ and positive for $Y\to -\infty$. It decays as $1/Y^3$; therefore, the total field $h_0+h_1+h_2$ attenuates as a power law as well, since $h_0$ and $h_1$ decay exponentially and at large distances can be disregarded. Hence, $h_2$ can be replaced with $h$ in this region. This conclusion agrees with the direct numerical evaluation of $h(0,Y)$ shown in Fig.\,\ref{f6}. In the same way one can obtain the leading term in the asymptotic behavior for $Y=S_y=0$, i.e. for motion along the $x$ axis: \begin{eqnarray} h(X,0)\sim \frac{\phi_0}{2\pi\lambda^2} \frac{ \gamma^2-1}{2\gamma} \frac{ 2S_x}{X^3} \,. \label{assX} \end{eqnarray} For the sake of brevity we do not provide other terms in the asymptotic series. The power-law decay of the field $h(x,y)$ for vortices moving in anisotropic superconductors is a surprising feature. Clearly, this feature disappears for vortices at rest as well as for vortices moving in isotropic materials. Formally, the power-law behavior in real space originates in the factor $1/k^2$ in the Fourier transforms, see e.g. Eq.\,(\ref{corr1}), which, however, cancels out for $\gamma=1$. \section{Electric field for slow motion} In the approximation linear in velocity, we have according to Eq.\,(\ref{h(k,t)}) \begin{eqnarray} \frac{\partial h_{\bm k}}{\partial t} =-i \frac{\phi_0(\bm k\cdot\bm v)}{C}\,,\quad C=1+k_x^2 \lambda^2_2+ k_y^2 \lambda^2_1\,. \label{dh/dt} \end{eqnarray} This yields the electric field \begin{equation} E_{\bm k x}=-\frac{k_y}{k_x}E_{\bm k y}= -\frac{\phi_0}{c\tau}\frac{k_y(\bm k\cdot\bm s)}{ k^2C} \,,\quad \bm s =\bm v\tau\,, \label{Es1} \end{equation} see Eqs.\,(\ref{Es}).
Hence, we have in real space \begin{equation} E_x = -\frac{\phi_0}{4\pi^2c\tau }\int \frac{d^2\bm k\,k_y (\bm k\cdot\bm s)}{k^2C} e^{i\bm k \bm r}\,, \label{Ex} \end{equation} or, using $\lambda=\sqrt{\lambda_1\lambda_2}$ as the unit length, \begin{equation} E_x = -\frac{\phi_0}{4\pi^2c\tau\lambda }\int \frac{d^2\bm q\,q_y (\bm q\cdot\bm S)}{ q^2C} e^{i\bm q \bm R}\,. \label{Ex1} \end{equation} Here, $\bm q=\bm k\lambda$, $\bm R=(X,Y)=\bm r/\lambda$ (see the definitions \eqref{eq23}), and \begin{equation} C = 1+q_x^2 \gamma+ q_y^2 /\gamma\,,\quad \gamma=\lambda_2/\lambda_1\,. \label{C1} \end{equation} In the same way we obtain \begin{equation} E_y = \frac{\phi_0}{4\pi^2c\tau\lambda }\int \frac{d^2\bm q\,q_x (\bm q\cdot\bm S)}{ q^2C} e^{i\bm q \bm R}\,. \label{Ey1} \end{equation} The integrals in Eqs.\,(\ref{Ex1}) and (\ref{Ey1}) are dimensionless. It is of interest to see the streamlines of $\bm E$ (or, what is the same, of the normal current $\bm J_n=\sigma\bm E$). To this end, we calculate the stream function $G(x,y)$ such that $E_x=\partial_yG$ and $E_y=-\partial_xG$; the streamlines are then given by the contours $G(x,y)=\,$const. In Fourier space we have $E_{x\bm k}=ik_y G_{\bm k}$, so that \begin{eqnarray} G_{\bm k}= \frac{i\phi_0}{c\tau}\frac{ (\bm k\cdot\bm s)}{ k^2C},\,\,\,\, G(\bm r)= \frac{i\phi_0}{4\pi^2c\tau}\int \frac{d^2\bm q (\bm q\cdot\bm S)e^{i\bm q \bm R}}{ q^2C} .\qquad \label{Gr} \end{eqnarray} The formal procedure of reducing the double integral to a single integration in Eq.\,(\ref{Gr}) is similar to that used for $h(\bm r)$ and is outlined in Appendix C. The result is: \begin{eqnarray} &&G(\bm r) = - \frac{\phi_0}{4\pi c\tau} \int_0^\infty \frac{d\eta \, K_0({\cal R} \sqrt{\eta}) }{\sqrt{\mu\nu}} \left( \frac{ S_xX }{ \mu}+\frac{S_y Y }{ \nu} \right),\nonumber\\ && \mu= 1+\eta \gamma\,,\quad \nu= 1+\eta /\gamma\,,\quad {\cal R}=\sqrt{ \frac{ X^2}{\mu}+\frac{ Y^2}{ \nu} }\,. \qquad \label{Gr1} \end{eqnarray} Figs.\,\ref{f8} and \ref{f9} show two examples of the $J_n$-streamlines (or contours $G(X,Y)=\,\,$const) obtained by numerical integration of Eq.\,(\ref{Gr1}). \begin{figure}[h ] \includegraphics[width=7cm] {Fig8.pdf} \caption{ Streamlines of the field $\bm E$ (or of the normal current $\bm J_n$) for the vortex moving along $X$ ($S_x=0.1, S_y=0$). $\gamma=\lambda_2/\lambda_1= 3$. $X,Y$ are in units of $\lambda=\sqrt{\lambda_1\lambda_2} $. Positive constants by the contours correspond to the clockwise current direction, negative constants to the counterclockwise direction.} \label{f8} \end{figure} \begin{figure}[htb] \includegraphics[width=7cm] {Fig9.pdf} \caption{ Streamlines of the normal current for the vortex moving along the line $X=Y$ ($S_x= S_y=0.1$), $\gamma=\lambda_2/\lambda_1= 3$. $X,Y$ are in units of $\lambda=\sqrt{\lambda_1\lambda_2} $. Positive constants by the contours correspond to the clockwise current direction, negative constants to the counterclockwise direction.} \label{f9} \end{figure} The electric field is now readily obtained by differentiation of $G$. We will not write down these cumbersome expressions. Instead, we consider the asymptotic behavior of the electric field at large distances in two relatively simple cases, using the method employed above for the asymptotic behavior of $h(0,y)$ and $h(x,0)$. Omitting formalities, we give the results: \begin{eqnarray} G(X,0) \sim - \frac{\phi_0 }{2\pi c\tau } \frac{S_x} {X}\,,\qquad |X|\to\infty\,, \label{G(X)as)} \end{eqnarray} which yields \begin{eqnarray} E_x(X,0)=0\,,\quad E_y(X,0) \sim \frac{\phi_0 }{2\pi c\tau } \frac{S_x} {X^2} \,.
\label{EyXas} \end{eqnarray} Similarly, for the motion along the $Y$ axis, \begin{eqnarray} E_y(0,Y)=0\,,\quad E_x(0,Y) \sim \frac{\phi_0 }{2\pi c\tau } \frac{S_y} {Y^2} \,. \label{G(Y)as)} \end{eqnarray} Interestingly, the material anisotropy does not enter these results at all. This means that the power-law decay of the electric field should exist also in the isotropic case. In fact, for $\gamma=1$ one has from Eq.\,(\ref{Gr}) \begin{eqnarray} G(X,0)= \frac{i\phi_0S_x}{4\pi^2c\tau}\int \frac{d^2\bm q \, q_x e^{i \bm q\bm X}}{ q^2(1+q^2)} ,\qquad \label{GX} \end{eqnarray} which is readily evaluated by integrating first over the angle between $\bm q$ and $\bm X$. We obtain: \begin{eqnarray} G(X,0)= \frac{\phi_0S_x}{2\pi c\tau}\left[ K_1(X) - \frac{1}{ X}\right] ,\qquad \label{GX1} \end{eqnarray} which gives \begin{eqnarray} E_y(X,0) = -\frac{\phi_0 S_x}{2\pi c\tau } \left[ K_1^\prime(X) + \frac{1}{ X^2}\right] . \label{e44} \end{eqnarray} \begin{figure}[htb] \includegraphics[width=7cm] {Fig10.pdf} \caption{ The solid line is the square bracket in Eq.\,(\ref{e44}) for $E_y(X,0)$ when the vortex moves along the $X$ axis ($S_y=0$), $\gamma=1$. The dashed line shows the power-law term $1/X^2$. $X$ is in units of $\lambda$. } \label{f10} \end{figure} Figure \ref{f10} shows that the field $E_y(X,0)$ changes sign at $x/\lambda\approx 1$, reaches a maximum near $x/\lambda\approx 2$, and slowly decays as a power law, $\lambda^2/x^2$. This is quite surprising, since the power-law decay of the electric field means that no screening of $\bm E$ is involved; in other words, there is no Meissner-type effect for the field $\bm E$. \section{Discussion} We have studied effects of vortex motion within the time-dependent London theory, which is based on the assumption that in time-dependent phenomena the current in superconductors consists of persistent and normal components, Eq.\,(\ref{current1}). This approach differs from the common assumption that the vortex magnetic structure moves as a whole, so that in the frame bound to the moving vortex the magnetic field distribution is the same as for a vortex at rest; see e.g. \cite{Dolgov} or the multitude of papers describing flux flow. Within the TDL approach, the field distribution of a moving vortex differs from that of a vortex at rest even in the frame moving with the vortex. The physical reason for this is simple: the moving magnetic structure $h(x,y)$ induces an electric field and currents of normal excitations, and the latter distort the static field distribution $h_0(x,y)$ carried along by the vortex. This is a general feature of systems with singularities (vortices) moving in dissipative media \cite{leo,TDL}. The equations describing these time-dependent phenomena are diffusion-like, so that solutions are obtained in 2D Fourier space: we obtain $h_{\bm k}$, and to recover $h({\bm r})$ one has to evaluate double integrals $\int d^2\bm k...\,$, a heavy numerical procedure. We offer a way to reduce the double integrals to a single $\int_0^\infty d\eta...$, which can be evaluated efficiently and fast within the Wolfram Mathematica package; this is relevant especially for generating plots of various 2D distributions. We have investigated the field distribution of moving vortices away from the vortex core, where the time-dependent London theory is reliable. As in the isotropic case \cite{norio1}, the magnetic field of moving vortices in anisotropic materials is distorted relative to the static case: the magnetic flux is redistributed so that it is depleted in front of the moving vortex and enhanced behind it.
The depletion can be strong enough that the field $h_z$ changes sign in some parts of the $xy$ plane. This suggests that the interaction of two vortices, one at the origin and another at $(x,y)$, being repulsive at short intervortex distances, may turn attractive at large ones. The physical reason for this change is the induced electric field $\bm E$ and, along with it, the currents of normal excitations $\sigma \bm E$. This field is obtained by solving the quasi-stationary Maxwell equations curl$\,\bm E=-\partial_t\bm h/c$ and the condition of quasi-neutrality div$\,\bm E=0$, coupled with the time-dependent London equation (basically, the same procedure as in deriving the time-dependent Ginzburg-Landau equations \cite{Gorkov}). Unlike $\bm h$, the field $\bm E$ cannot be screened in the bulk of the material, so that one may say that there is no ``Meissner effect" for the electric field {\it per se}. It turns out that in the anisotropic case the magnetic field of a moving vortex has a power-law dependence on distance for $r\gg\lambda$: $h \propto (\gamma^2-1)v/r^3$ ($\gamma$ is the anisotropy parameter, $v$ is the vortex velocity). The exponentially decaying part of $h$ is still present, but at large distances it is irrelevant in comparison with the power-law part. In the isotropic case, the power law gives way to the standard exponential decay. The electric field, however, goes as $1/r^2$ in both cases. Most of our calculations were done for orthorhombic materials with the in-plane anisotropy parameter $\gamma = 3$ and the vortex along $c$. Such materials in fact exist; examples are NiBi films \cite{NiBi}, or Ta$_4$Pd$_3$Te$_{16}$ \cite{17}.
\titlespacing\section{0pt}{12pt plus 4pt minus 2pt}{6pt plus 2pt minus 2pt} \titlespacing\subsection{0pt}{12pt plus 4pt minus 2pt}{3pt plus 2pt minus 3pt} \titlespacing\subsubsection{0pt}{12pt plus 4pt minus 2pt}{0pt plus 2pt minus 3pt} \singlespacing \def\boxit#1{\vbox{\hrule\hbox{\vrule\kern6pt \vbox{\kern6pt#1\kern6pt}\kern6pt\vrule}\hrule}} \def\bfred#1{{\color{red}\bf#1}} \def\bfblue#1{{\color{blue}\bf#1}} \def\red#1{{\color{red}#1}} \def\blue#1{{\color{blue}#1}} \definecolor{orange}{rgb}{1,0.5,0} \definecolor{MyDarkBlue}{rgb}{0,0.08,0.45} \def\orange#1{{\color{orange}#1}} \def\fredcomment#1{\vskip 2mm\boxit{\vskip 2mm{\color{MyDarkBlue}\bf#1} {\color{MyDarkBlue}\bf -- Fred\vskip 2mm}}\vskip 2mm} \def\alexcomment#1{\vskip 2mm\boxit{\vskip 2mm{\color{red}\bf#1} {\color{red}\bf -- Alex\vskip 2mm}}\vskip 2mm} \def\cscomment#1{\vskip 2mm\boxit{\vskip 2mm{\color{orange}\bf#1} {\color{black}\bf -- C.S.\vskip 2mm}}\vskip 2mm} \newtheorem{corollary}{Corollary}[section] \newtheorem{definition}{Definition} \newtheorem{example}{Example}[section] \newtheorem{lemma}{Lemma}[section] \newtheorem{proposition}{Proposition}[section] \newtheorem{remark}{Remark}[section] \newtheorem{theorem}{Theorem}[section] \newtheorem{problem}{Problem}[section] \newtheorem{assumption}{Assumption}[section] \newtheorem{Def}{Definition}[section] \renewcommand{\baselinestretch}{1.5} \interfootnotelinepenalty=10000 \begin{document} \title{\Large \bfseries Deep equal risk pricing of financial derivatives with non-translation invariant risk measures \blfootnote{A GitHub repository with some code samples can be found at \href{https://github.com/alexandrecarbonneau}{github.com/alexandrecarbonneau}.} } \author[a] {Alexandre Carbonneau\thanks{Corresponding author.\vspace{0.2em} \newline {\it Email addresses:} \href{mailto:[email protected]}{[email protected]} (Alexandre Carbonneau), \href{mailto:[email protected]}{[email protected]} (Fr\'ed\'eric Godin).}} \author[b]{Fr\'ed\'eric Godin} \affil[a,b]{{\small Concordia University, Department of Mathematics and Statistics, Montr\'eal, Canada}} \vspace{-10pt} \date{\bigskip\bigskip \today} \maketitle \thispagestyle{empty} \begin{abstract} \vspace{-5pt} The use of non-translation invariant risk measures within the equal risk pricing (ERP) methodology for the valuation of financial derivatives is investigated. The ability to move beyond the class of convex risk measures considered in several prior studies provides more flexibility within the pricing scheme. In particular, suitable choices for the risk measure embedded in the ERP framework such as the semi-mean-square-error (SMSE) are shown herein to alleviate the price inflation phenomenon observed under Tail Value-at-Risk based ERP as documented for instance in \cite{carbonneau2021equal}. The numerical implementation of non-translation invariant ERP is performed through deep reinforcement learning, where a slight modification is applied to the conventional deep hedging training algorithm \citep[see][]{buehler2019deep} so as to enable obtaining a price through a single training run for the two neural networks associated with the respective long and short hedging strategies. The accuracy of the neural network training procedure is shown in simulation experiments not to be materially impacted by such a modification of the training algorithm. \noindent \textbf{Keywords:} Finance, Option pricing, Hedging, Reinforcement learning, Deep learning.
\end{abstract} \doublespacing \setcounter{page}{1} \pagenumbering{arabic} \section{Introduction} \label{section:introduction} The equal risk pricing (ERP) methodology for derivatives valuation, initially proposed by \cite{guo2017equal}, entails setting the price of a contingent claim as the initial hedging portfolio value which leads to equal residual hedging risk for both the long and short positions under optimal hedges. This pricing procedure is associated with numerous advantageous properties, such as the production of prices that are arbitrage-free under some technical conditions \citep[see][]{guo2017equal,marzban2020equal,carbonneau2021equal}, consistency with non-myopic global dynamic optimal hedging strategies, invariance of the price with respect to the position considered (i.e. long versus short), and the ability to consider general risk measures\footnote{For instance, the ability to depart from the quadratic penalty considered in the celebrated variance-optimal approach of \cite{schweizer1995variance} enables avoiding the adverse behavior associated with the penalization of hedging gains.} for the objective function of the hedging optimization problem. To further improve the ERP framework, several subsequent studies proposed modifications to the original scheme. For instance, \cite{marzban2020equal} and \cite{carbonneau2021equal} use the physical probability measure rather than the risk-neutral one to perform the hedging optimization; this has the advantage of improved interpretability of the resulting prices, on top of removing the subjectivity associated with the choice of the risk-neutral measure in an incomplete market setting. Furthermore, to enhance the computational tractability of the ERP approach, these two studies also consider the set of convex risk measures to represent the risk exposure of the hedged transaction for both the long and short parties.\footnote{The original work of \cite{guo2017equal} considers expected penalties as risk measures, which do not possess all properties of convex risk measures (e.g. most lack the translation invariance property). For instance, the Tail-Value-at-Risk (TVaR) is not a particular case of an expected penalty.} Indeed, when convex measures are used, the translation invariance property leads to a useful characterization of equal risk prices which removes the need to perform a joint optimization over all possible initial hedging portfolio values. The most natural convex risk measure to consider within the ERP approach is arguably the Conditional Value-at-Risk (CVaR), which is equivalent to the Expected Shortfall (ES) or Tail-Value-at-Risk under the assumption that the underlying loss variables are absolutely continuous. See \cite{rockafellar2002conditional} for a formal definition of the CVaR and a description of its properties. The $\text{CVaR}_\alpha$ can be interpreted as a probability-weighted average of the worst-case losses occurring within an event of probability at most $1-\alpha$, which is very intuitive.
Moreover, it is a coherent risk measure in the sense of \cite{artzner1999coherent}, which implies favorable properties from a risk measurement standpoint.\footnote{The class of coherent risk measures is a subset of the class of convex risk measures which additionally satisfy, for instance, the subadditivity and positive homogeneity properties; the latter are more stringent than the convexity property satisfied by all convex risk measures.} Furthermore, the CVaR measure is used extensively in practice by the financial sector to quantify capital requirements; see for instance \cite{BCBS2016}. Due to its favorable properties, several studies use the CVaR within the ERP framework: see \cite{carbonneau2021equal} and \cite{carbonneau2021deep}. It was observed in the former that when only the underlying asset is used to hedge put options and conventional risk-neutral measures are used to determine the initial capital for hedging, the tail risk is much more pronounced for the short position than for the long one, especially for out-of-the-money puts. This leads to equal risk prices that are substantially higher than their risk-neutral counterparts when the confidence level $\alpha$ of the $\text{CVaR}_\alpha$ is high, to an extent that can cast doubt on the applicability of the method in practice. An avenue explored in the aforementioned study to remedy this drawback is to reduce the confidence level, as prices were shown numerically to be positively related to the latter. Unfortunately, as shown in the present paper, reducing the confidence level to obtain smaller option prices quickly becomes impractical, since the resulting hedging strategies exhibit poor risk mitigation performance, with speculative behavior magnifying tail losses for very high quantiles above the CVaR confidence level. This approach should therefore not be pursued in practice. A second possible solution to the inflated ERP prices issue, explored in \cite{carbonneau2021deep}, consists in incorporating other hedging instruments (e.g. short-term options) within dynamic hedging schemes. That approach is shown therein to produce prices that are often still higher than the traditional risk-neutral ones, but much closer to them. This avenue was thus deemed successful when applicable. However, it requires a more sophisticated model to represent the price dynamics of the hedging instruments, which complicates its implementation in practice. Furthermore, hedges relying on option trades might not be feasible or desirable under some circumstances (e.g. lack of liquidity). The aforementioned simulation-based results on ERP prices highlight the need to identify an ERP approach which can rely strictly on the underlying asset for hedging transactions and, at the same time, alleviate the price inflation obtained with CVaR-based ERP. A straightforward route towards a satisfactory ERP method respecting the above constraints is to modify the risk measure acting as the objective function in the optimal hedging problems underlying the ERP framework. For instance, risk measures putting less relative weight on tail risk and more on moderate risk scenarios should produce lower option prices. However, such risk measures (e.g. the semi-variance, the semi-root-mean-square-error (SRMSE), etc.) do not necessarily satisfy the properties of convex risk measures, in particular the translation invariance property.
Equal risk prices stemming from such risk measure choices therefore do not have the convenient characterization associated with convex risk measures, which highlights the need for tailor-made numerical procedures handling this additional complexity. The main contribution of this manuscript is twofold. The first is to propose a modification of the deep reinforcement learning approach illustrated in \cite{carbonneau2021equal} and \cite{carbonneau2021deep} to handle non-translation invariant risk measures within ERP naturally and without excessive additional computational burden. This modification essentially consists in feeding varying initial hedging portfolio values along with simulated risky asset paths to the deep hedging algorithm of \cite{buehler2019deep}, and then coupling the trained neural network output with a bisection search to seek the initial hedging portfolio value equating the risks of the long and short positions. Such a bisection search has previously been suggested in a similar context, for instance in \cite{marzban2020equal}. The training algorithm modification is shown in the present work not to lead to a material deterioration in the hedging performance of the neural network underlying the numerical approach. The second contribution consists in exploring the equal risk prices of options generated when using typical non-translation invariant risk measures. It is seen that the use of the class of semi-$\mathbb{L}^{p}$ risk measures, based on loss functions of the form $L(x) = x^p \mathds{1}_{ \{x>0\} }$ for $p > 0$, is able to reduce ERP prices to more natural levels better in line with those of existing methodologies, while simultaneously resulting in effective trading policies. Indeed, numerical results indicate that equal risk prices generated by the class of semi-$\mathbb{L}^{p}$ risk measures can span wider ranges of prices than those obtained under the $\text{CVaR}_{\alpha}$ risk measures with conventional confidence level $\alpha$ values. The latter phenomenon is shown to hold across all moneyness levels for puts, and is robust to all risky asset dynamics considered. Furthermore, the benchmarking of the hedging performance of neural network trading policies demonstrates that policies optimized under the semi-$\mathbb{L}^{p}$ objective functions are effective for mitigating hedging risk across all values of $p$ considered, where $p$ is shown to control the relative weight associated with extreme hedging losses. This is in contrast with the $\text{CVaR}_{\alpha}$ objective function, for which hedging policies optimized with a relatively small confidence level $\alpha$ exhibit poor risk mitigation for loss quantiles larger than $\alpha$. Lastly, our results show that using the semi-$\mathbb{L}^{2}$ objective function to price long-term European puts with trades involving exclusively the underlying stock is almost as successful at reducing equal risk prices as trading shorter-term options under the CVaR$_{\alpha}$ risk measure. All of these results clearly demonstrate the benefit of using the class of semi-$\mathbb{L}^{p}$ risk measures within the ERP framework: they simultaneously alleviate the price inflation phenomenon observed under the class of CVaR measures and result in effective trading policies for risk management. This paper is divided as follows. \cref{se:littrev} provides a literature review about incomplete market derivatives pricing, hedging methods and reinforcement learning in finance.
The theoretical setting used for the ERP approach in the present work is presented in \cref{section:market_setup}. \cref{sec:methodology} explains the reinforcement learning methodology for the neural networks embedded in the ERP approach, with the modified training algorithm proposed in this paper. \cref{section:numerical_results} displays the results of numerical experiments associated with ERP based on semi-$\mathbb{L}^{p}$ risk measures. \cref{section:conclusion} concludes. \section{Literature review} \label{se:littrev} Financial derivatives pricing in incomplete markets has received an extensive amount of attention in the literature. Numerous papers approach this problem through the selection of a suitable risk-neutral measure based on various considerations, such as shifting of the drift to achieve risk-neutrality and model invariance, see \cite{hardy2001regime} and \cite{christoffersen2010option}, consistency with equilibrium models, see \cite{esw1994option} and \cite{duan1995garch}, or minimum entropy distance between the physical and risk-neutral measures, see \cite{frittelli2000minimal}. Another strand of literature considers pricing methods consistent with optimal hedging strategies. At first, quadratic hedging methods were considered in \cite{follmer1988hedging}, \cite{schweizer1995variance}, \cite{elliott1998discrete} and \cite{bertsimas2001hedging} due to their tractability. However, as a consequence of the limitations associated with the quadratic penalty (e.g. penalizing gains and losses equally), other objective functions were considered in alternative dynamic hedging schemes, such as quantile hedging \citep{follmer1999quantile}, expected penalty minimization \citep{follmer2000efficient} or VaR and CVaR optimization as in \cite{melnikov2012dynamic} and \cite{godin2016minimizing}. Some pricing schemes were also developed to enable consistency with non-quadratic hedging methods, for instance utility indifference \citep{hodges1989optimal} or risk indifference \citep{xu2006risk}. An issue with the latter approaches is that different prices are obtained depending on whether a long or short position is considered in the derivative. The ERP approach developed by \cite{guo2017equal}, identifying the derivative price equating the hedged risk exposures of the long and short positions, remedies this drawback by providing a unique price invariant to the direction (i.e. long versus short) of the position. Several additional papers have used or expanded on the initial ERP methodology. One problem often considered with that methodology is the tackling of market incompleteness arising from short-selling bans on the underlying asset: \cite{alfeus2019empirical}, \cite{ma2019pricing} and \cite{he2020revised}. \cite{marzban2020equal} propose to substitute the physical measure for the risk-neutral one during the determination of the equal risk price and to replace expected loss functions by convex risk measures within the objective function. \cite{carbonneau2021equal} provide a tractable methodology based on deep reinforcement learning to implement the ERP framework with convex risk measures under very general conditions. \cite{carbonneau2021deep} examine the impact of introducing options as hedging instruments within the ERP framework under convex risk measures. The computation of equal risk prices for derivatives is a highly non-trivial endeavor requiring advanced numerical schemes in most cases. \cite{marzban2020equal} propose to use dynamic programming, which they apply in a robust optimization setting.
Conversely, \cite{carbonneau2021equal} and \cite{carbonneau2021deep} use the deep reinforcement learning approach of \cite{buehler2019deep} coined as \textit{deep hedging}. Other papers have relied on the deep hedging methodology for the hedging of financial derivatives: \cite{cao2020discrete}, \cite{carbonneau2021deepIME}, \cite{horvath2021deep} and \cite{lutkebohmert2021robust}. Deep reinforcement learning is a technique well suited to multistage optimization and decision-making in financial contexts: it allows tackling high-dimensional settings with multiple state variables, underlying asset dynamics and trading instruments. For this reason, it was used in multiple other works on derivatives pricing and hedging. Various techniques were considered, such as Q-learning in \cite{halperin2020qlbs} and \cite{cao2021deep}, proximal policy optimization in \cite{chong2021pseudo}, least squares policy iteration and fitted Q-iteration for American option pricing in \cite{li2009learning}, or batch policy gradient in \cite{buehler2019deep}. Moreover, various other financial problems were tackled through reinforcement learning procedures in the literature, for instance portfolio management as in \cite{moody1997optimization}, \cite{jiang2017deep}, \cite{pendharkar2018trading}, \cite{garcia2019continuous}, \cite{wang2020continuous}, \cite{ye2020reinforcement} and \cite{betancourt2021deep}, optimal liquidation, see \cite{bao2019multi}, or trading optimization as in \cite{hendricks2014reinforcement}, \cite{lu2017agent} and \cite{ning2018double}. \section{Financial market setup} \label{section:market_setup} This section details the mathematical framework for the financial market considered, along with the theoretical setup for the ERP derivatives valuation approach. A discrete set of equally spaced time points spanning a horizon of $T$ years, $\mathcal{T}\equiv\{0=t_0 < t_1 <\ldots<t_N=T\}$ with $t_n \equiv n \Delta$, $n=0,\ldots,N$, is considered. $\Delta$ corresponds to the length of a time period in years. Unless specified otherwise, the present study uses either $\Delta=1/260$ or $\Delta=1/12$, corresponding to daily or monthly periods. Moreover, consider the probability space $\left(\Omega, \mathcal{F}_N, \mathbb{P}\right)$ endowed with a filtration $\mathbb{F} \equiv \{ \mathcal{F}_n\}^N_{n=0}$ satisfying the usual conditions, with $\mathcal{F}_n$ being the sigma-algebra characterizing the information available to the investor at time $t_n$. Multiple traded assets are introduced in the financial market. First, a risk-free asset grows at a constant (continuously compounded) risk-free rate $r \in \mathbb{R}$: its time-$t_n$ price is given by $B_{n} \equiv e^{r t_{n}}$. The $D+1$ other non-dividend-paying risky asset prices are characterized by the vectorial stochastic processes $\{S^{(b)}_n\}^{N}_{n=0}$ and $\{S^{(e)}_n\}^{N-1}_{n=0}$, where $S^{(b)}_n \equiv \left[S^{(0,b)}_n, \ldots, S^{(D,b)}_n\right]$ and $S^{(e)}_n \equiv \left[S^{(0,e)}_n, \ldots, S^{(D,e)}_n\right]$ respectively represent the \textit{beginning-of-period} and \textit{end-of-period} prices of risky assets $0,\ldots,D$ available for trading at time $t_n$. This implies $S^{(b)}_n$ is $\mathcal{F}_n$-measurable (i.e. observable at time $t_n$) whereas $S^{(e)}_n$ is $\mathcal{F}_{n+1}$-measurable. Because the set of traded instruments changes on every time period (for example, when traded options mature, contracts need to be rolled over), it is possible to have $S^{(j,e)}_{n} \neq S^{(j,b)}_{n+1}$, $j=1,\ldots,D$.
However, the risky asset $j=0$ is assumed to be an underlying asset with no maturity, such as a stock, and is thus available for trading in all periods. Hence, $S^{(0,e)}_{n} = S^{(0,b)}_{n+1}$. For simplicity, an absence of market frictions is assumed throughout the paper. Correspondingly, it is assumed that all positions in a given portfolio are liquidated at the end of any period, and are repurchased at the beginning of the next if needed. A European-type derivative with time-$t_N$ payoff $\Phi\left(S^{(0,b)}_{N}\right)$ is considered. A suitable price for that contract and the corresponding hedging strategies must be determined. Define a trading strategy $\delta \equiv \{ \delta_n \}^N_{n=0}$ as an $\mathbb{F}$-predictable process\footnote{This means $\delta_0$ is $\mathcal{F}_0$-measurable and $\delta_n$ is $\mathcal{F}_{n-1}$-measurable for $n=1,\ldots,N$.} where $\delta_n \equiv \left[\delta^{(0)}_{n}, \ldots, \delta^{(D)}_{n}, \delta^{(B)}_{n}\right]$. The latter comprises $\delta^{(0:D)}_{n} \equiv \left[\delta^{(0)}_{n}, \ldots, \delta^{(D)}_{n}\right]$, which contains the positions in the respective risky assets $0,\ldots,D$ within the portfolio between time $t_{n-1}$ and time $t_n$, and $\delta^{(B)}_{n}$, which contains the portfolio investment in the risk-free asset for the same period. For a trading strategy $\delta$, the corresponding time-$t_n$ portfolio value is defined as \begin{equation*} V^\delta_n \equiv \begin{cases} \delta^{(0:D)}_0 \bigcdot S^{(b)}_{0} + \delta^{(B)}_0 B_0, \quad n=0, \\ \delta^{(0:D)}_n \bigcdot S^{(e)}_{n-1} + \delta^{(B)}_n B_n, \quad n=1,\ldots,N, \end{cases} \end{equation*} where $\bigcdot$ is the conventional dot product. A trading strategy $\delta$ is said to be \textit{self-financing} if \begin{equation*} \delta^{(0:D)}_{n+1} \bigcdot S^{(b)}_{n} + \delta^{(B)}_{n+1} B_n = V^\delta_n, \quad n=0,\ldots,N-1. \end{equation*} Denote by $\Pi$ the set of all self-financing trading strategies that are sufficiently well-behaved mathematically.\footnote{Details characterizing well-behavedness in the context of the present study are omitted to avoid lengthy discussions straying away from the main research objectives of the present work.} It turns out that the portfolio value process of self-financing trading strategies can be expressed conveniently in terms of so-called \textit{discounted gains}. For a trading strategy $\delta \in \Pi$, the latter are defined as \begin{equation*} G^\delta_0 \equiv 0, \quad G^\delta_n \equiv \sum_{j=1}^{n} \delta^{(0:D)}_j \bigcdot \left( B^{-1}_j S^{(e)}_{j-1} - B^{-1}_{j-1} S^{(b)}_{j-1}\right), \quad n=1,\ldots,N. \end{equation*} Using standard arguments outlined for instance in \cite{lamberton2007introduction}, for any self-financing trading strategy $\delta \in \Pi$, \begin{equation*} V^\delta_n = B_n \left(V^\delta_0 + G^\delta_n\right). \end{equation*} Such a representation is convenient as it avoids having to compute $\delta^{(B)}_n$, $n=0,\ldots,N$, explicitly when calculating the portfolio value. The aforementioned definitions allow posing the main optimization problems underlying the ERP methodology, which consist in finding the best self-financing trading strategies leading to optimal hedges in terms of penalized hedging errors at the maturity of the derivative. Solutions of such problems are referred to as \textit{global hedging procedures} due to their measurement of hedging efficiency in terms of risk at maturity rather than on a period-by-period basis.
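To fix ideas, the representation $V^\delta_N = B_N(V^\delta_0 + G^\delta_N)$ can be illustrated with the following minimal Python sketch for a single risky stock ($D=0$, so that $S^{(0,e)}_{n}=S^{(0,b)}_{n+1}$); the simulated path and positions are placeholder assumptions made only for illustration:
\begin{verbatim}
import numpy as np

# Terminal value of a self-financing strategy via discounted gains:
# V_N = B_N * (V_0 + G_N), avoiding explicit bookkeeping of delta^(B).
rng = np.random.default_rng(0)
N, r, Delta = 12, 0.02, 1.0 / 12.0
S = 100.0 * np.exp(np.cumsum(rng.normal(0.0, 0.05, N + 1)))  # S^(b)_0..N (toy)
B = np.exp(r * Delta * np.arange(N + 1))                     # risk-free asset
delta = rng.uniform(-1.0, 1.0, N)                            # positions delta_1..N
V0 = 5.0                                                     # initial capital

G_N = np.sum(delta * (S[1:] / B[1:] - S[:-1] / B[:-1]))      # discounted gains
V_N = B[N] * (V0 + G_N)
print(f"terminal portfolio value V_N = {V_N:.4f}")
\end{verbatim}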
Consider a given risk measure $\rho$ characterizing the risk aversion of the hedger.\footnote{A risk measure is a mapping taking a random variable representing a random loss as input, and returning a real number representing its perceived risk as output.} Specific examples of risk measures considered in this study are formally defined subsequently. For a given value of $V_0 \in \mathbb{R}$, define mappings $\epsilon^{(\mathcal{L})}:\mathbb{R} \rightarrow \mathbb{R}$ and $\epsilon^{(\mathcal{S})}:\mathbb{R} \rightarrow \mathbb{R}$ representing the optimal residual hedging risk, respectively for a long or a short position in the derivative, when the initial portfolio value is $V^\delta_0 = V_0$: \begin{align} \epsilon^{(\mathcal{L})}(V_0) &\equiv \underset{\delta\in \Pi}{\min} \, \rho \left(-\Phi(S^{(0,b)}_{N}) -V^\delta_{N}\right), \quad \epsilon^{(\mathcal{S})}(V_0) \equiv \underset{\delta\in \Pi}{\min} \, \rho \left(\Phi(S^{(0,b)}_{N}) -V^\delta_{N}\right). \label{eq:risk_longshort} \end{align} Optimal hedging strategies are the minimizing arguments of these optimization problems: \begin{align*} \delta^{(\mathcal{L})}(V_0) &\equiv \underset{\delta\in \Pi}{\arg\min} \, \rho \left(-\Phi(S^{(0,b)}_{N}) -V^\delta_{N}\right), \quad \delta^{(\mathcal{S})}(V_0) \equiv \underset{\delta\in \Pi}{\arg\min} \, \rho \left(\Phi(S^{(0,b)}_{N}) -V^\delta_{N}\right). \end{align*} This leads to the definition of the \textit{equal risk price} $C^*_0$ of the derivative $\Phi$ as the initial portfolio value $V_0$ such that the optimal residual hedging risk is equal for the long and short positions, i.e. \begin{equation} \label{ERPdef} \epsilon^{(\mathcal{L})}(-C^*_0) = \epsilon^{(\mathcal{S})}(C^*_0). \end{equation} Conditions on $\rho$ have to be imposed to guarantee the existence and uniqueness of the equal risk price (e.g. monotonicity of $\rho$). Under the assumption that $\rho$ is a convex risk measure, \cite{carbonneau2021equal} provide sufficient conditions for the existence and uniqueness of the solution to \eqref{ERPdef}; see Theorem $2.1$ of the latter paper. \begin{remark} \label{remark:convex_ERP} Under a convex measure $\rho$, \cite{marzban2020equal} and \cite{carbonneau2021equal} also obtain the following characterization of the equal risk price: \begin{align} C^*_0 = 0.5B_N \left(\epsilon^{(\mathcal{S})}(0)-\epsilon^{(\mathcal{L})}(0)\right). \label{eq:ref_ERP_convex} \end{align} Representation \eqref{eq:ref_ERP_convex} is very convenient as it only requires obtaining the optimal residual risk exposures when the initial portfolio value is null, instead of having to iteratively try multiple initial portfolio values. However, when $\rho$ is not translation invariant, such a representation does not hold anymore, and a tailor-made numerical scheme must thus be developed to solve the root-finding problem \eqref{ERPdef}. \end{remark} The present work aims, among other things, at examining a class of non-translation invariant risk measures. The main class of risk measures under study will be referred to as the \textit{semi-$\mathbb{L}^p$ risk measures}, which are defined as \begin{align} \rho(X) \equiv \mathbb{E}\left[ X^p \mathds{1}_{ \{X>0\}}\right]^{1/p}, \quad p > 0. \label{eq:ref_Lp_risk_measure} \end{align} The latter risk measure is clearly monotone (i.e. $X \geq Y$ almost surely implies $\rho(X) \geq \rho(Y)$), but lacks the translation invariance property. One important advantageous property of this class of risk measures is that it penalizes exclusively hedging losses, not gains.
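For concreteness, the following Python sketch (an illustration under our own assumptions, not the implementation of this paper) shows sample-based estimators of the semi-$\mathbb{L}^p$ and CVaR risk measures, together with a bisection search for the root-finding problem \eqref{ERPdef} when representation \eqref{eq:ref_ERP_convex} is unavailable; the residual-risk functions passed in stand for evaluations of $\epsilon^{(\mathcal{L})}$ and $\epsilon^{(\mathcal{S})}$ obtained from trained hedging policies:
\begin{verbatim}
import numpy as np

def semi_Lp(losses, p):
    """Sample estimator of rho(X) = E[X^p 1{X>0}]^(1/p)."""
    return np.mean(np.maximum(losses, 0.0) ** p) ** (1.0 / p)

def cvar(losses, alpha):
    """Sample estimator of CVaR_alpha: mean of losses above VaR_alpha."""
    var = np.quantile(losses, alpha)
    return losses[losses >= var].mean()

def equal_risk_price(eps_long, eps_short, lo, hi, tol=1e-6):
    """Bisection on f(C) = eps_short(C) - eps_long(-C); assumes a sign
    change on [lo, hi].  eps_long / eps_short stand in for the optimal
    residual risks of the long and short positions at a given V_0."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if eps_short(mid) - eps_long(-mid) > 0.0:
            lo = mid   # short side riskier: raise the candidate premium
        else:
            hi = mid
    return 0.5 * (lo + hi)
\end{verbatim}
In practice, the functions \texttt{eps\_long} and \texttt{eps\_short} would be evaluated by simulating hedged paths under the trained policies described in \cref{sec:methodology}.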
Furthermore, the parameter $p$ acts as a risk aversion barometer, as higher values of $p$ put more relative weight on larger losses. The CVaR measure is also considered in some experiments of the present paper for benchmarking purposes, as it is used in \cite{carbonneau2021equal} and \cite{carbonneau2021deep}. Such a risk measure can be formally defined as \begin{eqnarray*} \text{VaR}_\alpha (X) \equiv \inf \{ x : \mathbb{P}[X \leq x] \geq \alpha \}, \quad \text{CVaR}_\alpha (X) \equiv \frac{1}{1-\alpha}\int_{\alpha}^{1} \text{VaR}_\gamma (X) d \gamma \end{eqnarray*} for a confidence level $\alpha$ in $(0,1)$. Whenever $X$ is an absolutely continuous random variable, the CVaR admits the intuitive representation $\text{CVaR}_\alpha (X) = \mathbb{E}\left[X | X \geq \text{VaR}_\alpha (X)\right]$. The CVaR is a coherent risk measure as shown in \cite{rockafellar2002conditional}, which implies it satisfies the monotonicity and translation invariance properties. \section{Methodology} \label{sec:methodology} The present section details the reinforcement learning approach followed to solve the optimization problems underlying the ERP methodology. The approach consists in applying the deep hedging algorithm of \cite{buehler2019deep} by representing hedging policies with neural networks. A slight modification to the latter paper's training methodology is required to solve the ERP global hedging problems when the risk measure is not translation invariant. An accuracy assessment is performed for the modified training algorithm. \subsection{Neural network approximation of the optimal solution} The approach followed to obtain a numerical solution to the optimization problems \eqref{eq:risk_longshort} is based on a parametric approximation of the trading policy with a neural network trained using reinforcement learning. The general idea is as follows. In multiple setups, especially those involving Markovian dynamics, the optimal trading strategies $\delta^{(\mathcal{S})}(V_0)$ and $\delta^{(\mathcal{L})}(V_0)$ often admit the following functional representation for some functions $\tilde{\delta}^{(\mathcal{L})}$ and $\tilde{\delta}^{(\mathcal{S})}$: \begin{equation} \label{eq:optPolFunc} \delta^{(\mathcal{L})}_{n+1}(V_0) = \tilde{\delta}^{(\mathcal{L})} \left(T - t_n, S_n^{(b)},V_n,\mathcal{I}_n\right), \quad \delta^{(\mathcal{S})}_{n+1}(V_0) = \tilde{\delta}^{(\mathcal{S})} \left(T - t_n, S_n^{(b)},V_n,\mathcal{I}_n\right), \quad n=0,\ldots,N-1, \end{equation} where $\delta^{(\mathcal{L})}_{n+1}(V_0)$ and $\delta^{(\mathcal{S})}_{n+1}(V_0)$ are to be understood as the optimal time-$t_n$ hedges for the long and short position when the time-$0$ capital investment is $V_0$, and $\mathcal{I}_n$ is an $\mathcal{F}_n$-measurable random vector containing a set of additional state variables summarizing all necessary information to make the optimal portfolio rebalancing decision. For instance, $\mathcal{I}_n$ can contain underlying asset volatilities if the latter follows a GARCH dynamics \citep[see][]{augustyniak2017assessing}, current probabilities of being in the various respective regimes in a regime-switching setup \citep[see][]{franccois2014optimal}, implied volatilities when options are used as hedging instruments \citep[see][]{carbonneau2021deep}, current asset positions in the presence of transaction costs \citep[see][]{breton2017global}, and so on. The functional representation \eqref{eq:optPolFunc} enables the approximation of the optimal policies as parameterized functions.
The class of functions considered in this paper is the classical \textit{feedforward neural network} (FFNN) class, which is formally defined subsequently. Indeed, two distinct FFNNs are used to approximate the optimal trading policies of the long and short parties by mapping inputs $\{T - t_n,S_n^{(b)},V_n,\mathcal{I}_n\}$ into the respective (long or short) portfolio positions in the risky assets $\delta_{n+1}^{(0:D)}$ for any $n=0,\ldots,N-1$.\footnote{Recall that since the trading strategy is self-financing, $\delta^{(B)}_{n+1}$ is characterized by $\delta^{(0:D)}_{n+1}$ and $V_n$.} % More precisely, denote by $F_{\theta}^{(\mathcal{L})}$ and $F_{\theta}^{(\mathcal{S})}$ the neural network mappings for respectively the long and short trading positions, where $\theta \in \mathbb{R}^q$ is the $q$-dimensional set of parameters of the FFNNs.\footnote{ % While the neural network architecture of $F_{\theta}^{(\mathcal{L})}$ and $F_{\theta}^{(\mathcal{S})}$ considered in this paper is the same for both neural networks in terms of the number of hidden layers and neurons per hidden layer, and thus the total number $q$ of parameters to fit is the same for both neural networks, one could also consider two different architectures for $F_{\theta}^{(\mathcal{L})}$ and $F_{\theta}^{(\mathcal{S})}$ with no additional difficulty. % } For a given parameter set $\theta$, distinct for each neural network, the associated trading strategies are given by \begin{equation*} \delta^{(\mathcal{L}, \theta)}_{n+1}(V_0) \equiv F_{\theta}^{(\mathcal{L})} \left(T-t_n,S_n^{(b)},V_n,\mathcal{I}_n\right), \quad \delta^{(\mathcal{S}, \theta)}_{n+1}(V_0) \equiv F_{\theta}^{(\mathcal{S})} \left(T-t_n,S_n^{(b)},V_n,\mathcal{I}_n\right), \quad n=0,\ldots,N-1. \end{equation*} The optimization over trading strategies in problem \eqref{eq:risk_longshort} is thus replaced by the optimization of the neural network parameters $\theta$ according to \begin{align} \tilde{\epsilon}^{(\mathcal{L})}(V_0) \equiv \underset{\theta \in \mathbb{R}^{q}}{\min} \, \rho \left(-\Phi(S^{(0,b)}_{N}) - V_N^{\delta^{(\mathcal{L}, \theta)}} \right), \quad \tilde{\epsilon}^{(\mathcal{S})}(V_0) \equiv \underset{\theta \in \mathbb{R}^{q}}{\min} \, \rho \left(\Phi(S^{(0,b)}_{N}) -V^{\delta^{(\mathcal{S}, \theta)}}_N \right). \label{eq:risk_long_short_NNet} \end{align} Note that the set of optimal parameters $\theta$ will be different for the long and the short trading strategies. Furthermore, problems \eqref{eq:risk_long_short_NNet} only lead to an approximate solution to the initial problems \eqref{eq:risk_longshort} since the FFNNs are approximations of the true functional representations $\tilde{\delta}^{(\mathcal{L})}$ and $\tilde{\delta}^{(\mathcal{S})}$. Nevertheless, by relying on the universal approximation property of FFNNs \citep[see for instance][]{hornik1991approximation}, \cite{buehler2019deep} show that there exist neural networks such that the solution $\tilde{\epsilon}^{(\mathcal{L})},\tilde{\epsilon}^{(\mathcal{S})}$ from \eqref{eq:risk_long_short_NNet} can be made arbitrarily close to the solution $\epsilon^{(\mathcal{L})},\epsilon^{(\mathcal{S})}$ from \eqref{eq:risk_longshort}. The mathematical definition of the FFNN architecture is now provided.
For $L, d_0, \ldots, d_{L+1} \in \mathbb{N}$, let $F_{\theta}:\mathbb{R}^{d_0} \rightarrow \mathbb{R}^{d_{L+1}}$ be a FFNN: \begin{align} F_{\theta}&\equiv o \circ h_{L} \circ \ldots \circ h_{1}, \nonumber % \\ \quad h_l(X) &\equiv g(W_l X + b_l), \quad l = 1, \ldots, L, \nonumber % \\ \quad o(X) &\equiv W_{L+1}X + b_{L+1}, \nonumber \end{align} where $\circ$ denotes the function composition operator and $g$ is a nonlinear activation function applied elementwise. Thus, $F_\theta$ is a composite function of $h_1, \ldots, h_L$, commonly known as \textit{hidden layers}, which each successively apply an affine and a nonlinear transformation to input vectors, and of the \textit{output function} $o$, which applies an affine transformation to the last hidden layer. The set of parameters $\theta$ to be optimized consists of all weight matrices $W_l \in \mathbb{R}^{d_{l} \times d_{l-1}}$ and bias vectors $b_l \in \mathbb{R}^{d_l}$ for $l=1,\ldots,L+1$. \subsection{Calibration of neural networks through reinforcement learning} \label{subsec:calibration_NNETS} As in \cite{buehler2019deep}, the training of neural networks in this paper relies on a stochastic policy gradient algorithm, also known as actor-based reinforcement learning. This class of procedures directly optimizes the policy (i.e. the actor), parameterized as a neural network, with minibatch stochastic gradient descent (SGD) so as to minimize a cost function as in \eqref{eq:risk_long_short_NNet}. Without loss of generality, the training algorithm is hereby only provided for the neural network $F_{\theta}^{(\mathcal{S})}$ associated with the short position, as the steps for the long position are entirely analogous. \subsubsection{Fixed and given $V_0$ case} \label{subsubsec:fixed_V0} The training procedure to calibrate $\theta$ is first described for a fixed and given initial capital investment $V_0$, as originally considered in \cite{buehler2019deep}. A slight modification to the algorithm will subsequently be presented in \cref{subsubsec_nontrans_inv} to tackle the non-translation invariant risk measure case studied in this paper. % Let $J : \mathbb{R}^{q} \times \mathbb{R} \rightarrow \mathbb{R}$ be the cost function for the short position hedge: \begin{align} J(\theta, V_0) \equiv \rho \left(\Phi(S^{(0,b)}_{N}) -V^{\delta^{(\mathcal{S}, \theta)}}_{N}\right), \quad \theta \in \mathbb{R}^{q}, V_0 \in \mathbb{R}. \label{eq:ref_cost_func} \end{align} The parameter set $\theta$ is sequentially refined to produce a sequence of estimates $\{\theta_j\}_{j\geq1}$ minimizing the cost function $J$ over time. This iterative procedure is as follows. First, the parameters of the neural network are initialized with the Glorot uniform initialization of \cite{glorot2010understanding}, which gives the initial value $\theta_0$ of the sequence. % Then, to start refining the parameters, a set of $M=400,\!000$ paths containing traded asset values and other exogenous variables associated with the asset dynamics is generated by Monte Carlo simulation. The set of such paths is referred to as a \textit{training set}. On each iteration of SGD, i.e. on each update of $\theta_j$ to $\theta_{j+1}$, a minibatch consisting of a subset of size $N_{\text{batch}}=1,\!000$ of paths from the training set is used to estimate the cost function in \eqref{eq:ref_cost_func}. More precisely, for $\theta = \theta_j$, $F_\theta^{(\mathcal{S})}$ is used to compute the asset positions at each rebalancing date and for each path within the minibatch.
Let $\mathbb{B}_j \equiv \{\pi_{i,j}\}_{i=1}^{N_{\text{batch}}}$ be the resulting set of hedging errors from this minibatch, where $\pi_{i,j}$ is the $i$th hedging error when $\theta = \theta_j$. Then, for $\widehat{\rho} :\mathbb{R}^{N_{\text{batch}}} \rightarrow \mathbb{R}$ the empirical estimator of $\rho$ evaluated with $\mathbb{B}_j$, the update rule from $\theta_j$ to $\theta_{j+1}$ is $$\theta_{j+1} = \theta_{j} - \eta_j \nabla_\theta \widehat{\rho}(\mathbb{B}_j),$$ where $\{\eta_j\}_{j \geq 1}$ are small positive real values and $\nabla_\theta$ denotes the gradient operator with respect to $\theta$. For instance, under the semi-$\mathbb{L}^p$ class of risk measures, which is extensively studied in the numerical section, the empirical estimator has the representation \begin{align} \widehat{\rho}\left(\mathbb{B}_j\right) \equiv \left(\frac{1}{N_{\text{batch}}} \sum_{i=1}^{N_{\text{batch}}} \pi_{i,j}^p \mathds{1}_{ \{\pi_{i,j}>0\} }\right)^{1/p}. \nonumber \end{align} Lastly, the computation of the gradient of the empirical cost function with respect to $\theta$ can be performed through automatic differentiation with modern deep learning libraries such as Tensorflow \citep{abadi2016tensorflow}. Also, the Adam optimizer \citep{kingma2014adam} can be used to dynamically determine the $\eta_j$ values. The following section presents the modification to the training algorithm proposed in this paper to compute equal risk prices under non-translation invariant risk measures. \subsubsection{Non-translation invariant risk measures case} \label{subsubsec_nontrans_inv} The main objective of this paper is to study the valuation of financial derivatives with the ERP framework under non-translation invariant risk measures. This requires solving the root-finding problem for the initial portfolio value $V_0$ that equates $\tilde{\epsilon}^{(\mathcal{L})}(-V_0)$ and $\tilde{\epsilon}^{(\mathcal{S})}(V_0)$; this study considers a bisection scheme for that purpose. However, one important drawback of the bisection algorithm in the context of this paper is the requirement to obtain multiple evaluations of $\tilde{\epsilon}^{(\mathcal{L})}(-V_0)$ and $\tilde{\epsilon}^{(\mathcal{S})}(V_0)$ for different values of $V_0$, which can be very costly from a computational standpoint. One naive approach to implement the bisection algorithm is to proceed as follows: \begin{itemize} \item [1)] For a given value of $V_0$, train the long and short neural networks $F_{\theta}^{(\mathcal{S})}$ and $F_{\theta}^{(\mathcal{L})}$ on the training set. \item [2)] Evaluate the optimal residual hedging risk $\tilde{\epsilon}^{(\mathcal{S})}(V_0)$ and $\tilde{\epsilon}^{(\mathcal{L})}(-V_0)$ with $F_{\theta}^{(\mathcal{S})}$ and $F_{\theta}^{(\mathcal{L})}$ on a \textit{test set} of $100,\!000$ additional independent simulated paths. \item [3)] If $\Delta(V_0) \equiv \tilde{\epsilon}^{(\mathcal{S})}(V_0) - \tilde{\epsilon}^{(\mathcal{L})}(-V_0) \approx 0$ according to some closeness criterion, then $C_0^{\star} = V_0$ is the equal risk price. Otherwise, update $V_0$ with the bisection algorithm and go back to step $1)$. \end{itemize} The important drawback of this naive approach lies in the necessity to retrain $F_{\theta}^{(\mathcal{S})}$ and $F_{\theta}^{(\mathcal{L})}$ for each iteration of the bisection algorithm in step $1)$.
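For concreteness, a minimal Python sketch of this naive scheme is provided below; \texttt{train\_network} and \texttt{residual\_risk} are hypothetical stand-ins for the SGD training of the neural networks and their out-of-sample evaluation on the test set, and the update direction relies on $\tilde{\epsilon}^{(\mathcal{S})}(V_0)$ decreasing and $\tilde{\epsilon}^{(\mathcal{L})}(-V_0)$ increasing with $V_0$.
\begin{verbatim}
# Minimal sketch of the naive bisection scheme; train_network and
# residual_risk are hypothetical stand-ins for the SGD training of the
# neural networks and their evaluation on the test set.
def naive_bisection(V_lo, V_hi, train_network, residual_risk,
                    tol=1e-4, max_iter=50):
    for _ in range(max_iter):
        V0 = 0.5 * (V_lo + V_hi)
        # Step 1): retrain both networks for this specific V0 (costly step).
        theta_S = train_network(side="short", V0=V0)
        theta_L = train_network(side="long", V0=-V0)
        # Step 2): out-of-sample optimal residual risks on the test set.
        gap = residual_risk(theta_S, V0) - residual_risk(theta_L, -V0)
        # Step 3): stop if risks are (approximately) equalized.
        if abs(gap) < tol:
            return V0                # equal risk price C_0*
        if gap > 0:                  # short side riskier: raise V0 ...
            V_lo = V0
        else:                        # ... otherwise lower it.
            V_hi = V0
    return 0.5 * (V_lo + V_hi)
\end{verbatim}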
To circumvent the latter pitfall, this study proposes to slightly modify the training algorithm such that the neural networks learn the optimal mappings not only for \textit{a unique fixed} initial capital investment, but rather for an \textit{interval} of values for $V_0$. This provides the important benefit of only having to train $F_{\theta}^{(\mathcal{S})}$ and $F_{\theta}^{(\mathcal{L})}$ once, thereby removing the previously described computational burden. The slight modification made to the training algorithm of \cref{subsubsec:fixed_V0} is now presented. At the beginning of each SGD step, on top of sampling a minibatch of paths of risky assets, the value of $V_0$ is also randomly sampled within the initial interval of values used for the bisection algorithm. For instance, in the numerical experiments conducted in \cref{section:numerical_results}, the initial interval considered for the bisection algorithm is $[0.75C_0^{\mathbb{Q}}, 1.50C_0^{\mathbb{Q}}]$ where $C_0^{\mathbb{Q}}$ is the risk-neutral price of $\Phi$ under a chosen conventional equivalent martingale measure $\mathbb{Q}$.\footnote{ % If the equal risk price is outside the initial search interval $[0.75C_0^{\mathbb{Q}}, 1.50C_0^{\mathbb{Q}}]$, the bisection algorithm must be applied once again with a new initial search interval, and the neural networks $F_{\theta}^{(\mathcal{S})}$ and $F_{\theta}^{(\mathcal{L})}$ must be retrained on this new interval. } This approach is simple to implement as it naturally leverages the fact that portfolio values are already used within the input vectors of the neural networks. However, it should be noted that learning the optimal hedge for various initial capital investments is a more complex, and thus more challenging, task for neural networks than learning the optimal trading policy for a fixed $V_0$. Nevertheless, Monte Carlo experiments provided in \cref{appendix:proof_convergence_results} show that incorporating this slight modification to the training algorithm does not materially impact the performance of the optimized neural networks. Pseudo-codes of the training and bisection procedures are presented respectively in \cref{algo:pseudo_code_training} and \cref{algo:pseudo_code_bisection} of \cref{appendix:pseudo_code}. An implementation in Python and Tensorflow to replicate the numerical experiments presented in \cref{section:numerical_results} can also be found online at \href{https://github.com/alexandrecarbonneau}{github.com/alexandrecarbonneau}. \begin{remark} In the numerical experiments of \cref{section:numerical_results}, equal risk prices generated under the class of semi-$\mathbb{L}^{p}$ risk measures are benchmarked against the ones obtained with a class of convex risk measures, namely the CVaR.
The numerical scheme used to obtain equal risk prices under the CVaR$_{\alpha}$ risk measure follows the methodology of \cite{carbonneau2021equal} by evaluating $C_0^{\star}$ with \eqref{eq:ref_ERP_convex}, where $\tilde{\epsilon}^{(\mathcal{L})}(0)$ and $\tilde{\epsilon}^{(\mathcal{S})}(0)$ are computed with the steps of \cref{subsubsec:fixed_V0} with $V_0 = 0$ and with the empirical estimator of $\rho$ given by $$\widehat{\rho}(\mathbb{B}_j) = \widehat{\text{VaR}}_{\alpha}(\mathbb{B}_{j}) + \frac{1}{(1-\alpha)N_{\text{batch}}}\sum_{i=1}^{N_{\text{batch}}}\max(\pi_{i,j}-\widehat{\text{VaR}}_{\alpha}(\mathbb{B}_{j}),0),$$ where $\widehat{\text{VaR}}_{\alpha}(\mathbb{B}_{j})$ is the usual empirical estimator of the Value-at-Risk statistic at level $\alpha$ computed with the sample $\mathbb{B}_{j}$. \end{remark} \begin{remark} For all numerical experiments under the semi-$\mathbb{L}^{p}$ risk measure conducted in this paper, a preprocessing of the feature vectors is applied, using $\{T-t_n, \log(S_n^{(b)}/K), V_n/\tilde{V}, \mathcal{I}_n\}$ instead of $\{T - t_n, S_n^{(b)},V_n,\mathcal{I}_n\}$, where $\tilde{V}$ is defined as the midpoint value of the initial search interval of the bisection algorithm $[V_A, V_B]$, i.e. $\tilde{V} \equiv 0.5(V_A + V_B)$. Note that \cite{carbonneau2021equal} and \cite{carbonneau2021deep} consider a similar preprocessing for risky asset prices, while \cite{carbonneau2021deepIME} considers a similar preprocessing for portfolio values. Furthermore, under the CVaR$_{\alpha}$ objective function, the same preprocessing for risky asset prices is used, but portfolio values are not preprocessed as the bisection algorithm is not required in this case, i.e. $V_n$ rather than $V_n/\tilde{V}$ is used in the feature vectors. \end{remark} Lastly, it is worth highlighting an additional computational advantage of the class of semi-$\mathbb{L}^{p}$ objective functions described in this paper over the CVaR$_{\alpha}$ measures considered for instance in \cite{carbonneau2021equal} and \cite{carbonneau2021deep} when relying on the neural network-based hedging scheme. Indeed, under the CVaR$_{\alpha}$ objective function, the use of minibatch stochastic gradient descent procedures to train neural networks restrains the use of extremely large confidence levels for the CVaR$_{\alpha}$ (for instance, values larger than $0.99$). The latter stems from the following observations. From a statistical standpoint, the estimation variance of CVaR$_{\alpha}$ increases with $\alpha$. Furthermore, the empirical estimator of CVaR$_{\alpha}$ is biased in finite sample size, whereas the empirical estimator of $\mathbb{E}\left[ X^p \mathds{1}_{\{X>0\}}\right]$ underlying the semi-$\mathbb{L}^{p}$ risk measure is unbiased for any sample size. However, while larger minibatches would provide a more accurate estimate of the gradient, i.e. reduce the variance and the bias of the CVaR estimator, this is not necessarily a favorable avenue for training neural networks. Indeed, as noted in \cite{goodfellow2016deep}, the amount of memory required by hardware setups can be a limiting factor to increasing the minibatch size. Furthermore, most SGD algorithms converge faster in terms of total computation when allowed to approximate gradients faster (i.e. with smaller samples and more SGD steps). The interested reader is referred to Chapter $8.1.3$ of \cite{goodfellow2016deep} for additional information about the implications of minibatch size for SGD procedures.
This computational pitfall of pairing stochastic gradient descent with extreme values of $\alpha$ under the CVaR$_{\alpha}$ measure is not present under the semi-$\mathbb{L}^{p}$ measure, which further motivates its use in the context of equal risk pricing and optimal hedging. \section{Numerical experiments} \label{section:numerical_results} This section presents several numerical experiments conducted to investigate prices produced by the ERP methodology under different setups. The common theme of all experiments is to examine option prices generated by the ERP framework under the class of semi-$\mathbb{L}^{p}$ risk measures. % The analysis starts in \cref{subsec:sens_analysis} with a sensitivity analysis of equal risk prices with respect to the choice of objective function. This is carried out by comparing $C_0^{\star}$ generated with the CVaR$_{\alpha}$ and semi-$\mathbb{L}^{p}$ across different values of $\alpha$ and $p$ controlling the risk aversion of the hedger. The hedging performance of the neural network hedging policies obtained under these objective functions is also assessed. Moreover, a sensitivity analysis with respect to the choice of underlying asset price dynamics is carried out in \cref{subsec:diff_dyn} so as to test the impact of the inclusion of jump or volatility risk. Lastly, \cref{subsec:option_hedges} presents the benchmarking of equal risk prices for long-maturity options obtained under the semi-$\mathbb{L}^{p}$ risk measures with trades involving exclusively the underlying stock against those generated with option hedges under the CVaR$_{\alpha}$ objective function. \subsection{Experiments setup} \label{subsec:market_setup} Unless specified otherwise, the option to price and hedge is a European put with payoff $\Phi(S_N^{(0,b)}) \equiv \max(K - S_N^{(0,b)}, 0)$, maturity $T=60/260$ and strike price $K$. Daily hedges with the underlying stock are used (i.e. $N=60$). The use of option hedges and of different maturities for $\Phi$ is considered exclusively in \cref{subsec:option_hedges}. Furthermore, the stock has an initial price of $S_0^{(0,b)} = 100$ and the annualized continuous risk-free rate is set at $r = 0.02$. Different moneyness levels are considered with $K=90,100$ and $110$ for respectively out-of-the-money (OTM), at-the-money (ATM), and in-the-money (ITM) puts. Moreover, as described in \cref{sec:methodology}, two distinct feedforward neural networks are considered for the functional representation of the long and short hedging policies. The architecture of each neural network is a FFNN with two hidden layers ($L=2$) of $56$ neurons each ($d_1 = d_2 = 56$). The activation function considered is the well-known rectified linear activation function (ReLU) with $g(x) \equiv \max(x,0)$. For the training procedure, a training set of $400,\!000$ paths is simulated with the $\mathbb{P}$-dynamics of the underlying stock. A total of $100$ epochs\footnote{ % An epoch is defined as a complete iteration of the training set with stochastic gradient descent. For example, for a training set of $400,\!000$ paths and a minibatch size of $1,\!000$, one epoch consists of $400$ updates of the set of trainable parameters $\theta$. % } is used with a minibatch size of $1,\!000$ sampled exclusively from the training set. The Adam optimizer with a learning rate hyperparameter of $0.0005$ is used with Tensorflow for the implementation of the stochastic gradient descent procedure.
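As an illustration of this setup, a minimal Tensorflow sketch of one such hedging policy network, matching the architecture and optimizer hyperparameters above, is given below; the input dimension (here four, e.g. time to maturity, stock price, portfolio value and one extra state variable) and the single-output assumption (one risky asset) are simplifying assumptions.
\begin{verbatim}
import tensorflow as tf

# Minimal sketch of the policy network: L = 2 hidden layers of 56 ReLU
# neurons and a linear output layer; the input/output dimensions below
# are simplifying assumptions (one risky asset, four state variables).
policy = tf.keras.Sequential([
    tf.keras.layers.Dense(56, activation="relu", input_shape=(4,)),
    tf.keras.layers.Dense(56, activation="relu"),
    tf.keras.layers.Dense(1),          # position in the risky asset
])
optimizer = tf.keras.optimizers.Adam(learning_rate=0.0005)

def semi_Lp_loss(hedging_errors, p=2.0):
    # Empirical semi-L^p estimator: (mean of errors^p over losses)^(1/p);
    # in training, gradients flow through this loss via tf.GradientTape.
    losses = tf.nn.relu(hedging_errors)
    return tf.reduce_mean(losses ** p) ** (1.0 / p)
\end{verbatim}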
Also, all numerical results presented in subsequent sections are obtained in an out-of-sample fashion by using exclusively a test set of $100,\!000$ additional simulated paths. \subsection{Sensitivity analysis to risk measures} \label{subsec:sens_analysis} This section studies equal risk price values obtained under the semi-$\mathbb{L}^p$ and CVaR$_{\alpha}$ risk measures across different levels of risk aversion, i.e. different values for $p$ and $\alpha$. The main motivation is the following. \cite{carbonneau2021equal} observed that when hedging exclusively with the underlying stock, ERP under the CVaR$_{\alpha}$ measure produces option prices which are systematically inflated in comparison to those obtained under conventional risk-neutral measures, especially for OTM puts. This inflation phenomenon is significantly magnified under fat-tailed dynamics such as a regime-switching (RS) model, to an extent that can cast doubt on the applicability of ERP in practice. Furthermore, while the latter paper observed a positive relation between the risk aversion level $\alpha$ and equal risk prices $C_0^{\star}$, as shown in subsequent sections of the present paper, using smaller values for $\alpha$ leads to trading policies exhibiting poor risk mitigation performance, with speculative behavior magnifying tail risk. % Consequently, the main motivation of the present section is to assess whether the use of the semi-$\mathbb{L}^{p}$ class of risk measures helps alleviate this price inflation phenomenon while simultaneously resulting in optimized trading policies providing effective risk mitigation. Thus, a critical aspect of the sensitivity analysis performed in this section is the benchmarking not only of equal risk prices generated under different objective functions, but also of the effectiveness of the resulting global trading policies. \subsubsection{Regime-switching model} \label{subsubsec:RS_model} Conducting a sensitivity analysis with respect to the objective function within the ERP framework requires selecting suitable dynamics for the underlying stock. Indeed, the model should incorporate salient stylized facts of financial markets, with a specific focus on fat tails, since the objective functions under study place more or less weight on extreme scenarios through their respective risk aversion parameters (i.e. $\alpha$ and $p$ for the CVaR$_{\alpha}$ and semi-$\mathbb{L}^{p}$ measures, respectively). Unless specified otherwise, this study considers a RS model for the risky asset dynamics. This class of models, introduced in finance by \cite{hamilton1989new}, exhibits, among other features, fat tails, the leverage effect (i.e. negative correlation between asset returns and volatility) and heteroscedasticity. The impact of the presence of jump and volatility risk on $C_0^{\star}$ values generated with the semi-$\mathbb{L}^{p}$ objective functions is examined in subsequent sections. Furthermore, unless specified otherwise, model parameters for the RS model (as well as for other dynamics considered subsequently) are estimated with maximum likelihood procedures on the same time series of daily log-returns on the S\&P 500 price index covering the period 1986-12-31 to 2010-04-01 (5863 observations). Parameter estimates are presented in \cref{appendix_MLE_Tables}. The regime-switching model for the underlying stock is now formally defined.
For $n=1,\ldots,N$, let $y_n \equiv \log(S_n^{(0,b)}/S_{n-1}^{(0,b)})$ be the time-$t_n$ log-return and $\{\epsilon_{n}\}_{n=1}^{N}$ be a sequence of independent and identically distributed (iid) standardized Gaussian random variables. The RS model assumes that the dynamics of the underlying stock changes between different regimes representing different economic states of the financial market. These regime changes are abrupt and drastically impact the behavior of financial markets for significant periods of time, i.e. the regimes are persistent \citep{ang2012regime}. For instance, a two-regime RS model as considered in this study usually has a more bullish regime with positive expected returns and relatively small volatility, and a more bearish regime with negative expected returns and relatively large volatility. Prevalent examples of such regime changes are financial crises and important economic reforms. From a mathematical standpoint, the class of RS models characterizes regimes by an unobservable discrete-time Markov chain with a finite number of states, and models the conditional distribution of log-returns given the current regime as a Gaussian distribution with regime-specific parameters. More formally, denote the regimes as $\{h_n\}_{n=0}^{N}$ where $h_n \in \{1, \ldots, H\}$ is the regime in force during the time interval $[t_n, t_{n+1})$. The model specification for the transition probabilities of the Markov chain can be stated as \begin{align} \mathbb{P}(h_{n+1}=j|\mathcal{F}_n, h_n, \ldots, h_0) &= \gamma_{h_n, j}, \quad j = 1, \ldots, H, \label{eq:ref_transtion_matrix} \end{align} where $\Gamma \equiv \{\gamma_{i,j}\}_{i=1,j=1}^{H,H}$ is the transition matrix with $\gamma_{i,j}$ being the time-independent probability of moving from regime $i$ to regime $j$. Furthermore, the dynamics of log-returns has the representation \begin{align} y_{n+1} &= \mu_{h_n} \Delta + \sigma_{h_n} \sqrt{\Delta}\epsilon_{n+1}, \quad n = 0,\ldots, N-1,\nonumber \end{align} where $\{\mu_i, \sigma_i\}_{i=1}^{H}$ are model parameters representing the yearly means and volatilities of each regime. The use of a RS model entails that additional state variables related to the regimes must be added to the feature vectors of the neural networks through the vectors $\mathcal{I}_n$. Indeed, while regimes are unobservable, useful information can be filtered from the observed stock price path. Let $\{\xi_n\}_{n=0}^{N}$ be the \textit{predictive probability process} where $\xi_n \equiv [\xi_{n,1},\ldots,\xi_{n,H}]$ and $\xi_{n,j} \equiv \mathbb{P}(h_n = j|\mathcal{F}_n)$. Under the RS model, $\mathcal{I}_n = \xi_n$ for $n=0,\ldots,N-1$. Following the work of \cite{franccois2014optimal}, the predictive probabilities can be computed recursively for $n=0,\ldots,N-1$ as $$\xi_{n+1,j} = \frac{\sum_{i=1}^{H}\gamma_{i,j} \phi_i(y_{n+1}) \xi_{n,i}}{\sum_{i=1}^{H}\phi_i(y_{n+1})\xi_{n,i}}, \quad j=1,\ldots,H,$$ where $\phi_i$ is the probability density function of the Gaussian distribution with mean $\mu_i \Delta$ and standard deviation $\sigma_i \sqrt{\Delta}$, consistently with the one-period log-return dynamics under regime $i$. For all numerical experiments, the time-$0$ regime $h_0$ is sampled from the stationary distribution of the Markov chain. Lastly, the benchmarking of equal risk prices to option prices obtained under conventional risk-neutral measures is also presented. Risk-neutral dynamics as well as the numerical scheme used to evaluate the risk-neutral price (including for alternative dynamics introduced subsequently) are presented in \cref{appendix:RN_dyn}.
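To make the filtering recursion concrete, a minimal Python sketch simulating one path of a two-regime model and updating the predictive probabilities is given below; the regime parameters and transition matrix are illustrative placeholders rather than the maximum likelihood estimates of \cref{appendix_MLE_Tables}.
\begin{verbatim}
import numpy as np
from scipy.stats import norm

# Illustrative two-regime parameters (placeholders, not the MLE estimates):
mu, sig = np.array([0.08, -0.20]), np.array([0.12, 0.35])  # yearly scale
Gamma = np.array([[0.99, 0.01],
                  [0.04, 0.96]])                           # transition matrix
N, dt = 60, 1.0 / 260.0
rng = np.random.default_rng(1)

# Stationary distribution of the Markov chain (Perron eigenvector of Gamma').
evals, evecs = np.linalg.eig(Gamma.T)
xi = np.real(evecs[:, np.argmax(np.real(evals))])
xi = xi / xi.sum()                   # xi_0 = stationary distribution
h = rng.choice(2, p=xi)              # sample the time-0 regime

for n in range(N):
    y = mu[h] * dt + sig[h] * np.sqrt(dt) * rng.standard_normal()
    h = rng.choice(2, p=Gamma[h])    # regime in force over the next period
    # Filter: xi_{n+1,j} proportional to sum_i Gamma[i,j] * phi_i(y) * xi_i,
    # with phi_i the Gaussian density of a one-period return under regime i.
    phi = norm.pdf(y, loc=mu * dt, scale=sig * np.sqrt(dt))
    xi = Gamma.T @ (phi * xi) / np.sum(phi * xi)
\end{verbatim}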
\subsubsection{Numerical results: sensitivity analysis to the objective function} \label{subsubsec:sensitivity_ERP_riskmeasure} \cref{table:sensitivity_analysis_ERP} presents equal risk prices obtained under the CVaR$_{\alpha}$ with $\alpha = 0.90, 0.95, 0.99$ as well as under the class of semi-$\mathbb{L}^{p}$ risk measures with $p = 2, 4, 6, 8, 10$. All equal risk prices are expressed relative to risk-neutral prices $C_0^{\mathbb{Q}}$. Hedging statistics obtained across the different objective functions are analyzed subsequently in \cref{subsubsec:hedgingstats_riskmeasure}. \begin{table}[ht] \caption {Sensitivity analysis of equal risk prices $C_0^{\star}$ for OTM ($K=90$), ATM ($K=100$) and ITM ($K=110$) put options of maturity $T=60/260$ under the regime-switching model.} \label{table:sensitivity_analysis_ERP} \renewcommand{\arraystretch}{1.15} \begin{adjustwidth}{-1in}{-1in} \centering \begin{tabular}{ccccccccccc} \hline\noalign{\smallskip} & & \multicolumn{3}{c}{$C_0^{\star}$ under $\text{CVaR}_{\alpha}$} & & \multicolumn{5}{c}{$C_0^{\star}$ under semi-$\mathbb{L}^{p}$} \\ \cline{3-5} \cline{7-11} $\text{Moneyness}$ & $C_0^{\mathbb{Q}}$ & $\text{CVaR}_{0.90}$ & $\text{CVaR}_{0.95}$ & $\text{CVaR}_{0.99}$ & & $\mathbb{L}^{2}$ & $\mathbb{L}^{4}$ & $\mathbb{L}^{6}$ & $\mathbb{L}^{8}$ & $\mathbb{L}^{10}$ \\ % \hline\noalign{\medskip} % $\text{OTM}$ & $0.56$ & $91\%$ & $119\%$ & $161\%$ & & $50\%$ & $88\%$ & $111\%$ & $140\%$ & $175\%$ \\ $\text{ATM}$ & $3.27$ & $18\%$ & $24\%$ & $29\%$ & & $10\%$ & $17\%$ & $22\%$ & $28\%$ & $35\%$ \\ $\text{ITM}$ & $10.36$ & $5\%$ & $7\%$ & $9\%$ & & $2\%$ & $5\%$ & $7\%$ & $8\%$ & $9\%$ \\ \noalign{\medskip}\hline \end{tabular}% \end{adjustwidth} Notes: $C_0^{\star}$ results are computed based on $100,\!000$ independent paths generated from the regime-switching model under $\mathbb{P}$ (see \cref{subsubsec:RS_model} for the model definition and \cref{appendix_MLE_Tables} for model parameters). Risk-neutral prices $C_0^{\mathbb{Q}}$ are computed under the $\mathbb{Q}$-dynamics described in \cref{appendix:RN_dyn}. The training of neural networks is performed as described in \cref{subsec:calibration_NNETS} with hyperparameters presented in \cref{subsec:market_setup}. $C_0^{\star}$ are expressed relative to $C_0^{\mathbb{Q}}$ (\% increase). \end{table} Values from \cref{table:sensitivity_analysis_ERP} indicate that equal risk prices generated by the class of semi-$\mathbb{L}^{p}$ risk measures can span a much wider range than the interval of prices obtained under the $\text{CVaR}_{\alpha}$ risk measures with the selected values for the confidence level $\alpha$. The latter observation holds across all moneyness levels for puts. For instance, the relative increase in the equal risk price $C_0^{\star}$ as compared to the risk-neutral price $C_0^{\mathbb{Q}}$ for OTM puts is $91\%, 119\%$ and $161\%$ under CVaR$_{0.90}$, CVaR$_{0.95}$ and CVaR$_{0.99}$, and ranges from $50\%$ to $175\%$ using the semi-$\mathbb{L}^{p}$ with $p$ going from $2$ to $10$. Similar observations can be made for the ATM and ITM moneyness levels. Furthermore, the use of the semi-$\mathbb{L}^{2}$ risk measure entails a significant reduction of $C_0^{\star}$ as compared to the price obtained under the $\text{CVaR}_{0.90}$.
Indeed, the relative increase in the equal risk price $C_0^{\star}$ with $p=2$ as compared to the risk-neutral price $C_0^{\mathbb{Q}}$ for the OTM, ATM and ITM moneyness levels is respectively $50\%$, $10\%$ and $2\%$, which is significantly smaller than the corresponding relative increases of $91\%, 18\%$ and $5\%$ under the $\text{CVaR}_{0.90}$ measure. % Moreover, as expected, equal risk prices $C_0^{\star}$ generated with the class of semi-$\mathbb{L}^{p}$ risk measures show a positive relation with the risk aversion parameter $p$. This observation can be explained by a rationale analogous to that mentioned in \cite{carbonneau2021equal} for the CVaR$_{\alpha}$ risk measure case: since the put option payoff is bounded below by zero, the short position hedging error has a thicker right tail than the corresponding right tail of the long position hedging error. Consequently, an increase in the risk aversion parameter $p$ entails placing more weight on extreme hedging losses, which results in a larger increase of the perceived residual risk exposure for the short position than for the long position. The latter entails that $C_0^{\star}$ must be increased to equalize the residual hedging risk of both parties. In conclusion, all these results clearly demonstrate the benefit of using the class of semi-$\mathbb{L}^{p}$ risk measures from the standpoint of pricing derivatives, not only by spanning wider ranges of prices than those generated by the CVaR with conventional confidence levels, but also by significantly alleviating the inflated option prices phenomenon observed under the CVaR$_{\alpha}$. However, the question of whether the optimized global policies under the semi-$\mathbb{L}^{p}$ risk measures are effective from a risk mitigation standpoint remains. This is examined in the following section. \subsubsection{Hedging performance benchmarking} \label{subsubsec:hedgingstats_riskmeasure} This section benchmarks the hedging performance of the neural network trading policies under the CVaR$_{\alpha}$ and semi-$\mathbb{L}^{p}$ objective functions. For the sake of brevity, hedging metric values used to compare the different policies are only presented for the short position hedge of the ATM put with the usual market setup, i.e. time-to-maturity $T=60/260$ under the regime-switching model with daily stock hedges. \cref{table:RS_hedge_stats} presents hedging statistics of the global hedging policies obtained with the CVaR$_{\alpha}$ and semi-$\mathbb{L}^{p}$ risk measures, with the same objective functions used to generate the $C_0^{\star}$ values in the previous section (i.e. $\alpha = 0.90, 0.95, 0.99$ and $p=2,4,6,8,10$). To compare the trading policies on common grounds, the initial portfolio value is set as the risk-neutral price with $V_0 = 3.27$ for all examples.\footnote{ % Recall that optimal policies under the CVaR$_{\alpha}$ risk measures are independent of $V_0$ due to the translation invariance property. Furthermore, the optimal policies obtained under the semi-$\mathbb{L}^{p}$ risk measures can be used not only with a specific value for $V_0$, but with an interval of initial capital investments that includes the risk-neutral price, thanks to the modified training algorithm proposed in this paper. % } Furthermore, the hedging metrics used for the benchmarking consist of the VaR$_{\alpha}$ and CVaR$_{\alpha}$ statistics over various $\alpha$'s, the mean hedging error, the semi-mean-square error (SMSE, i.e. the semi-$\mathbb{L}^{2}$ metric) and the mean-squared-error (MSE).
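For reference, a minimal Python sketch of the empirical estimators of these hedging statistics, evaluated on a vector of out-of-sample hedging errors, could read as follows; the input below is purely illustrative.
\begin{verbatim}
import numpy as np

def hedging_stats(pi, alphas=(0.90, 0.95, 0.99, 0.999)):
    # Empirical hedging metrics for a vector pi of hedging errors.
    stats = {"Mean": pi.mean(),
             "SMSE": np.mean(np.maximum(pi, 0.0) ** 2),  # semi-L^2 metric
             "MSE": np.mean(pi ** 2)}
    for a in alphas:
        var = np.quantile(pi, a)                         # empirical VaR_a
        stats["VaR_" + str(a)] = var
        # CVaR_a = VaR_a + mean excess above VaR_a, scaled by 1 / (1 - a).
        stats["CVaR_" + str(a)] = var + np.mean(np.maximum(pi - var, 0.0)) / (1.0 - a)
    return stats

# Illustrative usage on placeholder hedging errors:
errors = np.random.default_rng(2).normal(size=100_000)
print(hedging_stats(errors)["CVaR_0.95"])
\end{verbatim}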
Note that all hedging statistics are estimated in an out-of-sample fashion on the test set of $100,\!000$ additional independent simulated paths. \begin{table}[ht] \caption {Hedging statistics for a short position in an ATM put option of maturity $T=60/260$ under the regime-switching model.} \label{table:RS_hedge_stats} \renewcommand{\arraystretch}{1.15} \begin{adjustwidth}{-1in}{-1in} \centering \begin{tabular}{lccccccccc} \hline\noalign{\smallskip} & \multicolumn{3}{c}{$\text{CVaR}_{\alpha}$} & & \multicolumn{5}{c}{\text{semi}-$\mathbb{L}^p$} \\ \cline{2-4} \cline{6-10} Penalty & $\text{CVaR}_{0.90}$ & $\text{CVaR}_{0.95}$ & $\text{CVaR}_{0.99}$ & & $\mathbb{L}^{2}$ & $\mathbb{L}^{4}$ & $\mathbb{L}^{6}$ & $\mathbb{L}^{8}$ & $\mathbb{L}^{10}$\\ \hline\noalign{\medskip} $\underline{Statistics}$ & & & & & & & & & \\ $\text{Mean}$ & $0.11$ & $0.13$ & $0.14$ & & $\textBF{-0.04}$ & $0.03$ & $0.11$ & $0.13$ & $0.15$ \\ $\text{CVaR}_{0.90}$ & $\textBF{2.64}$ & $5.3\%$ & $22.6\%$ & & $5.4\%$ & $5.6\%$ & $7.4\%$ & $11.6\%$ & $16.9\%$ \\ $\text{CVaR}_{0.95}$ & $3.41$ & $\textBF{-8.4\%}$ & $1.6\%$ & & $-1.6\%$ & $-5.1\%$ & $-5.8\%$ & $-3.6\%$ & $0.2\%$\\ $\text{CVaR}_{0.99}$ & $6.86$ & $-31.7\%$ & $\textBF{-44.5\%}$ & & $-31.4\%$ & $-39.1\%$ & $-41.8\%$ & $-43.1\%$ & $-42.2\%$\\ $\text{CVaR}_{0.999}$ & $19.99$ & $-48.5\%$ & $-76.1\%$ & & $-65.6\%$ & $-72.4\%$ & $-74.2\%$ & $-75.8\%$ & $\textBF{-76.4\%}$\\ $\text{VaR}_{0.90}$ & $\textBF{1.75}$ & $34.7\%$ & $59.9\%$ & & $12.8\%$ & $21.3\%$ & $30.2\%$ & $36.7\%$ & $45.1\%$\\ $\text{VaR}_{0.95}$ & $\textBF{2.08}$ & $21.9\%$ & $54.6\%$ & & $21.4\%$ & $25.9\%$ & $29.6\%$ & $37.6\%$ & $44.9\%$\\ $\text{VaR}_{0.99}$ & $3.67$ & $\textBF{-9.6\%}$ & $-2.9\%$ & & $5.1\%$ & $-1.8\%$ & $-4.1\%$ & $-3.9\%$ & $-0.6\%$\\ $\text{VaR}_{0.999}$ & $11.00$ & $-43.3\%$ & $\textBF{-62.5\%}$ & & $-47.6\%$ & $-55.4\%$ & $-57.8\%$ & $-60.3\%$ & $-60.4\%$ \\ $\text{SMSE}$ & $1.83$ & $-7.0\%$ & $6.8\%$ & & $\textBF{-33.5\%}$ & $-30.5\%$ & $-22.2\%$ & $-15.4\%$ & $-5.7\%$ \\ $\text{MSE}$ & $2.93$ & $-1.8\%$ & $12.2\%$ & & $\textBF{-26.4\%}$ & $-24.2\%$ & $-15.6\%$ & $-9.7\%$ & $-0.2\%$\\ \noalign{\medskip}\hline \end{tabular}% \end{adjustwidth} Notes: Hedging statistics are computed based on $100,\!000$ independent paths generated from the regime-switching model under $\mathbb{P}$ (see \cref{subsubsec:RS_model} for the model definition and \cref{appendix_MLE_Tables} for model parameters). The training of neural networks is performed as described in \cref{subsec:calibration_NNETS} with hyperparameters presented in \cref{subsec:market_setup}. All hedging statistics except the mean hedging error are expressed relative to values obtained under the $\text{CVaR}_{0.90}$ penalty (\% increase). \textbf{Bold} values are the lowest across all penalties. \end{table} Hedging metric values show that while the trading policy optimized with the CVaR$_{0.90}$ objective function attains the smallest values for the CVaR$_{0.90}$, VaR$_{0.90}$ and VaR$_{0.95}$ statistics, it exhibits poor mitigation of tail risk as compared to the other policies. For instance, the relative reduction of the CVaR$_{0.99}$ statistic achieved with all penalties other than the CVaR$_{0.90}$ ranges between $31.4\%$ and $44.5\%$ as compared to the CVaR$_{0.90}$ trading policy. Similar observations can be made for the CVaR$_{0.999}$ and VaR$_{0.999}$ statistics capturing extreme scenarios.
The latter results cast doubt on the practical effectiveness of the CVaR$_{0.90}$ hedging policy from a risk mitigation standpoint, and thus also of trading policies optimized with the CVaR$_\alpha$ for lower values of $\alpha$, due to their poor mitigation of risk for quantiles above the CVaR confidence level. This conclusion has important implications in the context of the ERP framework. Indeed, as shown in \cite{carbonneau2021equal}, the equal risk price $C_0^{\star}$ obtained with the CVaR$_{\alpha}$ exhibits a positive relation to $\alpha$ values. Consequently, the inflated equal risk price phenomenon observed under the class of CVaR$_{\alpha}$ measures cannot be effectively alleviated through the reduction of $\alpha$, as the resulting trading policies quickly exhibit poor hedging performance. On the other hand, hedging statistics obtained with the class of semi-$\mathbb{L}^{p}$ risk measures indicate that across all levels of risk aversion $p$ considered, optimized trading policies are effective for mitigating hedging risk. Recall that $p$ controls the weight associated with extreme hedging losses. From the combination of these hedging statistics and the equal risk price values presented in \cref{table:sensitivity_analysis_ERP}, we can conclude that the class of semi-$\mathbb{L}^{p}$ risk measures is a successful choice within the ERP framework, by simultaneously generating lower and more reasonable equal risk prices than those obtained with the CVaR$_{\alpha}$ and by resulting in effective trading policies. \subsection{Sensitivity analysis to dynamics of risky assets} \label{subsec:diff_dyn} This section performs a sensitivity analysis of equal risk prices across different dynamics for the financial market. The motivation is to assess whether the conclusion that the class of semi-$\mathbb{L}^{p}$ risk measures can dampen the inflated equal risk prices phenomenon, as well as span wider price intervals than those obtained under the CVaR$_{\alpha}$ measures, is robust to the presence of different equity risk features. For such a purpose, this paper considers the presence of jump risk with the Merton jump-diffusion model (MJD, \cite{merton1976option}) and of volatility risk with the GJR-GARCH model \citep{glosten1993relation}. The \cite{black1973pricing} and \cite{merton1973theory} (BSM) model is also considered due to its popularity and the fact that, contrary to the other dynamics, the BSM model does not exhibit fat tails. The assessment of the impact of the choice of risk measure controlling the weight associated with extreme scenarios is also of interest under the BSM dynamics, since the optimal hedging strategies, and thus equal risk prices, should be less sensitive to the risk aversion parameter under a dynamics without fat tails. The dynamics of all three models is now formally presented. All model parameters are estimated with the same time series of daily log-returns on the S\&P 500 index covering the period 1986-12-31 to 2010-04-01 (5863 log-returns). Parameter estimates are presented in \cref{appendix_MLE_Tables}. \subsubsection{Black-Scholes model} The Black-Scholes model assumes that log-returns are iid Gaussian random variables of yearly mean $\mu - \sigma^2/2$ and volatility $\sigma$: $$y_{n} = \left(\mu - \frac{\sigma^2}{2}\right) \Delta + \sigma \sqrt{\Delta} \epsilon_n, \quad n = 1, \ldots, N.$$ Stock prices have the Markov property under $\mathbb{P}$ with respect to the market filtration $\mathbb{F}$.
The latter entails that no additional information needs to be added to the state variables of the neural networks, i.e. $\mathcal{I}_n = 0$ for all $n$. \subsubsection{GJR-GARCH model} The GJR-GARCH model relaxes the constant volatility assumption of the BSM model by assuming stochastic volatility incorporating the leverage effect. Log-returns under this model have the representation \begin{align} y_n &= \mu + \sigma_n \epsilon_n, \nonumber % \\ \sigma_{n+1}^{2} &= \omega + \upsilon \sigma_n^2(|\epsilon_n| - \gamma \epsilon_n)^2 + \beta \sigma_n^{2}, \nonumber \end{align} where $\{\sigma_n^{2}\}_{n=1}^{N+1}$ are the daily variances of log-returns and $\{\mu, \omega, \upsilon, \gamma, \beta\}$ are the model parameters, with $\{\omega, \upsilon, \beta\}$ being positive real values and $\{\mu, \gamma\}$ real values. Note that given $\sigma_1^{2}$, the sequence of variances $\sigma^2_2, \ldots, \sigma^2_{N+1}$ can be computed recursively with the observed path of log-returns. In this paper, the initial value $\sigma_1^{2}$ is set as the stationary variance of the process: $\sigma_1^{2} \equiv \mathbb{E}[\sigma_n^{2}] = \frac{\omega}{1 - \upsilon(1+\gamma^{2})-\beta}$. Furthermore, it can be shown that $\{S_n^{(0,b)}, \sigma_{n+1}\}_{n=0}^{N}$ is an $(\mathbb{F}, \mathbb{P})$-Markov bivariate process. Consequently, the periodic volatility is added to the state variables of the neural networks at each time step: $\mathcal{I}_n = \sigma_{n+1}$ for $n=0,\ldots,N-1$. \subsubsection{Merton jump-diffusion model} \label{subsubsec:MJD_dyn} Contrary to the GJR-GARCH model, the MJD dynamics assumes constant volatility, but deviates from the BSM assumptions by incorporating random Gaussian jumps into stock returns. Let $\{N_n\}_{n=0}^{N}$ be realizations of a Poisson process of parameter $\lambda > 0$, where $N_n$ represents the cumulative number of jumps of the stock price from time $0$ to time $t_n$. The \cite{merton1976option} model assumes that jumps, denoted by $\{\zeta_j\}_{j=1}^{\infty}$, are iid Gaussian random variables of mean $\mu_J$ and variance $\sigma_J^{2}$ under the physical measure:\footnote{ % The convention that $\sum_{j=N_{n-1}+1}^{N_n} \zeta_j = 0$ if $N_{n-1} = N_n$ is adopted. % } \begin{align} y_n = \left(\nu - \lambda(e^{\mu_J + \sigma_J^2/2} -1) - \frac{\sigma^{2}}{2}\right)\Delta + \sigma \sqrt{\Delta}\epsilon_n + \sum_{j=N_{n-1}+1}^{N_n} \zeta_j, \nonumber \end{align} where $\{\epsilon_n\}_{n=1}^{N}$, $\{N_n\}_{n=0}^{N}$ and $\{\zeta_j\}_{j=1}^{\infty}$ are independent. Model parameters consist of $\{\nu, \lambda, \sigma, \mu_J, \sigma_J\}$, where $\nu \in \mathbb{R}$ is the drift parameter and $\sigma > 0$ is the constant volatility term. Since stock returns are iid, this dynamics does not necessitate the addition of other state variables to the feature vectors, i.e. $\mathcal{I}_n = 0$ for all $n$. \subsubsection{Numerical results: sensitivity analysis to dynamics} \cref{table:sensitivity_analysis_dyn_ERP} presents the sensitivity analysis of equal risk prices with the same setup as in previous sections, i.e. for put options of maturity $T=60/260$ with daily stock hedges, under the BSM, MJD and GJR-GARCH models. To save space, results are only presented for the OTM moneyness level, as the main conclusions also hold for the ATM and ITM moneyness levels. Furthermore, both the CVaR$_{\alpha}$ and semi-$\mathbb{L}^{p}$ classes of risk measures are considered with $\alpha = 0.90, 0.95, 0.99$ and $p=2,4,6,8,10$.
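For concreteness, a minimal Python sketch simulating one path of daily log-returns under the GJR-GARCH and MJD specifications above is provided below; the parameter values are illustrative placeholders rather than the estimates of \cref{appendix_MLE_Tables}.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)
N, dt = 60, 1.0 / 260.0

# GJR-GARCH log-returns (illustrative daily-scale placeholder parameters).
mu, omega, ups, gam, beta = 2e-4, 2e-6, 0.05, 0.6, 0.90
sig2 = omega / (1.0 - ups * (1.0 + gam ** 2) - beta)   # stationary variance
y_garch = np.empty(N)
for n in range(N):
    eps = rng.standard_normal()
    y_garch[n] = mu + np.sqrt(sig2) * eps
    sig2 = omega + ups * sig2 * (abs(eps) - gam * eps) ** 2 + beta * sig2

# Merton jump-diffusion log-returns (illustrative yearly placeholders);
# a sum of k iid N(muJ, sigJ^2) jumps is drawn as N(k * muJ, k * sigJ^2).
nu, sigma, lam, muJ, sigJ = 0.10, 0.15, 0.25, -0.10, 0.10
drift = (nu - lam * (np.exp(muJ + 0.5 * sigJ ** 2) - 1.0)
         - 0.5 * sigma ** 2) * dt
k = rng.poisson(lam * dt, size=N)                      # jumps per period
jumps = muJ * k + sigJ * np.sqrt(k) * rng.standard_normal(N)
y_mjd = drift + sigma * np.sqrt(dt) * rng.standard_normal(N) + jumps
\end{verbatim}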
\begin{table}[ht] \caption {Sensitivity analysis of equal risk prices for OTM put options of maturity $T=60/260$ under the BSM, MJD and GJR-GARCH models.} \label{table:sensitivity_analysis_dyn_ERP} \renewcommand{\arraystretch}{1.15} \begin{adjustwidth}{-1in}{-1in} \centering \begin{tabular}{ccccccccccc} \hline\noalign{\smallskip} & & \multicolumn{3}{c}{$C_0^{\star}$ under $\text{CVaR}_{\alpha}$} & & \multicolumn{5}{c}{$C_0^{\star}$ under semi-$\mathbb{L}^{p}$} \\ \cline{3-5} \cline{7-11} $\text{Dynamics}$ & $C_0^{\mathbb{Q}}$ & $\text{CVaR}_{0.90}$ & $\text{CVaR}_{0.95}$ & $\text{CVaR}_{0.99}$ & & $\mathbb{L}^{2}$ & $\mathbb{L}^{4}$ & $\mathbb{L}^{6}$ & $\mathbb{L}^{8}$ & $\mathbb{L}^{10}$ \\ % \hline\noalign{\medskip} % $\text{BSM}$ & $0.53$ & $5\%$ & $10\%$ & $17\%$ & & $3\%$ & $10\%$ & $22\%$ & $31\%$ & $43\%$ \\ $\text{MJD}$ & $0.46$ & $23\%$ & $34\%$ & $129\%$ & & $15\%$ & $41\%$ & $71\%$ & $102\%$ & $125\%$ \\ $\text{GJR-GARCH}$ & $0.57$ & $52\%$ & $71\%$ & $139\%$ & & $29\%$ & $96\%$ & $156\%$ & $219\%$ & $265\%$ \\ \noalign{\medskip}\hline \end{tabular}% \end{adjustwidth} Notes: Equal risk price $C_0^{\star}$ results are computed based on $100,\!000$ independent paths generated from the BSM, MJD and GJR-GARCH models under $\mathbb{P}$ (see \cref{subsec:diff_dyn} for the model definitions under $\mathbb{P}$ and \cref{appendix_MLE_Tables} for model parameters). Risk-neutral prices $C_0^{\mathbb{Q}}$ are computed under the $\mathbb{Q}$-dynamics described in \cref{appendix:RN_dyn}. The training of feedforward neural networks is performed as described in \cref{subsec:calibration_NNETS} with hyperparameters presented in \cref{subsec:market_setup}. $C_0^{\star}$ are expressed relative to $C_0^{\mathbb{Q}}$ (\% increase). \end{table} These results clearly demonstrate that the conclusion that equal risk prices generated by the class of semi-$\mathbb{L}^{p}$ risk measures can alleviate the price inflation phenomenon observed under the CVaR$_{\alpha}$ measures is robust to different dynamics. Indeed, by using the semi-$\mathbb{L}^{2}$ risk measure, OTM equal risk prices $C_0^{\star}$ exhibit a relative increase over risk-neutral prices $C_0^{\mathbb{Q}}$ of respectively $3\%, 15\%$ and $29\%$ under the BSM, MJD and GJR-GARCH models, as compared to $5\%$, $23\%$ and $52\%$ under the CVaR$_{0.90}$ objective function. Furthermore, the values presented in \cref{table:sensitivity_analysis_dyn_ERP} demonstrate that the observation made in the previous section under the RS model, namely that equal risk prices generated by the class of semi-$\mathbb{L}^{p}$ risk measures can span a large interval of prices encompassing the values obtained with the $\text{CVaR}_{\alpha}$ measures, is robust to different dynamics of the financial markets. Lastly, it is interesting to observe that the length of the price intervals generated by both classes of risk measures varies significantly with the dynamics of the financial market. Indeed, under the BSM model, the relative increase of $C_0^{\star}$ as compared to $C_0^{\mathbb{Q}}$ ranges between $5\%$ and $17\%$ under the CVaR$_{\alpha}$ and between $3\%$ and $43\%$ under the semi-$\mathbb{L}^{p}$. On the other hand, with the GJR-GARCH dynamics, the relative increase in $C_0^{\star}$ under the CVaR$_{\alpha}$ ranges between $52\%$ and $139\%$, while under the semi-$\mathbb{L}^{p}$, it ranges between $29\%$ and $265\%$. Similar observations can be made under the MJD dynamics.
This can be explained by the fact that, contrary to the other models, the BSM dynamics does not exhibit fat tails, as the market incompleteness solely stems from discrete-time trading. Consequently, the trading policies are much less sensitive to the choice of risk aversion parameter $p$ or $\alpha$ under the BSM model, which results in equal risk prices that are likewise less sensitive to the risk aversion parameters. % From these results, we can conclude that the choice of both the risky asset dynamics and of the risk measure among the classes of CVaR$_{\alpha}$ and semi-$\mathbb{L}^{p}$ measures has a material impact on equal risk prices, and this impact becomes more important as the dynamics exhibits fatter tails for risky asset returns. \subsection{Long-term maturity ERP with option hedges} \label{subsec:option_hedges} This section examines the use of semi-$\mathbb{L}^{p}$ risk measures within the ERP framework for pricing long-term options with trades involving exclusively the underlying stock, as compared to equal risk prices generated under the CVaR$_{\alpha}$ with trades involving shorter-term options. The motivation for this experiment is the following. The main finding of \cite{carbonneau2021deep} is that under the CVaR$_{\alpha}$ measure, hedging long-term puts with shorter-term options in the presence of jump or volatility risk significantly reduces equal risk prices as compared to trading exclusively the underlying stock. However, in the face of highly illiquid options, the expected trading cost of setting up a dynamic trading strategy based solely on option hedges can make such a strategy impractical in some cases. In such a context, the hedger could be restricted to a trading strategy relying exclusively on the underlying stock, which as shown in previous sections can inflate equal risk prices under the CVaR$_{\alpha}$ measure. The objective of this last section is thus to assess whether the use of the semi-$\mathbb{L}^{p}$ risk measure, when trading exclusively the underlying stock, can achieve a reduction in equal risk prices similar to that obtained when trading options under the CVaR$_{\alpha}$ objective function. The setup for this experiment is the same as the one considered in \cite{carbonneau2021deep}, and numerical values for equal risk prices generated with trades involving exclusively options under the CVaR$_{\alpha}$ are taken directly from the latter work. This setup is now recalled. The derivative to price and hedge is a 1-year put (with $252$ days per year) of moneyness level OTM, ATM or ITM with strike prices of $90, 100$ and $110$, respectively. The annualized continuous risk-free rate is $r=0.03$. Also, as noted in \cite{carbonneau2021deep}, option trading strategies optimized with a confidence level $\alpha$ smaller than $0.95$ when using the CVaR as the objective function often lead to hedging strategies exhibiting poor tail risk mitigation. Thus, the convex risk measure considered as the benchmark in the present study is the CVaR$_{0.95}$ measure, with trades involving either exclusively the underlying stock on a daily or monthly basis (i.e. $N=252$ or $N=12$, respectively), or solely ATM 1-month and 3-months calls and puts (i.e. $N=12$ or $N=4$, respectively). Following the work of \cite{carbonneau2021deep}, the pricing of options used as hedging instruments is done by modeling the daily variations of the ATM logarithmic implied volatility under $\mathbb{P}$ as an autoregressive (AR) model of order $1$, named log-AR(1) hereafter.
Furthermore, the model assumes for convenience that the ATM 1-month and 3-months implied volatilities are the same.\footnote{ % Note that traded options with different maturities are never used simultaneously in the same hedging simulation. % } It is worth highlighting that the implied volatility model is used exclusively for pricing the options used as hedging instruments, not the 1-year put $\Phi$ to be priced. Also, note that while the rebalancing frequency is either daily, monthly or quarterly, implied volatility (IV) variations are always generated on a daily basis. The log-AR(1) model is now formally defined. Denote by $\{IV_{n}\}_{n=0}^{252}$ the daily implied volatilities for the ATM calls and puts of $1$-month and $3$-months maturities which are used as hedging instruments. Also, let $\{Z_n\}_{n=1}^{252}$ be an additional sequence of iid standardized Gaussian random variables representing the random innovations of the log-IV dynamics. To capture the well-known leverage effect between asset returns and implied volatility variations (see for instance \cite{cont2002dynamics}), a correlation factor $\varrho \equiv \text{corr}(\epsilon_n, Z_n)$ set at $-0.6$ is considered, where $\{\epsilon_n\}_{n=1}^{252}$ are the daily random innovations associated with stock returns. The log-AR(1) model has the representation \begin{align} \log IV_{n+1} &= \log IV_{n} + \kappa(\vartheta - \log IV_{n}) + \sigma_{IV} Z_{n+1}, \quad n = 0,\ldots, 251,\label{eq:ref_IV_model} \end{align} where $\{\kappa, \vartheta, \sigma_{IV}\}$ are the model parameters with $\kappa$ and $\vartheta$ real values and $\sigma_{IV} > 0$. The initial value of the process is set at the long-term parameter with $\log IV_{0} \equiv \vartheta$. Moreover, the pricing of the calls and puts used as hedging instruments is performed with the well-known Black-Scholes formula with the annualized volatility set at the current implied volatility value. More precisely, denote by $C(IV, \Delta T, S, K)$ and $P(IV, \Delta T, S, K)$ the prices of a call and a put option, respectively, if the current implied volatility is $IV$, the time-to-maturity is $\Delta T$, the underlying stock price is $S$ and the strike price is $K$: \begin{align} C(IV, \Delta T, S, K) &\equiv S \mathcal{N}(d_1) - e^{-r\Delta T}K \mathcal{N}(d_2), \label{eq:ref_BSM_optprice_call} % \\ P(IV, \Delta T, S, K)&\equiv e^{-r\Delta T} K \mathcal{N}(-d_2) - S \mathcal{N}(-d_1), \label{eq:ref_BSM_optprice_put} \end{align} where $\mathcal{N}(\cdot)$ denotes the cumulative distribution function of a standardized Gaussian random variable with $$d_1 \equiv \frac{\log(\frac{S}{K}) + (r+\frac{IV^2}{2})\Delta T}{IV\sqrt{\Delta T}}, \quad d_2 \equiv d_1 - IV\sqrt{\Delta T}.$$ Also, note that when option hedges are considered, the current implied volatility is added to the feature vectors of the neural networks. For instance, with $1$-month call and put hedges, the $n$th trade at time $t_n = n/12$ uses as input vector for the neural networks $X_n = [S_{21 \times n}^{(0,b)}, IV_{21 \times n}, T- t_n, \mathcal{I}_{21 \times n}]$ for $n=0,1,\ldots,11$, where $21$ represents the number of trading days in a given month.\footnote{ % Note that with option hedges, the implied volatility of the options used as hedging instruments is added to the feature vectors, not the price of each asset. This has the benefit of requiring one less state variable, as a single implied volatility replaces the two state variables that the prices of the call and the put used for hedging would require.
Furthermore, this is a reasonable choice from a theoretical standpoint, as implied volatilities are simply a nonlinear transformation of option prices given the bijection between the two quantities. % } Moreover, the dynamics of the underlying asset returns considered for this last section is once again the MJD dynamics, but with different parameters than in previous sections, since the ones considered in \cite{carbonneau2021deep} are used for comparability purposes. The MJD and log-AR(1) model parameter values are presented in \cref{table:MJD_with_options} and \cref{table:all_OU_params}. These parameters were chosen in an ad hoc fashion so as to produce reasonable values for the dynamics of the financial market. \begin{table}[ht] \caption {Parameters of the $1$-year Merton jump-diffusion model.} \label{table:MJD_with_options} \begin{adjustwidth}{-1in}{-1in} \centering \begin{tabular}{ccccc} \hline $\nu$ & $\sigma$ & $\lambda$ & $\mu_J$ & $\sigma_J$ \\ \hline\noalign{\medskip} % $0.1111$ & $0.1323$ & $0.25$ & $-0.10$ & $0.10$ % \\ \noalign{\medskip}\hline \end{tabular}% \end{adjustwidth} \centering{Notes: $\nu$, $\sigma$ and $\lambda$ are on an annual basis.} \end{table} % \begin{table} [ht] \caption {Parameters of the log-AR(1) model for the evolution of implied volatilities.} \label{table:all_OU_params} \begin{adjustwidth}{-1in}{-1in} \centering \begin{tabular}{cccc} \hline $\kappa$ & $\vartheta $ & $\sigma_{\text{IV}}$ & $\varrho$ \\ \hline\noalign{\medskip} % $0.15$ & $\log(0.15)$ & $0.06$ & $-0.6$ \\ \noalign{\medskip}\hline \end{tabular}% \end{adjustwidth} \end{table} \subsubsection{Numerical results with option hedges} \cref{table:ERP_with_options} presents equal risk prices $C_0^{\star}$ under the CVaR$_{0.95}$ measure with daily or monthly stock trades, as well as with 1-month or 3-month ATM call and put trades. Note that the latter values are from Table $3$ of \cite{carbonneau2021deep}.\footnote{ % The type of neural networks considered in \cite{carbonneau2021deep} is the long short-term memory (LSTM). The current paper found that FFNN trading policies performed significantly better in the numerical experiments conducted under the semi-$\mathbb{L}^{p}$ risk measure, which motivated their use over LSTMs. The reader is referred to Section $3$ of \cite{carbonneau2021deep} for the formal description of the LSTM architecture. % } Furthermore, $C_0^{\star}$ values under the semi-$\mathbb{L}^{2}$ objective function with daily and monthly stock hedges are also presented.
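To make the simulation setup fully concrete, the following minimal Python sketch (purely illustrative and not the authors' implementation; the function names, the seed and the spot level are ours) generates one path of daily implied volatilities from the log-AR(1) model \eqref{eq:ref_IV_model} with the parameter values of \cref{table:all_OU_params}, and prices the ATM hedging options through \eqref{eq:ref_BSM_optprice_call}-\eqref{eq:ref_BSM_optprice_put}. In the full hedging engine, the same innovations $\epsilon_n$ would also drive the MJD stock returns so that the leverage effect is preserved.

\begin{verbatim}
# Illustrative sketch (not the authors' code): one path of daily log-AR(1)
# implied volatilities and Black-Scholes prices of the ATM hedging options.
import numpy as np
from scipy.stats import norm

kappa, vartheta, sigma_iv, varrho, r = 0.15, np.log(0.15), 0.06, -0.6, 0.03
rng = np.random.default_rng(1234)          # seed chosen for illustration

def simulate_iv_path(n_days=252):
    """Daily IVs; the innovations Z_n are correlated at level varrho with
    the stock-return innovations eps_n (which drive the MJD returns)."""
    eps = rng.standard_normal(n_days)
    z = varrho * eps + np.sqrt(1.0 - varrho**2) * rng.standard_normal(n_days)
    log_iv = np.empty(n_days + 1)
    log_iv[0] = vartheta                   # IV_0 = exp(vartheta) = 0.15
    for n in range(n_days):
        log_iv[n + 1] = (log_iv[n] + kappa * (vartheta - log_iv[n])
                         + sigma_iv * z[n])
    return np.exp(log_iv), eps

def bs_price(iv, dt, s, k, call=True):
    """Black-Scholes price with volatility iv and time-to-maturity dt."""
    d1 = (np.log(s / k) + (r + 0.5 * iv ** 2) * dt) / (iv * np.sqrt(dt))
    d2 = d1 - iv * np.sqrt(dt)
    if call:
        return s * norm.cdf(d1) - np.exp(-r * dt) * k * norm.cdf(d2)
    return np.exp(-r * dt) * k * norm.cdf(-d2) - s * norm.cdf(-d1)

iv, _ = simulate_iv_path()
s = 100.0                                  # illustrative spot at the trade
print(bs_price(iv[21], 21 / 252, s, s),    # ATM 1-month call at day 21
      bs_price(iv[21], 21 / 252, s, s, call=False))
\end{verbatim}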
\begin{table}[ht] \caption {Sensitivity analysis of equal risk prices to jump risk for OTM ($K=90$), ATM ($K=100$) and ITM ($K=110$) put options of maturity $T=1$.} \label{table:ERP_with_options} \renewcommand{\arraystretch}{1.15} \begin{adjustwidth}{-1in}{-1in} \centering \begin{tabular}{cccccccc} \hline\noalign{\smallskip} & \multicolumn{4}{c}{$C_0^{\star}$ under $\text{CVaR}_{0.95}$} & & \multicolumn{2}{c}{$C_0^{\star}$ under semi-$\mathbb{L}^{2}$} \\ \cline{2-5} \cline{7-8} $\text{Moneyness}$ & $\text{Daily stock}$ & $\text{Monthly stock}$ & $\text{1-month opts}$ & $\text{3-month opts}$ & & $\text{Daily stock}$ & $\text{Monthly stock}$ \\ % \hline\noalign{\medskip} % $\text{OTM}$ & $2.58$ & $2.60$ & $2.24$ & $2.08$ & & $2.18$ & $2.23$ \\ $\text{ATM}$ & $6.01$ & $5.77$ & $5.36$ & $5.12$ & & $5.38$ & $5.22$ \\ $\text{ITM}$ & $11.68$ & $11.44$ & $10.86$ & $10.51$ & & $10.42$ & $10.54$ \\ \noalign{\medskip}\hline \end{tabular}% \end{adjustwidth} Notes: These results are computed based on $100,\!000$ independent paths generated from the MJD model under $\mathbb{P}$ (see \cref{subsubsec:MJD_dyn} for model definition and \cref{table:MJD_with_options} for model parameters). Options used as hedging instruments are priced with implied volatility modeled with a log-AR(1) dynamics (see \cref{subsec:option_hedges} for model description and \cref{table:all_OU_params} for parameter values). Values for $C_0^{\star}$ under CVaR$_{0.95}$ are from Table $3$ of \cite{carbonneau2021deep}. Values for $C_0^{\star}$ under semi-$\mathbb{L}^{2}$ are obtained with the training algorithm described in \cref{subsubsec_nontrans_inv}. \end{table} Numerical results indicate that the use of the semi-$\mathbb{L}^{2}$ objective function is successful at significantly reducing equal risk prices when relying on trades involving exclusively the underlying stock. Indeed, the relative reduction in $C_0^{\star}$ obtained by using the semi-$\mathbb{L}^{2}$ risk measure as compared to the CVaR$_{0.95}$ for OTM, ATM and ITM moneyness levels is respectively $15\%, 11\%$ and $11\%$ with daily stock and $14\%, 10\%$ and $8\%$ with monthly stock rebalancing.\footnote{ % For instance, if $C_0^{\star}(\text{CVaR}_{0.95})$ and $C_0^{\star}(\mathbb{L}^{2})$ denote the equal risk prices under the CVaR$_{0.95}$ and semi-$\mathbb{L}^{2}$ objective functions, respectively, the relative reduction is computed as $1 - \frac{C_0^{\star}(\mathbb{L}^{2})}{C_0^{\star}(\text{CVaR}_{0.95})}.$ % } Furthermore, equal risk price values under the semi-$\mathbb{L}^{2}$ risk measure with daily or monthly stock hedges are relatively close to those obtained with 1-month or 3-month option hedges under the CVaR$_{0.95}$. These results have important implications for ERP procedures. Indeed, they demonstrate that in the face of highly illiquid options, the use of the semi-$\mathbb{L}^{p}$ class of risk measures with stock hedges can effectively reduce equal risk prices to levels similar to those obtained with option hedges under the CVaR$_{\alpha}$ measures. This avenue is thus successful in alleviating the price inflation phenomenon arising when ERP procedures are used for the pricing of long-term options. It is worth highlighting that in the presence of jump risk, the use of options as hedging instruments is much more effective for risk mitigation as compared to hedging strategies involving exclusively the underlying stock (see for instance \cite{coleman2007robustly} and \cite{carbonneau2021deepIME}).
Nevertheless, $C_0^{\star}$ values presented in \cref{table:ERP_with_options} indicate that when setting up trading strategies with options is impractical due to high expected trading costs, the use of stock hedges coupled with semi-$\mathbb{L}^{p}$ risk measures can effectively reduce option prices. \section{Conclusion} \label{section:conclusion} This paper studies the class of semi-$\mathbb{L}^{p}$ risk measures in the context of equal risk pricing (ERP) for the valuation of European financial derivatives. The ERP framework prices contingent claims as the initial hedging portfolio value which equates the residual hedging risk of the long and short positions under optimal hedging strategies. Despite lacking the translation invariance property, which complicates the numerical evaluation of equal risk prices, the use of semi-$\mathbb{L}^{p}$ risk measures as the objective functions measuring residual hedging risk is shown to have several desirable properties as compared to the use of the $\text{CVaR}_{\alpha}$, the latter being explored for instance in \cite{carbonneau2021equal} and \cite{carbonneau2021deep} in the context of ERP. The optimal hedging problems underlying the ERP framework are solved with deep reinforcement learning procedures by representing trading policies with neural networks, as proposed in the work of \cite{buehler2019deep}. A modification to the training algorithm for neural networks is presented in the current paper to tackle the additional complexity of using semi-$\mathbb{L}^{p}$ risk measures within the ERP framework. This modification consists in training the neural networks to learn the optimal mappings for an interval of initial capital investments instead of a unique fixed value. The latter is shown not to lead to material deterioration in the hedging accuracy of the neural network trading policies. Several numerical experiments are performed to examine option prices generated by the ERP framework under the class of semi-$\mathbb{L}^{p}$ risk measures. First, a sensitivity analysis of equal risk price values with respect to the choice of objective function is conducted by comparing prices obtained with the CVaR$_{\alpha}$ and semi-$\mathbb{L}^{p}$ objectives across different values of $\alpha$ and $p$ controlling the risk aversion of the hedger. Numerical results demonstrate that equal risk prices under the semi-$\mathbb{L}^{p}$ risk measures span a larger interval of values than that obtained with the CVaR$_{\alpha}$, thereby alleviating the price inflation phenomenon under the CVaR$_{\alpha}$ documented in previous studies. Furthermore, the trading policies parameterized as neural networks are shown to be highly effective for risk mitigation under the semi-$\mathbb{L}^{p}$ objective functions across all values of $p$ considered, with the risk aversion parameter controlling the relative weight associated with extreme scenarios. Moreover, additional numerical experiments show that the use of the semi-$\mathbb{L}^{2}$ objective function for the pricing of long-term puts with hedges relying exclusively on the underlying asset is successful at reducing equal risk prices roughly to the level of prices produced with option hedges under the CVaR$_{\alpha}$ objective function.
The latter conclusion is highly important in the context of ERP, as it demonstrates that when options are not or cannot be used within the hedging strategy, the ERP methodology used in conjunction with the semi-$\mathbb{L}^{p}$ class of risk measures can still produce reasonable option prices. \section{Acknowledgements} Alexandre Carbonneau gratefully acknowledges financial support from the Fonds de recherche du Qu\'ebec - Nature et technologies (FRQNT, grant number 205683) and The Montreal Exchange. Fr{\'e}d{\'e}ric Godin gratefully acknowledges financial support from the Natural Sciences and Engineering Research Council of Canada (NSERC, grant number RGPIN-2017-06837). \bibliographystyle{apalike}
\section{Introduction} Let $p>1$, $\beta>0$ be real numbers. For all open bounded sets $K\subset\Omega\subset\mathbb{R}^n$, we define \begin{equation} \label{problema0} E_{\beta,p}(K,\Omega)=\inf_{\substack{v\in W^{1,p}(\Omega)\\ v=1 \text{ in } K}}\left(\int_\Omega \abs{\nabla v}^p\,dx+\beta\int_{\partial\Omega} \abs{v}^p\,d\mathcal{H}^{n-1}\right). \end{equation} We notice that it is sufficient to minimize among all functions $v\in W^{1,p}(\Omega)$ with $v=1$ in $K$ and $0\le v\le 1$ a.e.; moreover, if $\Omega$ is sufficiently smooth, a minimizer $u$ satisfies \[\begin{cases}u=1 &\text{in }K,\\[5 pt] \Delta_p u=0 &\text{in }\Omega \setminus K,\\[6 pt] \abs{\nabla u}^{p-2}\dfrac{\partial u}{\partial \nu}+\beta\abs{u}^{p-2}u=0 &\text{on }\partial \Omega,\end{cases}\] where $\Delta_p u =\divv\left(\abs{\nabla u}^{p-2} \nabla u\right)$ is the $p$-Laplacian of $u$ and $\nu$ is the outer unit normal to $\partial\Omega$.\medskip This problem is related to the so-called \emph{relative $p$-capacity of $K$ with respect to $\Omega$}, defined as \[ \capac_p(K,\Omega):=\inf_{\substack{v\in W^{1,p}_0(\Omega)\\ v=1 \text{ in } K}}\left(\int_\Omega \abs{\nabla v}^p\,dx\right). \] In the case $p=2$ it represents the electrostatic capacity of an annular condenser consisting of a conducting surface $\partial\Omega$ and a conductor $K$, where the electrostatic potential is prescribed to be 1 inside $K$ and 0 outside $\Omega$. Let $\omega_n$ be the measure of the unit ball in $\mathbb{R}^n$ and let $M>\omega_n$; then it is well known that there exists some $r\ge1$ such that \[ \min_{\substack{\abs{K}=\omega_n\\ \abs{\Omega}\le M}}\capac_p(K,\Omega)=\capac_p(B_1,B_r). \] This is an immediate consequence of the Pólya-Szegő inequality for the Schwarz rearrangement (see for instance \cite{polya, kesavan}). We are interested in studying the same problem for the energy defined in \eqref{problema0}, which corresponds to changing the Dirichlet boundary condition on $\partial\Omega$ into a Robin boundary condition; namely, we consider the following problem \begin{equation}\label{problema} \inf_{\substack{\abs{K}=\omega_n\\ \abs{\Omega}\le M}} E_{\beta,p}(K,\Omega).\end{equation} In this case, the previous symmetrization techniques cannot be employed anymore.\medskip Problem \eqref{problema} has been studied in the linear case $p=2$ in \cite{nahon}, with more general boundary conditions on $\partial\Omega$, namely \[ \frac{\partial u}{\partial \nu}+\frac{1}{2}\Theta'(u)=0, \] where $\Theta$ is a suitable increasing function vanishing at $0$. This problem has been addressed in relation to thermal insulation (see for instance \cite{CK, AC}). Our main result reads as follows. \begin{teor}\label{teorema} Let $\beta>0$ be such that \[ \beta^{\frac{1}{p-1}}>\frac{n-p}{p-1}. \] Then, for every $M>\omega_n$ the solution to problem \eqref{problema} is given by two concentric balls $(B_1,B_r)$, that is \[ \min_{\substack{\abs{K}=\omega_n\\ \abs{\Omega}\le M}} E_{\beta,p}(K,\Omega) = E_{\beta, p}(B_1, B_r), \] in particular, either $r=1$ or $M=\omega_n r^n$. \end{teor} \begin{oss} In the case \[ \beta^{\frac{1}{p-1}}\le \frac{n-p}{p-1}, \] adapting the symmetrization techniques used in \cite{nahon}, it can be proved that a solution to problem \eqref{problema} is always given by the pair $(B_1,B_1)$. \end{oss} We point out that the proof of the theorem relies on techniques involving the $H$-function introduced in \cite{bossel, daners}.
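As a quick numerical illustration of the regimes appearing in \autoref{teorema}, the following short Python sketch (purely illustrative; the helper \texttt{energy} and the chosen parameter grid are ours) evaluates the radial energy $E_{\beta,p}(B_1,B_r)$ through the explicit formula \eqref{eq: Ebetarho} derived in the next section, and locates its minimum for values of $\beta$ in each of the three ranges discussed there.

\begin{verbatim}
# Illustrative sketch: evaluate E_{beta,p}(B_1,B_r) via the explicit
# radial formula of Section 2 and locate its minimum over [1, 8].
import numpy as np
from scipy.special import gamma

def energy(r, beta, p, n):
    omega_n = np.pi ** (n / 2) / gamma(n / 2 + 1)   # volume of unit ball
    b = beta ** (1.0 / (p - 1))
    if p == n:
        phi, phi1 = np.log(r), 0.0
    else:
        c = (p - 1) / (n - p)                       # Phi = -c * r**(-1/c)
        phi, phi1 = -c * r ** (-1.0 / c), -c
    dphi = r ** (-(n - 1) / (p - 1))                # Phi'_{p,n}(r)
    return n * omega_n * beta / (dphi + b * (phi - phi1)) ** (p - 1)

p, n = 2.5, 3.0
beta1 = ((n - p) / (p - 1)) ** (p - 1)              # threshold of the Remark
beta2 = ((n - 1) / (p - 1)) ** (p - 1)              # monotonicity threshold
r = np.linspace(1.0, 8.0, 400)
for beta in (0.5 * beta1, 0.5 * (beta1 + beta2), 2.0 * beta2):
    E = energy(r, beta, p, n)
    print(f"beta={beta:.3f}: min on [1,8] at r={r[np.argmin(E)]:.2f}")
\end{verbatim}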
\section{Proof of the theorem} In order to prove \autoref{teorema}, we start by studying the function \[R\mapsto E_{\beta,p}(B_1,B_R).\] A similar study of this function can also be found in \cite{Rossella}. Let \[ \Phi_{p,n}(\rho)=\begin{cases} \log(\rho) &\text{if } p=n,\\[7 pt] -\dfrac{p-1}{n-p}\dfrac{1}{\rho^\frac{n-p}{p-1}} &\text{if }p\ne n. \end{cases}\] For every $R>1$, consider \begin{equation} \label{eq: ustar} u^*(x)=1-\dfrac{\beta^{\frac{1}{p-1}}\left(\Phi_{p,n}(\abs{x})-\Phi_{p,n}(1)\right)_+}{\Phi'_{p,n}(R)+\beta^\frac{1}{p-1}\left(\Phi_{p,n}(R)-\Phi_{p,n}(1)\right)}, \end{equation} which is the solution to \[\begin{cases}u^*=1 &\text{in }B_1,\\[5 pt] \Delta_p u^*=0 &\text{in }B_R\setminus B_1,\\[6 pt] \abs{\nabla u^*}^{p-2}\dfrac{\partial u^*}{\partial \nu}+\beta\abs{u^*}^{p-2}u^*=0 &\text{on }\partial B_R.\end{cases}\] We have that \begin{equation} \label{eq: Ebetarho} \begin{split}E_{\beta,p}(B_1,B_R)&=\int_{B_R} \abs{\nabla u^*}^p\,dx+\beta\int_{\partial B_R} \abs{u^*}^p\,d\mathcal{H}^{n-1}\\[10 pt]&=\dfrac{n\omega_n\beta}{\left[\Phi'_{p,n}(R)+\beta^\frac{1}{p-1}\left(\Phi_{p,n}(R)-\Phi_{p,n}(1)\right)\right]^{p-1}}.\end{split} \end{equation} Notice that $E_{\beta,p}(B_1,B_R)$ is nonincreasing at $R$ if and only if \[\dfrac{d }{d R} \left(\Phi'_{p,n}(R)+\beta^{\frac{1}{p-1}}\Phi_{p,n}(R)\right)\ge 0,\] that is, if and only if \[R\ge \dfrac{n-1}{p-1}\dfrac{1}{\beta^\frac{1}{p-1}}=:\alpha_{\beta,p}.\] Moreover, \[\begin{split}&E_{\beta,p}(B_1,B_1)=n\omega_n \beta,\\[10 pt] \lim_{R\to\infty}&E_{\beta,p}(B_1,B_R)=\begin{cases} n\omega_n \left(\dfrac{n-p}{p-1}\right)^{p-1} &\text{if }p<n,\\[10 pt] 0 &\text{if }p\ge n. \end{cases}\end{split}\] Therefore, there are three cases: \begin{itemize} \item {if \[\beta^{\frac{1}{p-1}}\ge \dfrac{n-1}{p-1},\] $R\in[1,+\infty)\mapsto E_{\beta,p}(B_1,B_R)$ is decreasing; } \item {if \[\dfrac{n-p}{p-1}<\beta^{\frac{1}{p-1}}<\dfrac{n-1}{p-1},\] $R\in[1,+\infty)\mapsto E_{\beta,p}(B_1,B_R)$ increases on $[1,\alpha_{\beta,p}]$ and decreases on $[\alpha_{\beta,p},+\infty)$, and there exists a unique $R_{\beta,p}>\alpha_{\beta,p}$ such that $E_{\beta,p}(B_1,B_{R_{\beta,p}})=E_{\beta,p}(B_1,B_1)$; } \item {if \[\beta^{\frac{1}{p-1}}\le \dfrac{n-p}{p-1},\] $R\in[1,+\infty)\mapsto E_{\beta,p}(B_1,B_R)$ reaches its minimum at $R=1$. } \end{itemize} See for instance \autoref{fig}, where \[ \beta_1=\left(\frac{n-p}{p-1}\right)^{p-1}, \qquad \beta_2=\left(\frac{n-1}{p-1}\right)^{p-1}, \qquad p=2.5, \qquad n=3. \] \begin{figure} \begin{tikzpicture} \node[anchor=south west,inner sep=0] (image) at (0,0) {\includegraphics[width=.50\linewidth]{graph.png}}; \begin{scope}[x={(image.south east)},y={(image.north west)}] \node[anchor=west] at (0.05,0.95) {$E_{\beta,p}(B_1,B_r)$}; \node[anchor=north] at (0.95,0.05) {$r$}; \node[anchor=east] at (0,0.55) {\textcolor{myblue}{$\beta\ge\beta_2$}}; \node[anchor=east] at (0,0.4) {\textcolor{myred}{$\beta_1<\beta<\beta_2$}}; \node[anchor=east] at (0,0.25) {\textcolor{myyellow}{$\beta\le\beta_1$}}; \end{scope} \end{tikzpicture} \caption{$E_{\beta,p}(B_1,B_r)$ depending on the value of $\beta$} \label{fig} \end{figure} In the following, we will need \begin{lemma}\label{lemma5} Let $R>1$, $\beta>0$, and let $u^*$ be the minimizer of $E_{\beta,p}(B_1,B_R)$. Then \[\dfrac{\abs{\nabla u^*}}{u^*}\le \beta^{\frac{1}{p-1}}\] in $B_R\setminus B_1$ if and only if \[E_{\beta,p}(B_1,B_\rho)\ge E_{\beta,p}(B_1,B_R)\] for every $\rho\in[1,R]$.
\end{lemma} \begin{proof} Recalling the expression of $u^*$ in \eqref{eq: ustar}, by straightforward computations we have that \[\dfrac{\abs{\nabla u^*}}{u^*}\le \beta^{\frac{1}{p-1}}\] in $B_R\setminus B_1$ if and only if \begin{equation} \label{eq: monotonicity} \Phi'_{p,n}(R)+\beta^\frac{1}{p-1}\left(\Phi_{p,n}(R)-\Phi_{p,n}(1)\right)\ge \Phi'_{p,n}(\rho)+\beta^\frac{1}{p-1}\left(\Phi_{p,n}(\rho)-\Phi_{p,n}(1)\right) \end{equation} for every $\rho\in[1,R]$. Using the expression of $E_{\beta,p}(B_1,B_\rho)$ in \eqref{eq: Ebetarho}, inequality \eqref{eq: monotonicity} is equivalent to \[E_{\beta,p}(B_1,B_\rho)\ge E_{\beta,p}(B_1,B_R)\] for every $\rho\in[1,R]$. \end{proof} \begin{defi} Let $\Omega\subseteq\mathbb{R}^n$ be an open set, and let $U\subseteq\Omega$ be another set. We define the \emph{internal boundary} of $U$ as \[ \partial_i U=\partial U\cap \Omega, \] and the \emph{external boundary} of $U$ as \[ \partial_e U= \partial U\cap\partial\Omega. \] \end{defi} Let $K\subseteq\Omega\subseteq\mathbb{R}^n$ be open bounded sets, and let $u$ be a minimizer of $E_{\beta,p}(K,\Omega)$. In the following, we set \[ U_t=\Set{x\in\Omega | u(x)>t}. \] \begin{defi}[$H$-function] \label{defi: H} Let $\varphi\in W^{1,p}(\Omega)$. We define \[ H(t,\varphi)=\int_{\partial_i U_t}\abs{\varphi}^{p-1}\,d\mathcal{H}^{n-1} - (p-1)\int_{U_t}\abs{\varphi}^p\,d\mathcal{L}^n+\beta\mathcal{H}^{n-1}(\partial_e U_t). \] \end{defi} Notice that this definition is slightly different from the one given in \cite{bucdan}. \begin{lemma} \label{lemma1} Let $K\subseteq\Omega\subseteq\mathbb{R}^n$ be open bounded sets, and let $u$ be a minimizer of $E_{\beta,p}(K,\Omega)$. Then for a.e. $t\in(0,1)$ we have \[ H\left(t,\frac{\abs{\nabla u}}{u}\right)=E_{\beta,p}(K,\Omega). \] \begin{proof} Since $u$ is a minimizer of $E_{\beta,p}(K,\Omega)$, for a.e. $t\in(0,1)$ we have \begin{equation}\label{eq: pre-H-function} \begin{split} 0&=\int_{\set{t<u<1}}\frac{\Delta_p u}{u^{p-1}}\,d\mathcal{L}^n \\[7 pt] &= \int_{\set{t<u<1}}\divv\left(\frac{\abs{\nabla u}^{p-2}\nabla u}{u^{p-1}}\right)\,d\mathcal{L}^n+(p-1)\int_{\set{t<u<1}}\left(\frac{\abs{\nabla u}}{u}\right)^p\,d\mathcal{L}^n \\[7 pt] &=\int_{\partial\set{t<u<1}}\frac{\abs{\nabla u}^{p-2}}{u^{p-1}}\frac{\partial u}{\partial \nu}\,d\mathcal{H}^{n-1}+(p-1)\int_{\set{t<u<1}}\left(\frac{\abs{\nabla u}}{u}\right)^p\,d\mathcal{L}^n \\[7 pt] &=-\beta\mathcal{H}^{n-1}(\partial{\set{t<u<1}}\cap\partial\Omega)-\int_{\partial_i U_t}\left(\frac{\abs{\nabla u}}{u}\right)^{p-1}\,d\mathcal{H}^{n-1}\\&\hphantom{\le}+\int_{\partial K\cap \Omega}\abs{\nabla u}^{p-1}\, d\mathcal{H}^{n-1}+(p-1)\int_{U_t}\left(\frac{\abs{\nabla u}}{u}\right)^p\,d\mathcal{L}^n, \end{split} \end{equation} where we have used the boundary condition on $\partial\Omega$ and the fact that the outer normals to the sets $\partial\set{u>t}\cap\Omega$ and $\partial\set{u<1}\cap\Omega$ are respectively $-\nabla u/\abs{\nabla u}$ and $\nabla u/\abs{\nabla u}$.
Computing $E_{\beta,p}(K,\Omega)$, using again the boundary condition on $\partial\Omega$ and the fact that the outer normal to $\partial K\cap \Omega$ is $\nabla u/\abs{\nabla u}$, we get \[ \begin{split} E_{\beta,p}(K,\Omega)&=\int_{\Omega\setminus K}\divv(u\abs{\nabla u }^{p-2}\nabla u)\,d\mathcal{L}^n+\beta\int_{\partial\Omega}u^p\,d\mathcal{H}^{n-1} \\[7 pt] &=-\beta\int_{\partial\Omega\setminus\partial K}u^p\,d\mathcal{H}^{n-1} +\int_{\partial K \cap \Omega} \abs{\nabla u}^{p-1}\,d\mathcal{H}^{n-1}\\[7 pt] &\hphantom{=}+\beta\int_{\partial\Omega}u^p\,d\mathcal{H}^{n-1} \\[7 pt] &=\int_{\partial K\cap \Omega}\abs{\nabla u}^{p-1}\,d\mathcal{H}^{n-1}+\beta\mathcal{H}^{n-1}(\partial K\cap \partial \Omega)\\[7 pt] &=\int_{\partial K\cap \Omega}\abs{\nabla u}^{p-1}\,d\mathcal{H}^{n-1}+\beta\mathcal{H}^{n-1}(\partial_e U_t)\\[7 pt] &\hphantom{=}-\beta\mathcal{H}^{n-1}(\partial\set{t<u<1}\cap\partial\Omega). \end{split} \] Using \eqref{eq: pre-H-function}, the lemma is proved. \end{proof} \end{lemma} \begin{oss} Notice that if $K$ and $\Omega$ are two concentric balls, the minimizer $u$ is radial, and its level sets are spherical annuli; therefore, the statement of the previous lemma holds for every $t\in(0,1)$. \end{oss} \begin{lemma} \label{lemma2} Let $\varphi\in L^\infty(\Omega)$. Then there exists $t\in(0,1)$ such that \[ H(t,\varphi)\le E_{\beta, p}(K,\Omega). \] \begin{proof} Let \[ w=\abs{\varphi}^{p-1}-\left(\frac{\abs{\nabla u}}{u}\right)^{p-1}, \] and evaluate \[ \begin{split} H(t,\varphi)-H\left(t,\frac{\abs{\nabla u}}{u}\right)\!&= \int_{\partial_i U_t}\!\!w\,d\mathcal{H}^{n-1} -(p-1)\!\int_{U_t}\!\left(\abs{\varphi}^p-\left(\frac{\abs{\nabla u}}{u}\right)^p\right)\,d\mathcal{L}^n\\[7 pt] &\le \int_{\partial_i U_t} w\,d\mathcal{H}^{n-1}-p\int_{U_t}\frac{\abs{\nabla u}}{u} w\,d\mathcal{L}^n\\[7 pt]&=-\frac{1}{t^{p-1}}\frac{d}{dt}\left(t^p\int_{U_t}\frac{\abs{\nabla u}}{u} w\,d\mathcal{L}^n \right), \end{split} \] where we used the inequality \[ a^p-b^p\le \frac{p}{p-1}a\,(a^{p-1}-b^{p-1}) \qquad \forall a,b>0. \] Multiplying by $t^{p-1}$ and integrating, we get \[ \int_0^1 t^{p-1}\left(H(t,\varphi)-H\left(t,\frac{\abs{\nabla u}}{u}\right)\right)\,dt\le-\left[t^p\int_{U_t}\frac{\abs{\nabla u}}{u} w\, d\mathcal{L}^n\right]_0^1=0, \] from which the conclusion follows. \end{proof} \end{lemma} In the following, we fix a radius $R$ such that $\abs{B_R}\ge\abs{\Omega}$, we let $u^*$ be the minimizer of $E_{\beta,p}(B_1,B_R)$, and we set \[ \begin{split} H^*(t,\varphi)=&\int_{\partial\set{u^*>t}\cap\Omega}\abs{\varphi}^{p-1}\,d\mathcal{H}^{n-1} - (p-1)\int_{\set{u^*>t}}\abs{\varphi}^p\,d\mathcal{L}^n\\[7 pt] &+\beta\mathcal{H}^{n-1}(\partial\set{u^*<t}\cap\partial\Omega). \end{split} \] \begin{prop} \label{teorema part1} Let $\beta>0$. Assume that \begin{equation} \label{eq: ipotesisuphi} \frac{\abs{\nabla u^*}}{u^*}\le \beta^{\frac{1}{p-1}}. \end{equation} Then we have that \[ E_{\beta,p}(K,\Omega)\ge E_{\beta,p}(B_1,B_R). \] \end{prop} \begin{proof} In the following, if $v$ is a radial function on $B_R$ and $r\in(0,R)$, we denote, with a slight abuse of notation, \[ v(r)=v(x), \] where $x$ is any point on $\partial B_r$. By \autoref{lemma1} we know that for every $t\in(0,1)$ \begin{equation} \label{eq: theqball} H^*\left(t,\frac{\abs{\nabla u^*}}{u^*}\right)=E_{\beta,p}(B_1,B_R), \end{equation} while by \autoref{lemma2}, for every $\varphi\in L^{\infty}(\Omega)$ there exists a $t\in(0,1)$ such that \begin{equation} \label{eq: thineqgen} E_{\beta,p}(K,\Omega)\ge H(t,\varphi).
\end{equation} We aim to find a suitable $\varphi$ such that, for some $t$, \begin{equation} \label{eq: thineqgenball} H(t,\varphi)\ge H^*\left(t,\frac{\abs{\nabla u^*}}{u^*}\right), \end{equation} so that combining \eqref{eq: thineqgen}, \eqref{eq: thineqgenball}, and \eqref{eq: theqball} we conclude the proof. In order to construct $\varphi$, for every $t\in(0,1)$ we define \[ r(t)=\left(\frac{\abs{U_t}}{\omega_n}\right)^{\frac{1}{n}}, \] and then we set, for every $x\in\Omega$, \[ \varphi(x)=\frac{\abs{\nabla u^*}}{u^*}(r(u(x))). \] \textbf{Claim.} The functions $\varphi\chi_{U_t}$ and $\frac{\abs{\nabla u^*}}{u^*}\chi_{B_{r(t)}}$ are equi-measurable; in particular, \begin{equation} \label{eq: equi-meas} \int_{U_t}\varphi^p\,d\mathcal{L}^n=\int_{B_{r(t)}}\left(\frac{\abs{\nabla u^*}}{u^*}\right)^{p}\,d\mathcal{L}^n. \end{equation} Indeed, let $g(r)=\frac{\abs{\nabla u^*}}{u^*}(r)$; by the coarea formula, \begin{equation} \label{eq: measlevset} \begin{split} \abs{{U_t\cap\set{\varphi>s}}}&=\!\int_{U_t\cap\Set{g(r(u(x)))>s}}\,d\mathcal{L}^n \\[7 pt] &=\!\int_t^{+\infty}\int_{\partial^* U_{\tau}\cap\set{g(r(\tau))>s}}\frac{1}{\abs{\nabla u(x)}}\,d\mathcal{H}^{n-1}(x)\,d\tau \\[7 pt] &=\!\int_0^{r(t)}\!\!\int_{\partial^* U_{r^{-1}(\sigma)}}\frac{1}{\abs{\nabla u(x)}\abs{r'(r^{-1}(\sigma))}}\chi_{\set{g(\sigma)>s}}\,d\mathcal{H}^{n-1}(x)\,d\sigma. \end{split} \end{equation} Notice now that, since \[ \omega_n r(\tau)^n=\abs{U_\tau}, \] then \[ \abs{r'(\tau)}=\frac{1}{n\omega_n r(\tau)^{n-1}}\int_{\partial^* U_\tau}\frac{1}{\abs{\nabla u(x)}}\,d\mathcal{H}^{n-1}(x). \] Therefore, substituting in \eqref{eq: measlevset}, we get \[ \abs{{U_t\cap\set{\varphi>s}}}=\int_0^{r(t)}n\omega_n \sigma^{n-1}\chi_{\Set{g(\sigma)>s}}\,d\sigma=\abs*{B_{r(t)}\cap\Set{\frac{\abs{\nabla u^*}}{u^*}>s}}, \] where we have used polar coordinates to get the last equality. Thus, the claim is proved. Recalling the definition of $\varphi$, assumption \eqref{eq: ipotesisuphi} gives \[ \beta\ge \varphi^{p-1}; \] then, using \eqref{eq: equi-meas} and the definition of $H$ (see \autoref{defi: H}), we have \[ \begin{split} H(t,\varphi)&=\beta\mathcal{H}^{n-1}(\partial_e U_t)+\int_{\partial_i U_t}\varphi^{p-1}\,d\mathcal{H}^{n-1}-(p-1)\int_{U_t}\varphi^p\,d\mathcal{L}^n \\[7 pt] &\ge \int_{\partial U_t}\varphi^{p-1}\,d\mathcal{H}^{n-1}-(p-1)\int_{B_{r(t)}}\left(\frac{\abs{\nabla u^*}}{u^*}\right)^{p}\,d\mathcal{L}^n \\[7 pt] &\ge \int_{\partial B_{r(t)}}\left(\frac{\abs{\nabla u^*}}{u^*}\right)^{p-1}\,d\mathcal{H}^{n-1} -(p-1)\int_{B_{r(t)}}\left(\frac{\abs{\nabla u^*}}{u^*}\right)^{p}\,d\mathcal{L}^n\\[7 pt] &=H^*\left(u^*(r(t)),\frac{\abs{\nabla u^*}}{u^*}\right)\\[7 pt] &=E_{\beta,p}(B_1,B_R), \end{split} \] where in the last inequality we have used the isoperimetric inequality and the fact that $\varphi$ is constant on $\partial U_t$. \end{proof} \begin{proof}[Proof of \autoref{teorema}] Fix $M=\omega_n R^n$ with $R>1$. We divide the proof into two cases. \medskip If \[\beta^{\frac{1}{p-1}}\ge\dfrac{n-1}{p-1},\] we recall that the function \[\rho\in[1,+\infty)\mapsto E_{\beta,p}(B_1,B_\rho)\] is decreasing.
Let $u^*$ be the minimizer of $E_{\beta,p}(B_1,B_R)$; by \autoref{lemma5}, condition \eqref{eq: ipotesisuphi} holds and, by \autoref{teorema part1}, a solution to \eqref{problema} is given by the concentric balls $(B_1,B_R)$.\medskip If \[\dfrac{n-p}{p-1}<\beta^{\frac{1}{p-1}}<\dfrac{n-1}{p-1},\] we recall that, letting \[\alpha_{\beta,p}=\dfrac{(n-1)}{(p-1)\beta^\frac{1}{p-1}},\] the function \[\rho\in[1,+\infty)\mapsto E_{\beta,p}(B_1,B_\rho)\] increases on $[1,\alpha_{\beta,p}]$ and decreases on $[\alpha_{\beta,p},+\infty)$, and there exists a unique $R_{\beta,p}>\alpha_{\beta,p}$ such that $E_{\beta,p}(B_1,B_{R_{\beta,p}})=E_{\beta,p}(B_1,B_1)$. If $R\ge R_{\beta,p}$, the minimizer $u^*$ of $E_{\beta,p}(B_1,B_R)$ still satisfies condition \eqref{eq: ipotesisuphi} and, as in the previous case, a solution to \eqref{problema} is given by the concentric balls $(B_1,B_R)$. On the other hand, if $R<R_{\beta,p}$, we can consider the minimizer $u^*_{\beta,p}$ of $E_{\beta,p}(B_1,B_{R_{\beta,p}})$. By \autoref{lemma5}, condition \eqref{eq: ipotesisuphi} holds for the function $u^*_{\beta,p}$ and, by \autoref{teorema part1}, if $K$ and $\Omega$ are open bounded sets with $K\subseteq \Omega$, $\abs{K}=\omega_n$, and $\abs{\Omega}\le M$, then \[E_{\beta,p}(K,\Omega)\ge E_{\beta,p}(B_1,B_{R_{\beta,p}})=E_{\beta,p}(B_1,B_1),\] and a solution to \eqref{problema} is given by the pair $(B_1,B_1)$. \end{proof} \newpage \printbibliography[heading=bibintoc] \Addresses \end{document}
\section{Introduction} In the early twentieth century, Einstein formulated the first geometric theory of gravitation, ``General Relativity'' (GR) \cite{Einstein:1916vd,Wald:1984rg}. In GR the gravitational field is described in terms of the metric tensor $g_{ij}$, and the spacetime exhibits a curvature connected to the matter distribution. Curvature and metric are related to each other by the Levi-Civita connection. Right after the formulation of GR, alternative ways to geometrize gravity were explored, leading, for instance, Weyl to develop a theory with a symmetric and nonmetric connection, which aimed to unify gravity and electromagnetism \cite{Weyl:1918ib}. The adjective ``nonmetric'' indicates that the metric tensor $g_{ij}$ is not preserved under parallel transport, implying that the inner product between vectors changes when they are transported along a given curve. From the geometric point of view, all this is expressed by the nonmetricity tensor $Q_{hij}$. Another interesting geometric approach was considered by Cartan, who developed a theory where the connection is not symmetric but metric compatible \cite{E1922}, thus introducing torsional degrees of freedom. Both nonmetricity and torsion embody aspects of the connection different from the curvature, which, instead, is the only cornerstone of GR. Over the years, major developments have been made in extensions and generalizations of GR (see e.g. \cite{CAPOZZIELLO2011167}), also involving torsion \cite{Kr_k_2019, Cai:2015emx}. However, nonmetric theories came into the limelight only in $1999$, when the so-called ``Symmetric Teleparallel Gravity'' (STG) \cite{https://doi.org/10.48550/arxiv.gr-qc/9809049,Adak:2005cd,Conroy:2017yln,Adak:2008gd} was proposed. In this theory, gravitation is strictly connected to the nonmetricity tensor and the related nonmetricity scalar $\mathcal{Q}$, while both curvature and torsion are set to zero. A generalization of STG that has recently attracted great interest is $f(\mathcal{Q})$ gravity \cite{BeltranJimenez:2017tkd}, where the action of the gravitational field is described by a generic function of the nonmetricity scalar. In particular, its cosmology has been studied in detail \cite{Jim_nez_2020,Vignolo:2021frk,Esposito:2021ect, Atayde:2021pgb,Albuquerque:2022eac,Narawade:2022jeg,Lu:2019hra,Khyllep:2021pcu,Dimakis:2021gby,Mandal:2021wer,Hohmann:2021ast,De:2022shr,Barros_2020,Anagnostopoulos:2021ydo,https://doi.org/10.48550/arxiv.2205.11445,Ayuso:2021vtj,Xu:2019sbp, Xu:2020yeg,Bhattacharjee_2020}, some spherically symmetric models in \cite{Wang:2021zaz,Lin:2021uqa,DAmbrosio:2021zpm,https://doi.org/10.48550/arxiv.2203.13914,Mandal:2021qhx}, and wormhole solutions in \cite{Mustafa:2021bfs,Mustafa:2021ykn,Banerjee:2021mqk}. In the following, we focus on Bianchi type-I universes in the context of $f(\mathcal{Q})$ theory. Bianchi type-I metrics are the simplest anisotropic generalization of the spatially flat Friedmann-Lema\^{\i}tre-Robertson-Walker (FLRW) cosmologies. We specifically aim to understand how nonmetricity could contribute to driving the actual Universe to be essentially isotropic. For our study, we endow the spacetime with a congruence of time-like curves, whose tangent vector field $u$ determines, at each point, the local direction of the time flow. The existence of this vector field implies the existence of preferred rest frames at each point.
Thus, the field $u$ can be thought of as the $4$-velocity field of a family of observers whose world lines coincide with the congruence. This assumption is the basis of the so-called $1+3$ covariant approach \cite{ellis_maartens_maccallum_2012,https://doi.org/10.48550/arxiv.1405.6319,poisson_2004}. The reason behind the use of this formalism is that it gives us a direct insight into the physical relevance of nonmetricity in both geometric and dynamical aspects of the spacetime. Specifically, we will see how nonmetricity affects the expansion and anisotropy in Bianchi type-I cosmologies. Other $1+3$ approaches to $f(\mathcal{Q})$ cosmology are given in \cite{Iosifidis:2018diy, Yang:2021fjy}. Since the resulting cosmological equations are in general difficult to solve, we attempt to obtain a global picture of the cosmic evolution by means of the so-called Dynamical Systems Approach (DSA), see e.g. \cite{perko2012differential, Bahamonde:2017ize}. This technique allows us to study a cosmological model by analyzing the behavior of the orbits in a phase space connected with the geometrical features and matter sources of the spacetime. Making use of DSA, it is possible to achieve a semi-quantitative analysis of the solutions of the dynamical equations and their stability. DSA has been widely used in gravitational theories \cite{wainwright_ellis_1997,Carloni:2013hna,https://doi.org/10.48550/arxiv.2106.13793,Dutta:2017fjw,Odintsov:2018uaw}, including the $f(\mathcal{Q})$ theory \cite{Narawade:2022jeg,Lu:2019hra,Khyllep:2021pcu}. The $1+3$ approach provides an ideal framework to employ the DSA, as it makes available convenient variables for the description of the phase space of the dynamical system. As we will see, in the context of the Bianchi type-I metric, the application of the $1+3$ approach leads to a remarkable simplification of the involved equations. For example, we will be able to deal with complex matter sources as well as non-trivial forms of the function $f(\mathcal{Q})$. The paper is organized as follows. Sec. \ref{sec:geometric_framework} is devoted to some geometrical preliminaries. Sec. \ref{sec:1_3_description} introduces the framework of the $1+3$ approach, defining all the necessary kinematic quantities and providing the time and space decomposition with respect to the given congruence of all the relevant geometrical quantities. A review of $f(\mathcal{Q})$ gravity and its cosmological equations is given in Sec. \ref{sec:f_Q_theory}, whereas Bianchi type-I universes are discussed in Sec. \ref{sec:bianchi_model}. In Sec. \ref{sec:dynamical_systems}, DSA is applied to investigate four different cosmological scenarios. Finally, the obtained results are discussed in Sec. \ref{sec:conclusions}. Throughout the paper natural units ($c=8\pi G=1$) and metric signature ($-,+,+,+$) are used. \section{Geometrical preliminaries}\label{sec:geometric_framework} We consider a spacetime endowed with a metric tensor $g_{ij}$ and a torsion-free affine connection $\Gamma_{ij}{}^{k}$.
The latter can be decomposed as: \begin{equation}\label{eq:def_connection} \Gamma_{ij}{}^{k} = \tilde{\Gamma}_{ij}{}^{k} + N_{ij}{}^{k} , \end{equation} where $\tilde{\Gamma}_{ij}{}^{k}$ is the Levi-Civita connection induced by the metric tensor $g_{ij}$, \begin{equation}\label{eq:def_levicivita} \tilde{\Gamma}_{ij}{}^{k} = \frac{1}{2} g^{kh} \left( \partial_{i}g_{jh} + \partial_{j}g_{ih} - \partial_{h}g_{ij} \right), \end{equation} and $N_{ij}{}^{k}$ is the disformation tensor, \begin{equation}\label{eq:def_disformation} N_{ij}{}^{k} = \frac{1}{2} \left( Q^{k}{}_{ij} - Q_{i}{}^{k}{}_{j} - Q_{j}{}^{k}{}_{i} \right), \end{equation} defined in terms of the nonmetricity tensor, \begin{equation}\label{eq:def_nonmetricity} Q_{kij} = \nabla_{k}g_{ij}, \end{equation} where $\nabla$ is the covariant derivative associated with the full connection \eqref{eq:def_connection}. Throughout the paper, we will denote by a tilde all quantities related to the Levi-Civita connection. For instance, we will indicate by $\tilde{\nabla}$ the covariant derivative associated with the Levi-Civita connection \eqref{eq:def_levicivita}. The curvature tensor of the full connection \eqref{eq:def_connection} is defined as \begin{equation}\label{eq:def_riemann_1} \begin{split} R^{h}{}_{kij} &= \partial_{i}\Gamma_{jk}{}^{h} - \partial_{j}\Gamma_{ik}{}^{h} + \Gamma_{ip}{}^{h}\Gamma_{jk}{}^{p} - \Gamma_{jp}{}^{h}\Gamma_{ik}{}^{p} =\\ &= \tilde{R}^{h}{}_{kij} + \tilde{\nabla}_{i}N_{jk}{}^{h} - \tilde{\nabla}_{j}N_{ik}{}^{h} + N_{ip}{}^{h}N_{jk}{}^{p} - N_{jp}{}^{h}N_{ik}{}^{p} \end{split} \end{equation} according to the Ricci identity, \begin{equation}\label{eq:def_riemann_0} R^{h}{}_{kij}w^{k} = \left(\nabla_{i}\nabla_{j}-\nabla_{j}\nabla_{i} \right)w^{h}, \end{equation} where $w^{h}$ is a generic vector field. In the presence of nonmetricity, we recall that the Riemann tensor \eqref{eq:def_riemann_1} satisfies the following properties: \begin{itemize}\label{eq:riemann_properties} \item Antisymmetry in the last two indices, \begin{equation} R^{h}{}_{kij} = -R^{h}{}_{kji}; \end{equation} \item First Bianchi identity, \begin{equation} R^{h}{}_{[kij]}=0; \end{equation} \item Second Bianchi identity, \begin{equation} \nabla_{[a|}R^{h}{}_{k|ij]}=0. \end{equation} \end{itemize} By contracting the first and third indices of the Riemann tensor, we obtain the Ricci tensor, \begin{equation}\label{eq:def_ricci_tensor} R_{kj} = R^{h}{}_{khj}. \end{equation} The contraction between the second and third indices gives rise to \begin{equation}\label{eq:def_second_contraction_riemann} \bar{R}_{hj} = R_{h}{}^{i}{}_{ij}. \end{equation} Also, the contraction of the first and second indices yields the homothetic curvature, \begin{equation}\label{eq:def_homothetic_tensor} \hat{R}_{ij} = R^{h}{}_{hij} = - \tilde{\nabla}_{[i}Q_{j]h}{}^{h} = - \partial_{[i}Q_{j]h}{}^{h}. \end{equation} Eqs. \eqref{eq:def_ricci_tensor} and \eqref{eq:def_second_contraction_riemann} contracted with the metric give the Ricci scalar, \begin{equation}\label{eq:def_ricci_scalar} R = g^{ij}R_{ij} = - g^{ij}\bar{R}_{ij}. \end{equation} \section{\texorpdfstring{$1+3$ framework}{}}\label{sec:1_3_description} In this section, we apply the $1+3$ formalism to the spacetime described in Sec. \ref{sec:geometric_framework}. The approach is based on the introduction of a congruence of time-like curves, or world lines, representing preferred observers. The aim is to analyze how nonmetricity affects them.
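As a concrete check of the decomposition \eqref{eq:def_connection}-\eqref{eq:def_nonmetricity}, the following symbolic sketch (purely illustrative and not part of the formal development) verifies that, when the full connection vanishes, the nonmetricity reduces to $Q_{kij}=\partial_{k}g_{ij}$ and the disformation tensor cancels the Levi-Civita connection, $\tilde{\Gamma}_{ij}{}^{k}+N_{ij}{}^{k}=0$; this is precisely the coincidence-gauge mechanism exploited in Sec. \ref{sec:bianchi_model}.

\begin{verbatim}
# Symbolic sketch (illustrative): with vanishing full connection,
# Q_kij = d_k g_ij and the disformation N cancels Levi-Civita exactly.
import sympy as sp

t, x, y, z = sp.symbols('t x y z')
X = [t, x, y, z]
a1, a2, a3 = (sp.Function(f'a{i}')(t) for i in (1, 2, 3))
g = sp.diag(-1, a1**2, a2**2, a3**2)       # a Bianchi type-I metric
ginv = g.inv()

# nonmetricity in the gauge Gamma = 0:  Q_kij = partial_k g_ij
Q = lambda k, i, j: sp.diff(g[i, j], X[k])

def LC(i, j, k):                           # Levi-Civita  Gamma~_ij^k
    return sum(ginv[k, h] * (sp.diff(g[j, h], X[i])
               + sp.diff(g[i, h], X[j])
               - sp.diff(g[i, j], X[h])) for h in range(4)) / 2

def N(i, j, k):                            # disformation  N_ij^k
    return sum(ginv[k, h] * (Q(h, i, j) - Q(i, h, j) - Q(j, h, i))
               for h in range(4)) / 2

print(all(sp.simplify(LC(i, j, k) + N(i, j, k)) == 0
          for i in range(4) for j in range(4) for k in range(4)))  # True
\end{verbatim}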
\subsection{\texorpdfstring{$4-\hbox{velocity}$}{}} Given the congruence $x^{i}=x^{i}(\lambda)$, expressed in terms of an affine parameter $\lambda$, we define the $4$-velocity as the time-like vector: \begin{equation} u^{i} = \frac{d x^{i}}{d\lambda}, \qquad u_{i}=g_{ij}u^{j}. \end{equation} Due to nonmetricity, the scalar product is not in general preserved along the world lines. Consequently, we cannot normalize $u_{i}u^{i}$ to $-1$, and the affine parameter $\lambda$ does not coincide with the proper time $\tau$. Instead, we have in general \begin{equation} u_{i}u^{i}=-\phi^{2}(x), \end{equation} where $\phi(x)$ is a generic function on spacetime. In this regard, one can prove that the conditions \begin{equation}\label{eq:condition_about_u} \begin{split} u^{i}\nabla_{i}u^{j} = 0, \\ Q_{kij}u^{k}u^{i}u^{j} = 0, \end{split} \end{equation} ensure that $\phi$ is constant along the congruence lines \cite{Iosifidis:2018diy}. Therefore, systematically assuming conditions \eqref{eq:condition_about_u}, we can parameterize the curves by the proper time and define the $4$-velocity as \begin{equation}\label{eq:def_4_velocity} u^{i} = \frac{d x^{i}}{d\tau}, \qquad u_{i}u^{i} = -1. \end{equation} As we will deal exclusively with cosmological models of Bianchi type-I, we will assume in Sec. \ref{sec:bianchi_model} that conditions \eqref{eq:condition_about_u} are satisfied. Indeed, in \ref{appendix:bianchi_coordinates} we will show that, because of the gauge choice, such conditions are not restrictive for our purposes. Once the $4$-velocity has been defined, we may introduce the projection operator along $u^{i}$, given by the tensor \begin{equation}\label{eq:4_velocity_projection} U^{i}{}_{j}=-u^{i}u_{j}, \end{equation} satisfying the properties \begin{equation} U^{i}{}_{j}u^{j} = u^{i}, \qquad U^{i}{}_{k}U^{k}{}_{j} = U^{i}{}_{j}, \qquad U^{i}{}_{i} = 1. \end{equation} \subsubsection{Orthogonal projection} The existence of a preferred time direction allows us to single out, at any point, a three-dimensional subspace of the tangent space orthogonal to the $4$-velocity $u^i$. The restriction of the metric to this spatial subspace is the so-called transverse metric, \begin{equation}\label{eq:def_3_metric} h_{ij} = g_{ij} + u_{i}u_{j}. \end{equation} Associated with the transverse metric \eqref{eq:def_3_metric} there is the spatial projection operator, \begin{equation}\label{eq:def_orthogonal_projection} h^{i}{}_{j}=\delta^{i}_{j} + u^{i}u_{j}, \end{equation} satisfying the properties \begin{equation} h^{i}{}_{k}h^{k}{}_{j}=h^{i}{}_{j}, \qquad h^{i}{}_{i}=3, \qquad h^{i}{}_{j}u^{j}=0. \end{equation} In the following discussion, we will also use the projected symmetric trace-free (PSTF) part of a tensor. In particular, for any $1$-form $V_{i}$ and covariant $2$-tensor $T_{ij}$, the PSTF parts are expressed as \begin{equation}\label{eq:def_PSTF} V_{\langle i\rangle } = h_{i}{}^{j}V_{j}, \qquad T_{\langle ij\rangle } = \left[h_{(i}{}^{m}h_{j)}{}^{n} - \frac{1}{3}h_{ij}h^{mn}\right]T_{mn}.
\end{equation} \subsection{Time and spatial derivatives}\label{sec:time_derivative} The time derivative of a generic tensor $T^{i\cdot\cdot\cdot}{}_{j\cdot\cdot\cdot}$ is defined as \begin{equation}\label{eq:def_time_derivative} \begin{split} \dot{T}^{i\cdot\cdot\cdot}{}_{j\cdot\cdot\cdot} &= u^{h}\nabla_{h}T^{i\cdot\cdot\cdot}{}_{j\cdot\cdot\cdot}=\\ &= u^{h}\tilde{\nabla}_{h}T^{i\cdot\cdot\cdot}{}_{j\cdot\cdot\cdot} + u^{h}N_{hk}{}^{i}T^{k\cdot\cdot\cdot}{}_{j\cdot\cdot\cdot}+\cdot\cdot\cdot-u^{h}N_{hj}{}^{k}T^{i\cdot\cdot\cdot}{}_{k\cdot\cdot\cdot}-\cdot\cdot\cdot=\\ &= \mathring{{T}}^{i\cdot\cdot\cdot}{}_{j\cdot\cdot\cdot} + u^{h}N_{hk}{}^{i}T^{k\cdot\cdot\cdot}{}_{j\cdot\cdot\cdot}+\cdot\cdot\cdot-u^{h}N_{hj}{}^{k}T^{i\cdot\cdot\cdot}{}_{k\cdot\cdot\cdot}-\cdot\cdot\cdot, \end{split} \end{equation} where \begin{equation}\label{eq:def_levi_civita_time_derivative} \mathring{T}^{i\cdot\cdot\cdot}{}_{j\cdot\cdot\cdot} = u^{h}\tilde{\nabla}_{h}T^{i\cdot\cdot\cdot}{}_{j\cdot\cdot\cdot} \end{equation} is the time derivative with respect to the Levi-Civita connection. The spatial derivative is the spatial projection of the covariant derivative, \begin{eqnarray} D_{k}T^{i\cdot\cdot\cdot}{}_{j\cdot\cdot\cdot} &=& h_{k}{}^{p} h^{i}{}_{m}\cdot\cdot\cdot h_{j}{}^{n}\cdot\cdot\cdot \nabla_{p}T^{m\cdot\cdot\cdot}{}_{n\cdot\cdot\cdot} =\nonumber\\ &=& h_{k}{}^{p} h^{i}{}_{m} \cdot\cdot\cdot h_{j}{}^{n} \cdot\cdot\cdot \left(\tilde{\nabla}_{p}T^{m\cdot\cdot\cdot}{}_{n\cdot\cdot\cdot} + N_{pq}{}^{m}T^{q\cdot\cdot\cdot}{}_{n\cdot\cdot\cdot} + \cdot\cdot\cdot +\nonumber\right.\\ &&\left.- N_{pn}{}^{q}T^{m\cdot\cdot\cdot}{}_{q\cdot\cdot\cdot}-\cdot\cdot\cdot\right)=\nonumber\\ &=& \tilde{D}_{k}T^{i\cdot\cdot\cdot}{}_{j\cdot\cdot\cdot} + h_{k}{}^{p} h^{i}{}_{m} \cdot\cdot\cdot h_{j}{}^{n} \cdot\cdot\cdot N_{pq}{}^{m}T^{q\cdot\cdot\cdot}{}_{n\cdot\cdot\cdot} +\nonumber\\ &&+ \cdot\cdot\cdot - h_{k}{}^{p} h^{i}{}_{m} \cdot\cdot\cdot h_{j}{}^{n} \cdot\cdot\cdot N_{pn}{}^{q}T^{m\cdot\cdot\cdot}{}_{q\cdot\cdot\cdot}-\cdot\cdot\cdot, \label{eq:def_spatial_derivative} \end{eqnarray} with \begin{equation} \tilde{D}_{k}T^{i\cdot\cdot\cdot}{}_{j\cdot\cdot\cdot} = h_{k}{}^{p} h^{i}{}_{m} \cdot\cdot\cdot h_{j}{}^{n} \cdot\cdot\cdot \tilde{\nabla}_{p}T^{m\cdot\cdot\cdot}{}_{n\cdot\cdot\cdot} \end{equation} the spatial derivative with respect to the Levi-Civita connection. It is worth noticing that the spatial derivative of the metric $g_{ij}$ is equal to the spatial derivative of $h_{ij}$: \begin{equation}\label{eq:spatial_derivative_metric} D_{k}g_{ij} = h_{k}{}^{p} h_{i}{}^{m} h_{j}{}^{n} \nabla_{p}g_{mn} = h_{k}{}^{p} h_{i}{}^{m} h_{j}{}^{n} \nabla_{p}\left(h_{mn} - u_{m}u_{n}\right) = D_{k}h_{ij}. \end{equation} \subsubsection{\texorpdfstring{$4-\hbox{acceleration}$}{}}\label{sec:4_acceleration} Because of nonmetricity, raising and lowering indices does not in general commute with covariant differentiation. For this reason, we adopt the convention that the contravariant, or covariant, counterparts of objects related to the covariant derivative are obtained by raising, or lowering, the indices with the metric. This convention will be used throughout the paper. Accordingly, we define the $4$-acceleration as \begin{equation}\label{eq:def_cov_acceleration} \dot{u}_{i} = u^{h}\nabla_{h}u_{i} = \mathring{u}_{i} - N_{hi}{}^{k}u_{k}u^{h} = \mathring{{u}}_{i} + \frac{1}{2}Q_{ihk}u^{h}u^{k}, \end{equation} where $\mathring{{u}}_{i}:=u^{h}\tilde{\nabla}_{h}u_{i}$ is the $4$-acceleration with respect to the Levi-Civita connection.
Then, the contravariant counterpart of \eqref{eq:def_cov_acceleration} is obtained as \begin{equation}\label{eq:def_vec_acceleration} \dot{u}^{i} := g^{ij} \dot{u}_{j} = g^{ij}u^{h}\nabla_{h}u_{j} = \mathring{{u}}^{i} + \frac{1}{2}g^{ij}Q_{jhk}u^{h}u^{k}. \end{equation} If Eq. \eqref{eq:condition_about_u} holds, $u_{i}$ and $\dot{u}_{i}$ are orthogonal to each other, \begin{equation} \dot{u}_{i}u^{i} = \dot{u}^{i}u_{i} = 0. \end{equation} \subsubsection{Extrinsic curvature} The extrinsic curvature is defined as the spatial derivative of the $4$-velocity, \begin{equation}\label{eq:def_extrinsic_curvature} K_{ij} = D_{i}u_{j} = h_{i}{}^{m}h_{j}{}^{n}\nabla_{m}u_{n} = \tilde{K}_{ij} - h_{i}{}^{m}h_{j}{}^{n}N_{mn}{}^{h}u_{h}, \end{equation} where \begin{equation}\label{eq:def_levi_civita_extrinsic_curvature} \tilde{K}_{ij} = \tilde{D}_{i}u_{j} \end{equation} is the extrinsic curvature induced by the Levi-Civita connection. Raising the second index, we obtain \begin{equation} K_{i}{}^{j}= g^{jp}K_{ip} = \tilde{K}_{i}{}^{j} - g^{jp}h_{i}{}^{m}h_{p}{}^{n}N_{mn}{}^{k}u_{k}. \end{equation} \subsection{Kinematic quantities} The covariant derivative of the $4$-velocity can be decomposed into its temporal and spatial projections, \begin{equation}\label{eq:covariant_derivative_4_velocity} \begin{split} \nabla_{i}u_{j} &= -u_{i}\dot{u}_{j} + D_{i}u_{j} - u_{j}h_{i}{}^{k}u^{l}\nabla_{k}u_{l}=\\ &= -u_{i}\dot{u}_{j} + \frac{1}{3}h_{ij}\Theta + \sigma_{ij} + \omega_{ij} - \frac{1}{2}u_{j}h_{i}{}^{k}Q_{kmn}u^{m}u^{n}, \end{split} \end{equation} with \begin{equation} D_{i}u_{j} := \frac{1}{3}h_{ij}\Theta + \sigma_{ij} + \omega_{ij}, \end{equation} and where: \begin{itemize} \item $\Theta$ is related to the rate of volume expansion, \begin{equation}\label{eq:def_Theta} \Theta = g^{ij} D_{i}u_{j} = g^{ij}h_{i}{}^{p}h_{j}{}^{q}\nabla_{p}u_{q} = h^{ij}\tilde{D}_{i}u_{j} - h^{ij}N_{ij}{}^{k}u_{k} = \tilde{\Theta} - h^{ij}N_{ij}{}^{k}u_{k}, \end{equation} with \begin{equation}\label{eq:def_Theta_levi_civita} \tilde{\Theta} = \tilde{D}_{i}u^{i}; \end{equation} \item $\sigma_{ij}$ is the trace-free symmetric tensor called ``shear tensor'', describing the volume-preserving distortion of the fluid flow, \begin{equation}\label{eq:def_sigma} \begin{split} \sigma_{ij} = D_{ \langle i}u_{j \rangle} &= \left[ h_{(i}{}^{m}h_{j)}{}^{n} - \frac{1}{3}h_{ij}h^{mn}\right] \left( \tilde{D}_{m}u_{n} - h_{m}{}^{p}h_{n}{}^{q}N_{pq}{}^{k}u_{k} \right) =\\ &= \tilde{\sigma}_{ij} - \left[ h_{(i}{}^{m}h_{j)}{}^{n} - \frac{1}{3}h_{ij}h^{mn}\right]N_{mn}{}^{k}u_{k}, \end{split} \end{equation} \begin{equation} \sigma_{ij}u^{j}=0, \qquad \sigma_{i}{}^{i}=0, \end{equation} with \begin{equation}\label{eq:def_sigma_levi_civita} \tilde{\sigma}_{ij} = \tilde{D}_{\langle i}u_{j\rangle }, \qquad \tilde{\sigma}_{ij}u^{j}=0, \qquad \tilde{\sigma}_{i}{}^{i}= 0; \end{equation} \item $\omega_{ij}$ is the skew-symmetric tensor called ``vorticity tensor'', describing the rotation of the fluid flow, \begin{equation}\label{eq:def_omega} \omega_{ij} = D_{[i}u_{j]} = \tilde{D}_{[i}u_{j]} - h_{[i}{}^{m}h_{j]}{}^{n}N_{mn}{}^{k}u_{k} = \tilde{D}_{[i}u_{j]} = \tilde{\omega}_{ij}, \end{equation} \begin{equation}\label{eq:def_omega_levi_civita} \tilde{\omega}_{ij} = \tilde{D}_{[i}u_{j]}, \qquad \omega_{ij}u^{j}=\tilde{\omega}_{ij}u^{j}=0.
\end{equation} \end{itemize} It is useful to introduce the magnitudes of the shear and vorticity tensors: \begin{equation} \sigma^{2} = \frac{1}{2}\sigma_{ij}\sigma^{ij}, \qquad \omega^{2} = \frac{1}{2}\omega_{ij}\omega^{ij}, \end{equation} \begin{equation} \tilde{\sigma}^{2} = \frac{1}{2}\tilde{\sigma}_{ij}\tilde{\sigma}^{ij}, \qquad \tilde{\omega}^{2} = \frac{1}{2}\tilde{\omega}_{ij}\tilde{\omega}^{ij}. \end{equation} Substituting Eqs. \eqref{eq:def_Theta}, \eqref{eq:def_sigma} and \eqref{eq:def_omega} in Eq. \eqref{eq:covariant_derivative_4_velocity} and considering Eq. \eqref{eq:condition_about_u}, we get the expression \begin{equation}\label{eq:covariant_derivative_4_velocity_levi_civita} \begin{split} \nabla_{i}u_{j} &= \tilde{\nabla}_{i}u_{j} - \frac{1}{2}u_{i}h_{j}{}^{k}Q_{kmn}u^{m}u^{n} - h_{i}{}^{m}h_{j}{}^{n}N_{mn}{}^{p}u_{p} - \frac{1}{2}u_{j}h_{i}{}^{k}Q_{kmn}u^{m}u^{n} =\\ &= -u_{i}\mathring{{u}}_{j} +\frac{1}{3}\tilde{\Theta} h_{ij} + \tilde{\sigma}_{ij} + \tilde{\omega}_{ij} - u_{(i}h_{j)}{}^{k}Q_{kmn}u^{m}u^{n} - h_{i}{}^{m}h_{j}{}^{n}N_{mn}{}^{p}u_{p}. \end{split} \end{equation} \subsection{Gauss Relation} Given a spatial vector field $v^{p}$, we define the spatial Riemann tensor through the relation \begin{equation}\label{eq:def_3_riemann} \, ^{3}R^{p}{}_{qij}v^{q} := \left(D_{i}D_{j}-D_{j}D_{i}\right)v^{p} -2 \omega_{ij}u^{r}h_{s}{}^{p}\nabla_{r}v^{s}. \end{equation} This relation can be recast as the so-called ``Gauss relation'', \begin{equation}\label{eq:gauss_relation} \, ^{3}R^{p}{}_{qij} = h_{i}{}^{m}h_{j}{}^{n}h^{p}{}_{s}h_{q}{}^{r}R^{s}{}_{rmn} + K_{j}{}^{p}K_{iq} - K_{i}{}^{p}K_{jq} +2 h_{[i}{}^{m}K_{j]q}h^{pn}Q_{mns}u^{s}. \end{equation} Since the Riemann tensor is not antisymmetric in its first two indices, we can obtain two ``contracted Gauss relations'': the first by contracting the first and third indices of the Gauss relation, \begin{equation}\label{eq:contracted_gauss_relation_1} \, ^{3}R_{qj} = \, ^{3}R^{i}{}_{qij} = h_{s}{}^{m}h_{j}{}^{n}h_{q}{}^{r}R^{s}{}_{rmn} + K_{j}{}^{i}K_{iq} - K_{i}{}^{i}K_{jq} + 2 h_{[i}{}^{m}K_{j]q}h^{in}Q_{mns}u^{s}, \end{equation} and the second by contracting the second and third indices: \begin{equation}\label{eq:contracted_gauss_relation_2} \, ^{3}\bar{R}_{qj} = \, ^{3}R_{q}{}^{i}{}_{ij} = h_{r}{}^{m}h_{j}{}^{n}h_{q}{}^{s}R_{s}{}^{r}{}_{mn} + K_{jq}K_{i}{}^{i} - K_{iq}K_{j}{}^{i} + 2 h_{[i}{}^{m}K_{j]}{}^{i}h_{q}{}^{n}Q_{mns}u^{s}. \end{equation} The trace of both Eqs. \eqref{eq:contracted_gauss_relation_1} and \eqref{eq:contracted_gauss_relation_2} leads to the ``scalar Gauss relation'', \begin{equation}\label{eq:scalar_gauss_relation} \begin{split} \, ^{3}R &= g^{qj}\, ^{3}R_{qj} = - g^{qj}\, ^{3}\bar{R}_{qj} =\\ &= h_{s}{}^{m}h^{rn}R^{s}{}_{rmn} + K^{ji}K_{ij} - K_{i}{}^{i}K_{j}{}^{j} + 2 h_{[i}{}^{m}K_{j]}{}^{j}h^{in}Q_{mns}u^{s}, \end{split} \end{equation} which generalizes the ``Theorema Egregium'' in the presence of nonmetricity. It is also useful to rewrite Eq. \eqref{eq:gauss_relation} in the form \begin{equation}\label{eq:gauss_relation_levi_civita} \begin{split} \, ^{3}R^{p}{}_{qij} =& \, ^{3}\tilde{R}^{p}{}_{qij} + 2 \tilde{D}_{[i}N_{j]q}{}^{p} + 2 \tilde{K}_{[i}{}^{p}h_{|j]}{}^{m}h_{q}{}^{n}N_{mn}{}^{k}u_{k} +\\ &+ 2 \tilde{K}_{[i|q}h_{j]}{}^{m}h^{p}{}_{n}N_{mk}{}^{n}u^{k} + 2 h_{[i}{}^{r}h_{j]}{}^{m}h_{s}{}^{p}h_{q}{}^{n}h_{l}{}^{k}N_{rk}{}^{s}N_{mn}{}^{l}, \end{split} \end{equation} in which the Levi-Civita and nonmetricity contributions are made evident.
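Before moving on, the following numerical sketch (purely illustrative; it retains only the Levi-Civita part of $\nabla_{i}u_{j}$ in an orthonormal frame at a point) shows how a generic gradient of $u$ is projected with $h^{i}{}_{j}$ and split into expansion, shear and vorticity, and checks the algebraic properties listed above.

\begin{verbatim}
# Illustrative check of the kinematic split
#   D_i u_j = (1/3) Theta h_ij + sigma_ij + omega_ij
# in an orthonormal frame, keeping only the Levi-Civita part.
import numpy as np

g = np.diag([-1.0, 1.0, 1.0, 1.0])
ginv = np.linalg.inv(g)
u_up = np.array([1.0, 0.0, 0.0, 0.0])        # u^i, with u_i u^i = -1
u_lo = g @ u_up
h_lo = g + np.outer(u_lo, u_lo)              # transverse metric h_ij
P = np.eye(4) + np.outer(u_up, u_lo)         # projector h^i_j

rng = np.random.default_rng(1)
grad_u = rng.standard_normal((4, 4))         # stand-in for nabla_i u_j
D = P.T @ grad_u @ P                         # D_i u_j = h_i^p h_j^q (...)

theta = np.tensordot(ginv, D)                # Theta = g^{ij} D_i u_j
sigma = 0.5 * (D + D.T) - theta / 3.0 * h_lo # shear (PSTF part)
omega = 0.5 * (D - D.T)                      # vorticity (antisymmetric)

assert np.allclose(D, theta / 3.0 * h_lo + sigma + omega)
assert abs(np.tensordot(ginv, sigma)) < 1e-12          # sigma^i_i = 0
assert np.allclose(sigma @ u_up, 0.0) and np.allclose(omega @ u_up, 0.0)
print("kinematic decomposition verified; Theta =", theta)
\end{verbatim}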
\subsection{Energy-momentum tensor} The energy-momentum tensor of the matter fluid can be decomposed into its irreducible parts as \begin{equation}\label{eq:energy_momentum_tensor} \Psi_{ij} = \rho u_{i}u_{j} + q_{i}u_{j} + u_{i}q_{j} + p h_{ij} + \pi_{ij}, \end{equation} where \begin{equation} \rho = \Psi_{ij}u^{i}u^{j} \end{equation} is the relativistic energy density, \begin{equation} q_{i} = - h_{i}{}^{k}\Psi_{kj}u^{j} \end{equation} the relativistic energy flux, \begin{equation} p = \frac{1}{3}h^{ij}\Psi_{ij} \end{equation} the isotropic pressure, and \begin{equation} \pi_{ij} = \Psi_{\langle ij \rangle} \end{equation} the trace-free anisotropic pressure. The trace of tensor \eqref{eq:energy_momentum_tensor} is equal to \begin{equation} \Psi = \Psi_{i}{}^{i} = -\rho + 3p. \end{equation} \subsection{Nonmetricity decomposition}\label{sec:nonmetricity_decomposition} Similarly to what we have done for the energy-momentum tensor \eqref{eq:energy_momentum_tensor}, we can decompose the nonmetricity tensor using $u_i$ and $h_{ij}$ as: \begin{equation}\label{eq:non_metricity_decomposition} \begin{split} Q_{kij} =& -Q_{0}u_{k}u_{i}u_{j} - \frac{1}{3} Q_{1} u_{k}h_{ij} - \frac{2}{3}Q_{2}u_{(i}h_{j)k} + Q^{(0)}{}_{k}u_{i}u_{j} + 2Q^{(1)}{}_{(i}u_{j)}u_{k} + \\ &+ \frac{1}{3}Q^{(2)}{}_{k}h_{ij} + \frac{2}{3}Q^{(3)}{}_{(i}h_{j)k} - Q^{(0)}{}_{ij}u_{k} - 2Q^{(1)}{}_{k(i}u_{j)} + \, ^{3}Q_{kij}, \end{split} \end{equation} where \begin{equation} Q_{0}= Q_{kij}u^{k}u^{i}u^{j}, \quad Q_{1} = Q_{kij}u^{k}h^{ij}, \quad Q_{2} = Q_{kij}h^{ki}u^{j} \end{equation} are scalar quantities, \begin{equation} Q^{(0)}{}_{k} = Q_{pij} h^{p}{}_{k}u^{i}u^{j}, \qquad Q^{(1)}{}_{k} = Q_{pij}u^{p}u^{j}h^{i}{}_{k}, \end{equation} \begin{equation} Q^{(2)}{}_{k} = Q_{pij}h^{p}{}_{k}h^{ij}, \qquad Q^{(3)}{}_{k} = Q_{pij} h^{i}{}_{k} h^{pj} \end{equation} are covectors, \begin{equation} Q^{(0)}{}_{ij} = Q^{(0)}{}_{ji} = \left(h_{(i}{}^{p}h_{j)}{}^{q} - \frac{1}{3}h_{ij}h^{pq}\right)Q_{kpq} u^{k}, \end{equation} \begin{equation} Q^{(1)}{}_{ij}=\left(h_{i}{}^{p}h_{j}{}^{q} - \frac{1}{3}h_{ij}h^{pq}\right)Q_{pkq} u^{k} \end{equation} are trace-free tensors, and \begin{equation} \, ^{3}Q_{kij} = h_{k}{}^{p}h_{i}{}^{q}h_{j}{}^{r}Q_{pqr} - \frac{1}{3}h_{ij}h^{ef}h_{k}{}^{d}Q_{def} - \frac{1}{3}h_{ki}h^{de}h_{j}{}^{f}Q_{def} - \frac{1}{3}h_{kj}h^{df}h_{i}{}^{e}Q_{def} \end{equation} is a fully spatial tensor, whose traces are given by \begin{equation} \, ^{3}Q_{ji}{}^{j} = -\frac{1}{3}Q^{(2)}{}_{i} -\frac{1}{3}Q^{(3)}{}_{i} \quad \hbox{and} \quad \, ^{3}Q_{ij}{}^{j} = - \frac{2}{3}Q^{(3)}{}_{i}. \end{equation} However, unlike the energy-momentum tensor, Eq. \eqref{eq:non_metricity_decomposition} is not an irreducible decomposition. Moreover, because of Eq. \eqref{eq:condition_about_u}, $Q_{0}=0$. We can now rewrite Eqs. \eqref{eq:def_cov_acceleration}, \eqref{eq:def_Theta}, and \eqref{eq:def_sigma} in terms of the different contributions of nonmetricity: \begin{equation}\label{eq:4_velocity_non_metricity_decomposition} \dot{u}_{i} = \mathring{u}_{i} + \frac{1}{2} Q^{(0)}{}_{i}, \end{equation} \begin{equation}\label{eq:4_velocity_non_metricity_decomposition_2} \Theta = \tilde{\Theta} - \frac{1}{2}Q_{1} + Q_{2}, \end{equation} \begin{equation}\label{eq:sigma_nonmetricity_decomposition} \sigma_{ij} = \tilde{\sigma}_{ij} - \frac{1}{2}Q^{(0)}{}_{ij} + Q^{(1)}{}_{( ij )}. \end{equation} Eqs.
\eqref{eq:4_velocity_non_metricity_decomposition}, \eqref{eq:4_velocity_non_metricity_decomposition_2} and \eqref{eq:sigma_nonmetricity_decomposition} show how nonmetricity affects the kinematic quantities associated with the given congruence. \section{\texorpdfstring{$f(\mathcal{Q})$}{} theory}\label{sec:f_Q_theory} $f(\mathcal{Q})$ gravity is a generalization of Symmetric Teleparallel Gravity, where the gravitational Lagrangian $f(\mathcal{Q})$ is a given function of the nonmetricity scalar. The latter is defined as, \begin{equation}\label{eq:def_non_metricity_scalar} \begin{split} \mathcal{Q} &= N_{hp}{}^{h}N_{k}{}^{kp} - N_{kp}{}^{h}N_{h}{}^{kp} = -Q_{hij}P^{hij} =\\ &= \frac{1}{4}Q_{hij}Q^{hij} - \frac{1}{2}Q_{hij}Q^{ijh} - \frac{1}{4} q_{h}q^{h} + \frac{1}{2}q_{h}Q^{h}, \end{split} \end{equation} where \begin{equation} P^{h}{}_{ij} = - \frac{1}{4} Q^{h}{}_{ij} + \frac{1}{2} Q_{(ij)}{}^{h} + \frac{1}{4} q^{h} g_{ij} - \frac{1}{4} Q^{h} g_{ij} - \frac{1}{4} \delta^{h}_{(i} q_{j)} \end{equation} is the conjugate tensor of $Q_{hij}$, and \begin{equation} q_{h} = Q_{hi}{}^{i} \qquad Q_{h} = Q_{ih}{}^{i} \end{equation} are its two independent traces. Writing the Ricci scalar in the form, \begin{equation}\label{eq:ricci_scalar_Q} R = \tilde{R} + \tilde{\nabla}_{h}N_{k}{}^{kh}- \tilde{\nabla}_{k}N_{h}{}^{kh} + \mathcal{Q}, \end{equation} allows us to highlight how the Lagrangians of STG and GR differ by a total divergence (and a sign). In a metric-affine framework, the field equations of $f(\mathcal{Q})$ gravity are derived from the action \begin{equation}\label{eq:f(Q)_action} A = \int \hbox{d}^4x \left[-\frac{1}{2} \sqrt{-g} f(\mathcal{Q}) + \lambda_{a}{}^{bij}R^{a}{}_{bij} + \lambda_{a}{}^{ij}T_{ij}{}^{a} + \sqrt{-g}\mathcal{L}_{m}\right], \end{equation} where $\mathcal{L}_{m}$ is the matter Lagrangian, $\lambda_{a}{}^{bij}$ and $\lambda_{a}{}^{ij}$ are Lagrange multipliers introduced to impose the vanishing of curvature and torsion. Performing variations, we get \begin{equation} R^{h}{}_{kij} = 0, \qquad T_{ij}{}^{h} = 0, \end{equation} \begin{equation} \label{eq:metric_equation} \frac{2}{\sqrt{-g}}\nabla_{h} \left( \sqrt{-g} f' P^{h}{}_{ij} \right) + \frac{1}{2}g_{ij}f(\mathcal{Q}) + f' \left( P_{ihk}Q_{j}{}^{hk} - 2 Q^{hk}{}_{i}P_{hkj} \right) = \Psi_{ij}, \end{equation} and \begin{equation}\label{eq:connection_equation} \nabla_{i}\nabla_{j}\left(\sqrt{-g} f' P^{ij}{}_{h}\right) +\nabla_{i}\nabla_{j} \Phi^{ij}{}_{h}=0, \end{equation} with \begin{equation} \Psi_{ij}= -\frac{2}{\sqrt{-g}}\frac{\delta \left(\sqrt{-g} \mathcal{L}_{m}\right)}{\delta g^{ij}} \:\: \hbox{and} \:\: \Phi^{ij}{}_{h} = - \frac{1}{2}\frac{\delta \left( \sqrt{-g}\mathcal{L}_{m}\right)}{\delta {\Gamma_{ij}{}^{h}}}. \end{equation} From Eq. \eqref{eq:connection_equation} and the Levi-Civita divergence of Eq. \eqref{eq:metric_equation}, we derive the energy-momentum conservation law, \begin{equation}\label{eq:energy-momentum_conservation} \tilde{\nabla}_{i}\Psi^{i}{}_{h} + \frac{2}{\sqrt{-g}}\nabla_{i}\nabla_{j} \Phi^{ij}{}_{h} = 0. \end{equation} Since we consider matter independent of nonmetricity, $\Phi^{ij}{}_{h}$ is identically zero. In view of condition $R^{a}{}_{bij} = 0$, Eq. 
In view of the condition $R^{a}{}_{bij} = 0$, Eq. \eqref{eq:metric_equation} can be reformulated in a form more suitable for the $1+3$ formalism (see \ref{appendix:field_equations_derivation}): \begin{equation}\label{eq:final_field_equation} \tilde{R}_{ij} = \frac{1}{f'} \left( \Psi_{ij} - \frac{1}{2}g_{ij}\Psi \right) + \frac{1}{2}g_{ij}\left(\frac{f}{f'} - \mathcal{Q}\right) - 2 \frac{f''}{f'}\left( P^{h}{}_{ij}-\frac{1}{2}g_{ij}P^{hk}{}_{k} \right)\partial_{h}\mathcal{Q}. \end{equation} Substituting $f(\mathcal{Q}) = \mathcal{Q}$ into Eq. \eqref{eq:final_field_equation}, we recover the field equations of General Relativity, \begin{equation} \tilde{R}_{ij} = \Psi_{ij} - \frac{1}{2}g_{ij}\Psi. \end{equation} \subsection{Cosmological equations} At this point, making use of the following relations for $\tilde{R}_{ij}$, \begin{equation}\label{eq:ricci_u_u_levi_civita} \tilde{R}_{ij}u^{i}u^{j} = u^{j}\tilde{\nabla}_{h}\tilde{\nabla}_{j}u^{h} - \left( \tilde{\nabla}_{h}u^{h}\right)^{\cdot} = -\mathring{\tilde{\Theta}}-\frac{1}{3}\tilde{\Theta}^{2}-2\left(\tilde{\sigma}^{2}-\tilde{\omega}^{2}\right) + \tilde{D}_{h}\mathring{u}^{h} + \mathring{u}^{h}\mathring{u}_{h}, \end{equation} and \begin{equation}\label{eq:gauss_ricci_levi_civita} \, ^{3}\tilde{R}_{ij} = h_{j}{}^{q}h_{i}{}^{p} \tilde{R}_{pq} - \tilde{K}_{p}{}^{p}\tilde{K}_{ji} - h_{j}{}^{q}h_{i}{}^{p}u^{m}\tilde{\nabla}_{m}\tilde{K}_{qp} + \tilde{D}_{j}\mathring{{u}}_{i} + \mathring{{u}}_{i}\mathring{{u}}_{j}, \end{equation} we can derive the $1+3$ cosmological equations for a generic $f(\mathcal{Q})$ theory, namely: \begin{itemize} \item Raychaudhuri equation, obtained from Eq. \eqref{eq:ricci_u_u_levi_civita}, \begin{equation}\label{eq:raychaudhuri_equation} \begin{split} \mathring{\tilde{\Theta}} + \frac{1}{3}\tilde{\Theta}^{2} +& 2 \left(\tilde{\sigma}^{2} - \tilde{\omega}^{2}\right) -\tilde{D}_{i}\mathring{{u}}^{i} - \mathring{{u}}^{i}\mathring{{u}}_{i} + \frac{1}{2f'}\left( \rho + 3p \right) +\\ &- \frac{1}{2}\left(\frac{f}{f'} - \mathcal{Q}\right) - 2 \frac{f''}{f'}\left( P^{h}{}_{ij}u^{i}u^{j} + \frac{1}{2}P^{hk}{}_{k} \right)\partial_{h}\mathcal{Q} = 0; \end{split} \end{equation} \item Spatial equations, derived from Eqs. \eqref{eq:gauss_ricci_levi_civita} and \eqref{eq:raychaudhuri_equation}, given by the three-dimensional Ricci scalar, i.e. the Friedmann equation, \begin{equation}\label{eq:3Ricci_scalar_f(Q)} \begin{split} \, ^{3}\tilde{R} =& \frac{2}{f'}\rho + \frac{f}{f'} - \mathcal{Q} - \frac{2}{3} \tilde{\Theta}^{2} + 2\left(\tilde{\sigma}^{2} - \tilde{\omega}^{2}\right) +\\ &+ 2\frac{f''}{f'} \partial_{h}\mathcal{Q}\left( P^{hi}{}_{i} - h^{ij}P^{h}{}_{ij} - P^{h}{}_{ij} u^{i}u^{j} \right), \end{split} \end{equation} and the projected traceless three-dimensional Ricci tensor, \begin{equation}\label{eq:3Ricci_f(Q)} \begin{split} \bigg( h_{i}{}^{p}h_{j}{}^{q} - \frac{1}{3}h_{ij}h^{pq} \bigg) \, ^{3}\tilde{R}_{pq} =& \frac{1}{f'} \left[ \pi_{ij} - 2 f'' \partial_{h}\mathcal{Q}\left( h_{i}{}^{p}h_{j}{}^{q} - \frac{1}{3} h_{ij} h^{pq}\right)P^{h}{}_{pq}\right] +\\ &- \tilde{\Theta}\tilde{\sigma}_{ij} + \tilde{\Theta}\tilde{\omega}_{ij} + \tilde{D}_{\langle i}\mathring{{u}}_{j\rangle} - \tilde{D}_{[i}\mathring{{u}}_{j]} + \mathring{{u}}_{\langle i}\mathring{{u}}_{j\rangle } +\\ &- \mathring{\tilde{\sigma}}_{ij} + \mathring{\tilde{\omega}}_{ij}. \end{split} \end{equation} \end{itemize} As we will see in the following sections, Eqs. \eqref{eq:raychaudhuri_equation}-\eqref{eq:3Ricci_f(Q)}, together with the energy-momentum conservation law \eqref{eq:energy-momentum_conservation} and Eq.
\eqref{eq:non_metricity_decomposition}, form a closed system able to describe the evolution of Bianchi type-I universes. \section{Bianchi type-I model}\label{sec:bianchi_model} Bianchi type-I models describe anisotropic and homogeneous universes characterized by zero vorticity, $\omega_{ij} = 0$, and autoparallel world lines, $u^{i}\nabla_{i}u^{j} = 0$. In particular, these conditions and the identity $\omega_{ij}=\tilde\omega_{ij}$ imply that the congruence is hypersurface orthogonal. Moreover, in Bianchi type-I models the spatial hypersurfaces foliating the universe are assumed to be flat, i.e. $\, ^{3}R^{h}{}_{kij} = 0$. In addition, remembering that we have chosen $Q_0=0$, we can assume without loss of generality that the only non-zero projections of the nonmetricity tensor are $Q_1$ and $Q^{(0)}{}_{ij}$, so that the nonmetricity tensor takes the particular form \begin{equation}\label{eq:nonmetricity_decomposition_bianchi} Q_{kij} = - \frac{1}{3}Q_{1}u_{k}h_{ij} - Q^{(0)}{}_{ij}u_{k}. \end{equation} In local coordinates, expression \eqref{eq:nonmetricity_decomposition_bianchi}, and in particular the condition $Q_0=0$, can be justified by adopting the so-called coincidence gauge $\Gamma_{ij}{}^{h}=0$, which is a common assumption in $f(\mathcal{Q})$ gravity. Since the projections of the nonmetricity tensor are tensor quantities, once the identity \eqref{eq:nonmetricity_decomposition_bianchi} has been proved in the coincidence gauge, it remains valid in any other gauge. For more details on this point, the reader is referred to \ref{appendix:bianchi_coordinates}. In view of the field equation $R^{h}{}_{kij} = 0$, the assumption $\, ^{3}R^{h}{}_{kij} = 0$, and the Gauss relation, the following relations hold\footnote{It is worth stressing here that Eqs. \eqref{eq:Theta_bianchi} and \eqref{eq:sigma_bianchi} are valid because of the flatness of the spatial hypersurfaces.}: \begin{equation}\label{eq:Theta_bianchi} \Theta = \tilde{\Theta} - \frac{1}{2}Q_{1} = 0, \end{equation} \begin{equation}\label{eq:sigma_bianchi} \sigma_{ij} = \tilde{\sigma}_{ij} - \frac{1}{2}Q^{(0)}{}_{ij} = 0, \end{equation} \begin{equation}\label{eq:bianchi_nonmetricity} \mathcal{Q} = - \frac{1}{4}Q^{(0)}{}_{ij}Q^{(0)}{}^{ij} + \frac{1}{6}Q_{1}{}^{2} = - 2\tilde{\sigma}^{2} + \frac{2}{3}\tilde{\Theta}^{2}. \end{equation} Inserting Eq. \eqref{eq:nonmetricity_decomposition_bianchi} into \begin{equation} u^{i}\nabla_{i}u^{j}=u^{i}\tilde{\nabla}_{i}u^{j} + N_{ik}{}^{j}u^{k}u^{i}=0, \end{equation} we obtain \begin{equation} \mathring{u}^{j} = 0. \end{equation} In addition, Eq. \eqref{eq:gauss_relation_levi_civita} leads to \begin{equation} \, ^{3}\tilde{R}^{h}{}_{kij} =0. \end{equation} In the subsequent discussion, we consider a matter source described by the energy-momentum tensor, \begin{equation}\label{eq:energy_momentum_tensor_perfect_fluid} \Psi_{ij} = \rho u_{i}u_{j} + p h_{ij} + \pi_{ij}, \end{equation} where $p$ and $\rho$ satisfy the barotropic linear equation of state, \begin{equation} p = w\rho, \qquad w=\mathrm{const}. \end{equation}
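The first equality in Eq. \eqref{eq:bianchi_nonmetricity} follows by inserting the restricted form \eqref{eq:nonmetricity_decomposition_bianchi} into the expanded expression for $\mathcal{Q}$ in Eq. \eqref{eq:def_non_metricity_scalar}, and it is simple enough to verify numerically. The following \texttt{numpy} sketch is a hypothetical illustration (the metric, the observer, and the numerical values are arbitrary choices of the sketch):
\begin{verbatim}
# Check Eq. (bianchi_nonmetricity): build Q_kij from Eq.
# (nonmetricity_decomposition_bianchi) and compare the nonmetricity
# scalar with -Q0_ij Q0^ij / 4 + Q1^2 / 6.
import numpy as np

g = np.diag([-1.0, 1.0, 1.0, 1.0]); gi = np.linalg.inv(g)
uu = np.array([1.0, 0.0, 0.0, 0.0]); u = g @ uu
h = g + np.outer(u, u)                     # spatial projector h_ij

rng = np.random.default_rng(2)
S = rng.normal(size=(3, 3)); S = 0.5 * (S + S.T)
S -= np.trace(S) / 3 * np.eye(3)           # trace-free symmetric 3x3 block
Q0ij = np.zeros((4, 4)); Q0ij[1:, 1:] = S  # spatial, trace-free Q^(0)_ij
Q1 = 1.7                                   # an arbitrary value

Q = (-Q1 / 3 * np.einsum('k,ij->kij', u, h)
     - np.einsum('ij,k->kij', Q0ij, u))    # restricted nonmetricity tensor

Qup = np.einsum('ka,ib,jc,abc->kij', gi, gi, gi, Q)
q = np.einsum('kij,ij->k', Q, gi)          # q_k
Qt = np.einsum('ikj,ij->k', Q, gi)         # Q_k
scalar = (0.25 * np.einsum('kij,kij->', Q, Qup)
          - 0.5 * np.einsum('kij,ijk->', Q, Qup)
          - 0.25 * q @ gi @ q + 0.5 * q @ gi @ Qt)
print(scalar, -0.25 * np.sum(S * S) + Q1**2 / 6)   # the two must agree
\end{verbatim}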
Inserting all the above results into Eqs. \eqref{eq:raychaudhuri_equation}, \eqref{eq:3Ricci_scalar_f(Q)} and \eqref{eq:3Ricci_f(Q)}, we can write the $1+3$ cosmological equations for Bianchi type-I universes: \begin{itemize} \item Raychaudhuri equation, \begin{equation}\label{eq:raychaudhuri_equation_bianchi} \mathring{{\tilde{\Theta}}} + \frac{1}{3}\tilde{\Theta}^{2} + 2 \tilde{\sigma}^{2} + \frac{1}{2f'}\left( \rho + 3 p \right) - \frac{1}{2}\left(\frac{f}{f'} - \mathcal{Q} \right) + \frac{f''}{f'} \tilde{\Theta} \mathring{\mathcal{Q}} = 0; \end{equation} \item Spatial equations, \begin{equation}\label{eq:3R_Bianchi} 2 \tilde{\sigma}^{2} - \frac{2}{3}\tilde{\Theta}^{2} + \frac{2}{f'}\rho + \frac{f}{f'} - \mathcal{Q} = 0, \end{equation} \begin{equation}\label{eq:3Ricci_Bianchi} \mathring{\tilde{\sigma}} + \tilde{\Theta}\tilde{\sigma} + \frac{f''}{f'}\tilde{\sigma}\mathring{\mathcal{Q}} - \frac{1}{2f'}\frac{\pi_{ij}\tilde{\sigma}^{ij}}{\tilde{\sigma}}= 0; \end{equation} \item Energy-momentum conservation, \begin{equation}\label{eq:energy_momentum_conservation_bianchi} \mathring{\rho} + \tilde{\Theta} \left( \rho + p \right) + \pi^{ij}\tilde{\sigma}_{ij} = 0. \end{equation} \end{itemize} Equation \eqref{eq:3Ricci_Bianchi} is obtained by multiplying Eq. \eqref{eq:3Ricci_f(Q)} by $\tilde{\sigma}^{ij}/\left(2\tilde{\sigma}\right)$, whereas Eq. \eqref{eq:energy_momentum_conservation_bianchi} is derived from the temporal projection of Eq. \eqref{eq:energy-momentum_conservation}\footnote{We should remark here that the derivatives are taken with respect to the proper time, not the coordinate one. The two time parameterizations coincide only when $g_{00} = -1$.}. \section{Dynamical System}\label{sec:dynamical_systems} In this section, we will apply the dynamical systems approach (DSA) to analyze the dynamics of Bianchi type-I universes in the framework of $f(\mathcal{Q})$ gravity. We will deal with some specific models associated with particular functions $f(\mathcal{Q})$. In one of the examples, we will also consider the presence of anisotropic pressure. In our analysis we will always consider an expanding universe, hence $\tilde{\Theta}>0$. \subsection{\texorpdfstring{$f(\mathcal{Q})$}{} as a power law without anisotropic pressure}\label{sec:alphaQ} As a first example, we consider the function \begin{equation}\label{eq:f(Q)_example_1} f(\mathcal{Q})=\alpha\mathcal{Q}^{n}, \end{equation} with $\alpha$ a dimensional constant, and a vanishing anisotropic pressure, $\pi_{ij} = 0$. In this case, Eqs. \eqref{eq:raychaudhuri_equation_bianchi}-\eqref{eq:energy_momentum_conservation_bianchi} assume the form, \begin{equation}\label{eq:raychaudhuri_equation_bianchi_alphaQ} \mathring{\tilde{\Theta}} + \frac{1}{3}\tilde{\Theta}^{2} + 2 \tilde{\sigma}^{2} + \frac{n-1}{2n}\mathcal{Q} + \left(n-1\right)\tilde{\Theta}\frac{\mathring{\mathcal{Q}}}{\mathcal{Q}} +\frac{1}{2 \alpha n}\left(1 + 3w \right) \mathcal{Q}^{1-n} \rho = 0, \end{equation} \begin{equation}\label{eq:3R_Bianchi_alphaQ} 2\tilde{\sigma}^{2} - \frac{2}{3}\tilde{\Theta}^{2} + \frac{1-n}{n} \mathcal{Q} +\frac{2}{\alpha n}\mathcal{Q}^{1-n}\rho = 0, \end{equation} \begin{equation}\label{eq:3Ricci_Bianchi_alphaQ} \mathring{\tilde{\sigma}}+ \tilde{\Theta}\tilde{\sigma} + \left(n-1\right) \tilde{\sigma} \frac{\mathring{\mathcal{Q}}}{\mathcal{Q}} = 0, \end{equation} \begin{equation}\label{eq:energy_momentum_alphaQ} \mathring{\rho} + \tilde{\Theta} \left( 1 + w \right)\rho = 0.
\end{equation} In order to recast these equations in a form more suitable for a dynamical systems analysis, we define the following dimensionless variables\footnote{In the natural units we are using, the quantities $\tilde{\Theta}^{2}$, $\tilde{\sigma}^{2}$, $\rho$, and $\mathcal{Q}$ have the dimensions of an inverse length squared.}, \begin{equation}\label{eq:dynamical_variables_1} \Sigma^{2} = 3 \frac{\tilde{\sigma}^{2}}{\tilde{\Theta}^{2}}, \qquad \Omega^{2} = 3 \frac{1}{\alpha}\frac{1}{\tilde{\Theta}^{2n}}\rho. \end{equation} Notice that the dynamical variables related to the shear and the matter sources have been chosen to be non-negative, which offers the advantage of a partial compactification of the phase space. The choice of the matter variable also deserves some discussion. In general, one chooses as the variable associated with $\rho$ simply $3\rho/\tilde{\Theta}^{2}$ or $3\rho/(f' \tilde{\Theta}^{2})$, which directly relates to the cosmic matter parameters and is therefore easier to compare with observational results. In fact, these parameters enter, via Eq. \eqref{eq:3R_Bianchi}, many observable quantities, such as the luminosity distance relation, the lookback time, etc. However, here and in the following examples, except for Sec. \ref{sec:expQ}, we choose a different form for $\Omega$. The reason is that this form allows us to introduce fewer dynamical variables. In addition, our choice yields a variable that involves only the expansion rate and the energy density, which can be measured independently, making it an equally good variable for comparison with observations. We also define the ``average length scale'' $l$ through $\tilde{\Theta}$, \begin{equation} \frac{\mathring{l}}{l} = \frac{1}{3}\tilde{\Theta}, \end{equation} which allows us to introduce the logarithmic time variable, \begin{equation} \mathcal{T} = \hbox{ln} \: l. \end{equation} Making use of the above variables and recalling the identity \eqref{eq:bianchi_nonmetricity}, we can rewrite the system of cosmological equations in the new form, \begin{equation}\label{eq:dynamical_equations_1_2} (1 - 2n) \left(1 - \Sigma^{2}\right) + \left(\frac{3}{2}\right)^{n-1} \Omega^{2} \left(1 - \Sigma^{2} \right)^{1-n} = 0, \end{equation} \begin{equation}\label{eq:dynamical_equations_1_3} \frac{\hbox{d}\Sigma}{\hbox{d}\mathcal{T}} = \frac{1}{3n}\Sigma \left(\Sigma^{2} - 1\right) \left[3 (n+1) - 3^{n} (3 w+1) \left(2-2 \Sigma ^2\right)^{-n} \Omega^{2}\right]. \end{equation} \begin{table}[t] \centering \renewcommand{\arraystretch}{1.5} \caption{The stability of the fixed points and the evolution of $l$, $\tilde{\sigma}$, and $\rho$ for $f(\mathcal{Q})=\alpha\mathcal{Q}^{n}$ and $\pi_{ij} =0$.
The parameters $\tau_{0}$, $l_{0}$, $\sigma_{0}$, $\sigma_{1}$, $\rho_{0}$, and $\rho_{1}$ are constants of integration.} \begin{tabular}{ lccccccc } \toprule & \multicolumn{3}{c}{$w=0$} & & \multicolumn{3}{c}{$0 < w \leq 1$} \\ \cmidrule{2-4} \cmidrule{6-8} Point & Attractor & Repeller & Saddle & & Attractor & Repeller & Saddle \\ \midrule $P_{1}$ & $n\geq \frac{1}{2}$ & & & & $\frac{1}{2} \leq n < \frac{1 + w}{2w}$ & $n > \frac{1 + w}{2w}$ & \\ \toprule & \multicolumn{2}{c}{Average length} & \multicolumn{2}{c}{Shear} & \multicolumn{2}{c}{Energy density}\\ \midrule $P_{1}$ & \multicolumn{2}{c}{$l = l_{0} \left( \tau - \tau_{0} \right)^{\frac{2 n}{3 (1+w)}}$} & \multicolumn{2}{c}{$\tilde{\sigma} = \sigma_{0} = 0$} & \multicolumn{2}{c}{$\rho = \rho_{0} + \frac{\rho_{1}}{\left( \tau - \tau_{0} \right)^{2n}}$}\\ \bottomrule \end{tabular} \label{table_1} \end{table} From Eq. \eqref{eq:dynamical_equations_1_2} we derive $\Omega$ as a function of $\Sigma$, \begin{equation}\label{Omega} \Omega = \sqrt{2n-1}\left(\frac{3}{2}\right)^{\frac{1-n}{2}} \left(1 - \Sigma^{2}\right)^{\frac{n}{2}}, \end{equation} which makes Eq. \eqref{eq:dynamical_equations_1_3} a differential equation for $\Sigma$ alone, \begin{equation}\label{eq:dynamical_equations_1_4} \frac{\hbox{d}\Sigma}{\hbox{d}\mathcal{T}} = \frac{3}{2n} \Sigma \left(1 - \Sigma^{2}\right) \left[(2 n-1) w-1\right]. \end{equation} A first consideration about the above equations is that $\Sigma = 1$ is not an acceptable value, since Eq. \eqref{Omega} is derived from Eq. \eqref{eq:dynamical_equations_1_2} under the assumption $\Sigma\neq1$. The same problem will occur in Sec. \ref{sec:alphaQ_anisotropic_pressure}, in which the function $f(\mathcal{Q})$ is again \eqref{eq:f(Q)_example_1}. Furthermore, since $\Omega$ and $\Sigma$ are non-negative, Eq. \eqref{Omega} is only meaningful if $n \geq 1/2$ and $0 \leq \Sigma < 1$, or for particular values of $n$ (in either the interval $n \geq 1/2$ or $n \leq 1/2$) when $\Sigma > 1$. However, we consider only the condition $0 \leq \Sigma < 1$. This choice has two motivations. The first is that, from a physical point of view, we are interested in the states of the phase space describing an isotropic universe, i.e. $\Sigma=0$. This state cannot be reached by any orbit starting at $\Sigma > 1$. A second motivation is that Eq. \eqref{Omega} contains the term $\left(1 - \Sigma^{2}\right)^{\frac{n}{2}}$, whose value (and reality) depends critically on the choice of $n$ (e.g. even, odd, or rational) when $\Sigma > 1$. The case $n \geq 1/2$ and $0 \leq \Sigma \leq 1$, on the other hand, being a continuous interval for $n$, offers a wider setting for a parameter analysis aimed at comparison with observations. Similar constraints will also be necessary in the models we consider in the following sections. \begin{figure}[t] \centering \begin{subfigure}[ht]{0.49\textwidth} \includegraphics[width=0.9\linewidth]{figure1a.pdf} \caption{} \label{fig:Phase_space_alpha_Q_n_0_1} \end{subfigure} \hfill \begin{subfigure}[ht]{0.49\textwidth} \includegraphics[width=0.9\linewidth]{figure1b.pdf} \caption{} \label{fig:Phase_space_alpha_Q_n_0_2} \end{subfigure} \hfill \begin{subfigure}[ht]{0.49\textwidth} \includegraphics[width=0.9\linewidth]{figure1c.pdf} \caption{} \label{fig:Phase_space_alpha_Q_n_0_3_ver_3} \end{subfigure} \caption{Evolution of $\Sigma(l)$ and $\Omega(l)$ with: (a) $w=0$, $n=3$, and $\Sigma_{1}=-1/3$; (b) $w=1/3$, $n=3$, and $\Sigma_{1}=2/3$; (c) $w=1/3$, $n=3/2$, and $\Sigma_{1}=-1/4$. The empty (half-)circles represent the conditions $\Sigma \neq 1$ and $\Omega \neq 0$.} \label{fig:Phase_space_alpha_Q_n_0} \end{figure}
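The qualitative content of Eq. \eqref{eq:dynamical_equations_1_4} can also be explored with a few lines of code. The following \texttt{scipy} sketch is a hypothetical illustration (the initial datum and the integration time are arbitrary choices): the sign of $(2n-1)w-1$ decides whether the orbits approach or leave $\Sigma=0$.
\begin{verbatim}
# Integrate Eq. (dynamical_equations_1_4):
# dSigma/dT = (3/(2n)) Sigma (1 - Sigma^2) [(2n-1) w - 1].
import numpy as np
from scipy.integrate import solve_ivp

def rhs(T, y, n, w):
    S = y[0]
    return [1.5 / n * S * (1.0 - S**2) * ((2*n - 1)*w - 1.0)]

for n, w in [(3.0, 0.0), (3.0, 1/3), (1.5, 1/3)]:
    sol = solve_ivp(rhs, (0.0, 30.0), [0.9], args=(n, w), rtol=1e-8)
    print(f"n={n}, w={w:.2f}: Sigma(30) = {sol.y[0, -1]:.4f}")
# Expected: Sigma -> 0 (P1 attracts) for (n,w) = (3,0) and (3/2,1/3),
# where (2n-1)w < 1; Sigma -> 1 for (n,w) = (3,1/3), where (2n-1)w > 1,
# in agreement with Table 1 and Figure 1.
\end{verbatim}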
The system of Eqs. \eqref{Omega} and \eqref{eq:dynamical_equations_1_4} presents only one critical point, \begin{equation} P_{1} = \Bigg\lbrace \Sigma = 0,\: \Omega = \sqrt{2n-1}\left( \frac{3}{2} \right)^{\frac{1-n}{2}} \Bigg\rbrace, \end{equation} which represents a universe where the shear is negligible with respect to the matter. The derivative of Eq. \eqref{eq:dynamical_equations_1_4} with respect to $\Sigma$ allows us to discuss the stability of the solutions near the critical point, which depends on the values of $w$ and $n$. The results are shown in Table \ref{table_1}. We can obtain an ``approximation'' for the time dependence of $l$, $\tilde{\sigma}$, and $\rho$ near a critical point by substituting Eq. \eqref{eq:dynamical_variables_1} into Eqs. \eqref{eq:raychaudhuri_equation_bianchi_alphaQ}, \eqref{eq:3Ricci_Bianchi_alphaQ}, and \eqref{eq:energy_momentum_alphaQ}. The results are again reported in Table \ref{table_1}. Equation \eqref{eq:dynamical_equations_1_4} is easy to solve, so we can also obtain exact solutions for $\Sigma$ and $\Omega$ as functions of the average length scale $l$, \begin{equation}\label{eq:SolEs1} \begin{split} \Sigma(l) &= \frac{1}{\sqrt{1+e^{2 \Sigma_{1}} l^{\frac{3 (1 + w - 2nw)}{n}}}},\\ \Omega(l) &= \sqrt{2n-1}\left(\frac{3}{2}\right)^{\frac{1-n}{2}} \left(\frac{e^{2 \Sigma_{1}} l^{\frac{3 (1 + w - 2nw)}{n}}}{1+e^{2 \Sigma_{1}} l^{\frac{3 (1 + w - 2nw)}{n}}}\right)^{n/2}, \end{split} \end{equation} where $\Sigma_{1}$ is a constant of integration. Using Eq. \eqref{eq:SolEs1}, we can compare the evolution of $\Sigma$ and $\Omega$ with the results coming from the stability analysis in Table \ref{table_1}. As can be seen in Figure \ref{fig:Phase_space_alpha_Q_n_0}, once the appropriate parameters have been chosen, the results are consistent with Table \ref{table_1}. In Figure \ref{fig:Phase_space_alpha_Q_n_0_1}, where $w=0$ and $n \geq 1/2$, $P_{1}$ is an attractor, whereas in Figures \ref{fig:Phase_space_alpha_Q_n_0_2} and \ref{fig:Phase_space_alpha_Q_n_0_3_ver_3}, with $0 < w \leq 1$, $P_{1}$ is a repeller or an attractor, respectively, depending on the value of $n$. \begin{figure}[t] \centering \begin{subfigure}[ht]{0.49\textwidth} \includegraphics[width=0.9\linewidth]{figure2a.pdf} \caption{} \label{fig:comparison_1} \end{subfigure} \begin{subfigure}[ht]{0.49\textwidth} \includegraphics[width=0.9\linewidth]{figure2b.pdf} \caption{} \label{fig:comparison_2} \end{subfigure} \caption{(a) Evolution of the scale factors $a$, $b$, and $c$ as functions of the proper time $\tau$, with $w=0$ and $n=3$. (b) Evolution of $\Sigma$ and $\Omega$ as functions of the average length scale $l$, with $w=0$ and $n=3$. The empty (half-)circles represent the conditions $\Sigma \neq 1$ and $\Omega \neq 0$.} \label{fig:comparison} \end{figure} In \cite{Esposito:2021ect}, we used the reconstruction method to find exact Bianchi type-I cosmologies in $f(\mathcal{Q})$ gravity. It is interesting to compare these results with the more general description we have obtained from the above phase space analysis.
For example, in \cite{Esposito:2021ect} we found, for $f(\mathcal{Q})=\mathcal{Q}^{n}$, $w=0$, and $n$ an odd integer, the following solution for the scale factors, \begin{equation}\label{eq:scale_factors} \begin{split} a(\tau) =& a_{1} \left[(\tau-\tau_{0})^2 - K^{2} \right]^{\frac{n}{3}}\left[ \frac{\left( \tau - \tau_{0} \right) - K}{\left( \tau - \tau_{0} \right) + K} \right]^{\frac{n}{3}},\\ c(\tau) =& c_{1} \left[(\tau-\tau_{0})^2 - K^{2}\right]^{\frac{n}{3}} \left[\frac{\left( \tau - \tau_{0} \right) - K}{\left( \tau - \tau_{0} \right) + K} \right]^{-\frac{2n}{3}}, \end{split} \end{equation} represented in Figure \ref{fig:comparison_1}. Here $K$ is a parameter of the theory, while $a_{1}$, $c_{1}$, and $\tau_{0}$ are constants of integration. It is clear that the scale factors tend to have the same expansion rate as time increases, thus describing a universe which tends to isotropize. Such isotropization is evident in Figure \ref{fig:comparison_2}, which shows the behavior of $\Sigma$ and $\Omega$ calculated for the solution \eqref{eq:scale_factors}. As expected, this behavior matches exactly that of Figure \ref{fig:Phase_space_alpha_Q_n_0_1} once the parameters are chosen consistently. \subsection{\texorpdfstring{$f(\mathcal{Q})$}{} as a power law with anisotropic pressure}\label{sec:alphaQ_anisotropic_pressure} We consider again the function $f(\mathcal{Q}) = \alpha\mathcal{Q}^{n}$, but now we add an anisotropic pressure of the form, \begin{equation} \pi_{ij} = - \mu \tilde{\sigma}_{ij} , \end{equation} where $\mu$ is a suitable dimensional constant. Under these assumptions, the dynamical equations are: \begin{equation}\label{eq:raychaudhuri_equation_bianchi_alphaQ_pi} \mathring{\tilde{\Theta}} + \frac{1}{3}\tilde{\Theta}^{2} + 2 \tilde{\sigma}^{2} + \frac{n-1}{2n}\mathcal{Q} + \left(n-1\right)\tilde{\Theta}\frac{\mathring{\mathcal{Q}}}{\mathcal{Q}} + \frac{1}{2 \alpha n}\left(1 + 3w \right) \mathcal{Q}^{1-n} \rho = 0, \end{equation} \begin{equation}\label{eq:3R_Bianchi_alphaQ_pi} 2\tilde{\sigma}^{2} - \frac{2}{3}\tilde{\Theta}^{2} + \frac{1-n}{n} \mathcal{Q} +\frac{2}{\alpha n}\mathcal{Q}^{1-n}\rho = 0, \end{equation} \begin{equation}\label{eq:3Ricci_Bianchi_alphaQ_pi} \mathring{\tilde{\sigma}}+ \tilde{\Theta}\tilde{\sigma} + \left(n-1\right) \tilde{\sigma} \frac{\mathring{\mathcal{Q}}}{\mathcal{Q}} + \frac{\mu }{\alpha n} \mathcal{Q}^{1-n}\tilde{\sigma} = 0, \end{equation} \begin{equation} \mathring{\rho} + \tilde{\Theta} \left( 1 + w \right)\rho - 2\mu \tilde{\sigma}^{2} = 0. \end{equation} In this case, we have an additional variable related to the anisotropic pressure, \begin{equation} \mathcal{M} = \frac{\mu}{\alpha} \tilde{\Theta}^{1-2n}, \end{equation} together with \begin{equation}\label{eq:dynamical_variables_2_2} \Sigma^{2} = 3 \frac{\tilde{\sigma}^{2}}{\tilde{\Theta}^{2}}, \quad \Omega^{2} = 3 \frac{1}{\alpha}\frac{1}{\tilde{\Theta}^{2n}}\rho.
\end{equation} By following a procedure similar to that of the previous example, we obtain the final system of dynamical equations: \begin{eqnarray} \:\:\:\Omega &=& \sqrt{2n - 1} \left(\frac{3}{2}\right)^{\frac{1-n}{2}} \left(1 - \Sigma^{2}\right)^{\frac{n}{2}},\\ \frac{\hbox{d}\Sigma}{\hbox{d}\mathcal{T}} &=& -\frac{1}{2n}\left(\frac{3}{2}\right)^{n} \Sigma \left(1-\Sigma^{2}\right)^{1-n} \left[4 \mathcal{M} + 3^{1-n} \left(2-2 \Sigma^{2}\right)^n (1 + w - 2nw)\right],\label{eq:dynamical_equation_Sigma_prime_2}\\ \frac{\hbox{d}\mathcal{M}}{\hbox{d}\mathcal{T}} &=& \: \frac{3^{n}}{2n}\mathcal{M} \Big\{ 8\left(n-1\right) \mathcal{M} \Sigma ^{2} \left(2-2 \Sigma ^2\right)^{-n} +\nonumber\\ &&- 3^{1-n} (2 n-1) \left[\Sigma ^{2} \left(2 n w-w-1\right)-w-1\right]\Big\}.\label{eq:dynamical_equation_emme_prime_2} \end{eqnarray} As anticipated in Sec. \ref{sec:alphaQ}, in the above system of equations $\Sigma=1$ is not an acceptable value. In addition, the conditions for $\Omega$ and $\Sigma$ to be real and non-negative are $n \geq 1/2$ and $0 \leq \Sigma < 1$. \begin{table} \centering \renewcommand{\arraystretch}{1.5} \caption{The stability of the fixed points and the evolution of $l$, $\tilde{\sigma}$, and $\rho$ for $f(\mathcal{Q})=\alpha\mathcal{Q}^{n}$ and $\pi_{ij}=-\mu\tilde{\sigma}_{ij}$. The parameters $\tau_{0}$, $l_{0}$, $\sigma_{0}$, $\rho_{0}$, and $\rho_{1}$ are constants of integration.} \begin{tabular}{ lccccccc } \toprule & \multicolumn{3}{c}{$w=0$} & & \multicolumn{3}{c}{$0 < w \leq 1$} \\ \cmidrule{2-4} \cmidrule{6-8} Point & Attractor & Repeller & Saddle & & Attractor & Repeller & Saddle \\ \midrule $P_{1}$ & & & $n \geq \frac{1}{2}$ & & & $n>\frac{w+1}{2 w}$ & $\frac{1}{2} \leq n<\frac{w+1}{2 w}$ \\ \toprule & \multicolumn{2}{c}{Average length} & \multicolumn{2}{c}{Shear} & \multicolumn{2}{c}{Energy density}\\ \midrule $P_{1}$ & \multicolumn{2}{c}{$l = l_{0} \left( \tau - \tau_{0} \right)^{\frac{2 n}{3 (1+w)}}$} & \multicolumn{2}{c}{$\tilde{\sigma} = \sigma_{0} = 0$} & \multicolumn{2}{c}{$\rho = \rho_{0} + \frac{\rho_{1}}{\left( \tau - \tau_{0} \right)^{2n}}$}\\ \bottomrule \end{tabular} \label{table_2} \end{table} The invariant submanifolds of the system include $\Sigma = 0$ and $\mathcal{M}=0$. Notice that the invariant submanifold $\Sigma=0$ represents isotropic universes, whereas $\mathcal{M}=0$ implies that either the terms associated with the coupling $\mu$ are negligible (so that we recover the universes described in Sec. \ref{sec:alphaQ}) or the expansion rate diverges. Distinguishing these two cases is not immediate in this framework; only a more detailed analysis of the equations, or a different choice of variables, might shed light on this point. We will not attempt such an analysis here. In the parameter range we consider, there is one critical point, \begin{equation} P_{1} = \bigg\{ \Sigma = 0,\: \mathcal{M} = 0, \: \Omega = \sqrt{2 n-1}\left(\frac{3}{2}\right)^{\frac{1-n}{2}} \bigg\}, \end{equation} where matter dominates over the shear and the anisotropic pressure. The analysis of the stability and the approximate evolution of $l$, $\tilde{\sigma}$, and $\rho$ are summarized in Table \ref{table_2}. The phase space is described in Figures \ref{fig:example2_1}, \ref{fig:example2_2} and \ref{fig:example2_3}, for different values of $w$ and $n$.
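Orbits such as those of Figure \ref{fig:example2} can be reproduced, at least qualitatively, by direct numerical integration. The following \texttt{scipy} sketch is a hypothetical illustration that implements the right-hand sides of Eqs. \eqref{eq:dynamical_equation_Sigma_prime_2} and \eqref{eq:dynamical_equation_emme_prime_2} exactly as written; the initial data are arbitrary choices.
\begin{verbatim}
# Integrate the (Sigma, M) system for f(Q) = alpha Q^n with
# anisotropic pressure proportional to the shear.
import numpy as np
from scipy.integrate import solve_ivp

def rhs(T, y, n, w):
    S, M = y
    A = 1.0 - S**2                      # shorthand for (1 - Sigma^2)
    dS = -(1.0 / (2*n)) * 1.5**n * S * A**(1 - n) * (
        4*M + 3.0**(1 - n) * (2*A)**n * (1 + w - 2*n*w))
    dM = (3.0**n / (2*n)) * M * (
        8*(n - 1) * M * S**2 * (2*A)**(-n)
        - 3.0**(1 - n) * (2*n - 1) * (S**2*(2*n*w - w - 1) - w - 1))
    return [dS, dM]

# One orbit with M > 0, n = 3, w = 0 (cf. Figure 3a):
sol = solve_ivp(rhs, (0.0, 2.5), [0.8, 0.1], args=(3.0, 0.0), rtol=1e-9)
print(np.round(sol.y[:, -1], 3))
# Sigma decays while M grows: the orbit isotropizes and escapes along
# the M-axis, consistent with the saddle nature of P1 (Table 2).
\end{verbatim}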
To proceed with the analysis, we define \begin{equation} P_{2} := \bigg\{ \Sigma = 1,\: \mathcal{M} = 0, \: \Omega = 0 \bigg\}, \end{equation} which is {\it not} a critical point, but which will be useful in describing the orbits of the phase space. \begin{figure}[t] \centering \begin{subfigure}[ht]{0.45\textwidth} \includegraphics[width=0.9\linewidth]{figure3a.pdf} \caption{} \label{fig:example2_1} \end{subfigure} \begin{subfigure}[ht]{0.45\textwidth} \includegraphics[width=0.9\linewidth]{figure3b.pdf} \caption{} \label{fig:example2_2} \end{subfigure} \begin{subfigure}[ht]{0.45\textwidth} \includegraphics[width=0.9\linewidth]{figure3c.pdf} \caption{} \label{fig:example2_3} \end{subfigure} \caption{Phase space portrait of the system \eqref{eq:dynamical_equation_Sigma_prime_2}-\eqref{eq:dynamical_equation_emme_prime_2} with (a) $w=0$ and $n=3$, (b) $w=\frac{1}{3}$ and $n=3$, (c) $w=\frac{1}{3}$ and $n=3/2$.} \label{fig:example2} \end{figure} The phase space we obtained shows several types of cosmic evolution. For example, in Figure \ref{fig:example2_1}, close to $P_{2}$ and with $\mathcal{M}$ positive, we are in a universe where matter is negligible compared to the shear. As time progresses, the universe isotropizes with a decreasing expansion rate. In contrast, in the negative half-plane for $\mathcal{M}$, after a phase of isotropization and approach to $P_{1}$, the orbits return to their starting point, i.e. to an anisotropic state. A similar behavior is found in Figure \ref{fig:example2_3}. On the other hand, in Figure \ref{fig:example2_2} the orbits move away from an isotropic universe, represented by the points of the phase space near $P_{1}$. In the positive half-plane, the region near $P_{2}$ is a transition phase for the system leading to a decelerated isotropization, whereas in the negative half-plane, there are decelerated and accelerated expansion phases that lead the universe to anisotropy. As expected, the invariant submanifold $\mathcal{M}=0$ mirrors exactly the phase space of the case of Section \ref{sec:alphaQ}. \subsection{The case \texorpdfstring{ $f(\mathcal{Q})=\alpha\left(\sqrt{\mathcal{Q}} + \beta \mathcal{Q}^{n}\right)$}{}}\label{sec:sqrtQ} We now consider the following function, \begin{equation} f\left(\mathcal{Q}\right) = \alpha\left(\sqrt{\mathcal{Q}} + \beta \mathcal{Q}^{n}\right), \end{equation} where $\alpha$ and $\beta$ are dimensional constants, and we set the anisotropic pressure $\pi_{ij}$ equal to zero.
The resulting cosmological equations are, \begin{equation}\label{eq:raychaudhuri_equation_bianchi_alphasqrtQ} \begin{split} \mathring{\tilde{\Theta}} + \frac{1}{3}&\tilde{\Theta}^{2} + 2 \tilde{\sigma}^{2} + \frac{\mathcal{Q}}{2} - \frac{\mathcal{Q} + \beta \mathcal{Q}^{n+\frac{1}{2}}}{1 + 2 \beta n \mathcal{Q}^{n-\frac{1}{2}}} +\\ &- \frac{1}{2} \frac{\mathring{\mathcal{Q}}}{\mathcal{Q}}\frac{1 - 4 \beta (n-1) n \mathcal{Q}^{n-\frac{1}{2}}}{1 + 2 \beta n \mathcal{Q}^{n-\frac{1}{2}}}\tilde{\Theta} + \frac{ \sqrt{\mathcal{Q}} }{\alpha \left(1 + 2 \beta n \mathcal{Q}^{n-\frac{1}{2}}\right)}\left(1 +3 w\right)\rho = 0, \end{split} \end{equation} \begin{equation} 2 \tilde{\sigma}^{2} - \frac{2}{3}\tilde{\Theta}^{2} - \mathcal{Q} + 2\frac{\mathcal{Q} +\beta \mathcal{Q}^{n+\frac{1}{2}}}{1 + 2 \beta n \mathcal{Q}^{n-\frac{1}{2}}} + \frac{ 4 \sqrt{\mathcal{Q}} }{\alpha \left(1 + 2 \beta n \mathcal{Q}^{n-\frac{1}{2}}\right)}\rho = 0, \end{equation} \begin{equation}\label{eq:3Ricci_Bianchi_alphasqrtQ} \mathring{\tilde{\sigma}} + \tilde{\Theta} \tilde{\sigma} - \frac{1}{2} \frac{\mathring{\mathcal{Q}}}{\mathcal{Q}}\frac{1 - 4 \beta (n-1) n \mathcal{Q}^{n-\frac{1}{2}}}{1 + 2 \beta n \mathcal{Q}^{n-\frac{1}{2}}}\tilde{\sigma} = 0, \end{equation} \begin{equation} \mathring{\rho} + \tilde{\Theta} \left( 1 + w \right)\rho = 0. \end{equation} \begin{table} \centering \renewcommand{\arraystretch}{1.5} \caption{The stability of the fixed points and the evolution of $l$, $\tilde{\sigma}$, and $\rho$ for $f(\mathcal{Q})=\alpha\left(\sqrt{\mathcal{Q}} + \beta \mathcal{Q}^{n}\right)$ and $\pi_{ij}= 0$. The parameters $\tau_{0}$, $l_{0}$, $\sigma_{0}$, $\sigma_{1}$, and $\rho_{0}$ are constants of integration.} \begin{tabular}{ lcccccc } \toprule & \multicolumn{6}{c}{\texorpdfstring{$0 \leq w \leq 1$}{}} \\ \cmidrule{2-7} Point & \multicolumn{2}{c}{Attractor} & \multicolumn{2}{c}{Repeller} & \multicolumn{2}{c}{Saddle} \\ \midrule $P_{1}$ & \multicolumn{2}{c}{$n>\frac{1}{2}$} & \multicolumn{2}{c}{} & \multicolumn{2}{c}{} \\ $P_{2}$ & \multicolumn{2}{c}{} & \multicolumn{2}{c}{} & \multicolumn{2}{c}{$n>\frac{1}{2}$}\\ \toprule & \multicolumn{2}{c}{Average length} & \multicolumn{2}{c}{Shear} & \multicolumn{2}{c}{Energy density}\\ \midrule $P_{1}$ & \multicolumn{2}{c}{$l = l_{0} \left( \tau - \tau_{0} \right)^{\frac{2 n}{3 (1+w)}}$} & \multicolumn{2}{c}{$\tilde{\sigma} = \sigma_{0} = 0$} & \multicolumn{2}{c}{$\rho = \rho_{0} = 0$}\\ $P_{2}$ & \multicolumn{2}{c}{$l = l_{0} \left( \tau - \tau_{0} \right)^{\frac{2 n}{3 (1 + 2n + w)}}$} & \multicolumn{2}{c}{$\tilde{\sigma} = \sigma_{0} + \sigma_{1} \left( \tau - \tau_{0} \right)^{-1}$} & \multicolumn{2}{c}{$\rho = \rho_{0} = 0$}\\ \bottomrule \end{tabular} \label{table_3} \end{table} By defining the dynamical variables, \begin{equation}\label{eq:dynamical_variables_3} \Sigma^{2} = 3 \frac{\tilde{\sigma}^{2}}{\tilde{\Theta}^{2}}, \qquad \mathcal{B} = \beta \tilde{\Theta}^{2n-1}, \qquad \Omega^{2} = 3 \frac{1}{\alpha}\frac{1}{\tilde{\Theta}}\rho, \end{equation} the reduced system of dynamical equations is \begin{eqnarray} \: \: \Omega &=& \left(\frac{3}{2}\right)^{\frac{1-n}{2}}\sqrt{\left(2n - 1\right){\mathcal B}} \left(1 - \Sigma ^2\right)^{\frac{n}{2}} \\ \frac{\hbox{d}\Sigma}{\hbox{d}\mathcal{T}} &=& -\frac{3 \Sigma \left(1-\Sigma ^2\right) }{ 3^{n}\sqrt{2}+2^{n+1}\sqrt{3} n \mathcal{B} \left(1-\Sigma^{2}\right)^{n-\frac{1}{2}} }\left[ 3^{n}\sqrt{2} +\right.
\nonumber\\ &&\left.+ 2^{n}\sqrt{3} \mathcal{B} (1 + w -2 n w)\left(1-\Sigma ^{2}\right)^{n-\frac{1}{2}} \right],\label{eq:dynamical_equation_Sigma_prime_3}\\ \frac{\hbox{d}\mathcal{B}}{\hbox{d}\mathcal{T}} &=& \frac{3 \left(1 - 2n\right) \mathcal{B}}{ n \left[3^{n}\sqrt{2} + 2^{n+1}\sqrt{3} n \mathcal{B} \left(1-\Sigma^{2}\right)^{n-\frac{1}{2}}\right]}\Big\{ 3^{n} \sqrt{2} n \Sigma^{2} +\nonumber\\ &&+ \frac{3^{n}}{\sqrt{2}}\left(1+w\right) + 2^{n} \sqrt{3} n \mathcal{B} \left[1 + w + \left(1 + w - 2nw\right) \Sigma^{2}\right] \left(1-\Sigma ^2\right)^{n-\frac{1}{2}}\Big\}.\label{eq:dynamical_equation_B_prime_3} \end{eqnarray} We assume $\mathcal{B} \geq 0$, $n \geq 1/2$, and $0 \leq \Sigma \leq 1$, so that $\Omega$ is real and non-negative. The invariant submanifold $\Sigma = 0$ represents isotropic universes, whereas $\Sigma = 1$ represents anisotropic ones; $\mathcal{B} = 0$, similarly to the previous section, is a surface on which either the Lagrangian reduces to $f(\mathcal{Q}) = \alpha \sqrt{\mathcal{Q}}$ (when $\beta$ is negligible) or the expansion rate $\tilde{\Theta}$ vanishes. The critical points are \begin{eqnarray} P_{1} &=& \lbrace \Sigma = 0,\: \mathcal{B} = 0,\: \Omega = 0 \rbrace ,\\ P_{2} &=& \lbrace \Sigma = 1,\: \mathcal{B} = 0,\: \Omega = 0 \rbrace. \end{eqnarray} Both critical points have $\mathcal{B}$ and $\Omega$ equal to zero, and they are distinguished by the presence or absence of the shear $\Sigma$. The stability of the system and the approximate solutions are summarized in Table \ref{table_3}. A representation of the stability is given in Figure \ref{fig:example3_1}. We notice that all the orbits converge to $P_{1}$, which is a global attractor. Hence, in this theory the universe always becomes isotropic. \begin{figure}[t] \centering \includegraphics[width=0.4\textwidth]{figure4.pdf} \caption{Phase space portrait of the system \eqref{eq:dynamical_equation_Sigma_prime_3}-\eqref{eq:dynamical_equation_B_prime_3} for $w=0$ and $n=3$.} \label{fig:example3_1} \end{figure} \subsection{\texorpdfstring{$f(\mathcal{Q})$}{} as Lambert function}\label{sec:expQ} For this last example we consider the function, \begin{equation} f(\mathcal{Q}) = \mathcal{Q} \: e^{\alpha \mathcal{Q}}, \end{equation} where $\alpha$ is a dimensional constant, and the anisotropic pressure $\pi_{ij}$ is zero. The cosmological equations are, \begin{equation}\label{eq:raychaudhuri_equation_bianchi_expQ} \mathring{\tilde{\Theta}} + \frac{1}{3}\tilde{\Theta}^{2} + 2 \tilde{\sigma}^{2} + \frac{\alpha \mathcal{Q}^{2}}{2 \left(1 + \alpha \mathcal{Q} \right)} + \frac{\alpha (2 + \alpha \mathcal{Q})}{1 + \alpha \mathcal{Q}} \mathring{\mathcal{Q}} \tilde{\Theta} + \frac{ e^{-\alpha \mathcal{Q}}}{2\left(1 + \alpha \mathcal{Q}\right)}\left(1 + 3 w\right)\rho = 0, \end{equation} \begin{equation} 2 \tilde{\sigma}^{2} - \frac{2}{3}\tilde{\Theta}^{2} - \mathcal{Q} + \frac{\mathcal{Q}}{1 + \alpha \mathcal{Q}} + \frac{2 \: e^{-\alpha \mathcal{Q}}}{1 + \alpha \mathcal{Q}} \rho = 0, \end{equation} \begin{equation} \mathring{\tilde{\sigma}} + \tilde{\Theta} \tilde{\sigma} + \frac{\alpha \left(2 + \alpha \mathcal{Q}\right)}{1 + \alpha \mathcal{Q}} \mathring{\mathcal{Q}} \tilde{\sigma} = 0, \end{equation} \begin{equation}\label{eq:energy_momentum_conservation_bianchi_exp} \mathring{\rho} + \tilde{\Theta} \left( 1 + w \right)\rho = 0.
\end{equation} The introduction of the following dynamical variables, \begin{equation}\label{eq:dynamical_variables_4} \Sigma^{2} = 3 \frac{\tilde{\sigma}^{2}}{\tilde{\Theta}^{2}}, \qquad \mathcal{A} = \alpha \tilde{\Theta}^{2} , \qquad \Omega^{2} = 3 \frac{1}{\tilde{\Theta}^{2}}\rho, \end{equation} leads to the equation for $\Omega$, \begin{equation} \Omega = \sqrt{\left(1 - \Sigma^{2}\right)\left[1 + \frac{4}{3} \mathcal{A} \left(1-\Sigma^{2}\right)\right]} \: e^{\frac{1}{3} \mathcal{A} \left(1 - \Sigma^{2}\right)} \end{equation} and the system of two differential equations, \begin{eqnarray} \frac{\hbox{d}\Sigma}{\hbox{d}\mathcal{T}} &=& -\frac{3 \Sigma \left(1-\Sigma^{2}\right) \big\{ 3 - w \left[3 + 4 \mathcal{A} \left(1-\Sigma^{2}\right) \right]\big\}}{2 \left[3 + 2 \mathcal{A} \left(1-\Sigma^{2}\right)\right]},\label{eq:dynamical_equation_Sigma_prime_4}\\ \frac{\hbox{d}\mathcal{A}}{\hbox{d}\mathcal{T}} &=& 6 w \mathcal{A}\: \Sigma^{2} - \frac{9}{2} \left(1 + w\right)\mathcal{\mathcal{A}}\bigg\{\frac{2 \Sigma^{2}}{3 + 2 \mathcal{A} \left(1 - \Sigma^{2}\right)} +\nonumber\\ && + \frac{2 \left[3 + 4 \mathcal{A} \left(1-\Sigma^{2}\right) \right]}{9 + 2 \mathcal{A} \left(1-\Sigma^{2}\right) \left[15 + 4 \mathcal{A} \left(1 - \Sigma^{2}\right) \right]}\bigg\}.\label{eq:dynamical_equation_A_prime_4} \end{eqnarray} To guarantee that $\rho \geq 0$, we need $\Omega$ to be real and non-negative, which in turn implies the conditions, \begin{equation}\label{eq:condition_Omega_4_1} \mathcal{A} \leq -\frac{3}{4} \quad {\rm and} \quad \frac{1}{2} \sqrt{\frac{3 + 4 \mathcal{A}}{\mathcal{A}}}\leq \Sigma \leq 1 \end{equation} or \begin{equation}\label{eq:condition_Omega_4_2} \mathcal{A} > -\frac{3}{4} \quad {\rm and} \quad 0\leq \Sigma \leq 1. \end{equation} We identify the invariant submanifolds $\Sigma = 0$, $\Sigma = 1$, and $\mathcal{A} = 0$. The first two describe isotropic and anisotropic universes, respectively; $\mathcal{A} = 0$ is the surface where either the theory reduces to $f(\mathcal{Q})=\mathcal{Q}$ or the cosmology has $\tilde{\Theta}=0$. In the range given by Eqs. \eqref{eq:condition_Omega_4_1} and \eqref{eq:condition_Omega_4_2}, the critical points are, \begin{eqnarray} P_{1} &=& \lbrace \Sigma = 0,\: \mathcal{A} = 0,\: \Omega = 1 \rbrace, \\ P_{2} &=& \lbrace \Sigma = 1,\: \mathcal{A} = 0,\: \Omega = 0 \rbrace, \\ P_{3} &=& \bigg\{ \Sigma = 0,\: \mathcal{A} = - \frac{3}{4},\: \Omega = 0 \bigg\}. \end{eqnarray} Moreover, for $w=1$ and $\mathcal{A}=0$, the system of Eqs. \eqref{eq:dynamical_equation_Sigma_prime_4} and \eqref{eq:dynamical_equation_A_prime_4} admits the one-parameter family of solutions, \begin{equation} P_{4} = \Bigg\{ \Sigma = \Sigma^{*},\: \mathcal{A} = 0, \: \Omega = \sqrt{1 - \left( \Sigma^{*}\right)^{2}} \Bigg\}, \end{equation} where $\Sigma^{*}$ is an arbitrary constant. \begin{table}[t] \centering \renewcommand{\arraystretch}{1.5} \caption{The stability of the fixed points and the evolution of $l$, $\tilde{\sigma}$, and $\rho$ for $f(\mathcal{Q})=\mathcal{Q}e^{\alpha\mathcal{Q}}$ and $\pi_{ij}= 0$.
The parameters $\tau_{0}$, $l_{0}$, $\sigma_{0}$, $\rho_{0}$, and $\rho_{1}$ are constants of integration.} \begin{tabular}{ lcccccc } \toprule Point & \multicolumn{2}{c}{Attractor} & \multicolumn{2}{c}{Repeller} & \multicolumn{2}{c}{Saddle} \\ \midrule $P_{1}$ & \multicolumn{2}{c}{$0 \leq w < 1$} & \multicolumn{2}{c}{} & \multicolumn{2}{c}{} \\ $P_{2}$ & \multicolumn{2}{c}{$0 \leq w \leq 1$} & \multicolumn{2}{c}{} & \multicolumn{2}{c}{} \\ $P_{3}$ & \multicolumn{2}{c}{} & \multicolumn{2}{c}{} & \multicolumn{2}{c}{$0 \leq w < 1$} \\ $P_{4}$ & \multicolumn{2}{c}{$w=1$} & \multicolumn{2}{c}{} & \multicolumn{2}{c}{} \\ \toprule & \multicolumn{2}{c}{Average length} & \multicolumn{2}{c}{Shear} & \multicolumn{2}{c}{Energy density}\\ \midrule $P_{1}$ & \multicolumn{2}{c}{$l(\tau) = l_{0} \left( \tau - \tau_{0} \right)^{\frac{2}{3 (1+w)}}$} & \multicolumn{2}{c}{$\tilde{\sigma} = \sigma_{0} = 0$} & \multicolumn{2}{c}{$\rho (\tau) = \rho_{0} + \frac{\rho_{1}}{\left( \tau - \tau_{0} \right)^{2}}$}\\ $P_{2}$ & \multicolumn{2}{c}{$l(\tau) = l_{0} e^{\frac{\tau}{\tau_{0}}}$} & \multicolumn{2}{c}{$\tilde{\sigma} = \sigma_{0} = 0$} & \multicolumn{2}{c}{$\rho(\tau) = \rho_{0} = 0$}\\ $P_{3}$ & \multicolumn{2}{c}{$l(\tau) = l_{0} \sqrt[3]{3\left( \tau - \tau_{0}\right)}$} & \multicolumn{2}{c}{$\tilde{\sigma} (\tau) = \sigma_{0} + \frac{1}{\sqrt{3} (\tau - \tau_{0})}$} & \multicolumn{2}{c}{$\rho(\tau) = \rho_{0} = 0$}\\ $P_{4}$ & \multicolumn{2}{c}{$l(\tau) = l_{0} \sqrt[3]{3\left( \tau - \tau_{0}\right)}$} & \multicolumn{2}{c}{$\tilde{\sigma} (\tau) = \sigma_{0} + \frac{\Sigma^{*}}{\sqrt{3} (\tau - \tau_{0})}$} & \multicolumn{2}{c}{$\rho (\tau) = \rho_{0} + \frac{1 - \Sigma^{*}{}^{2}}{3 \left(\tau - \tau_{0}\right)^{2}}$}\\ \bottomrule \end{tabular} \label{table_4} \end{table} The results of the stability analysis near the critical points and the approximate solutions are outlined in Table \ref{table_4}. \begin{figure}[t] \centering \begin{subfigure}[ht]{0.49\textwidth} \includegraphics[width=0.9\linewidth]{figure5a.pdf} \caption{} \label{fig:Phase_space_alpha_expQ_1} \end{subfigure} \begin{subfigure}[ht]{0.49\textwidth} \includegraphics[width=0.9\linewidth]{figure5b.pdf} \caption{} \label{fig:Phase_space_alpha_expQ_2} \end{subfigure} \caption{Phase space portrait of the system \eqref{eq:dynamical_equation_Sigma_prime_4}-\eqref{eq:dynamical_equation_A_prime_4} for (a) $w=0$, and (b) $w=1$. Shaded areas are non-physical regions for the phase space.} \label{fig:Phase_space_alpha_expQ} \end{figure} The phase space of Eqs. \eqref{eq:dynamical_equation_Sigma_prime_4} and \eqref{eq:dynamical_equation_A_prime_4} is represented in Figure \ref{fig:Phase_space_alpha_expQ}. In Figure \ref{fig:Phase_space_alpha_expQ_1}, $P_{1}$ and $P_{3}$ are attractors, and $P_{2}$ is a saddle point. In Figure \ref{fig:Phase_space_alpha_expQ_2}, in addition to the point $P_{3}$, the whole line $\mathcal{A}=0$, i.e. the central heavy line in the figure, is an attractor. In both figures, the phase space is divided into three regions by two curves. The dash-dotted line indicates the curve \begin{equation} 3 + 4 \mathcal{A}\left(1-\Sigma^{2}\right)=0, \end{equation} which determines, through Eqs. \eqref{eq:condition_Omega_4_1} and \eqref{eq:condition_Omega_4_2}, the lower boundary of the region where $\Omega$ is real. Therefore, the phase space is not physical below this line, which corresponds to the shaded areas in the figures. Instead, the dashed curve represents one of the denominators of Eq.
\eqref{eq:dynamical_equation_A_prime_4}, \begin{equation} 9 + 2 \mathcal{A} \left(1-\Sigma^{2}\right) \left[15 + 4 \mathcal{A} \left(1 - \Sigma^{2}\right) \right]=0; \end{equation} the other denominator of Eq. \eqref{eq:dynamical_equation_A_prime_4} is irrelevant, as it lies below the dash-dotted curve. The presence of the sectors delimited by the dashed and dash-dotted curves is an essential difference from the other examples discussed above. In Sec. \ref{sec:alphaQ_anisotropic_pressure} we analyzed the different behaviors of the orbits according to the sign of the constants entering the dynamical variables. Here, however, for $\alpha<0$ there are different attractors, depending on whether an orbit is above or below the divergence line. Therefore, the final state of the cosmology depends crucially on the initial conditions. For example, in the case $w=1$, the orbits below the divergence line describe universes which tend toward isotropy, whereas orbits above it tend to a finite value of $\Sigma$, i.e. the universe approaches an anisotropic state.
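This sensitivity to initial conditions can be probed with a short numerical experiment. The following \texttt{scipy} sketch is a hypothetical illustration that uses the right-hand sides of Eqs. \eqref{eq:dynamical_equation_Sigma_prime_4} and \eqref{eq:dynamical_equation_A_prime_4} exactly as written; the two initial data, chosen on opposite sides of the divergence line, are arbitrary.
\begin{verbatim}
# Integrate the (Sigma, A) system for f(Q) = Q e^{alpha Q} with w = 1.
import numpy as np
from scipy.integrate import solve_ivp

def rhs(T, y, w):
    S, A = y
    x = A * (1.0 - S**2)
    dS = -3*S*(1 - S**2)*(3 - w*(3 + 4*x)) / (2*(3 + 2*x))
    dA = (6*w*A*S**2
          - 4.5*(1 + w)*A*(2*S**2/(3 + 2*x)
                           + 2*(3 + 4*x)/(9 + 2*x*(15 + 4*x))))
    return [dS, dA]

for y0 in ([0.5, -0.3], [0.5, -0.7]):   # above / below the dashed curve
    sol = solve_ivp(rhs, (0.0, 15.0), y0, args=(1.0,), rtol=1e-9)
    print(y0, '->', np.round(sol.y[:, -1], 3))
# Expected, following the discussion above: the first orbit freezes on
# the A = 0 line at a finite Sigma (anisotropy survives), while the
# second approaches an isotropic state with Sigma -> 0.
\end{verbatim}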
\section{Discussion and conclusions}\label{sec:conclusions} We investigated the dynamics of Bianchi type-I cosmologies within the framework of $f(\mathcal{Q})$ gravity using a combination of the $1+3$ covariant formalism and the Dynamical Systems Approach. The $1+3$ formalism allowed us to obtain a very clear and detailed description of the geometric and dynamical properties of $f(\mathcal{Q})$ cosmologies. In particular, we were able to characterize the effect of nonmetricity on the autoparallel motion of the observers and to obtain cosmological equations which are independent of any specific coordinate system. In addition, the $1+3$ decomposition made it possible to single out the different contributions of the nonmetricity tensor $Q_{kij}$, making more explicit the effect of nonmetricity on the kinematic quantities. We proved that in the Bianchi type-I metric the decomposition of the tensor $Q_{kij}$ involves only the scalar and traceless symmetric tensor parts, which affect the expansion rate $\Theta$ and the shear $\sigma$. One of the main difficulties in applying the $1+3$ formalism to nonmetric theories of gravity is the normalization of the vector field tangent to the given timelike congruence. However, in the case of Bianchi type-I cosmologies, this problem can be overcome, thus obtaining complete equivalence between the affine parameter of the world lines and the proper time of the observers associated with the congruence. This aspect is crucial, as it allows the introduction of an unambiguous cosmic time and hence the definition of a cosmic history. After writing the cosmological equations in the $1+3$ framework, we separated the contributions of the Levi-Civita connection from the nonmetricity terms, in order to better understand the differences between GR and $f(\mathcal{Q})$ gravity. As happens in many other extensions of GR, we were able to describe in a complete way the additional terms that nonmetricity induces in the gravitational field equations as contributions of an effective energy-momentum tensor. This formulation allowed an immediate application of the DSA. Although semi-quantitative, a phase space analysis of the $f(\mathcal{Q})$ cosmological models allows us to derive several interesting and general features. We considered here four applications, involving different functions $f(\mathcal{Q})$ and different thermodynamical properties of the sources. In the first application the function $f(\mathcal{Q})$ was a power law (Sec. \ref{sec:alphaQ}). We obtained a one-dimensional dynamical system which was solvable analytically. We compared the results with those of the paper \cite{Esposito:2021ect}, finding a perfect match when the universe, filled with dust, is initially anisotropic and then isotropizes. This is not surprising, as the phase space contains all cosmological solutions, and thus it must also include the one we reconstructed in \cite{Esposito:2021ect}. We also analyzed a cosmology with the same power-law action, but in the presence of an anisotropic pressure, which we assumed proportional to the shear (Sec. \ref{sec:alphaQ_anisotropic_pressure}). In this scenario, an isotropic universe is seen to have a transition phase associated with a saddle point, from which the orbits either move definitively away or return to the anisotropic state from which they started. This behavior suggests a universe with a ``cyclic'' evolution, in which, after a phase of isotropy, anisotropies start to grow again. In \cite{Esposito:2021ect} we found that the reconstructed forms of $f(\mathcal{Q})$ always contain a $\sqrt{\mathcal{Q}}$ term, which plays a role similar to that of an integration constant. As another application (Sec. \ref{sec:sqrtQ}), we investigated the effect of this term when it is added to the functions $f(\mathcal{Q})$ used in the previous two examples. Our analysis showed that the main effect of this additional term is, as expected, to constrain the sign of the nonmetricity scalar $\mathcal{Q}$, which in turn excludes some possible cosmic histories (the ones with $\Sigma>1$). In the cases we considered, the additional term forces all cosmologies to become isotropic in the future. As a final example, we evaluated the effects of a gravitational action consisting of an infinite series of power-law terms. Such effects can be evaluated by considering the function $f(\mathcal{Q})=\mathcal{Q}\,e^{\alpha\mathcal{Q}}$, related to the Lambert function (Sec. \ref{sec:expQ}). In this case, the phase space differs considerably from those of the previous examples. The most important difference turned out to be the appearance of separate regions of the phase space. The presence of these regions shows that the cosmology will have different behaviors and different final attractors depending on the initial conditions. In all the examples we considered, some areas of the phase space needed to be excluded. We saw that these forbidden regions can appear for different reasons. For instance, in Secs. \ref{sec:alphaQ} and \ref{sec:alphaQ_anisotropic_pressure}, the chosen dynamical variables and the requirement of physically meaningful thermodynamical quantities for the matter implied the exclusion of the line $\Sigma = 1$. In other cases, like the ones given in Secs. \ref{sec:sqrtQ} and \ref{sec:expQ}, the limitations were related to the nature of the function $f(\mathcal{Q})$. For example, the condition $\Sigma \neq 1$ is connected to the fact that the function $f(\mathcal{Q})$ might take, along an orbit, values that dramatically change the structure of the gravitational field equations, giving rise to singularities or degeneracies. We conclude by remarking that the DSA, especially combined with the $1+3$ covariant approach, has shown yet again great potential in clarifying the physics of cosmological models. In particular, $f(\mathcal{Q})$ cosmology exhibits a behavior of the anisotropy which is much richer than that of GR, and this constitutes an important element in the search for experimental constraints on these models.
Moreover, a deeper understanding of the differences between $f(\mathcal{Q})$ gravity and other extensions or modifications of GR will certainly be a challenge for future investigations.
\section{Introduction}\label{sec:intro} \IEEEPARstart{T}{he} restoration of images under the adverse impacts of weather conditions such as heavy rain or snow is of wide interest to computer vision research. At the extreme, observed images to be restored may contain severe weather-related obstructions of the true background (e.g., snow flakes, dense hazing effects), causing a well-known ill-posed inverse problem where various solutions can be obtained for the unknown ground truth background. Deep neural networks (DNNs) are shown to excel at such image restoration tasks compared to traditional approaches~\cite{cai2016dehazenet,Fu:2017DDN,liu2018desnownet}, and this success extends with the current progress in DNN architectural designs, e.g., with vision transformers~\cite{liang2021swinir,Zamir2022Restormer}. State-of-the-art designs have recently shown their effectiveness in low-level weather restoration problems with transformers~\cite{xiao2022image,Valanarasu:2022CVPR} and multi-layer perceptron based models~\cite{Tu:2022}. Beyond task-specialized solutions, recent work also proposed to tackle this problem for multiple weather corruptions in unified architectures~\cite{Li:2020CVPR,chen2022learning,li2022all,Valanarasu:2022CVPR}. Earlier deep learning based solutions to adverse weather restoration have extensively explored task-specific generative modeling methods, mainly with generative adversarial networks (GANs) \cite{qian2018attentive,Zhang:2019IDCGAN,li2019heavy}. In this setting, generative models aim to learn the underlying data distribution for cleared image backgrounds, given weather-degraded examples from a training set. Due to their stronger expressiveness in that sense, generative approaches further offer the potential for better generalization to multi-task vision restoration problems. Along this line, we introduce a novel solution to this problem by using a state-of-the-art conditional generative modeling approach, with denoising diffusion probabilistic models~\cite{Sohl:2015,Ho:2020}. Denoising diffusion models have recently demonstrated remarkable success in various generative modeling tasks~\cite{Dhariwal:2021,Rombach:2021,ho2022cascaded,saharia2022photorealistic}. These architectures had, however, not yet been considered for image restoration under adverse weather conditions, nor demonstrated to generalize across multiple image restoration problems. A major obstacle to their usage in image restoration is their architectural constraint that prohibits size-agnostic image restoration, whereas image restoration benchmarks and real-world problems consist of images with various sizes. We present a novel perspective on the problem of improving vision in adverse weather conditions using denoising diffusion models. Particularly for image restoration, we introduce a novel patch-based diffusive restoration approach to enable size-agnostic processing. Our method uses a guided denoising process for diffusion models by steering the sampling process based on smoothed noise estimates for overlapping patches. The proposed patch-based image processing scheme further introduces a light-weight diffusion modeling approach, and extends the practicality of state-of-the-art diffusion models, which otherwise have extensive computational resource demands. We experimentally evaluate on extreme weather degradation benchmarks for snow removal, combined rain and haze removal, and removal of raindrops obstructing the camera sensor. We demonstrate our diffusion modeling perspective to excel at several associated problems.
Our contributions are summarized as follows: \begin{itemize} \item We present a novel patch-based diffusive image restoration algorithm for arbitrary-sized image processing with denoising diffusion models. \item We empirically demonstrate our approach to achieve state-of-the-art performance on both weather-specific and multi-weather restoration tasks. \item We qualitatively present strong generalization from synthetic to real-world multi-weather restoration with our generative modeling perspective. \end{itemize} \section{Related Work} \label{sec:background} \subsection{Diffusion-based Generative Models} Diffusion based \cite{Sohl:2015} and score-matching based \cite{Hyvarinen:2005,Vincent:2011} generative models recently regained interest with improvements adopted in \textit{denoising diffusion probabilistic models} \cite{Ho:2020,Nichol:2021} and \textit{noise-conditional score networks} \cite{Song:2019,Song:2020}, reaching exceptional image synthesis capabilities \cite{Dhariwal:2021}. Both approaches relate to a class of generative models that are based on learning to reverse the process of sequentially corrupting data samples with increasing additive noise, until the perturbed distribution matches a standard normal prior. This is achieved either by optimizing a time-conditional additive noise estimator~\cite{Ho:2020} or a noise-conditional score function (i.e., the gradient of the log-likelihood)~\cite{Song:2019} parameterized by a DNN. These models are then used for step-wise denoising of samples from a noise distribution, to obtain samples from the data distribution via Langevin dynamics~\cite{Welling:2011}. Denoising diffusion models were shown to also implicitly learn these score functions at each noise scale, and both methods were later reframed in a unified continuous-time formulation based on stochastic differential equations~\cite{Song:2021}. Another closely related perspective links \textit{energy-based models} to this class of generative methods~\cite{DuMordatch:2019,Song:2021HowTo}. Energy-based models estimate an unnormalized probability density defined via the Boltzmann distribution, by optimizing a DNN that represents the energy function. At test time one can similarly perform Langevin sampling starting from pure noise towards the learned distribution, this time using the gradient of the energy function. Notably, energy-based models differ in their training approach, which relies on contrastive divergence methods \cite{hinton2002training,tieleman2009using}, whereas diffusion- and score-based models exploit the sequential forward noising (diffusion) scheme to cover a smoother density across isolated modes of the training data distribution. Recently, diffusion-based conditional generative models have shown state-of-the-art performance in various tasks such as class-conditional data synthesis with classifier guidance~\cite{Dhariwal:2021}, image super-resolution~\cite{Saharia:2021,ho2022cascaded}, image deblurring~\cite{whang2022deblurring}, text-based image synthesis and editing~\cite{Rombach:2021,saharia2022photorealistic}, and general image-to-image translation tasks (e.g., inpainting, colorization)~\cite{Saharia:2021Palette,Choi:2021,Lugmayr:2022}. Similar conditional generative modeling applications also exist from a score-based modeling perspective~\cite{Meng:2022,chung2022come}.
Notably, Kawar et al.~\cite{Kawar:2022} recently proposed \textit{denoising diffusion restoration models} for general linear inverse image restoration problems, which exploits pre-trained denoising diffusion models for unsupervised posterior sampling. In contrast to our model, this approach does not perform conditional generative modeling and does not consider size-agnostic image restoration. More generally, diffusion models had so far not been considered for image restoration under adverse weather conditions. \subsection{Image Restoration in Adverse Weather Conditions} \label{sec:bg_restoration} The inverse problem of restoring single images by estimating the background scene under weather-related foreground degradations is ill-posed. In this scenario the observed image only contains a mixture of pixel intensities from the weather distortion (e.g., rain streaks) and the background, which can even be fully occluded. Traditional model-based restoration methods explored various weather distortion characteristic priors to address this problem~\cite{Yang:2020TPAMI}. \textbf{Image Deraining \& Dehazing:} The earliest deep learning breakthroughs extensively studied the problem of image deraining with convolutional neural networks (CNNs), see e.g.~the deep detail network~\cite{Fu:2017DDN,Fu:2017TIP}, and the joint rain detection and removal (JORDER) method~\cite{Yang:2017JORDER}. Following works explored novel mechanisms such as recurrent context aggregation proposed in RESCAN~\cite{li2018recurrent}, or spatial attention maps in SPANet~\cite{Wang:2019SPANet}. Concurrently popularized GAN-based image-to-image translation models (e.g., pix2pix~\cite{isola2017image}, CycleGAN~\cite{zhu2017unpaired}, perceptual adversarial networks~\cite{Wang:2018PAN}) were found successful in modeling underlying image background structures when simply applied to these problems. This subsequently led to dedicated generative models tailored for weather restoration tasks, such as image deraining conditional GANs~\cite{Zhang:2019IDCGAN}, or conditional variational image deraining~\cite{Du:2020CVID} based on VAEs. There has been an independent line of work focusing solely on image dehazing \cite{cai2016dehazenet,liu2019griddehazenet,zhao2021refinednet}, where similar GAN-based generative solutions were also adopted~\cite{yang2018towards}. Recently, more challenging natural extensions to this problem were explored, such as heavy rain removal combined with dehazing tasks in a realistic setting by Li et al.~\cite{li2019heavy} via the heavy rain GAN (HRGAN). Novel solutions introduced hierarchical multi-scale feature extraction and fusion~\cite{Jiang:2020CVPR}, as well as its extension, progressive coupled networks (PCNet)~\cite{Jiang:2021TIP}, which were shown to outperform several methods on combined deraining and dehazing tasks. Most recently Zamir et al.~\cite{zamir2021multi} proposed multi-stage progressive image restoration networks with supervised attention modules (MPRNet), which was shown to excel across several general image restoration tasks. \textbf{Removing Raindrops:} Beyond removal of rain streaks, another natural extension considers removing raindrops that introduce artifacts on the camera sensor. Originally Qian et al.~\cite{qian2018attentive} presented a dataset on this phenomenon, and proposed an Attentive GAN for raindrop removal.
Concurrently, Quan et al.~\cite{quan2019deep} proposed an image-to-image CNN with an attention mechanism (RaindropAttn) for the same problem, and Liu et al.~\cite{liu2019dual} demonstrated the effectiveness of dual residual networks (DuRN), a general purpose image restoration model, on this particular task. Subsequent work focused on restoring multiple degradation effects, such as the simultaneous removal of raindrops and rain streaks~\cite{Quan:2021CVPR}. Most recently, Xiao et al. proposed an image deraining transformer (IDT)~\cite{xiao2022image} with state-of-the-art results on generating rain-free images for rain streak removal tasks at various severities, and for raindrop removal.

\textbf{Image Desnowing:} One of the earliest deep learning methods for removing snow artifacts from images was DesnowNet~\cite{liu2018desnownet}, a CNN-based architecture. Several existing image deraining solutions were later also shown to perform relatively well on this task (e.g., SPANet~\cite{Wang:2019SPANet}, RESCAN~\cite{li2018recurrent}). Later, Chen et al.~\cite{chen2020jstasr} proposed JSTASR, which is specifically designed for size- and transparency-aware snow removal in a unified framework. Most recently, Zhang et al.~\cite{zhang2021deep} proposed a deep dense multi-scale network (DDMSNet) which exploits simultaneous semantic image segmentation and depth estimation to improve image desnowing performance, being one of the most effective solutions presented so far.

\textbf{Multi-Weather Restoration:} There have been recent attempts at unifying multiple restoration tasks within single deep learning frameworks, including generative modeling solutions to restore superimposed noise types~\cite{feng2021deep}, restoring test-time unknown mixtures of noise or weather corruptions~\cite{li2022all}, or specifically adverse multi-weather image degradations~\cite{Li:2020CVPR,chen2022learning,Valanarasu:2022CVPR}. Seminal work by Li et al.~\cite{Li:2020CVPR} in this context proposed the All-in-One unified weather restoration method, which utilizes a multi-encoder and decoder architecture and neural architecture search across task-specific optimized encoders. Most recently, Valanarasu et al.~\cite{Valanarasu:2022CVPR} proposed an alternative state-of-the-art solution to this problem with TransWeather, an end-to-end vision transformer based multi-weather image restoration model. Notably, and of particular interest to us, these two studies~\cite{Li:2020CVPR,Valanarasu:2022CVPR} use the same combination of weather degradation benchmark datasets~\cite{liu2018desnownet,li2019heavy,qian2018attentive}, hence establishing an accumulated line of comparable progress for this research problem.

\section{Adverse Weather Image Restoration with Patch-Based Denoising Diffusion Models} \label{sec:methods} \subsection{Denoising Diffusion Probabilistic Models} Denoising diffusion models~\cite{Sohl:2015,Ho:2020} are a class of generative models that learn a Markov chain which gradually converts a Gaussian noise distribution into the data distribution that the model is trained on.
The \textit{diffusion process} (i.e., \textit{forward process}) is a fixed Markov chain that sequentially corrupts the data $\x_0\sim q(\x_0)$ over $T$ diffusion time steps, by injecting Gaussian noise according to a variance schedule $\beta_1,\ldots,\beta_T$: \begin{equation} q(\x_{t}\vert\x_{t-1}) = \mathcal{N}(\x_{t};\sqrt{1-\beta_t}\x_{t-1},\beta_t \I), \label{eq:forward} \end{equation} \begin{equation} q(\x_{1:T}\vert\x_0) = \prod_{t=1}^T q(\x_{t}\vert\x_{t-1}). \end{equation} Diffusion models learn to reverse this predefined forward process in~\eqref{eq:forward} utilizing the same functional form. The \textit{reverse process} defined by the joint distribution $p_{\theta}(\x_{0:T})$ is a Markov chain with learned Gaussian denoising transitions starting at a standard normal prior $p(\x_T)=\mathcal{N}(\x_T;\mathbf{0},\I)$: \begin{equation} p_{\theta}(\x_{0:T}) = p(\x_{T}) \prod_{t=1}^T p_{\theta}(\x_{t-1}\vert\x_t), \end{equation} \begin{equation} p_{\theta}(\x_{t-1}\vert\x_t) = \mathcal{N}(\x_{t-1};\bm{\mu}_{\theta}(\x_t,t),\mathbf{\Sigma}_{\theta}(\x_t,t)). \label{eq:reverse} \end{equation} Here the reverse process is parameterized by a neural network that estimates $\bm{\mu}_{\theta}(\x_t,t)$ and $\mathbf{\Sigma}_{\theta}(\x_t,t)$. The \textit{forward process} variance schedule $\beta_t$ can be learned jointly with the model or kept constant~\cite{Ho:2020}, ensuring that $\x_T$ approximately follows a standard normal distribution. The model is trained by optimizing a variational bound on the negative data log-likelihood $\mathbb{E}_{q(\x_0)}[-\log p_{\theta}(\x_0)]\leq L_{\theta}$, which can be expanded into~\cite{Ho:2020,Dhariwal:2021}: \begin{equation} \begin{split} L_{\theta} = \EX_{q} \Big[ & \underbrace{D_{\text{KL}}(q(\x_T|\x_0)\,||\,p(\x_T))}_{L_{T}} \underbrace{-\log p_{\theta}(\x_0|\x_1)}_{L_0} \\ & + \sum_{t>1}\underbrace{D_{\text{KL}}(q(\x_{t-1}|\x_t,\x_0)\,||\,p_{\theta}(\x_{t-1}|\x_t))}_{L_{t-1}} \Big]. \label{eq:obj_expanded} \end{split} \end{equation} This loss was shown to be efficiently optimized via stochastic gradient descent over randomly sampled $L_{t-1}$ terms~\cite{Ho:2020}, taking into consideration that we can marginalize the Gaussian diffusion process to sample intermediate $\x_t$ directly from the clean data $\x_0$ through: \begin{equation} q(\x_t\vert\x_0)=\mathcal{N}(\x_t;\sqrt{\bar{\alpha}_t}\x_0,(1-\bar{\alpha}_t)\I), \end{equation} which can also be expressed in closed form: \begin{equation} \x_t=\sqrt{\bar{\alpha}_t}\x_0+\sqrt{1-\bar{\alpha}_t}\bm{\epsilon}_t, \label{eq:sampled_xt} \end{equation} where $\alpha_t=1-\beta_t$, $\bar{\alpha}_t=\prod_{i=1}^t\alpha_i$, and $\bm{\epsilon}_t\sim\N(\textbf{0},\I)$ has the same dimensionality as the data $\x_0$ and the latent variables $\x_t$. Here the $L_{t-1}$ terms in \eqref{eq:obj_expanded} compare the KL divergence between two Gaussians, $p_{\theta}(\x_{t-1}|\x_t)$ from \eqref{eq:reverse} and $q(\x_{t-1}|\x_t,\x_0)$.
The latter is the true, unknown generative process posterior conditioned on $\x_0$, given by: \begin{equation} q(\x_{t-1}|\x_t,\x_0)=\N(\x_{t-1};\bm{\Tilde{\mu}}_t(\x_t,\x_0),\Tilde{\beta}_t\I), \label{eq:true_cond_on_x0} \end{equation} where the distribution parameters can be written as: \begin{equation} \bm{\Tilde{\mu}}_t=\frac{1}{\sqrt{\alpha_t}} \left(\x_t - \frac{\beta_t}{\sqrt{1-\bar{\alpha}_t}}\bm{\epsilon}_t\right),\;\; \Tilde{\beta}_t=\frac{(1-\bar{\alpha}_{t-1})}{(1-\bar{\alpha}_t)}\beta_t, \end{equation} by incorporating the property~\eqref{eq:sampled_xt} into $\bm{\Tilde{\mu}}_t(\x_t,\x_0)$~\cite{Ho:2020}. One can either consider fixed reverse process variances $\mathbf{\Sigma}_{\theta}(\x_t,t)=\mathbf{\sigma}_t^2\I$ (e.g., $\mathbf{\sigma}_t^2=\Tilde{\beta}_t$) for a simple training objective~\cite{Ho:2020}, or optimize $\mathbf{\Sigma}_{\theta}(\x_t,t)$ with a hybrid learning objective~\cite{Nichol:2021}. The overall training objective for the former, when $p_{\theta}(\x_{t-1}\vert\x_t) = \mathcal{N}(\x_{t-1};\bm{\mu}_{\theta}(\x_t,t),\mathbf{\sigma}_t^2\I)$, corresponds to training a network $\bm{\mu}_{\theta}(\x_t,t)$ that predicts $\bm{\Tilde{\mu}}_t$. Using an alternative reparameterization of the reverse process by: \begin{equation} \bm{\mu}_{\theta}(\x_t,t) = \frac{1}{\sqrt{\alpha_t}} \left(\x_t - \frac{\beta_t}{\sqrt{1-\bar{\alpha}_t}}\bm{\epsilon}_{\theta}(\x_t,t)\right), \label{eq:reparameterization} \end{equation} the model can instead be trained to predict the noise vector $\bm{\epsilon}_{\theta}(\x_t,t)$ by optimizing the re-weighted simplified objective: \begin{equation} \mathbb{E}_{\x_0,t,\bm{\epsilon}_t\sim\N(\mathbf{0},\I)}\Big[\vert\vert\bm{\epsilon}_t - \bm{\epsilon}_{\theta}(\sqrt{\bar{\alpha}_t}\x_0+\sqrt{1-\bar{\alpha}_t}\bm{\epsilon}_t,t)\vert\vert^2 \Big]. \label{eq:training_obj} \end{equation} In this setting we optimize a network that predicts the noise $\bm{\epsilon}_t$ at time $t$ from $\x_t$. Sampling with the learned parameterized Gaussian transitions $p_{\theta}(\x_{t-1}\vert\x_t)$ can then be performed starting from $\x_T\sim\N(\mathbf{0},\I)$ by: \begin{equation} \x_{t-1}=\frac{1}{\sqrt{\alpha_t}}\left(\x_t - \frac{\beta_t}{\sqrt{1-\bar{\alpha}_t}} \bm{\epsilon}_{\theta}(\x_t,t)\right) + \sigma_t\bm{z}, \end{equation} where $\bm{z}\sim\mathcal{N}(\mathbf{0},\I)$, which resembles one step of sampling via Langevin dynamics~\cite{Welling:2011}. A large $T$ and small $\beta_t$ for the forward steps allow the assumption that the reverse process is close to a Gaussian; however, this leads to costly sampling, e.g., when $T=1000$. The variance schedule is generally chosen such that $\beta_1<\beta_2<\ldots<\beta_T$, so that larger updates are performed for noisier samples. We focus on using a fixed, linearly increasing variance schedule as originally found sufficient in~\cite{Ho:2020}, whereas learning this schedule based on, e.g., signal-to-noise ratio estimates~\cite{Kingma:2021} is also possible.
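To make these quantities concrete, the following is a minimal PyTorch sketch of the closed-form forward sampling~\eqref{eq:sampled_xt}, the simplified objective~\eqref{eq:training_obj}, and one ancestral sampling step. The noise-prediction network \texttt{eps\_model} is a hypothetical placeholder (any network taking a noisy batch and a time step), and time steps are 0-indexed here rather than 1-indexed as in the text:

\begin{verbatim}
import torch

# Linear variance schedule beta_1, ..., beta_T (1e-4 -> 0.02, as in Ho et al.).
T = 1000
betas = torch.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)  # \bar{alpha}_t (0-indexed)

def q_sample(x0, t, eps):
    # Closed form: x_t = sqrt(abar_t) * x_0 + sqrt(1 - abar_t) * eps.
    a = alpha_bars[t].sqrt().view(-1, 1, 1, 1)
    s = (1.0 - alpha_bars[t]).sqrt().view(-1, 1, 1, 1)
    return a * x0 + s * eps

def simple_loss(eps_model, x0):
    # Re-weighted simplified objective: MSE between true and predicted noise.
    t = torch.randint(0, T, (x0.shape[0],), device=x0.device)
    eps = torch.randn_like(x0)
    return ((eps - eps_model(q_sample(x0, t, eps), t)) ** 2).mean()

@torch.no_grad()
def p_sample_step(eps_model, x_t, t):
    # One ancestral step x_t -> x_{t-1}, here with sigma_t^2 = beta_t.
    tt = torch.full((x_t.shape[0],), t, device=x_t.device)
    mean = (x_t - betas[t] / (1.0 - alpha_bars[t]).sqrt()
            * eps_model(x_t, tt)) / alphas[t].sqrt()
    z = torch.randn_like(x_t) if t > 0 else torch.zeros_like(x_t)
    return mean + betas[t].sqrt() * z
\end{verbatim}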
\subsection{Deterministic Implicit Sampling} Denoising diffusion implicit models~\cite{song2021ddim} present an accelerated deterministic sampling approach for pre-trained diffusion models, which was shown to yield consistent and better-quality image samples. Implicit sampling exploits a generalized non-Markovian forward process formulation: \begin{equation} q_{\lambda}(\x_{1:T}\vert\x_0) = q_{\lambda}(\x_T\vert\x_0)\prod_{t=2}^T q_{\lambda}(\x_{t-1}\vert\x_t,\x_0), \end{equation} where we rewrite the distribution in \eqref{eq:true_cond_on_x0} in terms of a particular choice of its standard deviation $\lambda_t$ as: \begin{equation} q_{\lambda}(\x_{t-1}\vert\x_t,\x_0)=\N(\x_{t-1};\bm{\Tilde{\mu}}_t(\x_t,\x_0),\lambda_t^2\I), \end{equation} with the mean expressed in terms of the variance as: \begin{equation} \bm{\Tilde{\mu}}_t=\sqrt{\bar{\alpha}_{t-1}}\x_0+\sqrt{1-\bar{\alpha}_{t-1}-\lambda_t^2}\cdot\bm{\epsilon}_t, \end{equation} by incorporating the property~\eqref{eq:sampled_xt} into $\bm{\Tilde{\mu}}_t(\x_t,\x_0)$. Here, by setting $\lambda_t^2=\Tilde{\beta}_t$ the forward process becomes Markovian and one recovers the original diffusion model formulation described earlier. Importantly, the training objective~\eqref{eq:training_obj} remains the same; only embedded non-Markovian forward processes are exploited for inference~\cite{song2021ddim}. Deterministic implicit sampling sets $\lambda_t^2=0$; hence, after generating an initial $\x_T$ from the marginal noise distribution, sampling becomes deterministic. We will similarly use our models by setting $\lambda_t^2=0$. Implicit sampling using a noise estimator network can then be performed by: \begin{equation} \begin{split} \x_{t-1} = & \sqrt{\bar{\alpha}_{t-1}}\left(\frac{\x_t-\sqrt{1-\bar{\alpha}_t}\cdot\bm{\epsilon}_{\theta}(\x_t,t)}{\sqrt{\bar{\alpha}_t}}\right) \\ & + \sqrt{1-\bar{\alpha}_{t-1}}\cdot\bm{\epsilon}_{\theta}(\x_t,t). \end{split} \label{eq:ddim} \end{equation} During accelerated sampling one only needs a sub-sequence $\tau_1,\tau_2,\ldots,\tau_S$ of the complete $\{1,\ldots,T\}$ time step indices. This helps reduce the number of sampling time steps by up to two orders of magnitude. We determine this sub-sequence by uniform spacing over $\{1,\ldots,T\}$: \begin{equation} \tau_i = (i-1)\cdot T / S + 1\,, \end{equation} which sets $\tau_1=1$ at the final step of reverse sampling.

\subsection{Conditional Diffusion Models} Conditional diffusion models have shown state-of-the-art image-conditional data synthesis and editing capabilities. The core idea is to learn a conditional reverse process $p_{\theta}(\x_{0:T}|\xw)$ without modifying the diffusion process $q(\x_{1:T}|\x_0)$ for $\x$, such that the sampled $\x$ has high fidelity to the data distribution conditioned on $\xw$ (see Figure~\ref{fig:diffusion_illustration}). \begin{figure}[!t] \centering \includegraphics[width=0.48\textwidth]{diffusion_illustration.pdf}% \caption{An overview of the forward diffusion (dashed line) and reverse denoising (solid line) processes for a conditional diffusion model.} \label{fig:diffusion_illustration} \end{figure} During training we sample $(\x_0,\xw)\sim q(\x_0,\xw)$ from a paired data distribution (e.g., a clean image $\x_0$ and a weather-degraded image $\xw$), and learn a conditional diffusion model where we provide $\xw$ as input to the reverse process: \begin{equation} p_{\theta}(\x_{0:T}|\xw) = p(\x_{T}) \prod_{t=1}^T p_{\theta}(\x_{t-1}\vert\x_t,\xw). \end{equation} Our previous formulation of optimizing a noise estimator network via~\eqref{eq:training_obj} then uses $\bm{\epsilon}_{\theta}(\x_t,\xw,t)$. For image-based conditioning, the inputs $\x$ and $\xw$ are concatenated channel-wise, resulting in a six-channel input.
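As an illustration of this conditioning mechanism, a minimal sketch of a wrapper around a hypothetical noise-prediction backbone (any network accepting a six-channel input and a time step) could look as follows:

\begin{verbatim}
import torch
import torch.nn as nn

class ConditionalEps(nn.Module):
    # Wraps a noise estimator so the degraded observation x_cond is
    # supplied by channel-wise concatenation (3 + 3 = 6 input channels).
    # `backbone` is a hypothetical U-Net-style module with 6 input channels.
    def __init__(self, backbone):
        super().__init__()
        self.backbone = backbone

    def forward(self, x_t, x_cond, t):
        return self.backbone(torch.cat([x_t, x_cond], dim=1), t)
\end{verbatim}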
Note that conditioning the reverse process on $\xw$ maintains its compatibility with implicit sampling. In this formulation one samples from $\x_{t-1}\sim p_{\theta}(\x_{t-1}\vert\x_t,\xw)$ with: \begin{equation} \begin{split} \x_{t-1} = & \sqrt{\bar{\alpha}_{t-1}}\left(\frac{\x_t-\sqrt{1-\bar{\alpha}_t}\cdot\bm{\epsilon}_{\theta}(\x_t,\xw,t)}{\sqrt{\bar{\alpha}_t}}\right) \\ & + \sqrt{1-\bar{\alpha}_{t-1}}\cdot\bm{\epsilon}_{\theta}(\x_t,\xw,t), \end{split} \label{eq:cddim} \end{equation} which follows a deterministic reverse path towards $\x_0$ with fidelity to the condition $\xw$, starting from $\x_T\sim\N(\mathbf{0},\I)$.

\begin{figure*}[!t] \centering \subfloat[Patch-based diffusive image restoration\label{fig:restoration_illustration}]{\includegraphics[width=0.59\textwidth]{restoration_pipeline.pdf}}% \hspace{0.05cm} \subfloat[Illustrating sampling for overlapping patches\label{fig:grid_illustration}]{\includegraphics[width=0.4\textwidth]{patch_grids.pdf}}% \caption{(a) Illustration of the patch-based diffusive image restoration pipeline detailed in Algorithm~\ref{alg:inference}. (b) Illustration of the \textit{mean estimated noise} guided sampling updates for overlapping pixels across patches. We show a simplified example where $r=p/2$ and only four overlapping patches share the grid cell marked with the white border and gratings. In this case, we perform sampling updates for the pixels in this region based on the mean estimated noise over the four overlapping patches, at each denoising time step $t$.} \label{fig:patch_based_illustrations} \end{figure*}

\subsection{Patch-based Diffusive Image Restoration} Image restoration benchmarks, as well as real-world pictures, consist of images of various sizes. In contrast, existing generative architectures are mostly tailored for fixed-size image processing. From a diffusion-based modeling perspective, there has been one recent work studying size-agnostic restoration of blurred images~\cite{whang2022deblurring}. Their model is optimized using fixed-size patches and then used for image restoration by simply providing arbitrary sized inputs to the model, hence strictly depending on a modified fully-convolutional architecture for their network. In contrast, we decompose images into overlapping fixed-size patches also at test time and blend them during sampling.

The general idea of patch-based restoration is to operate locally on patches extracted from the image and optimally merge the results. An important drawback of this approach so far has been that the resulting image can contain merging artifacts from independently restored intermediate results, which was extensively studied in traditional restoration methods~\cite{kervrann2006optimal,zoran2011learning,papyan2015multi}. We tackle this problem by guiding the reverse sampling process towards consistency between neighboring patches. We define the unknown ground truth image of arbitrary size as $\X_0$, the weather-degraded observation as $\XW$, and $\bm{P}_i$ to be a binary mask matrix of the same dimensionality as $\X_0$ and $\XW$, indicating the $i$-th $p\times p$ patch location in the image.
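Before turning to the algorithms, the following is a small sketch, under our own illustrative naming (the helper functions are not part of a released API), of how the dictionary of overlapping patch locations and the $\text{Crop}(\bm{P}_d\circ\cdot)$ operation can be realized:

\begin{verbatim}
import torch

def patch_locations(H, W, p=64, r=16):
    # Top-left corners of all overlapping p x p patches, obtained by
    # sliding over the image with a step size of r (the grid cell size).
    rows = list(range(0, H - p + 1, r))
    cols = list(range(0, W - p + 1, r))
    if rows[-1] != H - p:
        rows.append(H - p)  # ensure the bottom border is covered
    if cols[-1] != W - p:
        cols.append(W - p)  # ensure the right border is covered
    return [(i, j) for i in rows for j in cols]

def crop(X, loc, p=64):
    # Crop(P_d o X): the p x p patch at location loc of image X (C, H, W).
    i, j = loc
    return X[:, i:i + p, j:j + p]
\end{verbatim}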
Our training approach is outlined in Algorithm~\ref{alg:training}, in which we learn the conditional reverse process: \begin{equation} p_{\theta}(\x_{0:T}^{(i)}|\xw^{(i)}) = p(\x_{T}^{(i)}) \prod_{t=1}^T p_{\theta}(\x_{t-1}^{(i)}\vert\x_t^{(i)},\xw^{(i)}), \end{equation} with $\x_0^{(i)}=\text{Crop}(\bm{P}_i\circ\X_0)$ and $\xw^{(i)}=\text{Crop}(\bm{P}_i\circ\XW)$ denoting $p\times p$ patches from a training set image pair $(\X_0,\XW)$, where the $\text{Crop}(\cdot)$ operation extracts the patch from the location indicated by $\bm{P}_i$. During training we randomly sample (with uniform probability) the $p\times p$ patch location for $\bm{P}_i$ within the complete range of image dimensions.

Our test-time patch-based diffusive image restoration method is illustrated in Figure~\ref{fig:restoration_illustration} and outlined in Algorithm~\ref{alg:inference}. First, we decompose the image $\XW$ of arbitrary size by extracting all overlapping $p\times p$ patches following a grid-like parsing scheme. We consider a grid-like arrangement over the complete image where each grid cell contains $r\times r$ pixels ($r<p$), and extract all $p\times p$ patches by moving over this grid with a step size of $r$ in both the horizontal and vertical dimensions (see Figure~\ref{fig:grid_illustration} for an illustration). We denote by $D$ the total number of extracted patches, which defines a dictionary of overlapping patch locations. Due to the ill-posed nature of the problem, different restoration estimates for overlapping grid cells will be obtained when performing conditional reverse sampling based on neighboring overlapping patches. We alleviate this by performing reverse sampling based on the \textit{mean estimated noise} for each pixel in overlapping patch regions, at any given denoising time step $t$ (see Figure~\ref{fig:grid_illustration}). Our approach effectively steers the reverse sampling process to ensure higher fidelity across all contributing neighboring patches. More specifically, at each time step $t$ of sampling, (1) we estimate the additive noise for all overlapping patch locations $d\in\{1,\ldots,D\}$ using $\bm{\epsilon}_{\theta}(\x_t^{(d)},\xw^{(d)},t)$, (2) accumulate these overlapping noise estimates at their respective patch locations in a matrix $\bm{\hat{\Omega}}_t$ of the same size as the whole image (line 8 in Alg.~\ref{alg:inference}), (3) normalize $\bm{\hat{\Omega}}_t$ by the number of received estimates for each pixel (line 11 in Alg.~\ref{alg:inference}), and (4) perform an implicit sampling update using the smoothed whole-image noise estimate $\bm{\hat{\Omega}}_t$ (line 12 in Alg.~\ref{alg:inference}).

Our method is different from the naive baseline of averaging overlapping final reconstructions after sampling: such an approach, applied post-sampling, destroys the fidelity of the local patch distribution to the learned posterior. In a similar spirit to our overlapping-patch guided sampling principle, albeit with a different mechanism, there are also recently successful image editing methods based on steering the reverse process in the latent space to achieve sampling from a condensed subspace of the learned density~\cite{Choi:2021,Kawar:2022}.
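A compact PyTorch-style sketch of one such guided sampling step (formalized in Algorithm~\ref{alg:inference} below) is given here; \texttt{eps\_model} and \texttt{alpha\_bars} are assumed to follow the earlier sketches, with time steps 1-indexed as in the text and \texttt{t\_next}~$=0$ denoting the final output:

\begin{verbatim}
import torch

@torch.no_grad()
def guided_step(eps_model, X_t, X_cond, locs, t, t_next, alpha_bars, p=64):
    # One implicit sampling update for the whole image, where the noise
    # estimate of every pixel is the mean over all overlapping patches.
    omega = torch.zeros_like(X_t)   # accumulated noise estimates
    count = torch.zeros_like(X_t)   # number of estimates per pixel
    for (i, j) in locs:
        x_p = X_t[:, :, i:i + p, j:j + p]
        c_p = X_cond[:, :, i:i + p, j:j + p]
        omega[:, :, i:i + p, j:j + p] += eps_model(x_p, c_p, t)
        count[:, :, i:i + p, j:j + p] += 1.0
    omega = omega / count           # mean estimated noise
    a_t = alpha_bars[t - 1]
    # Convention: abar_0 = 1, so t_next = 0 returns the x_0 estimate.
    a_next = alpha_bars[t_next - 1] if t_next > 0 else torch.tensor(1.0)
    x0_hat = (X_t - (1.0 - a_t).sqrt() * omega) / a_t.sqrt()
    return a_next.sqrt() * x0_hat + (1.0 - a_next).sqrt() * omega
\end{verbatim}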
\begin{algorithm}[t] \caption{Diffusive weather restoration model training} \label{alg:training} \begin{algorithmic}[1] \renewcommand{\algorithmicrequire}{\textbf{Input:}} \renewcommand{\algorithmicensure}{\textbf{Output:}} \REQUIRE Clean and weather-degraded image pairs $(\X_0,\XW)$ \REPEAT \STATE Randomly sample a binary patch mask $\mathbf{P}_i$ \STATE $\x_0^{(i)}=\text{Crop}(\mathbf{P}_i\circ\X_0)$ and $\xw^{(i)}=\text{Crop}(\mathbf{P}_i\circ\XW)$ \STATE $t\sim \text{Uniform}\{1,\ldots,T\}$ \STATE $\bm{\epsilon}_t\sim\mathcal{N}(\mathbf{0},\I)$ \STATE Perform a single gradient descent step for \\ \qquad $\nabla_{\theta}\vert\vert\bm{\epsilon}_t - \bm{\epsilon}_{\theta}(\sqrt{\bar{\alpha}_t}\x_0^{(i)}+\sqrt{1-\bar{\alpha}_t}\bm{\epsilon}_t\,,\xw^{(i)},t)\vert\vert^2$ \UNTIL converged \RETURN $\theta$ \end{algorithmic} \end{algorithm}

\begin{algorithm}[t] \caption{Patch-based diffusive image restoration} \label{alg:inference} \begin{algorithmic}[1] \renewcommand{\algorithmicrequire}{\textbf{Input:}} \renewcommand{\algorithmicensure}{\textbf{Output:}} \REQUIRE Weather-degraded image $\XW$, conditional diffusion model $\bm{\epsilon}_{\theta}(\x_t,\xw,t)$, number of implicit sampling steps $S$, dictionary of $D$ overlapping patch locations. \STATE $\X_{t}\sim\N(\mathbf{0},\mathbf{I})$ \FOR {$i = S,\ldots,1$} \STATE $t = (i-1)\cdot T / S + 1$ \STATE $t_{\text{next}} = (i-2)\cdot T / S + 1\;$ \textbf{if} $\,i>1$ \textbf{else} $\,0$ \STATE $\bm{\hat{\Omega}}_t=\mathbf{0}$ and $\mathbf{M}=\mathbf{0}$ \FOR {$d = 1,\ldots,D$} \STATE $\x_t^{(d)}=\text{Crop}(\mathbf{P}_d\circ\X_t)$ and $\xw^{(d)}=\text{Crop}(\mathbf{P}_d\circ\XW)$ \STATE $\bm{\hat{\Omega}}_t = \bm{\hat{\Omega}}_t + \mathbf{P}_d\cdot\bm{\epsilon}_{\theta}(\x_t^{(d)},\xw^{(d)},t)$ \STATE $\mathbf{M} = \mathbf{M} + \mathbf{P}_d$ \ENDFOR \STATE $\bm{\hat{\Omega}}_t = \bm{\hat{\Omega}}_t\oslash\mathbf{M}$\qquad\quad$\mathbin{/\mkern-4mu/}$\;\,$\oslash$: element-wise division \STATE $\X_{t}\leftarrow\sqrt{\bar{\alpha}_{t_{\text{next}}}}\left(\frac{\X_t-\sqrt{1-\bar{\alpha}_t}\,\cdot\,\bm{\hat{\Omega}}_t}{\sqrt{\bar{\alpha}_t}}\right) + \sqrt{1-\bar{\alpha}_{t_{\text{next}}}}\cdot\bm{\hat{\Omega}}_t$ \ENDFOR \RETURN $\X_t$ \end{algorithmic} \end{algorithm}

Note that a smaller $r$ increases the overlap between patches and hence smoothness, but also the computational burden. We used $p=64$ or $128$ pixels for $\mathbf{P}_i$, and $r=16$ pixels. Before processing, we resized whole-image dimensions to be multiples of 16, as is also conventionally done with vision transformers~\cite{Valanarasu:2022CVPR}. Choosing $r=p$ would construct a set of non-overlapping patches for processing, hence assuming independence across patches during restoration. However, neighboring patches in images are clearly not independent, and this would lead to a suboptimal approximation with edge artifacts in restored images (see Section~1.3 of Supplementary Materials).

\section{Experimental Results} \label{sec:results} \subsection{Datasets} \label{sec:datasets} We used three standard benchmark image restoration datasets covering the adverse weather conditions of snow, heavy rain with haze, and raindrops on the camera sensor. \textbf{Snow100K \cite{liu2018desnownet}} is a dataset for the evaluation of image desnowing models. It consists of 50,000 training and 50,000 test images, the latter split into three approximately equal-sized sub-test sets, Snow100K-S/M/L (16,611/16,588/16,801 images), indicating the synthetic snow strength imposed via snowflake size (light/mid/heavy).
This dataset also contains an additional 1,329 realistic snowy images to evaluate the real-world generalization of models trained with synthetic data. \textbf{Outdoor-Rain \cite{li2019heavy}} is a dataset of simultaneous rain and fog which exploits a physics-based generative model to simulate not only dense synthetic rain streaks but also more realistic scene views, constructing an inverse problem of simultaneous image deraining and dehazing. The Outdoor-Rain training set consists of 9,000 images, and the test set we used, denoted in~\cite{li2019heavy} as Test1, contains 750 images for quantitative evaluations. \textbf{RainDrop \cite{qian2018attentive}} is a dataset of images with raindrops introducing artifacts on the camera sensor and obstructing the view. It consists of 861 training images with synthetic raindrops, and a test set of 58 images dedicated to quantitative evaluations, denoted in~\cite{qian2018attentive} as RainDrop-A. \begin{figure*}% \centering \subfloat[Image Desnowing \label{tab:snow100k}]{ \scalebox{0.79}{ \begin{tabular}{l c c c c} \toprule & \multicolumn{2}{c}{Snow100K-S~\cite{liu2018desnownet}} & \multicolumn{2}{c}{Snow100K-L~\cite{liu2018desnownet}} \\ \cmidrule(l{.5em}r{.5em}){2-3}\cmidrule(l{.5em}r{.5em}){4-5} & PSNR $\uparrow$ & SSIM $\uparrow$ & PSNR $\uparrow$ & SSIM $\uparrow$ \\ \midrule SPANet~\cite{Wang:2019SPANet} & 29.92 & 0.8260 & 23.70 & 0.7930 \\ JSTASR~\cite{chen2020jstasr} & 31.40 & 0.9012 & 25.32 & 0.8076 \\ RESCAN~\cite{li2018recurrent} & 31.51 & 0.9032 & 26.08 & 0.8108 \\ DesnowNet~\cite{liu2018desnownet} & 32.33 & 0.9500 & 27.17 & 0.8983 \\ DDMSNet~\cite{zhang2021deep} & 34.34 & 0.9445 & 28.85 & 0.8772 \\ \midrule \textbf{SnowDiff$_{64}$} & \textbf{36.59} & \textbf{0.9626} & \textbf{30.43} & \textbf{0.9145} \\ \textbf{SnowDiff$_{128}$} & \underline{36.09} & \underline{0.9545} & \underline{30.28} & \underline{0.9000} \\ \midrule \midrule All-in-One \cite{Li:2020CVPR} & - & - & 28.33 & 0.8820 \\ TransWeather \cite{Valanarasu:2022CVPR} & 32.51 & 0.9341 & \underline{29.31} & 0.8879 \\ \midrule \textbf{WeatherDiff$_{64}$} & \textbf{35.12} & \textbf{0.9539} & \textbf{29.55} & \textbf{0.8988} \\ \textbf{WeatherDiff$_{128}$} & \underline{34.72} & \underline{0.9509} & 29.21 & \underline{0.8911} \\ \bottomrule \end{tabular}}}% \quad \subfloat[Image Deraining \& Dehazing\label{tab:outdoorrain}]{ \scalebox{0.79}{ \begin{tabular}{l c c c c} \toprule & \multicolumn{2}{c}{Outdoor-Rain~\cite{li2019heavy}}\\ \cmidrule(l{.5em}r{.5em}){2-3} & PSNR $\uparrow$ & SSIM $\uparrow$ \\ \midrule CycleGAN~\cite{zhu2017unpaired} & 17.62 & 0.6560 \\ pix2pix~\cite{isola2017image} & 19.09 & 0.7100 \\ HRGAN~\cite{li2019heavy} & 21.56 & 0.8550 \\ PCNet~\cite{Jiang:2021TIP} & 26.19 & 0.9015 \\ MPRNet~\cite{zamir2021multi} & \underline{28.03} & \underline{0.9192} \\ \midrule \textbf{RainHazeDiff$_{64}$} & \textbf{28.38} & \textbf{0.9320} \\ \textbf{RainHazeDiff$_{128}$} & 26.84 & 0.9152 \\ \midrule \midrule All-in-One \cite{Li:2020CVPR} & 24.71 & 0.8980 \\ TransWeather \cite{Valanarasu:2022CVPR} & 28.83 & 0.9000 \\ \midrule \textbf{WeatherDiff$_{64}$} & \underline{28.86} & \textbf{0.9257} \\ \textbf{WeatherDiff$_{128}$} & \textbf{29.53} & \underline{0.9208} \\ \bottomrule \end{tabular}}}% \quad \subfloat[Removing Raindrops\label{tab:raindrop}]{ \scalebox{0.79}{ \begin{tabular}{l c c c c} \toprule & \multicolumn{2}{c}{RainDrop~\cite{qian2018attentive}}\\ \cmidrule(l{.5em}r{.5em}){2-3} & PSNR $\uparrow$ & SSIM $\uparrow$ \\ \midrule pix2pix~\cite{isola2017image} & 28.02 & 0.8547
\\ DuRN~\cite{liu2019dual} & 31.24 & 0.9259 \\ RaindropAttn~\cite{quan2019deep} & 31.44 & 0.9263 \\ AttentiveGAN~\cite{qian2018attentive} & 31.59 & 0.9170 \\ IDT~\cite{xiao2022image} & 31.87 & 0.9313 \\ \midrule \textbf{RainDropDiff$_{64}$} & \underline{32.29} & \textbf{0.9422} \\ \textbf{RainDropDiff$_{128}$} & \textbf{32.43} & \underline{0.9334} \\ \midrule \midrule All-in-One \cite{Li:2020CVPR} & \textbf{31.12} & \underline{0.9268} \\ TransWeather \cite{Valanarasu:2022CVPR} & 30.17 & 0.9157 \\ \midrule \textbf{WeatherDiff$_{64}$} & \underline{30.26} & \textbf{0.9277} \\ \textbf{WeatherDiff$_{128}$} & 29.37 & 0.9213 \\ \bottomrule \end{tabular}}} \caption{Quantitative comparisons in terms of PSNR and SSIM (higher is better) with state-of-the-art image desnowing and deraining methods. The top half of each table shows comparisons of our weather-specific SnowDiff$_{p}$, RainHazeDiff$_{p}$ and RainDropDiff$_{p}$ models, individually evaluated for each task. The bottom half of each table shows evaluations of our unified multi-weather model WeatherDiff$_{p}$ on all three test sets against the All-in-One~\cite{Li:2020CVPR} and TransWeather~\cite{Valanarasu:2022CVPR} multi-weather restoration methods. Best and second-best values are indicated with bold and underlined text, respectively.}% \label{tab:image_restoration}% \end{figure*}

\subsection{Diffusion Model Implementations} \label{sec:impl_diffusion} We performed experiments in both weather-specific and multi-weather image restoration settings. We denote our weather-specific restoration models as \textbf{SnowDiff$_p$}, \textbf{RainHazeDiff$_p$} and \textbf{RainDropDiff$_p$}, and our multi-weather restoration model as \textbf{WeatherDiff$_p$}, with the subscript denoting the input patch size of the model. We trained both 64x64 and 128x128 patch size versions of all models. We used the same network architecture for all trained diffusion models. We based our model selection and hyper-parameters on the definitions used in seminal previous work~\cite{Ho:2020,song2021ddim}. The network had a U-Net architecture~\cite{ronneberger2015u} based on WideResNet~\cite{Zagoruyko:2016}, which uses group normalization~\cite{wu2018group} and self-attention blocks at 16x16 feature map resolution~\cite{vaswani2017attention,wang2018non}. We embedded the input time step $t$ through sinusoidal positional encoding~\cite{vaswani2017attention} and provided these embeddings as input to each residual block, enabling the model to share parameters across time. For input image conditioning we channel-wise concatenate the patches $\x_t$ and $\xw$, resulting in a six-channel input (i.e., RGB for both images). We did not perform task-specific parameter tuning or modifications to the neural network architecture. Further specifications on the model configurations are provided in Table~\ref{tab:hyperparams}. Our code is available at: \href{https://github.com/IGITUGraz/WeatherDiffusion}{https://github.com/IGITUGraz/WeatherDiffusion}.

\subsection{Training Specifications} \label{sec:training_specs} At each training iteration of the 64x64 patch diffusion models, we first sampled 16 images from the training set and randomly cropped 16 patches of size 64x64 from each, resulting in mini-batches of 256 patches. For the 128x128 patch diffusion models, we randomly cropped 8 patches from each of the 8 sampled training images per iteration, resulting in mini-batches of size 64. We used all training set images per epoch for weather-specific restoration.
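As a sketch of this batch-construction procedure (with hypothetical helper names, assuming aligned image tensors in channel-first \texttt{(C, H, W)} layout), one training iteration could assemble its mini-batch as follows:

\begin{verbatim}
import torch

def training_batch(images, degraded, n_img=16, n_patches=16, p=64):
    # Build a mini-batch of (clean, degraded) patch pairs: n_patches
    # random p x p crops from each of n_img sampled training pairs
    # (16 * 16 = 256 patches for the 64x64 models described above).
    x0_list, cond_list = [], []
    for k in range(n_img):
        X0, Xw = images[k], degraded[k]  # aligned (C, H, W) pair
        _, H, W = X0.shape
        for _ in range(n_patches):
            i = torch.randint(0, H - p + 1, (1,)).item()
            j = torch.randint(0, W - p + 1, (1,)).item()
            x0_list.append(X0[:, i:i + p, j:j + p])
            cond_list.append(Xw[:, i:i + p, j:j + p])
    return torch.stack(x0_list), torch.stack(cond_list)
\end{verbatim}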
For WeatherDiff$_p$ we used the curated \textit{AllWeather} dataset from \cite{Valanarasu:2022CVPR}, which has 18,069 samples composed of subsets of training images from Snow100K, Outdoor-Rain and RainDrop, in order to create a balanced training set across the three weather conditions, with a similar approach to \cite{Li:2020CVPR}. Our multi-weather models are effectively conditioned to generate the most likely background for any of the three conditions, as we use a mixture of degradations in training batches. We trained all models for 2,000,000 iterations, except for WeatherDiff$_{128}$, which was trained for 2,500,000 iterations due to the complexity of this task (see Section~1.1 of Supplementary Materials for an empirical analysis). We used the Adam optimizer with a fixed learning rate of $0.00002$ without weight decay. An exponential moving average with a weight of 0.999 was applied during parameter updates, as it was shown to facilitate more stable learning~\cite{Song:2020,Nichol:2021}.

\begin{table}[t!] \centering \caption{Diffusion model configurations and parameter choices.} \begin{tabular}{l c} \toprule & Hyper-parameters \\ \midrule Diffusion steps ($T$) & $1000$ \\ Noise schedule ($\beta_t$) & linear: $0.0001\rightarrow0.02$ \\ Base channels & $128$ \\ Channel multipliers & \{1, 1, 2, 2, 4, 4\} \\ Residual blocks per resolution & $2$ \\ Attention resolutions & 16$\times$16 \\ Time step embedding length & $512$ \\ Number of parameters & 110M \\ \bottomrule \end{tabular} \label{tab:hyperparams} \end{table}

\begin{figure*}[!ht] \subfloat[Input\label{fig-snow:input}]{% \begin{tabular}{c} \includegraphics[width=0.192\textwidth]{winter__street_03590_input_annotated.png} \\ \includegraphics[width=0.192\textwidth]{winter_weather_01579_input_annotated.png} \end{tabular}% }\hspace{-.45cm} \subfloat[DesnowNet~\cite{liu2018desnownet}\label{fig-snow:desnownet}]{% \begin{tabular}{c} \includegraphics[width=0.192\textwidth]{winter__street_03590_desnownet_annotated.png} \\ \includegraphics[width=0.192\textwidth]{winter_weather_01579_desnownet_annotated.png} \end{tabular}% }\hspace{-.45cm} \subfloat[DDMSNet~\cite{zhang2021deep}\label{fig-snow:ddmsnet}]{% \begin{tabular}{c} \includegraphics[width=0.192\textwidth]{winter__street_03590_ddmsnet_annotated.png} \\ \includegraphics[width=0.192\textwidth]{winter_weather_01579_ddmsnet_annotated.png} \end{tabular}% }\hspace{-.45cm} \subfloat[\textbf{Ours (SnowDiff$_{64}$)}\label{fig-snow:ours}]{% \begin{tabular}{c} \includegraphics[width=0.192\textwidth]{winter__street_03590_ours_annotated.png} \\ \includegraphics[width=0.192\textwidth]{winter_weather_01579_ours_annotated.png} \end{tabular}% }\hspace{-.45cm} \subfloat[Ground truth\label{fig-snow:gt}]{% \begin{tabular}{c} \includegraphics[width=0.192\textwidth]{winter__street_03590_gt_annotated.png} \\ \includegraphics[width=0.192\textwidth]{winter_weather_01579_gt_annotated.png} \end{tabular}% }\hspace{-.45cm} \caption{Qualitative reconstruction comparisons of our best model on SnowTest100K test samples with DesnowNet~\cite{liu2018desnownet} and DDMSNet~\cite{zhang2021deep}.} \label{fig:snow_reconstructions} \end{figure*}

\subsection{Comparison Methods and Evaluation Metrics} \label{sec:eval_metrics} We perform comparisons of our weather-specific models with several state-of-the-art methods discussed in Section~\ref{sec:bg_restoration} for image desnowing~\cite{li2018recurrent,Wang:2019SPANet,chen2020jstasr,liu2018desnownet,zhang2021deep}, combined image deraining and
dehazing~\cite{zhu2017unpaired,isola2017image,li2019heavy,Jiang:2021TIP,zamir2021multi}, and removing raindrops~\cite{isola2017image,liu2019dual,quan2019deep,qian2018attentive,xiao2022image}. We compare WeatherDiff$_p$ with two state-of-the-art multi-weather image restoration methods: All-in-One~\cite{Li:2020CVPR}, which utilizes a multi-encoder and decoder pipeline with a neural architecture search mechanism, and TransWeather~\cite{Valanarasu:2022CVPR}, which exploits an end-to-end vision transformer. Notably, both of these works were presented for multi-weather image restoration using the same three benchmark datasets. Our choice of comparison methods was mainly guided by the baselines from~\cite{Valanarasu:2022CVPR,Li:2020CVPR}, as well as by methods in a directly comparable setting, i.e., those that either reported evaluations on the identical test sets of the datasets we used or made their pretrained models publicly available. Quantitative evaluations between ground truth and restored images were performed via the conventional peak signal-to-noise ratio (PSNR) \cite{huynh2008scope} and structural similarity (SSIM) \cite{wang2004image} metrics. We evaluated PSNR and SSIM based on the luminance channel Y of the YCbCr color space, following previous convention \cite{qian2018attentive,Valanarasu:2022CVPR,zamir2021multi,xiao2022image}.

\subsection{Weather-Specific Image Restoration Results} \label{sec:weatherspecific} \begin{figure*}[!ht] \subfloat[Input\label{fig-rainfog:input}]{% \begin{tabular}{c} \includegraphics[width=0.192\textwidth]{im_0314_s95_a05_input_annotated.png} \\ \includegraphics[width=0.192\textwidth]{im_0341_s85_a06_input_annotated.png} \end{tabular}% }\hspace{-.45cm} \subfloat[HRGAN~\cite{li2019heavy}\label{fig-rainfog:hrgan}]{% \begin{tabular}{c} \includegraphics[width=0.192\textwidth]{im_0314_s95_a05_hrgan_annotated.png} \\ \includegraphics[width=0.192\textwidth]{im_0341_s85_a06_hrgan_annotated.png} \end{tabular}% }\hspace{-.45cm} \subfloat[MPRNet~\cite{zamir2021multi}\label{fig-rainfog:mprnet}]{% \begin{tabular}{c} \includegraphics[width=0.192\textwidth]{im_0314_s95_a05_mprnet_annotated.png} \\ \includegraphics[width=0.192\textwidth]{im_0341_s85_a06_mprnet_annotated.png} \end{tabular}% }\hspace{-.45cm} \subfloat[\textbf{Ours (RainHazeDiff$_{64}$)}\label{fig-rainfog:ours}]{% \begin{tabular}{c} \includegraphics[width=0.192\textwidth]{im_0314_s95_a05_ours_annotated.png} \\ \includegraphics[width=0.192\textwidth]{im_0341_s85_a06_ours_annotated.png} \end{tabular}% }\hspace{-.45cm} \subfloat[Ground truth\label{fig-rainfog:gt}]{% \begin{tabular}{c} \includegraphics[width=0.192\textwidth]{im_0314_s95_a05_gt_annotated.png} \\ \includegraphics[width=0.192\textwidth]{im_0341_s85_a06_gt_annotated.png} \end{tabular}% }\hspace{-.45cm} \caption{Qualitative reconstruction comparisons of our best model on Outdoor-Rain test samples with HRGAN~\cite{li2019heavy} and MPRNet~\cite{zamir2021multi}.} \label{fig:rainhaze_reconstructions} \end{figure*} \begin{figure*}[!t] \subfloat[Input\label{fig-raindrop:input}]{% \begin{tabular}{c} \includegraphics[width=0.192\textwidth]{48_rain_input_annotated.png} \\ \includegraphics[width=0.192\textwidth]{54_rain_input_annotated.png} \end{tabular}% }\hspace{-.45cm} \subfloat[RaindropAttn~\cite{quan2019deep}\label{fig-raindrop:raindropattn}]{% \begin{tabular}{c} \includegraphics[width=0.192\textwidth]{48_rain_raindropattn_annotated.png} \\ \includegraphics[width=0.192\textwidth]{54_rain_raindropattn_annotated.png} \end{tabular}%
}\hspace{-.45cm} \subfloat[AttentiveGAN~\cite{qian2018attentive}\label{fig-raindrop:attentgan}]{% \begin{tabular}{c} \includegraphics[width=0.192\textwidth]{48_rain_deraindropgan_annotated.png} \\ \includegraphics[width=0.192\textwidth]{54_rain_deraindropgan_annotated.png} \end{tabular}% }\hspace{-.45cm} \subfloat[\textbf{Ours (RainDropDiff$_{128}$)}\label{fig-raindrop:ours}]{% \begin{tabular}{c} \includegraphics[width=0.192\textwidth]{48_rain_ours_annotated.png} \\ \includegraphics[width=0.192\textwidth]{54_rain_ours_annotated.png} \end{tabular}% }\hspace{-.45cm} \subfloat[Ground truth\label{fig-raindrop:gt}]{% \begin{tabular}{c} \includegraphics[width=0.192\textwidth]{48_rain_gt_annotated.png} \\ \includegraphics[width=0.192\textwidth]{54_rain_gt_annotated.png} \end{tabular}% }\hspace{-.45cm} \caption{Qualitative reconstruction comparisons of our best model on RainDrop test samples with RaindropAttn~\cite{quan2019deep} and AttentiveGAN~\cite{qian2018attentive}.} \label{fig:raindrop_reconstructions} \end{figure*}

Figure~\ref{tab:image_restoration} presents our quantitative evaluations. The top half of each table contains results for weather-specific image restoration, where we use $S=10$ sampling time steps for $p=64$, and $S=50$ for $p=128$ (see Section~1.2 of Supplementary Materials for other choices, where better results can sometimes be achieved by tuning $S$ for each task individually). Our models achieve performance superior to all compared existing methods on all tasks. For the image desnowing and the combined deraining and dehazing tasks, our 64x64 patch models yield the best results (i.e., 36.59/0.9626 on Snow100K-S, 30.43/0.9145 on Snow100K-L and 28.38/0.9320 on Outdoor-Rain). For removing raindrops, we outperform the recent image deraining transformer~\cite{xiao2022image} with both input patch resolutions, with RainDropDiff$_{128}$ achieving the best PSNR of 32.43.

Figure~\ref{fig:snow_reconstructions} depicts visualizations of image desnowing reconstructions for sample test images, comparing our method with DesnowNet~\cite{liu2018desnownet} and DDMSNet~\cite{zhang2021deep}. As illustrated, while DDMSNet appears to achieve noticeably higher visual quality than DesnowNet in its reconstructions, our method SnowDiff$_{64}$ shows remarkable restoration quality in fine details (enlarged in red and blue bounding boxes). Figure~\ref{fig:rainhaze_reconstructions} depicts visualizations on sample Outdoor-Rain test images, demonstrating the superiority of our model RainHazeDiff$_{64}$ over HRGAN~\cite{li2019heavy} and MPRNet~\cite{zamir2021multi}. In particular, smoothing effects from dehazing result in a loss of detail in the reconstructions of other methods, while our model can recover these details (e.g., second example in Figure~\ref{fig:rainhaze_reconstructions}, metal railing lines enlarged in the bounding boxes). Figure~\ref{fig:raindrop_reconstructions} visualizes raindrop removal examples, comparing our best model RainDropDiff$_{128}$ with AttentiveGAN~\cite{qian2018attentive} and RaindropAttn~\cite{quan2019deep}. Note that we particularly illustrate HRGAN on the Outdoor-Rain and AttentiveGAN on the RainDrop test sets, since these approaches are earlier GAN-based generative modeling applications to these problems. Our models generate reconstructions that more closely resemble the ground truth in all comparisons, and diffusion-based generative modeling significantly outperforms the GAN-based approaches. We could not present visual comparisons to IDT~\cite{xiao2022image} because its implementation is not publicly available.
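For reference, a minimal sketch of the Y-channel evaluation protocol described in Section~\ref{sec:eval_metrics}, assuming \texttt{scikit-image} conventions for the YCbCr conversion and a data range of 255 (both assumptions on our side; exact settings may differ across the compared works), is:

\begin{verbatim}
import numpy as np
from skimage.color import rgb2ycbcr
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def psnr_ssim_y(restored, target):
    # PSNR/SSIM on the luminance (Y) channel of YCbCr, for uint8 RGB
    # arrays of shape (H, W, 3); rgb2ycbcr expects float RGB in [0, 1].
    y_r = rgb2ycbcr(restored.astype(np.float64) / 255.0)[..., 0]
    y_t = rgb2ycbcr(target.astype(np.float64) / 255.0)[..., 0]
    # data_range=255 is an assumed convention for the Y channel here.
    psnr = peak_signal_noise_ratio(y_t, y_r, data_range=255)
    ssim = structural_similarity(y_t, y_r, data_range=255)
    return psnr, ssim
\end{verbatim}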
\subsection{Multi-Weather Image Restoration Results} \label{sec:multiweather} The bottom half of each table in Figure~\ref{tab:image_restoration} presents quantitative evaluations for multi-weather image restoration in comparison to All-in-One and TransWeather, where we use $S=25$ for $p=64$, and $S=50$ for $p=128$ (see Section~1.2 of Supplementary Materials for other choices, where better results can sometimes be achieved by tuning $S$ for each task individually). We report PSNR/SSIM for the publicly available TransWeather predictions using our definitions from Section~\ref{sec:eval_metrics}, which yielded results different from those reported in~\cite{Valanarasu:2022CVPR}. Overall, our method yields exceptional image quality and ground-truth similarity on all three test sets. For the image desnowing task, WeatherDiff$_{64}$ achieves the best PSNR/SSIM metrics with 35.12/0.9539 and 29.55/0.8988 for Snow100K-S and Snow100K-L, respectively. Notably, on combined image deraining and dehazing, WeatherDiff$_{128}$ yields a better PSNR of 29.53, which also outperforms all dedicated weather-specific models in the top half of Figure~\ref{tab:outdoorrain}. This is particularly important, as WeatherDiff$_{128}$ significantly outperforms our RainHazeDiff$_{p}$ models on this task, indicating an improvement of the background generative capability when the model is trained on a combination of tasks and datasets. None of the existing multi-weather restoration methods showed a similar knowledge transfer in comparison to their weather-specific counterparts. Our models are only outperformed in a single metric, by All-in-One in PSNR on the RainDrop task (All-in-One: 31.12, ours: 30.26). Nevertheless, our results show better ground-truth similarity for this case (All-in-One: 0.9268, ours: 0.9277). These results demonstrate that WeatherDiff models can successfully learn the underlying data distribution across several adverse weather corruption tasks.

\begin{figure*}[!ht] \centering \subfloat[Input\label{fig-realsnow:input}]{% \begin{tabular}{c} \includegraphics[width=0.32\textwidth]{winter__street_01480_cond_annotated.png} \\ \includegraphics[width=0.32\textwidth]{snow_animal_02078_cond_annotated.png} \end{tabular}% }\hspace{-.45cm} \subfloat[TransWeather~\cite{Valanarasu:2022CVPR}\label{fig-realsnow:transweather}]{% \begin{tabular}{c} \includegraphics[width=0.32\textwidth]{winter__street_01480_transweather_annotated.png} \\ \includegraphics[width=0.32\textwidth]{snow_animal_02078_transweather_annotated.png} \end{tabular}% }\hspace{-.45cm} \subfloat[\textbf{Ours (WeatherDiff$_{64}$)}\label{fig-realsnow:ours}]{% \begin{tabular}{c} \includegraphics[width=0.32\textwidth]{winter__street_01480_output_annotated.png} \\ \includegraphics[width=0.32\textwidth]{snow_animal_02078_output_annotated.png} \end{tabular}% }\hspace{-.45cm} \caption{Qualitative comparisons of real-world snowy image reconstructions by TransWeather and WeatherDiff$_{64}$ with $S=25$, on samples from Snow100K~\cite{liu2018desnownet}.} \label{fig:realistic_snow_reconstructions} \end{figure*}

\subsection{Weather Restoration Generalization from Synthetic to Real-World Images} \label{sec:real_restoration} We evaluate our models trained on synthetic data on real-world image restoration test cases. For these illustrations we compare our best performing WeatherDiff$_{64}$ model with the recent TransWeather network, as both are specialized for multi-weather restoration.
Figure~\ref{fig:realistic_snow_reconstructions} presents qualitative image desnowing comparisons for selected images with light snow from the miscellaneous realistic snowy image set of Snow100K~\cite{liu2018desnownet}. The first example in Figure~\ref{fig:realistic_snow_reconstructions} shows a case where the TransWeather reconstruction removes the side-view mirrors of cars, whereas our model preserves this detail (enlarged in the bounding boxes). In the second example, clearer reconstructions with our model can be observed for a detailed image with light snow artifacts. We also included additional real-world test cases from the raindrop removal test set of the RainDS dataset presented in~\cite{Quan:2021CVPR}. Figure~\ref{fig:realistic_rainds_reconstructions} presents qualitative comparisons for removing raindrops from real images using the same multi-weather restoration models. The first example in Figure~\ref{fig:realistic_rainds_reconstructions} depicts a detailed image where the TransWeather reconstruction removes partly occluded background components (i.e., leaves and stones), whereas our generative model completes these details during restoration. The second example shows a case with very bright raindrop artifacts on the camera sensor that are not completely removed by TransWeather, whereas our model performs comparably better. We provide more visual examples in Section~2 of the Supplementary Materials.

\begin{figure*}[!ht] \centering \subfloat[Input\label{fig-realrainds:input}]{% \begin{tabular}{c} \includegraphics[width=0.32\textwidth]{163_input_annotated.png} \\ \includegraphics[width=0.32\textwidth]{169_input_annotated.png} \end{tabular}% }\hspace{-.45cm} \subfloat[TransWeather~\cite{Valanarasu:2022CVPR}\label{fig-realrainds:transweather}]{% \begin{tabular}{c} \includegraphics[width=0.32\textwidth]{163_transweather_annotated.png} \\ \includegraphics[width=0.32\textwidth]{169_transweather_annotated.png} \end{tabular}% }\hspace{-.45cm} \subfloat[\textbf{Ours (WeatherDiff$_{64}$)}\label{fig-realrainds:ours}]{% \begin{tabular}{c} \includegraphics[width=0.32\textwidth]{163_ours_annotated.png} \\ \includegraphics[width=0.32\textwidth]{169_ours_annotated.png} \end{tabular}% }\hspace{-.45cm} \caption{Qualitative comparisons of real-world raindrop image reconstructions by TransWeather and WeatherDiff$_{64}$ with $S=25$, on samples from RainDS~\cite{Quan:2021CVPR}.} \label{fig:realistic_rainds_reconstructions} \end{figure*}

\section{Discussion} \label{sec:discussion} We present a novel patch-based image restoration approach based on conditional denoising diffusion probabilistic models, to improve vision under adverse weather conditions. Our solution is shown to yield state-of-the-art performance on weather-specific and multi-weather image restoration tasks on benchmark datasets. Importantly, our method is applicable to any conditional diffusive generative modeling task with arbitrary sized images. Our approach also introduces a light-weight generative diffusion modeling capability, since the architecture can be based on a simpler backbone network for image restoration at lower patch resolutions. In this way we extend the practicality of state-of-the-art diffusion model architectures, which otherwise have large computational resource demands in terms of the number of parameters and memory requirements during training.
Our approach also eliminates the restriction that the diffusion model backbone must have a fully-convolutional structure in order to perform arbitrary sized image processing; therefore, our model can benefit from widely used resolution-specific attention mechanisms~\cite{vaswani2017attention,wang2018non}. The main limitation of our approach is its comparatively long inference time with respect to existing end-to-end image restoration networks, which require only a single forward pass for processing, a drawback if one prioritizes real-time applicability. To give an empirical example, our WeatherDiff$_{64}$ model requires 20.52 seconds (wall-clock time) to restore an image of size $640\times432$ at $S=10$ sampling time steps on a single NVIDIA A40 GPU, whereas TransWeather requires 0.88 seconds. The timing of our method also depends directly on the choice of hyper-parameters for the patch-based diffusive image restoration algorithm (e.g., a lower value of $r$ slightly increases image quality but also the inference time), as well as on implementation efficiency. Our empirical analyses are mainly grounded on the default architectural choices and minimal parameter settings used in seminal diffusion modeling works~\cite{Ho:2020,song2021ddim}. By incorporating novel methods that improve diffusion models in terms of sample quality~\cite{choi2022perception} or faster sampling mechanisms~\cite{Kingma:2021}, we argue that the quantitative results can be further improved on particular weather restoration problems.

\section*{Acknowledgments} This work has been supported by the ``University SAL Labs'' initiative of Silicon Austria Labs (SAL) and its Austrian partner universities for applied fundamental research for electronic based systems. \bibliographystyle{IEEEtran}
\section{Introduction} Weather forecasting plays an essential role in resource planning in cases of severe natural phenomena such as heat waves (extreme temperatures), droughts, and hurricanes. It also influences decision-making in agriculture, aviation, retail markets, and other sectors, since unfavorable weather negatively impacts corporate revenues \citep{Ivana19}. Over the years, with technological developments, predictions of meteorological variables have become more accurate. However, due to the stochastic behavior of the Earth systems, which is governed by physical laws, traditional forecasting requires complex, physics-based models to predict the weather \citep{Karpatne18}.

In recent years, an extensive volume of data about the Earth systems has become available. The remote sensing data collected by satellites provide meteorological data about the entire globe at specific time intervals (e.g., 6h or daily) and with a regular spatial resolution (e.g., 1km or 5km). The availability of historical data allows researchers to design deep learning models that can make more accurate predictions about the weather \citep{Reichstein19}. Even though meteorological data exhibit both spatial and temporal structure, weather forecasting can be modeled as a sequence problem. In sequence modeling tasks, an input sequence is encoded to produce an output sequence, which may have a different length than the input. In \citet{Shi15}, the authors proposed the ConvLSTM architecture to solve the sequence prediction problem using a radar echo dataset for precipitation forecasting. They integrated the convolution operator, adopted by the convolutional neural network (CNN), into a recurrent neural network (RNN) to simultaneously learn the spatial and temporal context of the input data to predict the future sequence. Although the ConvLSTM architecture has been considered a promising approach to build prediction models for geoscience data \citep{Reichstein19}, new opportunities have emerged from recent advances in deep learning. In \citet{Wang17,Wang19}, the authors proposed improved versions of the long short-term memory (LSTM) unit for memorizing spatiotemporal information.

RNN-based architectures may be ideal for multi-step forecasting tasks using spatiotemporal data \citep{Shi15,Wang17,Wang19}, due to their ability to respect the temporal order (causal constraint) and predict long sequences. However, these architectures maintain the information from previous time steps to generate the output, which consequently leads to a high training time. Taking this as motivation, we address the spatiotemporal forecasting problem by proposing a new architecture built entirely on 3D CNNs. CNNs are an efficient method for capturing spatial context and have attained state-of-the-art results for image classification using 2D kernels \citep{Krizhevsky12}. In recent years, researchers have expanded the application field of CNNs, e.g., to machine translation \citep{Gehring17} using a 1D kernel, which is useful for capturing temporal patterns in a sequence. 3D CNN-based models are commonly used for video analysis and action recognition \citep{Yuan18, Tran18} or climate event detection \citep{Racah17}. However, CNN-based models are generally not considered for multi-step forecasting tasks because of two intrinsic limitations: they violate the temporal order, allowing future information during temporal reasoning \citep{Singh19}, and they cannot generate a predictive output sequence longer than the input sequence \citep{Bai18}.
To tackle these limitations, we introduce STConvS2S (\emph{Spatiotemporal Convolutional Sequence to Sequence Network}), a spatiotemporal predictive model for multi-step forecasting tasks. To our knowledge, STConvS2S is the first 3D CNN-based architecture built as an end-to-end trainable model that satisfies the causal constraint and predicts flexible-length output sequences (i.e., not limited to the length of the input sequence). We compared STConvS2S to RNN-based architectures through experimental studies in terms of both predictive performance and time efficiency. The proposed architecture matches or outperforms state-of-the-art methods on meteorological datasets obtained from satellites and in-situ stations (CHIRPS \citep{Funk15}) and from a climate model (CFSR \citep{Saha14}). The contributions of this paper are twofold. Firstly, we provide two variants of the STConvS2S architecture that satisfy the causal constraint: one adapts causal convolution in the 3D convolutional layers, and the other introduces a new approach that strategically applies a reverse function to the sequence. Secondly, we devise a temporal generator block designed to extend the length of the output sequence, which encompasses a new application of transposed convolutional layers.

The rest of this paper is organized as follows. Section \ref{sec:works} discusses works related both to weather forecasting and to spatiotemporal architectures. Section \ref{sec:problem} presents the formulation of the spatiotemporal data forecasting problem. Section \ref{sec:architecture} describes our proposed deep learning architecture. Section \ref{sec:experiments} presents our experiments and results. Section \ref{sec:conclusion} provides the conclusions of the paper.

\section{Related work} \label{sec:works} Several statistical methods and machine learning techniques have been applied to historical data about temperature, precipitation, and other meteorological variables to predict weather conditions. Auto-regressive integrated moving average (ARIMA) models are traditional statistical methods for time series analysis \citep{Babu12}. Other studies have also applied artificial neural networks (ANN) to time series prediction on weather data, such as temperature measurements \citep{Corchado99, Baboo10, Mehdizadeh18}. Recently, some authors have been developing new approaches based on deep learning to improve time series forecasting results, in particular using LSTM networks. Traffic flow analysis \citep{Yang19}, displacement prediction of landslides \citep{Xu18}, petroleum production \citep{Sagheer19} and sea surface temperature forecasting \citep{Zhang17} are some applications that successfully use LSTM architectures. However, these approaches (designed for time series) are unable to capture spatial dependencies in the observations.

Spatiotemporal deep learning models deal with spatial and temporal contexts simultaneously. In \citet{Shi15}, the authors formulate weather forecasting as a sequence-to-sequence problem, where the input and output are 2D radar map sequences. In addition, they introduce the convolutional LSTM (ConvLSTM) architecture to build an end-to-end model for precipitation nowcasting. The proposed model includes the convolution operation in the LSTM network to capture spatial patterns. \citet{Kim19} also define their problem as a sequence task and adopt ConvLSTM for extreme climate event forecasting. Their model uses hurricane density map sequences as spatiotemporal data.
The work proposed in \citet{Souto18} implements a spatiotemporal-aware ensemble approach adopting the ConvLSTM architecture. Building on \citet{Shi15}, \citet{Wang17} present a new LSTM unit that memorizes spatial and temporal variations in a unified memory pool. In \citet{Wang19}, they present an improved memory function within the LSTM unit, adding non-stationarity modeling. Although related to the use of deep learning for climate/weather data, our model adopts only CNNs rather than a hybrid approach that combines CNNs and LSTMs.

Some studies have applied spatiotemporal convolutions \citep{Yuan18,Tran18} to video analysis and action recognition. In \citet{Tran18}, the authors compare several spatiotemporal architectures using only 3D CNNs and show that factorizing the 3D convolutional kernel into separate and successive spatial and temporal convolutions produces accuracy gains. A limitation of both standard 3D CNNs and factorized 3D CNNs \citep{Tran18} is the lack of a causal constraint, which violates the temporal order. \citet{Singh19} and \citet{Cheng19} factorize the 3D convolution as in \citet{Tran18}. \citet{Singh19} propose a recurrent convolution unit approach to address the causal constraint in temporal learning for action recognition tasks, and \citet{Cheng19} satisfy the causal constraint by adopting causal convolution in separate and parallel spatial and temporal convolutions. We also adopt a factorized 3D CNN, but with a different implementation; Figure \ref{fig:architecture-comparison} highlights our approach. In contrast to \citet{Singh19}, we use an entirely CNN-based approach, and in contrast to \citet{Cheng19}, besides not using parallel convolutions when adopting causal convolution, we introduce a new method that does not violate the temporal order (details in Section~\ref{subsec:temporal-block}).

Following the success of 2D CNNs in capturing spatial correlation in images, \citet{Xu19} propose a model to predict vehicle pollution emissions using 2D CNNs to capture temporal and spatial correlations separately. \citet{Racah17} use a 3D CNN in an encoder-decoder architecture for extreme climate event detection. Their architecture consists of a downsampling path in the encoder using a stack of convolutional layers, and an upsampling path in the decoder using a stack of transposed convolutional layers. Their model adopts the typical use of transposed convolutional layers to reconstruct the output to match the entire input dimension. Instead, we use these layers to generate an output with a dimension larger than, and different from, that of the input. Furthermore, unlike our work, they do not satisfy the causal constraint in their models.

\section{Problem Statement} \label{sec:problem} Spatiotemporal data forecasting can be modeled as a sequence-to-sequence problem. Thus, observations of spatiotemporal data (e.g., meteorological variables) measured in a specific geographic region over a period of time serve as the input sequence to the forecasting task. More formally, we define a spatiotemporal dataset as $[\widetilde{X}^{(1)}, \widetilde{X}^{(2)},\ldots, \widetilde{X}^{(m)}]$ with $m$ samples of $\widetilde{X}^{(i)} \in \mathbb{R}^{T \times H \times W \times C}$, where $1 \leq i \leq m$. Each training example is a tensor $\widetilde{X}^{(i)} = [X_1^{(i)}, X_2^{(i)},\ldots,X_T^{(i)}]$, that is, a sequence of $T$ observations containing historical measurements. Each observation $X_j^{(i)} \in \mathbb{R}^{H \times W \times C}$, for $j = 1, 2,\ldots, T$ (i.e.
the length of the input sequence), consists of an $H \times W$ grid map that determines the spatial location of the measurements, where $H$ and $W$ represent the latitude and longitude, respectively. In the observations, $C$ represents how many meteorological variables (e.g. temperature, humidity) are used simultaneously in the model. This structure is analogous to that of 2D images, where $C$ would indicate the number of color channels (RGB or grayscale). Modeled as a sequence-to-sequence problem in Equation \ref{eq:prediction}, the goal of spatiotemporal data forecasting is to apply a function $f$ that maps an input sequence of past observations, satisfying the causal constraint at each time step $t$, in order to predict a target sequence of grids $\hat{X} \in \mathbb{R}^{H \times W \times C}$, where the length $T^{\prime\prime}$ of the output sequence may differ from the length $T$ of the input sequence.
\begin{equation}
\hat{X}_{t+1},\hat{X}_{t+2},\ldots, \hat{X}_{t+T^{\prime\prime}} = f(X_{t-T+1},\ldots,X_{t-1},X_{t})
\label{eq:prediction}
\end{equation}

\section{STConvS2S architecture}
\label{sec:architecture}
STConvS2S is an end-to-end deep neural network suited to learning spatiotemporal predictive patterns, which are common in domains such as weather forecasting. Our approach makes multi-step (sequence) predictions without feeding the predicted output back into the input sequence. Figure \ref{fig:abstraction-seq2seq} presents an overview of our proposed deep learning architecture.
\begin{figure*}
\includegraphics[width=\linewidth]{figures/abstraction-seq2seq.png}
\caption{An illustration of the STConvS2S architecture, which comprises three components: temporal block, spatial block, and temporal generator block. Each block is a set of layers. The temporal block learns a temporal representation of the input sequence, and the spatial block extracts spatial features from the output of the previous block. On top of the spatial block, there is the temporal generator block, designed to increase the sequence length $T$ if the task requires a longer predictive horizon, where $T^{\prime \prime} \geqslant T$. Finally, the output of this block is fed into a final convolutional layer to complete the prediction.}
\label{fig:abstraction-seq2seq}
\end{figure*}
Although some methods for weather forecasting using a radar echo dataset apply a hybrid approach, combining 2D CNN (to learn spatial representations) and LSTM (to learn temporal representations) \citep{Shi15, Wang17, Wang19}, our method uses only 3D convolutional layers to learn both spatial and temporal contexts. Distinct from the conventional convolution applied in some 3D CNN architectures \citep{Tran15, Tran18, Racah17}, during temporal learning STConvS2S takes care not to depend on future information, a crucial constraint in forecasting tasks. Another core feature of our network is its flexible output sequence length, which makes it possible to predict many time steps ahead regardless of the fixed length of the input sequence. In the following, we provide more details about the components that comprise our architecture.
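To make the tensor layout concrete before describing the blocks, the following minimal sketch (in Python with PyTorch, which our released implementation uses; the batch size and shapes are illustrative assumptions) shows how an input window of $T$ grids is arranged before entering the network:
\begin{verbatim}
# A minimal shape sketch (not the published code): the forecasting task maps
# a history of T grids to a horizon of T'' grids over the same H x W region.
import torch

batch, T, H, W, C = 4, 5, 32, 32, 1   # CFSR-like input: 5 past grids
T_out = 15                            # desired horizon, e.g. the 5 -> 15 task

x = torch.randn(batch, T, H, W, C)    # [X_{t-T+1}, ..., X_t]

# PyTorch's Conv3d expects (batch, channels, depth, height, width); here the
# depth axis plays the role of time, so we move C in front of T.
x = x.permute(0, 4, 1, 2, 3)
print(x.shape)                        # torch.Size([4, 1, 5, 32, 32])

# A trained STConvS2S-like model f would then return the predicted sequence
# with shape (batch, C, T_out, H, W), i.e. X^_{t+1}, ..., X^_{t+T''}.
\end{verbatim}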
\subsection{Factorized 3D convolutions}
Instead of adopting a conventional $t \times d \times d$ kernel for 3D convolutional layers, where $d$ and $t$ are the kernel sizes in the space ($H \times W$) and time ($T$) dimensions, respectively, we use a factorized 3D kernel adapted from the R(2+1)D network proposed in \citet{Tran18}. The factorized kernels $1 \times d \times d$ and $t \times 1 \times 1$ split the convolution operation of one layer into two successive operations, named a spatial convolution and a temporal convolution in their work. In our new architecture, we take a different approach: the operations are not successive inside each convolutional layer. Instead, the factorized kernels are separated into two blocks, giving each block a specific learning role. The temporal block applies the $t \times 1 \times 1$ kernel in its layers to learn only temporal dependencies, while the next component, the spatial block, encapsulates spatial dependencies using the $1 \times d \times d$ kernel. Figure \ref{fig:architecture-comparison} schematically illustrates the difference between these three approaches.
\begin{figure}
\begin{minipage}[b]{0.33\columnwidth}
\centering
\includegraphics{figures/3dconv.png}\\
\subcaption{}
\end{minipage}%
\begin{minipage}[b]{0.33\columnwidth}
\centering
\includegraphics{figures/2+1D.png}\\
\subcaption{}
\end{minipage}%
\begin{minipage}[b]{0.33\columnwidth}
\centering
\includegraphics{figures/my-architecture.png}\\
\subcaption{}
\end{minipage}%
\caption{Comparison of convolution operations applied in three convolutional layers. The spatial kernel is defined as $1 \times d \times d$ and the temporal kernel as $t \times 1 \times 1$, where $d$ and $t$ are the kernel sizes in the spatial ($H \times W$) and time ($T$) dimensions, respectively. (a) Representation of the standard 3D convolution operation using the $t \times d \times d$ kernel. (b) Factorized 3D kernels proposed in \citet{Tran18} as successive spatial and temporal convolution operations in a unique block called (2+1)D. (c) Our proposal uses the factorized 3D kernels in separate blocks. First, the temporal block stacks three convolutional layers, each performing convolutions using only the temporal kernel. Likewise, the spatial block applies the spatial kernel in its layers.}
\label{fig:architecture-comparison}
\end{figure}
Compared to the full 3D kernel applied in standard convolutions, the kernel decomposition used in STConvS2S offers the advantage of increasing the number of nonlinearities in the network (additional activation functions between factorized convolutions), which leads to an increase in the complexity of representable patterns \citep{Tran18}. An advantage of our proposed approach over the (2+1)D block is flexibility, since the temporal and spatial blocks can have distinct numbers of layers, facilitating their optimization.

\subsection{Temporal Block}
\label{subsec:temporal-block}
In STConvS2S, the temporal block is a stack of 3D convolutional layers that adopt the $t \times 1 \times 1$ kernel during convolutions. Each layer receives a 4D tensor with dimensions $T \times H \times W \times C_{l-1}$ as input, where $C_{l-1}$ is the number of filters used in the previous layer ($l-1$), $T$ is the sequence length (time dimension), and $H$ and $W$ represent the size of the spatial coverage for latitude and longitude, respectively. Within the block, the number of filters $C$ in the feature maps is doubled as the number of layers increases, but the final layer reduces it back to the number of filters initially defined. In detail, this block uses batch normalization and the leaky rectified linear unit (LeakyReLU), with a negative slope set to 0.01, after each convolutional layer. This block discovers patterns over the time dimension $T$ exclusively.
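A minimal sketch of such a stack follows (the filter-growth schedule and padding are assumptions for illustration; note that the plain symmetric padding used here keeps the length $T$ but is not yet causal, which the two variants below address):
\begin{verbatim}
# Hedged sketch of a temporal block: Conv3d layers with (t,1,1) kernels, so
# each layer mixes information along the time axis only.
import torch
import torch.nn as nn

def temporal_block(in_ch=1, filters=32, layers=3, t=5):
    ops, ch = [], in_ch
    for i in range(layers):
        # hypothetical schedule: grow the filters inside the block,
        # shrink back to the initial number at the final layer
        out_ch = filters if i == layers - 1 else filters * (2 ** i)
        ops += [nn.Conv3d(ch, out_ch, kernel_size=(t, 1, 1),
                          padding=(t // 2, 0, 0)),  # keeps T for odd t
                nn.BatchNorm3d(out_ch),
                nn.LeakyReLU(negative_slope=0.01)]
        ch = out_ch
    return nn.Sequential(*ops)

x = torch.randn(4, 1, 5, 32, 32)       # (batch, C, T, H, W)
print(temporal_block()(x).shape)       # torch.Size([4, 32, 5, 32, 32])
\end{verbatim}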
Besides, since we are using 3D convolutional layers to analyze historical series of events, we must prevent data leakage from happening. That is, the model should not violate the temporal order and should ensure that, at step $t$, the learning process uses no future information from step $t+1$ onward. To satisfy this constraint, we propose two variants of the temporal block; a sketch of both is given after their descriptions below.

\textbf{Temporal Causal Block.} We name our architecture \emph{STConvS2S-C} when it adopts this block to learn the temporal patterns. We apply causal convolutions within the block to give the convolutional layers the ability to respect the temporal order during learning. Causal convolution was originally presented in WaveNet \citep{Oord16} for 1D CNN and applied with factorized 3D convolutions in \citet{Cheng19}. This technique can be implemented by padding the input by $k-1$ elements, where $k$ is the kernel size. Figure \ref{fig:causal-conv} shows the operation in detail.
\begin{figure}[h]
\centering
\includegraphics[width=0.5\linewidth]{figures/causal-conv.png}
\caption{Causal convolution operation in a 1D convolutional layer (used to simplify the illustration) with $k = 3$ (kernel size). The input is padded by $k-1$ elements to avoid learning future information. To ensure that the output feature map has the same length as the input, the last $k-1$ elements are removed, since they are related to the zeros added to the right of the input.}
\label{fig:causal-conv}
\end{figure}
\textbf{Temporal Reversed Block.} When dealing with historical data, respecting the temporal order (causal constraint) is an essential behavior of deep learning models. This is because, in real forecasting applications, future information is not available. The common approach in the literature for adapting convolutional layers to satisfy this constraint is causal convolution. Here, we introduce an alternative that avoids violating the temporal order: applying a function $\psi$ along the time dimension to reverse the sequence order. This function is a linear transformation $\psi:\mathbb{R}^{T \times H \times W \times C} \rightarrow \mathbb{R}^{T \times H \times W \times C}$. The architecture is named \emph{STConvS2S-R} when composed with this block. Formally, STConvS2S-R computes the output feature map $R$ of a temporal reversed block using
\begin{equation}
R^{\prime}_{u} =
\begin{cases}
g(W_u \ast \psi(I_u) + b_u), & \text{if}\ u = 1 \\
g(W_u \ast I_u + b_u), & \text{if}\ 2 \leqslant u \leqslant l_r
\end{cases}
\label{eq:inner-output}
\end{equation}
\begin{equation}
R = \psi(R^{\prime}_{l_r})
\label{eq:output}
\end{equation}
where $W_{1:l_r}$ and $b_{1:l_r}$ are the learnable weight tensors and bias terms in the $l_r$ layers of this block, $\ast$ denotes a convolution operator and $g(\cdot)$ is a non-linear activation function. For the first layer of the temporal reversed block, $I_1$ is the input sequence $\widetilde{X}$ previously defined in Section \ref{sec:problem}, and for the subsequent layers, $I_u$ is the feature map calculated in the previous layer, $R^{\prime}_{u-1}$.
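The sketch below contrasts the two variants on a single $(t,1,1)$ convolution (the padding choices are assumptions for illustration, not the published implementation):
\begin{verbatim}
# Hedged sketches of the two temporal-block variants.
import torch
import torch.nn as nn
import torch.nn.functional as F

t = 3
conv = nn.Conv3d(16, 16, kernel_size=(t, 1, 1))   # temporal kernel only
x = torch.randn(4, 16, 5, 32, 32)                 # (batch, C, T, H, W)

# STConvS2S-C style: causal convolution. Padding the time axis with k-1
# zeros on the left (equivalent to the pad-then-trim scheme of the causal
# convolution figure) makes output step n depend only on inputs 1..n.
y_c = conv(F.pad(x, (0, 0, 0, 0, t - 1, 0)))      # pad order: (W,W,H,H,T,T)
assert y_c.shape[2] == x.shape[2]                 # length T is preserved

# STConvS2S-R style: the reverse function psi flips the time axis before
# the first convolution and flips the final feature map back, as in the
# reversed-block equations; here with plain length-preserving padding.
psi = lambda z: torch.flip(z, dims=[2])
y_r = psi(conv(F.pad(psi(x), (0, 0, 0, 0, t // 2, t // 2))))
\end{verbatim}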
\subsection{Spatial Block}
The spatial block is built on top of the temporal block and has a similar structure, with batch normalization and LeakyReLU as non-linearities. In contrast, each 3D convolutional layer of this block extracts only spatial representations, since the kernel decomposition allows us to analyze the spatial and temporal contexts separately. In STConvS2S, each feature map generated has a fixed length in the $H \times W$ dimensions and, to ensure this, the input of the spatial block is padded following $p = \frac{k_s - 1}{2}$, where $k_s$ is the size of the spatial kernel. This design choice differentiates our model from the 3D encoder-decoder architecture \citep{Racah17}, which needs to stack upsampling layers after all convolutional layers due to the downsampling performed in the encoder.

\subsection{Temporal Generator Block}
In addition to ensuring that our model satisfies the causal constraint, another contribution of our work is generating output sequences longer than the input sequence. When CNNs are used for sequence-to-sequence learning, such as multi-step forecasting, the length of the output sequence must be equal to or shorter than that of the input sequence \citep{Gehring17, Bai18}. To tackle this limitation, we designed a component placed on top of the spatial block, used when the task requires a more extended sequence (e.g., from the previous 5 grids, predict the next 15 grids). First, we compute the intermediate feature map $G^{\prime}$:
\begin{equation}
G^{\prime}_{1:l_{g_t}} = tconv(I_{1:l_{g_t}})
\label{eq:gen-inner}
\end{equation}
where $l_{g_t} = \ceil*{\frac{T^{\prime \prime} - T}{2 T}}$ is the number of transposed convolutional layers ($tconv$) necessary to guarantee that the size of the time dimension of $G^{\prime}$ satisfies $T_{G^{\prime}} \geqslant T^{\prime \prime}- T$. The kernel size, stride and padding of $tconv$ are fixed and extend the feature map by a factor of 2 in the time dimension only. For the first layer, $I_1$ is the output of the spatial block, $S$, and for the other layers, $I_{2:l_{g_t}}$ is the feature map calculated in the previous layer, $G^{\prime}_{l_{g_t}-1}$. Next, given $G^{\prime}$ and $S$, we compute $G^{\prime \prime}$:
\begin{equation}
G^{\prime \prime} = \rho(S \oplus G^{\prime})
\label{eq:g-prime-prime}
\end{equation}
In the equation above, $\oplus$ denotes a concatenation operator in the time dimension and $\rho(\cdot)$ is a function that ensures that the feature map $G^{\prime \prime}$ matches exactly the length $T^{\prime \prime}$ of the desired output sequence. Finally, the output feature map $G$ of this block is defined as
\begin{equation}
G_{1:l_{g_c}} = g(W_{1:l_{g_c}} \ast I_{1:l_{g_c}} + b_{1:l_{g_c}})
\label{eq:gen-output}
\end{equation}
where $l_{g_c} = \floor*{\frac{T^{\prime \prime}}{T}}$ is the number of convolutional layers that use factorized kernels as in the spatial block, $W_{1:l_{g_c}}$ and $b_{1:l_{g_c}}$ are the learnable weight tensors and bias terms in the $l_{g_c}$ layers, $\ast$ denotes a convolution operator and $g(\cdot)$ is a non-linear activation function. For the first convolutional layer, $I_1$ is $G^{\prime \prime}$. Unlike the temporal and spatial blocks, where the number of layers is a hyperparameter defined before executing the model, in the temporal generator block $l_{g_t}$ and $l_{g_c}$ are calculated from the length $T^{\prime \prime}$ of the desired output sequence and the length $T$ of the input sequence (the size of the time dimension).
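A hedged sketch of this block for the $5 \rightarrow 15$ task follows (the transposed-convolution kernel size is an assumption chosen so that each layer exactly doubles the time axis; the final $l_{g_c}$ refining convolutional layers of the last equation are omitted for brevity):
\begin{verbatim}
# Temporal generator block sketch: stretch only the time axis, concatenate
# with the spatial-block output S, and trim to T'' steps (the rho function).
import math
import torch
import torch.nn as nn

T, T_out = 5, 15
S = torch.randn(4, 32, T, 32, 32)        # spatial-block output (batch,C,T,H,W)

l_gt = math.ceil((T_out - T) / (2 * T))  # number of transposed conv layers
l_gc = T_out // T                        # number of refining conv layers

# stride/kernel fixed so each tconv doubles the time dimension only
tconv = nn.ConvTranspose3d(32, 32, kernel_size=(2, 1, 1), stride=(2, 1, 1))
G_prime = S
for _ in range(l_gt):
    G_prime = tconv(G_prime)             # time axis: 5 -> 10

G_pp = torch.cat([S, G_prime], dim=2)[:, :, :T_out]  # rho: concat and trim
print(l_gt, l_gc, G_pp.shape)            # 1 3 torch.Size([4, 32, 15, 32, 32])
\end{verbatim}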
\section{Experiments}
\label{sec:experiments}
We perform experiments on two publicly available meteorological datasets containing air temperature and precipitation values to validate our proposed architecture. The deep learning experiments were conducted on a server with a single Nvidia GeForce GTX1080Ti GPU with 11GB memory. We executed the ARIMA methods on 8 Intel i7 CPUs with 4 cores and 66GB RAM. We start by explaining the datasets (Section~\ref{sec:datasets}) and evaluation metrics (Section~\ref{sec:Metrics}). Then, we describe the main results of the experiments for each dataset (Sections~\ref{sec:cfsr-results} and \ref{sec:chirps-results}) and summarize the results of the ablation studies (Section~\ref{sec:ablation}).

\subsection{Datasets}
\label{sec:datasets}
The CFSR\footnote{\url{https://climatedataguide.ucar.edu/climate-data/climate-forecast-system-reanalysis-cfsr}} is a reanalysis\footnote{A scientific method used to produce best estimates (analyses) of how the weather is changing over time \citep{Fujiwara17}.} product that contains high-resolution global land and ocean data \citep{Saha14}. The data have spatial coordinates (latitude and longitude), a spatial resolution of 0.5 degrees (i.e., a $0.5^{\circ} \times 0.5^{\circ}$ area for each grid cell) and a frequency of 6 hours for several meteorological variables, such as air temperature and wind speed. In the experiments, we use a subset of CFSR with the air temperature observations from January 1979 to December 2015, covering the area 8$^{\circ}$N-54$^{\circ}$S and 80$^{\circ}$W-25$^{\circ}$W, as shown in Figure \ref{fig:datasets} (a). As data preprocessing, we scale down the grid to $32 \times 32$ in the $H$ and $W$ dimensions to fit the data in GPU memory. The other dataset, CHIRPS\footnote{\url{https://chc.ucsb.edu/data/chirps}}, incorporates satellite imagery and in-situ station data to create gridded rainfall time series with daily frequency and a spatial resolution of 0.05 degrees \citep{Funk15}. We use a subset with observations from January 1981 to March 2019 and apply interpolation to reduce the grid size to $50 \times 50$. Figure \ref{fig:datasets} (b) illustrates the coverage area 10$^{\circ}$N-39$^{\circ}$S and 84$^{\circ}$W-35$^{\circ}$W adopted in our experiments.
\begin{figure}
\centering
\begin{minipage}[b]{0.35\linewidth}
\centering
\includegraphics[width=\textwidth]{figures/cfsr-dataset.png}
\subcaption{CFSR-temperature dataset}
\end{minipage}
\hspace{0.5cm}
\begin{minipage}[b]{0.35\linewidth}
\centering
\includegraphics[width=\textwidth]{figures/chirps-dataset.png}
\subcaption{CHIRPS-rainfall dataset}
\end{minipage}
\caption{Spatial coverage of the datasets used in all experiments. (a) The selected grid on January 1, 1979, with air temperature values. (b) The selected grid of the sequence on March 31, 2019, with rainfall values.}
\label{fig:datasets}
\end{figure}
Similar to \citet{Shi15}, we define the input sequence length as 5, which indicates the use of the previous five grids to predict the next $T''$ grids. Thus, the input data shapes for the deep learning architectures are $5 \times 32 \times 32 \times 1$ for the CFSR dataset and $5 \times 50 \times 50 \times 1$ for the CHIRPS dataset. The value 1 in both shapes indicates a single channel (in this aspect similar to a grayscale image), 5 is the size of the sequence considered in the forecasting task, and 32 and 50 represent the numbers of latitudes and longitudes used to build the spatial grid in each dataset. We create 54,041 and 13,960 grid sequences from the temperature and rainfall datasets, respectively. Finally, we divide both datasets into non-overlapping training, validation, and test sets following a 60\%, 20\%, and 20\% ratio, in this order.
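A minimal sketch of how such grid sequences could be assembled and split follows (the window stride and the stand-in array are assumptions; the released datasets already ship in NetCDF form):
\begin{verbatim}
# Hedged sketch: build (input, target) windows from a (time, H, W, C) stack
# of grids, then split chronologically 60/20/20.
import numpy as np

def make_sequences(grids, T=5, T_out=5):
    X, Y = [], []
    for i in range(len(grids) - T - T_out + 1):   # stride 1 is an assumption
        X.append(grids[i:i + T])                  # previous T observations
        Y.append(grids[i + T:i + T + T_out])      # next T_out observations
    return np.stack(X), np.stack(Y)

grids = np.random.rand(1000, 32, 32, 1)           # stand-in for CFSR data
X, Y = make_sequences(grids)

n = len(X)
i60, i80 = int(0.6 * n), int(0.8 * n)
X_train, X_val, X_test = X[:i60], X[i60:i80], X[i80:]   # Y split analogously
\end{verbatim}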
The adoption of the temperature and rainfall datasets in our experimental evaluation relies on the fact that they are the two main meteorological variables. Research about their spatiotemporal representation is relevant to short-term forecasting and improves the understanding of long-term climate variability \citep{Rahman17}. However, the proposed architecture is suitable for other meteorological variables or other domains, as long as the training data can be structured as defined in Section \ref{sec:problem}.

\subsection{Evaluation metrics}
\label{sec:Metrics}
To evaluate the proposed architecture, we compare our results against ARIMA models, traditional statistical approaches for time series forecasting, and against state-of-the-art models for spatiotemporal forecasting. To accomplish this, we use the two evaluation metrics presented in Equations \ref{eq:rmse} and \ref{eq:mae}. RMSE, denoted $E_r$, is based on the MSE metric, the average of the squared differences between real observations and predictions. The square root of the MSE gives the result in the original unit of the output; over a specific spatiotemporal volume it is expressed as:
\begin{equation}
E_r(T,H,W) = \sqrt{\frac{1}{N} \sum_{n=1}^{N} \sum_{t \in T} \sum_{h \in H} \sum_{w \in W} [x(t,h,w) - \hat{x}(t,h,w)]^2}
\label{eq:rmse}
\end{equation}
where $N$ is the number of test samples, and $x(t,h,w)$ and $\hat{x}(t,h,w)$ are the real and predicted values at location $(h,w)$ at time $t$, respectively. MAE, denoted $E_m$, is the average of the absolute differences between real observations and predictions, which measures the magnitude of the prediction errors. MAE also provides the result in the original unit of the output, and is expressed over a specific spatiotemporal volume as:
\begin{equation}
E_m(T,H,W) = \frac{1}{N} \sum_{n=1}^{N} \sum_{t \in T} \sum_{h \in H} \sum_{w \in W} |x(t,h,w) - \hat{x}(t,h,w)|
\label{eq:mae}
\end{equation}
where $N$, $t$, $h$, $w$ are defined as in Equation \ref{eq:rmse}.
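For reference, a direct sketch of Equations \ref{eq:rmse} and \ref{eq:mae} (following the definitions literally, i.e., inner sums over the spatiotemporal volume and an average over the $N$ test samples):
\begin{verbatim}
# Evaluation metrics as written in Eqs. (E_r) and (E_m).
import numpy as np

def rmse(x, x_hat):
    """x, x_hat: arrays of shape (N, T, H, W)."""
    return np.sqrt(np.square(x - x_hat).sum(axis=(1, 2, 3)).mean())

def mae(x, x_hat):
    """Mean absolute deviation, in the original unit of the output."""
    return np.abs(x - x_hat).sum(axis=(1, 2, 3)).mean()
\end{verbatim}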
\subsection{CFSR Dataset: results and analysis}
\label{sec:cfsr-results}
We first conduct experiments with distinct numbers of layers, filters, and kernel sizes to investigate the best hyperparameters to fit the deep learning models. As a starting point, we set version $1$ based on the settings described in \citet{Shi15}, with two layers, each containing 64 filters and a kernel size of 3\footnote{$3 \times 3$ kernel for ConvLSTM, PredRNN, and MIM. $3 \times 1 \times 1$ temporal kernel and $1 \times 3 \times 3$ spatial kernel for STConvS2S.}. To make fair comparisons using the chosen datasets, we explored variations of the hyperparameters for our architectures (STConvS2S-C and STConvS2S-R) and for the following state-of-the-art methods: ConvLSTM \citep{Shi15}, PredRNN \citep{Wang17}, and MIM \citep{Wang19}. Thus, for versions $2$-$4$, we defined the number of layers ($L$), kernel size ($K$) and number of filters ($F$) in a way that would help us understand the behavior of the models during the learning process when increasing $L$ (versions $1$ and $3$), $K$ (versions $2$ and $4$) or $F$ (versions $2$ and $3$). In the training phase, all models perform mini-batch learning for 50 epochs with the RMSprop optimizer and a learning rate of $10^{-3}$. We applied dropout after the convolutional layers during the training of the PredRNN and MIM models\footnote{STConvS2S and ConvLSTM models do not overfit during training on any version.} to reduce model complexity and avoid overfitting. Without dropout, these models do not generalize well for this dataset and make less accurate predictions on the validation set. We adopt 0.5 as the dropout rate for both models, after evaluating the candidate rates \{0.3, 0.5, 0.8\} with a grid search. Figures \ref{fig:lineplot} (a) and (b) illustrate the differences in the learning curves: the former shows a high error on the validation set early in the training stage for both models, and the latter shows the learning curves with dropout applied. As a sequence-to-sequence task, we use the previous five grids, as established in Section \ref{sec:datasets}, to predict the next five grids (denoted as {$5 \rightarrow 5$}). Table \ref{tab:cfsr-exp1} reports the models considered in our investigation with four different settings, the values of the RMSE metric on the test set, the training time, and the GPU memory usage. The results show the superiority of version $4$, which has the highest values of L and K, reaching the lowest RMSE for all models except PredRNN, for which version $2$ is superior. Another aspect to note is that, for the state-of-the-art models, increasing the number of filters (versions $2$ and $3$) has a more significant impact on the training time than increasing the number of layers (versions $1$ and $3$), indicating that version $2$ is faster for these RNN-based architectures. This impact is not seen in our CNN-based models, which in versions $1$ and $2$ spend almost the same time during training as in version $3$, showing that STConvS2S models are more stable when the hyperparameters are increased.
\begin{figure}
\centering
\begin{minipage}[b]{0.47\linewidth}
\centering
\includegraphics[width=\textwidth]{figures/overfitting-cfsr-predrnn-mim.png}
\subcaption{Learning curve - overfitting}
\end{minipage}
\begin{minipage}[b]{0.47\linewidth}
\centering
\includegraphics[width=\textwidth]{figures/cfsr-predrnn-mim.png}
\subcaption{Learning curve - dropout 0.5}
\end{minipage}
\begin{minipage}[b]{0.5\linewidth}
\centering
\includegraphics[width=\textwidth]{figures/cfsr-all-models.png}
\subcaption{Training curve for all models}
\end{minipage}
\caption{Learning curves after running 50 epochs on the temperature dataset (CFSR). (a) To exemplify, we select version $2$ to illustrate the overfitting observed when analyzing the training and validation curves of the PredRNN and MIM models. (b) The same version and models using dropout to improve their generalization. (c) Comparison of the training curves for the best version of each model.
Our models (STConvS2S-R and STConvS2S-C) achieved a lower RMSE and, thus, a better ability to learn spatiotemporal representations.}
\label{fig:lineplot}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.70\linewidth]{figures/barplot-cfsr-rmse-time.png}
\caption{Comparison between training time in hours (bar plot) and RMSE (line plot) for each model version on the temperature dataset (CFSR).}
\label{fig:rmse-all-models}
\end{figure}
\begin{table}
\centering
\caption{Evaluation of different settings on the CFSR dataset for STConvS2S and state-of-the-art methods, where the best version has the lowest RMSE value.}
\label{tab:cfsr-exp1}
\begin{tabular}{lccccc@{\extracolsep{\fill}}}
\toprule
& & & \multicolumn{3}{c}{$5 \rightarrow 5$} \\
\cmidrule{4-6}
Model & Version & Setting & RMSE & Training time & \parbox{2.5cm}{\centering Memory usage (MB)} \\
\midrule
\multirow{4}{*}{\parbox{3cm}{ConvLSTM \\ \citep{Shi15}}} & 1 & L=2, K=3, F=64 & 2.1306 & 01:49:16 & 1119 \\
& 2 & L=3, K=3, F=32 & 2.0090 & \textbf{01:12:16} & \textbf{920} \\
& 3 & L=3, K=3, F=64 & 1.9607 & 02:53:00 & 1358 \\
& 4 & L=3, K=5, F=32 & \textbf{1.8770} & 01:58:52 & 922 \\
\midrule
\multirow{4}{*}{\parbox{3cm}{PredRNN \\ \citep{Wang17}}} & 1 & L=2, K=3, F=64 & 1.7497 & 06:59:45 & 3696 \\
& 2 & L=3, K=3, F=32 & \textbf{1.6928} & \textbf{04:55:19} & \textbf{2880} \\
& 3 & L=3, K=3, F=64 & 1.7004 & 10:44:25 & 5242 \\
& 4 & L=3, K=5, F=32 & 1.7028 & 06:41:14 & 2892 \\
\midrule
\multirow{4}{*}{\parbox{3cm}{MIM \\ \citep{Wang19}}} & 1 & L=2, K=3, F=64 & 1.7623 & 09:37:07 & 4826 \\
& 2 & L=3, K=3, F=32 & 1.7199 & \textbf{07:31:59} & \textbf{4124} \\
& 3 & L=3, K=3, F=64 & 1.7163 & 16:27:42 & 7789 \\
& 4 & L=3, K=5, F=32 & \textbf{1.6621} & 09:49:40 & 4145 \\
\midrule
\multirow{4}{*}{\parbox{3cm}{STConvS2S-C (ours)}} & 1 & L=2, K=3, F=64 & 1.6355 & \textbf{01:16:40} & \textbf{991} \\
& 2 & L=3, K=3, F=32 & 1.5681 & 01:22:42 & 1021 \\
& 3 & L=3, K=3, F=64 & 1.5459 & 03:14:25 & 1554 \\
& 4 & L=3, K=5, F=32 & \textbf{1.3791} & 02:25:26 & 1040 \\
\midrule
\multirow{4}{*}{\parbox{3cm}{STConvS2S-R (ours)}} & 1 & L=2, K=3, F=64 & 1.4614 & \textbf{01:05:48} & \textbf{880} \\
& 2 & L=3, K=3, F=32 & 1.3663 & 01:07:33 & 891 \\
& 3 & L=3, K=3, F=64 & 1.3359 & 02:42:53 & 1283 \\
& 4 & L=3, K=5, F=32 & \textbf{1.2773} & 01:58:39 & 895 \\
\bottomrule
\end{tabular}
\end{table}
To aid comprehension of the analysis, Figure \ref{fig:rmse-all-models} highlights the differences in RMSE and training time among the models. As shown, the STConvS2S-R and STConvS2S-C models perform favorably against the state-of-the-art models for the CFSR dataset in all versions, demonstrating that our architectures can simultaneously capture spatial and temporal correlations. Comparing the best version of each model, our models significantly outperform the state-of-the-art architectures for spatiotemporal forecasting. In detail, STConvS2S-R (version $4$) takes only 1/4 of the memory space, is 5x faster in training, and achieves a 23\% improvement in RMSE over MIM (version $4$), the best-performing RNN-based model. These results reinforce that our models have fewer parameters to optimize than MIM and PredRNN. Furthermore, our model can be completely parallelized, speeding up the learning process, since the output of the convolutional layers does not depend on the calculations of the previous step, as occurs in recurrent architectures.
Figure \ref{fig:lineplot} (c) illustrates that STConvS2S-R has a lower training error over 50 epochs compared to the other models, including STConvS2S-C, proving to be a better alternative for making CNN-based models respect the temporal order. To further evaluate our models, we chose the most efficient version of each model to perform new experiments. For the STConvS2S-R, STConvS2S-C, ConvLSTM and MIM models, version $4$ was chosen, with 3 layers, 32 filters, and a kernel size of 5; for PredRNN, version $2$, with the same number of layers and filters but a kernel size of 3. We also included a comparison with ARIMA methods to serve as a baseline for the deep learning models, since they are a traditional approach to time series forecasting. The experiment for the baseline takes into account the same temporal pattern and spatial coverage. Thus, predictions were performed over all 1,024 time series, considering in each analysis the previous 5 values in the sequence. In this phase, we did not define a specific number of epochs for each deep learning model's execution. Therefore, to avoid overfitting during the training of the models, we applied the early stopping technique on the validation dataset, with the patience hyperparameter set to 16. As the models run for different numbers of epochs, we include the training time per epoch to compare the time efficiency of the models. We train and evaluate each deep learning model 3 times and compute the mean and standard deviation of the RMSE and MAE metrics on the test set. This time, we evaluate the models on two horizons: 5 steps ({$5 \rightarrow 5$}) and 15 steps ({$5 \rightarrow 15$}). These experiments are relevant to test the capability of our model to predict a long sequence. As shown in Table \ref{tab:dataset1}, STConvS2S-R and STConvS2S-C perform much better than the baseline on the RMSE and MAE metrics, indicating the importance of spatial dependence in geoscience data, since ARIMA models only analyze temporal relationships. They also outperform the state-of-the-art models on these evaluation metrics in both horizons, demonstrating that our models can be efficiently adopted to predict future observations. Beyond that, the designed temporal generator block in the STConvS2S architecture can convincingly generate a more extended sequence regardless of the fixed input sequence length. In a closer look at the best CNN-based and RNN-based architectures in the task {$5 \rightarrow 15$}, STConvS2S-R takes less memory space and is faster than PredRNN. To provide an overview, Figure \ref{fig:stackedplot-cfsr} illustrates the cumulative error for both horizons.
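Since the number of epochs now varies per model, the stopping rule matters; a minimal sketch of the early-stopping logic just described follows (the training helper is hypothetical):
\begin{verbatim}
# Early stopping with patience = 16 epochs without validation improvement.
best, wait, patience = float("inf"), 0, 16
epoch = 0
while True:
    epoch += 1
    val_loss = train_one_epoch_and_validate()  # hypothetical helper
    if val_loss < best:
        best, wait = val_loss, 0               # improvement: reset counter
    else:
        wait += 1
        if wait > patience:
            break                              # 16 epochs with no improvement
\end{verbatim}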
\begin{table}
\centering
\caption{Performance results for temperature forecasting using the previous five observations (grids) to predict the next five observations ($5 \rightarrow 5$), and the next 15 observations ($5 \rightarrow 15$).}
\label{tab:dataset1}
\begin{adjustbox}{width=\textwidth}
\begin{tabular}{lccccc@{\extracolsep{\fill}}}
\toprule
& \multicolumn{5}{c}{$5 \rightarrow 5$} \\
\cmidrule{2-6}
Model & RMSE & MAE & \parbox{2cm}{ \centering Memory usage (MB)} & \parbox{2cm}{\centering Mean training time} & \parbox{2cm}{\centering Training time/epoch} \\
\midrule
\parbox{4.5cm}{ARIMA} & 2.1880 & 1.9005 & \textemdash & \textemdash & \textemdash \\
\parbox{4.5cm}{ConvLSTM \citep{Shi15}} & 1.8555 $\pm$ 0.0033 & 1.2843 $\pm$ 0.0028 & 922 & \textbf{02:38:27} & 00:02:21 \\
\parbox{4.5cm}{PredRNN \citep{Wang17}} & 1.6962 $\pm$ 0.0038 & 1.1885 $\pm$ 0.0020 & 2880 & 06:59:34 & 00:05:52 \\
\parbox{4.5cm}{MIM \citep{Wang19}} & 1.6731 $\pm$ 0.0099 & 1.1790 $\pm$ 0.0055 & 4145 & 11:05:37 & 00:10:43 \\
\midrule
STConvS2S-C (ours) & 1.3699 $\pm$ 0.0024 & 0.9434 $\pm$ 0.0020 & 1040 & 03:34:52 & 00:02:48 \\
STConvS2S-R (ours) & \textbf{1.2692} $\pm$ 0.0031 & \textbf{0.8552} $\pm$ 0.0018 & \textbf{895} & 03:15:12 & \textbf{00:02:13} \\
\bottomrule
\end{tabular}
\end{adjustbox}
\begin{adjustbox}{width=\textwidth}
\begin{tabular}{lccccc@{\extracolsep{\fill}}}
& \multicolumn{5}{c}{$5 \rightarrow 15$} \\
\cmidrule{2-6}
Model & RMSE & MAE & \parbox{2cm}{ \centering Memory usage (MB)} & \parbox{2cm}{\centering Mean training time} & \parbox{2cm}{\centering Training time/epoch} \\
\midrule
\parbox{4.5cm}{ARIMA} & 2.2481 & 1.9077 & \textemdash & \textemdash & \textemdash \\
\parbox{4.5cm}{ConvLSTM \citep{Shi15}} & 2.0728 $\pm$ 0.0069 & 1.4558 $\pm$ 0.0076 & 1810 & 05:29:30 & 00:07:32 \\
\parbox{4.5cm}{PredRNN \citep{Wang17}} & 2.0237 $\pm$ 0.0067 & 1.4311 $\pm$ 0.0149 & 7415 & 11:45:48 & 00:17:03 \\
\parbox{4.5cm}{MIM \citep{Wang19}} & 2.0287 $\pm$ 0.0361 & 1.4330 $\pm$ 0.0250 & 10673 & 19:19:00 & 00:31:19 \\
\midrule
STConvS2S-C (ours) & 1.8739 $\pm$ 0.0107 & 1.2946 $\pm$ 0.0061 & 1457 & \textbf{03:12:24} & 00:05:17 \\
STConvS2S-R (ours) & \textbf{1.8051} $\pm$ 0.0040 & \textbf{1.2404} $\pm$ 0.0068 & \textbf{1312} & 03:15:42 & \textbf{00:05:03} \\
\bottomrule
\end{tabular}
\end{adjustbox}
\end{table}
\begin{figure}
\centering
\begin{minipage}[b]{1\linewidth}
\centering
\includegraphics[width=0.7\textwidth]{figures/stackedplot-cfsr-rmse.png}
\end{minipage}
\hspace{0.5cm}
\begin{minipage}[b]{1\linewidth}
\centering
\includegraphics[width=0.7\textwidth]{figures/stackedplot-cfsr-mae.png}
\end{minipage}
\caption{Cumulative error for both horizons ($5 \rightarrow 5$ and $5 \rightarrow 15$) using the temperature dataset (CFSR). Evaluations on the RMSE and MAE metrics.}
\label{fig:stackedplot-cfsr}
\end{figure}

\subsection{CHIRPS Dataset: results and analysis}
\label{sec:chirps-results}
Similar to what we did with the CFSR dataset, we divide the experiments into two phases. The first phase aims to investigate the best hyperparameter settings to adjust the models. In the second, we take the best version of each model and perform experiments with different initialization values for the weights and biases to consolidate the analysis. In detail, we first set the hyperparameters as in the previous experiments on the CFSR dataset. However, as the CHIRPS dataset is almost 4x smaller, all models overfit with those configurations.
Thus, as an initial method to address this problem, we reduce the models' complexity by decreasing the number of layers ($L$) and the number of filters ($F$). However, we ensure fair comparability in the way we analyze the learning process when changing $L$ (versions $1$ and $3$), $K$ (versions $2$ and $4$) or $F$ (versions $2$ and $3$). Again, all models were trained with mini-batch learning for 50 epochs and the RMSprop optimizer with a learning rate of $10^{-3}$. Although we reduced the complexity, the overfitting problem remained with the PredRNN and MIM models. We apply dropout to improve their performance in comparison with our proposed models. As before, we apply a search to find the best dropout rate among \{0.2, 0.5, 0.8\}. Figures \ref{fig:chirps-lineplot} (a) and (b) show the learning curves of these models with overfitting and with a dropout rate of 0.5 applied, respectively. Table \ref{tab:chirps-exp1} shows the experimental results of predicting five grids into the future by observing five grids ($5 \rightarrow 5$). For all models, version $1$, with the fewest layers, has the lowest memory usage and was faster in training than the other versions. Another notable observation is that versions $2$ and $4$ consume GPU memory equally, except for the STConvS2S-C model; thus, increasing the kernel size affects only the computation time. Figure \ref{fig:chirps-rmse-all-models} compares these results version by version. STConvS2S models outperform ConvLSTM with comparable training time on all versions. Comparing the best version of each model (version $4$ for the STConvS2S models and ConvLSTM, and version $2$ for PredRNN and MIM), STConvS2S-R has the lowest prediction error and, compared to PredRNN, is 3x faster and occupies only 1/3 of the memory space. Besides, Figure \ref{fig:chirps-lineplot} (c) illustrates training over 50 epochs and indicates that STConvS2S-R learns the spatiotemporal representation of rainfall better than the RNN-based architectures. For the second phase, we train the models in the same setup as previously indicated for the CFSR dataset. We also include ARIMA methods as a baseline and evaluate the proposed architectures and state-of-the-art models on two tasks: feeding only five observations (grids) into the network and predicting the next 5 and 15 observations, denoted as $5 \rightarrow 5$ and $5 \rightarrow 15$, respectively. For ARIMA, predictions were performed over all 2,500 time series, considering in each analysis the previous five values in the sequence. The results in Table \ref{tab:chirps-exp2} demonstrate that STConvS2S-R achieves a better trade-off between computational cost and prediction accuracy than the state-of-the-art models in both tasks. Figure \ref{fig:stackedplot-chirps} summarizes these results in an overview of the cumulative error for the two forecast horizons. Trained on the rainfall dataset, our proposed architecture equipped with the temporal reversed block achieves performance comparable to the RNN-based architectures. Besides, it can predict short and even long sequences in the spatiotemporal context. Such a statement is confirmed by Figure \ref{fig:grids-chirps}, which shows the observations at each time step for the STConvS2S-R and PredRNN models. STConvS2S-R can predict in the long term without many distortions, presenting predictive results similar to PredRNN's. Given the high variability of rainfall, both models have difficulties in making an accurate forecast.
\begin{figure}
\centering
\begin{minipage}[b]{0.47\linewidth}
\centering
\includegraphics[width=\textwidth]{figures/overfitting-chirps-predrnn-mim.png}
\subcaption{Learning curve - overfitting}
\end{minipage}
\begin{minipage}[b]{0.47\linewidth}
\centering
\includegraphics[width=\textwidth]{figures/chirps-predrnn-mim.png}
\subcaption{Learning curve - dropout 0.5}
\end{minipage}
\begin{minipage}[b]{0.5\linewidth}
\centering
\includegraphics[width=\textwidth]{figures/chirps-all-models.png}
\subcaption{Training curve for all models}
\end{minipage}
\caption{Learning curves after running 50 epochs on the rainfall dataset (CHIRPS). (a) To exemplify, we select version $2$ to illustrate the overfitting observed when analyzing the PredRNN and MIM models' training and validation curves. (b) The same version and models using dropout to improve their generalization. (c) Comparison of the training curves for the best version of each model. STConvS2S-R achieved a lower RMSE and, thus, a better ability to learn spatiotemporal representations.}
\label{fig:chirps-lineplot}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.7\linewidth]{figures/barplot-chirps-rmse-time.png}
\caption{Comparison between training time in hours (bar plot) and RMSE (line plot) for each model version on the rainfall dataset (CHIRPS).}
\label{fig:chirps-rmse-all-models}
\end{figure}
\begin{table}
\centering
\caption{Evaluation of different settings on the CHIRPS dataset for STConvS2S and state-of-the-art methods, where the best version has the lowest RMSE value.}
\label{tab:chirps-exp1}
\begin{tabular}{lccccc@{\extracolsep{\fill}}}
\toprule
& & & \multicolumn{3}{c}{$5 \rightarrow 5$} \\
\cmidrule{4-6}
Model & Version & Setting & RMSE & Training time & \parbox{2.5cm}{\centering Memory usage (MB)} \\
\midrule
\multirow{4}{*}{\parbox{3cm}{ConvLSTM \\ \citep{Shi15}}} & 1 & L=1, K=3, F=16 & 6.4321 & \textbf{00:06:39} & \textbf{746} \\
& 2 & L=2, K=3, F=8 & 6.4076 & 00:08:01 & 756 \\
& 3 & L=2, K=3, F=16 & 6.3963 & 00:13:35 & 989 \\
& 4 & L=2, K=5, F=8 & \textbf{6.3681} & 00:10:19 & 756 \\
\midrule
\multirow{4}{*}{\parbox{3cm}{PredRNN \\ \citep{Wang17}}} & 1 & L=1, K=3, F=16 & 6.2787 & \textbf{00:30:31} & \textbf{1673} \\
& 2 & L=2, K=3, F=8 & \textbf{6.2572} & 00:34:19 & 1740 \\
& 3 & L=2, K=3, F=16 & 6.2638 & 01:01:00 & 2775 \\
& 4 & L=2, K=5, F=8 & 6.2600 & 00:39:24 & 1740 \\
\midrule
\multirow{4}{*}{\parbox{3cm}{MIM \\ \citep{Wang19}}} & 1 & L=1, K=3, F=16 & 6.3126 & \textbf{00:25:32} & \textbf{1447} \\
& 2 & L=2, K=3, F=8 & \textbf{6.2586} & 00:43:52 & 2231 \\
& 3 & L=2, K=3, F=16 & 6.2634 & 01:19:18 & 3521 \\
& 4 & L=2, K=5, F=8 & 6.2626 & 00:48:00 & 2231 \\
\midrule
\multirow{4}{*}{\parbox{3cm}{STConvS2S-C (ours)}} & 1 & L=1, K=3, F=16 & 6.3991 & \textbf{00:06:16} & \textbf{609} \\
& 2 & L=2, K=3, F=8 & 6.3660 & 00:08:23 & 654 \\
& 3 & L=2, K=3, F=16 & 6.3623 & 00:12:45 & 807 \\
& 4 & L=2, K=5, F=8 & \textbf{6.3131} & 00:13:16 & 662 \\
\midrule
\multirow{4}{*}{\parbox{3cm}{STConvS2S-R (ours)}} & 1 & L=1, K=3, F=16 & 6.3910 & \textbf{00:06:05} & \textbf{584} \\
& 2 & L=2, K=3, F=8 & 6.3310 & 00:07:14 & 616 \\
& 3 & L=2, K=3, F=16 & 6.3205 & 00:11:01 & 735 \\
& 4 & L=2, K=5, F=8 & \textbf{6.2288} & 00:10:55 & 616 \\
\bottomrule
\end{tabular}
\end{table}
\begin{table}
\centering
\caption{Performance results for rainfall forecasting using the previous five observations (grids) to predict the next five observations ($5 \rightarrow 5$), and the next 15 observations ($5 \rightarrow 15$).}
\label{tab:chirps-exp2}
\begin{adjustbox}{width=\textwidth}
\begin{tabular}{lccccc@{\extracolsep{\fill}}}
\toprule
& \multicolumn{5}{c}{$5 \rightarrow 5$} \\
\cmidrule{2-6}
Model & RMSE & MAE & \parbox{2cm}{ \centering Memory usage (MB)} & \parbox{2cm}{\centering Mean training time} & \parbox{2cm}{\centering Training time/epoch} \\
\midrule
\parbox{4.5cm}{ARIMA} & 7.4377 & 6.1694 & \textemdash & \textemdash & \textemdash \\
\parbox{4.5cm}{ConvLSTM \citep{Shi15}} & 6.3666 $\pm$ 0.0019 & 2.9074 $\pm$ 0.0185 & 752 & \textbf{00:15:15} & 00:00:13 \\
\parbox{4.5cm}{PredRNN \citep{Wang17}} & 6.2625 $\pm$ 0.0039 & 2.7880 $\pm$ 0.0110 & 1740 & 00:39:59 & 00:00:43 \\
\parbox{4.5cm}{MIM \citep{Wang19}} & 6.2621 $\pm$ 0.0051 & 2.7900 $\pm$ 0.0178 & 2231 & 00:52:13 & 00:00:52 \\
\midrule
STConvS2S-C (ours) & 6.3091 $\pm$ 0.0029 & 2.8487 $\pm$ 0.0280 & 662 & 00:15:54 & 00:00:15 \\
STConvS2S-R (ours) & \textbf{6.2248} $\pm$ 0.0006 & \textbf{2.7821} $\pm$ 0.0261 & \textbf{616} & 00:16:48 & \textbf{00:00:13} \\
\bottomrule
\end{tabular}
\end{adjustbox}
\begin{adjustbox}{width=\textwidth}
\begin{tabular}{lccccc@{\extracolsep{\fill}}}
& \multicolumn{5}{c}{$5 \rightarrow 15$} \\
\cmidrule{2-6}
Model & RMSE & MAE & \parbox{2cm}{ \centering Memory usage (MB)} & \parbox{2cm}{\centering Mean training time} & \parbox{2cm}{\centering Training time/epoch} \\
\midrule
\parbox{4.5cm}{ARIMA} & 7.9460 & 5.9379 & \textemdash & \textemdash & \textemdash \\
\parbox{4.5cm}{ConvLSTM \citep{Shi15}} & 6.3244 $\pm$ 0.0025 & 2.8972 $\pm$ 0.0264 & 1308 & 00:44:30 & \textbf{00:00:33} \\
\parbox{4.5cm}{PredRNN \citep{Wang17}} & 6.2600 $\pm$ 0.0013 & \textbf{2.7850} $\pm$ 0.0067 & 4115 & 01:53:30 & 00:01:58 \\
\parbox{4.5cm}{MIM \citep{Wang19}} & 6.2722 $\pm$ 0.0020 & 2.7935 $\pm$ 0.0246 & 5276 & 02:48:38 & 00:02:34 \\
\midrule
STConvS2S-C (ours) & 6.2962 $\pm$ 0.0039 & 2.8452 $\pm$ 0.0130 & 916 & \textbf{00:35:43} & 00:00:41 \\
STConvS2S-R (ours) & \textbf{6.2590} $\pm$ 0.0023 & 2.8054 $\pm$ 0.0175 & \textbf{912} & 00:39:35 & 00:00:39 \\
\bottomrule
\end{tabular}
\end{adjustbox}
\end{table}
\begin{figure}[htb]
\centering
\begin{minipage}[b]{1\linewidth}
\centering
\includegraphics[width=0.7\textwidth]{figures/stackedplot-chirps-rmse.png}
\end{minipage}
\hspace{0.5cm}
\begin{minipage}[b]{1\linewidth}
\centering
\includegraphics[width=0.7\textwidth]{figures/stackedplot-chirps-mae.png}
\end{minipage}
\caption{Cumulative error for both horizons ($5 \rightarrow 5$ and $5 \rightarrow 15$) using the rainfall dataset (CHIRPS). Evaluations on the RMSE and MAE metrics.}
\label{fig:stackedplot-chirps}
\end{figure}
\begin{figure*}
\includegraphics[width=\linewidth]{figures/grids-chirps.png}
\caption{Prediction example on the test set of the rainfall dataset (CHIRPS). Comparison between the best CNN-based and RNN-based models: STConvS2S-R and PredRNN, respectively.}
\label{fig:grids-chirps}
\end{figure*}

\subsection{Ablation Study}
\label{sec:ablation}
We conduct a series of ablation studies, whose goal is to understand our architecture by removing or changing its main components and observing the impact on the evaluation metrics. For fair comparisons, we trained all models with the same settings as version $4$ for the CFSR (see Table \ref{tab:cfsr-exp1}) and CHIRPS (see Table \ref{tab:chirps-exp1}) datasets. Besides analyzing the structure of our model, we also compare it with three models: a vanilla 3D CNN, a 3D encoder-decoder architecture \citep{Racah17}, and a CNN model using (2+1)D blocks \citep{Tran18}.
In Tables \ref{tab:cfsr-ablation} and \ref{tab:chirps-ablation}, rows 1-3 show the results of these comparisons, rows 4-11 show the ablation experiments, and rows 12-13 show the models proposed in this work. In the following, we discuss the experimental results in detail.
\begin{itemize}
\item \textbf{Removal of the factorized convolutions.} In these experiments, there is no separation into temporal and spatial blocks in our architecture, since this split is only possible due to the factorized convolutions. With respect to their layers, these models are similar to a 3D CNN (see Figure \ref{fig:architecture-comparison} (a) and (c)). The models without factorized convolutions underperform the proposed models on the RMSE and MAE metrics (see rows 4 and 12; rows 5 and 13). To reinforce that the performance gain comes from design choices rather than from an increased number of model parameters, we also performed experiments removing the progressive filter increase from our models (rows 10-11) for comparison. The models without factorized convolutions still performed worse in most cases.
\item \textbf{Removal of the causal constraint.} In general, respecting the temporal order is more a restriction of the problem domain than an additional feature to improve the models' performance. However, the STConvS2S-R model slightly improves the results in at least one of the evaluation metrics on both datasets (compare rows 6 and 13). This enhancement can also be observed between the 3D CNN and the ``not factorized'' STConvS2S-R (rows 1 and 5), as both use the same layers but differ concerning the causal constraint. On the other hand, the STConvS2S-C results do not show the same contribution to performance.
\item \textbf{Removal of the temporal block.} This experiment analyzes the importance of this component in our network, since we propose two variations of STConvS2S based on the temporal block adopted. The results in row 7 indicate that, although faster than the proposed models, this removal has a critical impact on RMSE and MAE, especially for the CFSR dataset.
\item \textbf{Inverted blocks.} Understanding the influence of the temporal and spatial blocks on each other is not straightforward. Thus, to analyze the model structure, we change the blocks from (temporal $\Rightarrow$ spatial) to (spatial $\Rightarrow$ temporal). Both GPU memory usage and training time are very similar when comparing each STConvS2S model with its respective inverted version. Regarding the evaluation metrics, the inverted versions have the worst performance on both datasets, except for MAE on CHIRPS using STConvS2S-R.
\item \textbf{Comparison with baseline methods.} There is no significant difference between the STConvS2S models and the baselines on the CHIRPS dataset concerning memory usage and training time. These metrics almost double on the CFSR dataset compared to the 3D CNN and the 3D encoder-decoder, but in return STConvS2S-R achieves a 4\% improvement in RMSE over those same baselines. STConvS2S-R had a favorable or matching performance in the experiments on both datasets, except for the MAE metric on the CHIRPS dataset when compared against the (2+1)D Conv. STConvS2S-C does not perform as well as STConvS2S-R in these comparisons.
\item \textbf{Comparison of strategies to satisfy the causal constraint.} The results show the superiority of STConvS2S-R compared to STConvS2S-C in the evaluation metrics and time performance in all the experiments.
Our hypothesis for STConvS2S-R's better predictions is that this model adds less zero-padding before performing the convolution operation in each layer inside the temporal block. Concerning time efficiency, in STConvS2S-R the reverse function is performed only twice in the temporal reversed block, making it faster than STConvS2S-C, which performs its operations in each layer within the temporal causal block (Section~\ref{subsec:temporal-block}). This study demonstrates the relevance of our original method for making convolutional layers satisfy the causal constraint during the learning process.
\end{itemize}
\begin{table}
\centering
\caption{Quantitative comparison of ablation experiments, baseline methods using 3D convolutional layers, and our proposed models on the temperature dataset (CFSR) for the $5 \rightarrow 5$ task.}
\label{tab:cfsr-ablation}
\begin{adjustbox}{width=\textwidth}
\begin{threeparttable}
\begin{tabular}{lcccccc@{\extracolsep{\fill}}}
\toprule
\qquad Model & \parbox{2cm}{\centering Factorized Conv.} & \parbox{1.5cm}{ \centering Causal const.} & RMSE & MAE & \parbox{2.5cm}{ \centering Memory usage (MB)} & \parbox{2cm}{\centering Training time} \\
\midrule
\parbox{4cm}{\rownumber. \space 3D CNN} & \textemdash & \textemdash & 1.3307 & 0.9015 & 578 & 01:13:54 \\
\parbox{4cm}{\rownumber. \space 3D Encoder-Decoder} & \textemdash & \textemdash & 1.3327 & 0.9291 & 544 & 01:11:26 \\
\parbox{4cm}{\rownumber. \space (2+1)D Conv} & \checkmark & \textemdash & 1.2944 & 0.8763 & 847 & 01:51:24 \\
\midrule
\parbox{4cm}{\rownumber. \space STConvS2S-C} & \textemdash & \checkmark & 1.4450 & 1.0068 & 605 & 01:39:47 \\
\parbox{4cm}{\rownumber. \space STConvS2S-R} & \textemdash & \checkmark & 1.3215 & 0.8958 & 580 & 01:16:53 \\
\parbox{4cm}{\rownumber. \space STConvS2S} & \checkmark & \textemdash & 1.2811 & 0.8645 & 884 & 02:00:52 \\
\parbox{4cm}{\rownumber. \space STConvS2S\tnote{*}} & \checkmark & \textemdash & 1.6780 & 1.1828 & 740 & 01:14:47 \\
\parbox{4cm}{\rownumber. \space STConvS2S-C\tnote{**}} & \checkmark & \checkmark & 1.4152 & 0.9750 & 1000 & 02:15:46 \\
\parbox{4cm}{\rownumber. \space STConvS2S-R\tnote{**}} & \checkmark & \checkmark & 1.3044 & 0.8796 & 895 & 01:56:05 \\
\parbox{4cm}{\rownumber. STConvS2S-C\tnote{***}} & \checkmark & \checkmark & 1.4218 & 0.9821 & 698 & 01:04:58 \\
\parbox{4cm}{\rownumber. STConvS2S-R\tnote{***}} & \checkmark & \checkmark & 1.3234 & 0.8966 & 649 & 00:57:11 \\
\midrule
\parbox{4cm}{\rownumber. STConvS2S-C} & \checkmark & \checkmark & 1.3791 & 0.9492 & 1040 & 02:25:26 \\
\parbox{4cm}{\rownumber. STConvS2S-R} & \checkmark & \checkmark & 1.2773 & 0.8646 & 895 & 01:58:39 \\
\bottomrule
\end{tabular}
\begin{tablenotes}
\small
\item[*] No temporal block
\item[**] Inverted (spatial $\Rightarrow$ temporal)
\item[***] No filter increase
\end{tablenotes}
\end{threeparttable}
\end{adjustbox}
\end{table}
\setcounter{magicrownumbers}{0}
\begin{table}
\centering
\caption{Quantitative comparison of ablation experiments, baseline methods using 3D convolutional layers, and our proposed models on the rainfall dataset (CHIRPS) for the $5 \rightarrow 5$ task.}
\label{tab:chirps-ablation}
\begin{adjustbox}{width=\textwidth}
\begin{threeparttable}
\begin{tabular}{lcccccc@{\extracolsep{\fill}}}
\toprule
\qquad Model & \parbox{2cm}{\centering Factorized Conv.} & \parbox{1.5cm}{ \centering Causal const.} & RMSE & MAE & \parbox{2.5cm}{ \centering Memory usage (MB)} & \parbox{2cm}{\centering Training time} \\
\midrule
\parbox{4cm}{\rownumber.
\space 3D CNN} & \textemdash & \textemdash & 6.2519 & 2.8519 & 534 & 00:09:27 \\
\parbox{4cm}{\rownumber. \space 3D Encoder-Decoder} & \textemdash & \textemdash & 6.2540 & 2.7977 & 513 & 00:09:26 \\
\parbox{4cm}{\rownumber. \space (2+1)D Conv} & \checkmark & \textemdash & 6.2323 & 2.7243 & 660 & 00:12:09 \\
\midrule
\parbox{4cm}{\rownumber. \space STConvS2S-C} & \textemdash & \checkmark & 6.3310 & 2.9161 & 553 & 00:12:38 \\
\parbox{4cm}{\rownumber. \space STConvS2S-R} & \textemdash & \checkmark & 6.2510 & 2.8082 & 534 & 00:09:55 \\
\parbox{4cm}{\rownumber. \space STConvS2S} & \checkmark & \textemdash & 6.2281 & 2.8134 & 609 & 00:10:19 \\
\parbox{4cm}{\rownumber. \space STConvS2S\tnote{*}} & \checkmark & \textemdash & 6.3539 & 2.8980 & 572 & 00:06:57 \\
\parbox{4cm}{\rownumber. \space STConvS2S-C\tnote{**}} & \checkmark & \checkmark & 6.3255 & 2.8594 & 656 & 00:12:10 \\
\parbox{4cm}{\rownumber. \space STConvS2S-R\tnote{**}} & \checkmark & \checkmark & 6.2397 & 2.7971 & 616 & 00:10:56 \\
\parbox{4cm}{\rownumber. STConvS2S-C\tnote{***}} & \checkmark & \checkmark & 6.3171 & 2.8418 & 591 & 00:10:23 \\
\parbox{4cm}{\rownumber. STConvS2S-R\tnote{***}} & \checkmark & \checkmark & 6.2434 & 2.7829 & 565 & 00:09:25 \\
\midrule
\parbox{4cm}{\rownumber. STConvS2S-C} & \checkmark & \checkmark & 6.3131 & 2.8327 & 662 & 00:13:16 \\
\parbox{4cm}{\rownumber. STConvS2S-R} & \checkmark & \checkmark & 6.2288 & 2.8060 & 616 & 00:10:55 \\
\bottomrule
\end{tabular}
\begin{tablenotes}
\small
\item[*] No temporal block
\item[**] Inverted (spatial $\Rightarrow$ temporal)
\item[***] No filter increase
\end{tablenotes}
\end{threeparttable}
\end{adjustbox}
\end{table}

\section{Conclusion}
\label{sec:conclusion}
Predicting future information many steps ahead can be challenging, and a suitable sequence-to-sequence architecture to better represent spatiotemporal data for this purpose is still an open research question. RNN-based models have been widely adopted in these cases \citep{Shi15,Wang17, Wang19}. Prior to this work, CNN-based architectures were not considered for this task due to two limitations: they do not respect the temporal order in the learning process, and they cannot generate an output sequence longer than the input sequence. Considering these limitations, we proposed STConvS2S, an end-to-end trainable deep learning architecture. STConvS2S performs spatiotemporal data forecasting using only 3D convolutional layers. First, we addressed the problem of the causal constraint, proposing two variations of our architecture: one using the temporal causal block and the other using the temporal reversed block. The former adopts causal convolution, which is commonly used in 1D CNN. In the temporal reversed block, we introduced a new technique that avoids violating the temporal order by applying a reverse function to the sequence. These implementations are essential for a fair comparison with state-of-the-art methods, which are causal models due to the chain-like structure of LSTM layers. To overcome the sequence length limitation, we designed a temporal generator block at the end of the architecture to extend the spatiotemporal data only in the time dimension. We further compared our models with state-of-the-art models through experimental studies in terms of both performance and time efficiency on meteorological datasets.
The results indicate that our model manages to better analyze spatial and temporal data dependencies, since it achieved superior performance in temperature forecasting and comparable results in rainfall forecasting, with the advantage of being up to 5x faster than RNN-based models. Thus, STConvS2S could be a natural choice for sequence-to-sequence tasks on spatiotemporal data. We evaluated our architecture on the weather forecasting problem, but it is not limited to this domain. We expect that the results presented in this work will foster further research comparing convolutional and recurrent architectures. For future work, we will search for ways to reduce the error on the rainfall dataset. Possible directions include applying preprocessing techniques to sparse data and adding data from other geographic regions. We will also investigate further architectures for spatiotemporal data forecasting.

\section*{Computer Code Availability}
We implemented the deep learning models presented in this paper using PyTorch 1.0, an open-source framework. Our source code is publicly available at \url{https://github.com/MLRG-CEFET-RJ/stconvs2s}.

\section*{Data Availability}
In this paper, spatiotemporal datasets in NetCDF format were used; they can be downloaded at \url{http://doi.org/10.5281/zenodo.3558773}, an open-source online data repository.

\section*{Acknowledgment}
The authors thank CNPq, CAPES, FAPERJ, and CEFET/RJ for partially funding this research.

\bibliographystyle{elsarticle-harv}
\section{\label{sect:intro}Introduction}
Accurate computation of the Lyapunov exponent (LE) of particle motion in accelerators and its comparison with numerical dynamic aperture (DA) simulations have been well studied. Past examples include~\cite{Zimmermann:1994pz, Habib:1995, Scandale:1997jr, Giovannozzi:1997jn, Giovannozzi:1997uc, Turchetti:2018, Schmidt:1988, Fischer:1995, Schmidt:1991}. A general correlation between the LE and the DA has been confirmed, but a universal or quantitative equivalence has yet to be established. In some studies, the LE was found to underestimate the DA in storage rings~\cite{Giovannozzi:1997uc}. Additionally, accurate calculation of the LE~\cite{Wolf:1985, Habib:1995} is time-consuming due to the long-term numerical integrations required, making its use difficult in direct dynamic aperture optimization.

The discovery of another indicator of chaos, obtained by comparing forward integrations with their corresponding reversals (i.e., backward integrations), can be traced back to the 1950s~\cite{Cole:1994vc}. The method, also known as ``the trajectory reversing method'', has been widely used to estimate stable regions of dynamical systems since then~\cite{Miller:1964, genesio1984new, chiang1988stability, loccufier2000new, lee2000analysis, jaulin2001nonlinear}. One of the more recent uses of this indicator has been to understand the DA of the Integrable Optics Test Accelerator (IOTA) in the presence of space charge~\cite{Hwang:2019bdh}. The indicator is intrinsically associated with the LE, because it also represents the sensitivity of chaotic motion to its initial condition. We found that implementing just a few turns of forward-reversal (F-R) integration reveals an observable difference when using high-precision (e.g., 64-bit) floats for modern storage rings. Therefore, this chaos indicator can be computed at a much faster rate. By combining population-based optimization, such as the multi-objective genetic algorithm (MOGA)~\cite{Deb, Yang:2009, Yang:2011, Li:2016, Li:2018, Wan:2019, Liu:2015, Liu:2017}, with the trajectory reversing method, a fast approach for DA optimization has been developed and is demonstrated with two examples in this paper. Tracking-based optimization has traditionally been limited by time-consuming tracking simulations. The new approach provides a potential solution, using short-term tracking simulations to optimize the DA of large-scale storage rings.

To further explain this approach, the remaining sections are outlined as follows: Sect.~\ref{sect:fb} briefly explains the F-R integration as an indicator of chaos. A H\'{e}non map's chaos is studied with this method as a proof of principle in Sect.~\ref{sect:henon}. In Sect.~\ref{sect:application}, we take the National Synchrotron Light Source II (NSLS-II) storage ring and another test diffraction-limited light source ring as two examples to demonstrate the application of this approach. A brief summary is given in Sect.~\ref{sect:summary}.

\section{\label{sect:fb}Forward-reversal (F-R) integrations}
In dynamical systems, the Lyapunov exponent (LE) is used to characterize the rate of separation of two infinitesimally close trajectories. In phase space, two trajectories with initial separation $\Delta \bs{z}(0)$ diverge at a rate given by
\begin{equation}\label{eq:Lyapunov}
|\Delta\bs{z}(t)| \approx e^{\lambda t}|\Delta\bs{z}(0)|,
\end{equation}
where $\bs{z}(t)=(x,p_x;y,p_y;s,p_s)^T$ is a vector composed of canonical coordinates in phase space at time $t$, and $\lambda$ is the LE.
The superscript ($^T$) represents the transpose of a vector. Bold symbols, such as ``$\bs{z}$'', are used to denote vectors throughout this paper. The above rate assumes that the divergence can be treated within a linearized approximation. The rate of separation can be different for different orientations of the initial separation vector, which yields multiple LEs for a given dynamical system. The largest of these is referred to as the maximal Lyapunov exponent (MLE), defined as
\begin{equation}\label{eq:mle_t}
\lambda=\lim_{t\to\infty }\lim _{\Delta\bs{z}(0)\to \bs{0}} \frac{1}{t}\ln\frac{|\Delta\bs{z}(t)|}{|\Delta\bs{z}(0)|}.
\end{equation}
Here $\Delta\bs{z}(0)\to\bs{0}$ ensures the validity of the linear approximation at any given time. The MLE provides valuable information about the predictability of the dynamical system.

In accelerators, it is more practical to use the path length $s$ of a reference particle, rather than the time $t$, as the free variable. The trajectory of an arbitrary particle can then be described as a deviation from the reference particle; for example, the momentum offset is denoted as $\delta=\frac{\Delta p}{p_0}$. After some canonical transformations~\cite{Ripken:1985qn}, the $t$-integration can be converted to an $s$-integration, and a new MLE $\lambda_s$ can be defined as
\begin{equation}\label{eq:mle_s}
\lambda_s=\lim_{s\to\infty }\lim _{\Delta\bs{z}(0)\to\bs{0}} \frac{1}{s}\ln\frac{|\Delta\bs{z}(s)|}{|\Delta\bs{z}(0)|},
\end{equation}
where $\bs{z}(s)=(x,p_x;y,p_y;s-ct,\delta)^T$ are the new canonical coordinates in phase space at position $s$, and $s-ct$ is the longitudinal coordinate offset. For convenience, the rest of this manuscript uses the path length $s$ of particle motion as the free variable unless stated otherwise.

Generally speaking, the MLEs defined in Eqs.~\ref{eq:mle_t}-\ref{eq:mle_s} cannot be computed analytically; their calculation therefore requires numerical techniques~\cite{Habib:1995, Wolf:1985}. An alternative, empirical method to measure the chaos of a dynamical system is to use a reversal integration, as suggested in Refs.~\cite{Cole:1994vc, Miller:1964, Hwang:2019bdh}. In an early proof of concept, the behavior of a system under time symmetry was tested by letting it evolve through some number of integration steps, then switching the sign of the time step and letting it run backward until the total time variable reached zero. On the return to time zero, the changes in the corresponding velocities and positions were calculated and collated, as was the value of the time variable at the change of sign, and a new set of initial conditions was thereby re-established. Due to the unavoidable numerical round-off error~\cite{ieee754-2019, Laslett:1957}, the re-established initial conditions of a chaotic trajectory deviate from the original ones, as illustrated in Fig.~\ref{fig:fb}. This difference, a.k.a.\ the consistency error, is an indicator of chaos associated with the LE.
\begin{figure}[!ht] \centering \includegraphics[width=1.\columnwidth]{fb.png} \caption{\label{fig:fb} Schematic illustration of forward and time-reversal integrations for a dynamical system. The solid line represents the exact trajectory from $A$ at $t=0$ to $B$ at $t=T$. The dashed line is the numerical integration, which arrives at $B^{\prime}$ at $t=T$.
The difference between $B$ and $B^{\prime}$ indicates the chaos but, in practice, $B$ is usually unknown. The dotted line is the time-reversal integration starting from $B^{\prime}$ and ending at $A^{\prime}$. The difference between the two initial conditions $A$ and $A^{\prime}$ is an indicator of the chaos of the system for this specific initial condition.} \end{figure}

The principle of the F-R integration approach can be briefly outlined as follows. A nonlinear transfer map, denoted as $f$, propagates a phase space coordinate $\bs{z}=\left(x_1,x_2,\cdots,x_N;p_1,p_2,\cdots,p_N\right)^{T}$ iteratively. In a finite-precision computation, the iteration from the $(n-1)^{\text{th}}$ state $\bs{z}_{n-1}$ to the next state $\bs{z}_{n}$ reads
\begin{equation}
\bs{z}_n=f\left(\bs{z}_{n-1}\right)+\Delta \bs{z}_n,
\end{equation}
where $\Delta\bs{z}_n=\left(\Delta x_1,\dots,\Delta x_N;\Delta p_1,\dots,\Delta p_N\right)^T$ is the round-off error vector incurred in the $n^{\text{th}}$ iteration. Similarly, the reversal integration can be written as
\begin{equation}
\bs{z}_{n-1}^{\prime}=f^{-1}\left(\bs{z}_n^{\prime}\right)+\Delta \bs{z}_n^{\prime},
\end{equation}
where $f^{-1}$ is the inverse map, and primes ($^{\prime}$) denote the coordinates of the reversal, to distinguish them from the forward trajectory. The errors $\Delta\bs{z},\;\Delta\bs{z}^{\prime}$ are distributed uniformly and randomly within a range determined by the values of $\bs{z},\;\bs{z}^{\prime}$ and by the number of bits of the computation unit~\cite{ieee754-2019}.

Consider first the case in which only one F-R iteration is computed, $\bs{z}_0\overset{f}{\rightarrow}\bs{z}_1 \overset{f^{-1}}{\rightarrow}\bs{z}_0^{\prime}$. The difference between $\bs{z}_0$ and $\bs{z}_0^{\prime}$ can be estimated with local linear derivatives,
\begin{align}\label{eq:fbDiff} \bs{z}_0^{\prime}-\bs{z}_0 & = f^{-1}\left(f(\bs{z}_0)+\Delta \bs{z}_1\right)+\Delta \bs{z}_1^{\prime}-\bs{z}_0\nonumber \\ & \approx \frac{\partial f^{-1}}{\partial\bs{z}}\biggr\rvert_{f(\bs{z}_0)}\Delta \bs{z}_1+\Delta \bs{z}_1^{\prime}\nonumber\\ & =\left[\frac{\partial f}{\partial\bs{z}}\biggr\rvert_{\bs{z}_0}\right]^{-1}\Delta \bs{z}_1+\Delta \bs{z}_1^{\prime},
\end{align}
where the inverse Jacobian matrix of $f$ is evaluated at $\bs{z}_0$. For the sake of simplicity, the linearized matrix for the 1-dimensional $x$-$p$ case is shown:
\begin{equation}\label{eq:localMat} \left[\frac{\partial f}{\partial\bs{z}}\biggr\rvert_{\bs{z}_0}\right]^{-1} = \left[\begin{array}{cc} \frac{\partial x_1}{\partial x_0} & \frac{\partial x_1}{\partial p_0} \\ \frac{\partial p_1}{\partial x_0} & \frac{\partial p_1}{\partial p_0} \end{array}\right]^{-1}. \end{equation}
Equation~\ref{eq:fbDiff} indicates that the difference $|\bs{z}_0^{\prime}-\bs{z}_0|$ originates from random round-off errors, scaled by the inverse Jacobian matrix of Eq.~\ref{eq:localMat} along the passage of $\bs{z}_0$. The difference $\left|\bs{z}_0^{\prime}-\bs{z}_0\right|$ from a single iteration can therefore be dominated by random round-off noise rather than by the dynamical system itself, which is what one actually wants to measure; moreover, if the chaos is sufficiently weak, the difference remains invisible after just one scaling. To overcome this difficulty, it may be necessary to implement multiple iterations, $\bs{z}_0\overset{f^N}{\rightarrow} \bs{z}_N\overset{f^{-N}}{\rightarrow}\bs{z}_0^{\prime}$, with $N\ge2$.
The difference can be estimated similarly to Eq.~\ref{eq:fbDiff},
\begin{align}\label{eq:niter} \left(\bs{z}_0^{\prime}-\bs{z}_0\right){}_{N} & \approx \Delta\bs{z}_1^{\prime}+\sum_{n=2}^{N}\left(\prod_{j=0}^{n-2}\left[ \frac{df}{d\bs{z}}\biggr\rvert_{\bs{z}=f^{j}\left(\bs{z}_0\right)} \right]^{-1}\right)\Delta\bs{z}^{\prime}_{n}\nonumber \\ & + \sum_{n=1}^{N}\left(\prod_{j=0}^{n-1}\left[ \frac{df}{d\bs{z}}\biggr\rvert_{\bs{z}=f^{j}\left(\bs{z}_0\right)} \right]^{-1}\right)\Delta\bs{z}_{n}.
\end{align}
Here $f^j$ represents the $j^{\text{th}}$ iteration of the map $f$ without round-off error. Equation~\ref{eq:niter} illustrates that round-off errors accumulate during each iteration and are scaled by the local linear matrices along the trajectories in both directions. With sufficient iterations, the cumulative difference indicates the chaos of the trajectory. It is worth noting that, even if a system has no chaos, the cumulative random error between a forward integration and its corresponding reversal is directly proportional to the number of iterations executed~\cite{Laslett:1957}; if chaos is present, however, the error grows exponentially.

In large-scale modern accelerators, the F-R integrations need to be evaluated magnet-by-magnet. A full cycle around an accelerator equates to one iteration as described above, and the round-off errors $\Delta\bs{z}$ receive a contribution from each integration step. A short-term tracking simulation can thus generate an observable difference when 64-bit floats are used; to be specific, one-turn F-R integration is sufficient to optimize the DA of the NSLS-II storage ring. These differences are observable but still at quite a small scale, and therefore a base-ten logarithm is used to represent them over a large dynamic range,
\begin{equation}\label{eq:chaos} \Delta = \log_{10} |\bs{z}_0-\bs{z}_0^{\prime}|. \end{equation}

\section{\label{sect:henon}H\'{e}non map}
In this section, the F-R integration method is used to study a 1-dimensional H\'enon map,
\begin{equation} \left(\begin{array}{c} x\\ p \end{array}\right)_{n}=\left(\begin{array}{cc} \cos\mu & \sin\mu\\ -\sin\mu & \cos\mu \end{array}\right)\left(\begin{array}{c} x\\ p-x^{2} \end{array}\right)_{n-1}. \end{equation}
This discrete H\'enon map represents a thin-lens sextupole kick followed by a linear phase space rotation with phase advance $\mu$. Its reversal map can be expressed as an inverse rotation followed by an inverse thin-lens kick,
\begin{align} \left(\begin{array}{c} x_{t}\\ p_{t} \end{array}\right) & =\left(\begin{array}{cc} \cos\mu & -\sin\mu\\ \sin\mu & \cos\mu \end{array}\right)\left(\begin{array}{c} x\\ p \end{array}\right)_n,\nonumber \\ \left(\begin{array}{c} x\\ p \end{array}\right)_{n-1} & =\left(\begin{array}{c} x_{t}\\ p_{t}+x_{t}^{2} \end{array}\right), \end{align}
where $x_t,\;p_t$ are intermediate variables. The linear phase advance is chosen as $\mu=0.205\times2\pi$ in order to observe the $5^{\text{th}}$-order resonance line at certain amplitudes. A minimal sketch of the F-R indicator for this map is given below.
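The following minimal Python sketch (our illustration, with hypothetical function names; it is not the code used to produce the figures) iterates the map and its reversal and evaluates the indicator of Eq.~\ref{eq:chaos}:
\begin{verbatim}
import numpy as np

def henon_forward(x, p, mu):
    # thin-lens sextupole kick, then rotation by mu
    c, s = np.cos(mu), np.sin(mu)
    pk = p - x**2
    return c*x + s*pk, -s*x + c*pk

def henon_reverse(x, p, mu):
    # inverse rotation, then inverse thin-lens kick
    c, s = np.cos(mu), np.sin(mu)
    xt, pt = c*x - s*p, s*x + c*p
    return xt, pt + xt**2

def fr_indicator(x0, p0, mu, n_iter):
    # base-10 log of |z0' - z0| after n_iter F-R iterations (Eq. eq:chaos)
    x, p = x0, p0
    for _ in range(n_iter):
        x, p = henon_forward(x, p, mu)
        if abs(x) > 10.0:          # unbounded-trajectory threshold
            return np.nan
    for _ in range(n_iter):
        x, p = henon_reverse(x, p, mu)
    return np.log10(np.hypot(x - x0, p - p0) + 1e-300)

mu = 0.205*2*np.pi
print(fr_indicator(0.1, 0.0, mu, 100))  # regular core: near round-off level
print(fr_indicator(0.5, 0.0, mu, 100))  # larger amplitude: visibly chaotic
\end{verbatim}
Scanning \texttt{fr\_indicator} over a grid of initial conditions in the $x$-$p$ plane yields maps qualitatively similar to Fig.~\ref{fig:henon}.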
The difference between initial conditions obtained from the F-R integration is illustrated in Fig.~\ref{fig:henon}. When the F-R integrations are calculated with only 10-50 iterations (as shown in the top row), the area of the stable region is overestimated and the inner resonances are almost invisible. After 100 iterations, the resonance lines and stable islands gradually become visible, and more iterations provide much more detailed chaos information, as illustrated in the two bottom subplots.
\begin{figure}[!ht] \includegraphics[width=0.48\columnwidth]{Henon010turns.png} \includegraphics[width=0.48\columnwidth]{Henon050turns.png} \includegraphics[width=0.48\columnwidth]{Henon100turns.png} \includegraphics[width=0.48\columnwidth]{Henon500turns.png} \caption{\label{fig:henon}(Colored) Contour of the F-R integrations with different numbers of iterations for a H\'enon map. The colormap shows the difference of initial conditions as a function of the phase space coordinates $x$-$p$. The white area represents unbounded trajectories, identified by manually setting a threshold of $|x|>10$. More iterations provide more detailed chaos information, but even with just a few dozen iterations, an early indicator of chaos can be obtained.} \end{figure}

\section{\label{sect:application}Applications}
In this section we demonstrate this method by optimizing the dynamic apertures of the National Synchrotron Light Source II (NSLS-II)~\cite{NSLS-II:2013} main storage ring and of a test diffraction-limited light source ring.

\subsection{\label{sect:nsls-ii}NSLS-II storage ring}
NSLS-II is a dedicated $3^{\text{rd}}$ generation medium energy (3 GeV) light source operated by Brookhaven National Laboratory. Its main storage ring lattice is a typical double-bend-achromat structure, whose linear optics for one cell is illustrated in Fig.~\ref{fig:nsls2cell}. The whole ring is composed of 30 such cells. The natural chromaticities are corrected to $+2/+2$ in the transverse planes by the chromatic sextupoles. The optimization knobs are six families of harmonic sextupoles located in the dispersion-free sections. The goal of the optimization is to obtain a sufficient DA ($|x|>15\;\textrm{mm}, |y|>5\;\textrm{mm}$) for off-axis injection at the long straight section center, where $\beta_x= 20.5\;\textrm{m},\;\beta_y=3.4\;\textrm{m}$, and a $|\delta|>2.5\%$ momentum acceptance to ensure a 3-hour lifetime at a 500 mA beam current.
\begin{figure}[!ht] \centering \includegraphics[width=1.\columnwidth]{nsls2cell.png} \caption{\label{fig:nsls2cell} (Colored) The linear optics and magnet layout for one of the 30 cells of the NSLS-II storage ring. The red blocks represent sextupoles. The three located between the two dipoles are used to correct the natural chromaticity. The remaining six are used here for DA optimization.} \end{figure}

\subsection{\label{sect:zone}Optimization objectives and results}
On the transverse $x$-$y$ plane at the injection point, multiple initial conditions are uniformly populated within a Region Of Interest (ROI). The ROI is chosen to cover the needed aperture. The virtual particle trajectories are simulated with a $4^{\text{th}}$-order kick-drift symplectic integrator~\cite{Yoshida:1990}, in which negative physical-length elements are allowed (see the sketch below). The symplectic integration is implemented with a Python code, which has been independently benchmarked against another reliable tracking simulation code, \textsc{impactz}~\cite{QIANG2000434}. After evolving for some revolution periods (usually an integer number of turns), the reversal trajectories are computed by switching the sign of the coordinate $s$ and letting the particles run back to $s=0$. The newly re-established initial conditions deviate from the original ones. A forward integration and its reversal make up a pair of trajectories for comparison; a larger difference between a pair of initial conditions indicates stronger chaos.
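A minimal sketch of such a reversible $4^{\text{th}}$-order kick-drift integrator is shown below; the one-degree-of-freedom toy force is our hypothetical stand-in for the actual lattice elements. Note the negative middle weight \texttt{W0}, which is the origin of the negative physical-length elements mentioned above:
\begin{verbatim}
import numpy as np

CBRT2 = 2.0**(1.0/3.0)
W1 = 1.0/(2.0 - CBRT2)        # outer sub-step weight
W0 = -CBRT2/(2.0 - CBRT2)     # middle sub-step weight (negative)

def leapfrog(x, p, h, dVdx):
    # 2nd-order drift-kick-drift step for H = p^2/2 + V(x)
    x = x + 0.5*h*p
    p = p - h*dVdx(x)
    x = x + 0.5*h*p
    return x, p

def yoshida4(x, p, h, dVdx):
    # 4th-order Yoshida composition of three leapfrog steps
    for w in (W1, W0, W1):
        x, p = leapfrog(x, p, w*h, dVdx)
    return x, p

def track(x, p, h, n, dVdx):
    for _ in range(n):
        x, p = yoshida4(x, p, h, dVdx)
    return x, p

dVdx = lambda x: x + x**2     # toy nonlinear force (hypothetical)
x0, p0 = 0.3, 0.0
x1, p1 = track(x0, p0, +0.01, 1000, dVdx)   # forward
x2, p2 = track(x1, p1, -0.01, 1000, dVdx)   # reversal: flip the step sign
print(np.log10(np.hypot(x2 - x0, p2 - p0) + 1e-300))
\end{verbatim}
Because the composition is palindromic, flipping the sign of the step size yields the exact inverse map up to round-off error, which is precisely the premise of the F-R indicator.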
The goal of the optimization then becomes minimizing the difference for all pairs of initial conditions within the ROI. It is neither practical nor necessary to minimize so many pairs of initial conditions simultaneously; therefore, the ROI is divided into several zones, as shown in Fig.~\ref{fig:zone}. For each zone, the differences of initial conditions are averaged over all F-R integration pairs. The averaged values for all zones are then used as the optimization objectives, which need to be minimized simultaneously to suppress the chaos inside the whole ROI. The optimization objective functions $g$ read as
\begin{equation}\label{eq:obj} \bar{\Delta}_i = g_i(K_{2,j}), \end{equation}
where $i$ and $j$ are the indices of the ROI zones and of the sextupoles respectively, $\bar{\Delta}_i$ is the average difference in the $i^{\text{th}}$ zone, and $K_{2,j}$ is the $j^{\text{th}}$ sextupole's normalized gradient.
\begin{figure}[!ht] \centering \includegraphics[width=1.\columnwidth]{zone.png} \caption{\label{fig:zone} (Colored) Dividing the region of interest (ROI) into $n$ zones in the $x$-$y$ plane. In each zone, multiple initial conditions (represented with same-colored dots) are uniformly populated. The optimization objectives are the differences between the initial conditions of the F-R integrations, averaged over each zone.} \end{figure}

Quantitatively, the difference in Eqs.~\ref{eq:chaos} and \ref{eq:obj} for a pair of initial conditions in the normalized phase space is computed as
\begin{equation}\label{eq:delta} \Delta=\log_{10}\sqrt{\Delta\bar{x}^2+\Delta\bar{p}_x^2+ \Delta\bar{y}^2+\Delta\bar{p}_y^2}, \end{equation}
where $\Delta\bar{x},\;\Delta\bar{p}_x;\;\Delta\bar{y},\;\Delta\bar{p}_y$ are the differences of the canonical coordinates normalized with the Courant-Snyder parameters as follows~\cite{Courant:1958},
\begin{align}\label{eq:csn} \begin{bmatrix} \Delta\bar{u}\\ \Delta\bar{p}_u \end{bmatrix} = \begin{bmatrix} \frac{1}{\sqrt{\beta_u}} & 0\\ \frac{\alpha_u}{\sqrt{\beta_u}} & \sqrt{\beta_u} \end{bmatrix} \begin{bmatrix} \Delta u\\ \Delta p_u \end{bmatrix}, \end{align}
where $u=x$ or $y$. The normalization of Eq.~\ref{eq:csn} expresses the canonical coordinate pairs in the same units, $\textrm{m}^{1/2}$, for arithmetic addition.

To obtain sufficient beam lifetime and DA, both must be optimized simultaneously~\cite{Borland:2015}. Direct optimization of the beam lifetime is time-consuming; an alternative is to optimize several off-momentum DAs. This is achieved by a $\delta$-slicing method, as illustrated in Fig.~\ref{fig:zone_dp}. First, the desired energy acceptance range is determined based on the beam scattering lifetime calculation at a certain beam current. Then several sliced off-momentum DAs are included in the optimization objectives. At each slice, the objective functions are evaluated in the same way as in Fig.~\ref{fig:zone}.
\begin{figure}[!ht] \centering \includegraphics[width=1.\columnwidth]{zone_dp.png} \caption{\label{fig:zone_dp} (Colored) Optimizing several fixed off-momentum DAs simultaneously. By separating the 5-dimensional phase space ($x,p_x;y,p_y;\delta$) into several slices along the $\delta$-axis, the DA for off-momentum particles can be optimized simultaneously.} \end{figure}

The multiple zones within the ROI for the different momentum slices need to be minimized simultaneously; the multi-objective genetic algorithm (MOGA) was used for this task. More turns of particle tracking can indicate the chaos more accurately, but require more computation time. A sketch of how the zone-averaged objectives could be assembled is given below.
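In this sketch (our illustration; the data layout and function names are assumptions), Eqs.~\ref{eq:obj}-\ref{eq:csn} are combined to evaluate the per-zone objectives from the tracked pairs of initial conditions:
\begin{verbatim}
import numpy as np

def normalized_delta(dz, alpha, beta):
    # Eqs. (eq:delta)-(eq:csn): F-R difference in Courant-Snyder
    # normalized coordinates; dz = z0 - z0' = (dx, dpx, dy, dpy),
    # alpha = (alpha_x, alpha_y), beta = (beta_x, beta_y)
    ax, ay = alpha
    bx, by = beta
    dxb  = dz[0]/np.sqrt(bx)
    dpxb = ax*dz[0]/np.sqrt(bx) + np.sqrt(bx)*dz[1]
    dyb  = dz[2]/np.sqrt(by)
    dpyb = ay*dz[2]/np.sqrt(by) + np.sqrt(by)*dz[3]
    return np.log10(np.sqrt(dxb**2 + dpxb**2 + dyb**2 + dpyb**2) + 1e-300)

def zone_objectives(pairs, zone_ids, n_zones, alpha, beta):
    # Eq. (eq:obj): average the per-pair differences over each ROI zone
    deltas = np.array([normalized_delta(z0 - z0p, alpha, beta)
                       for z0, z0p in pairs])
    return [deltas[zone_ids == i].mean() for i in range(n_zones)]
\end{verbatim}
The returned per-zone averages $\bar{\Delta}_i$ are the objectives handed to the MOGA for simultaneous minimization.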
After manually checking the dependence of the chaos indicator on the number of turns, one-turn F-R integration (crossing 30 cells) was chosen to compute this early chaos indicator, as illustrated in Fig.~\ref{fig:zone_obj}. Although the early indicator of chaos from the F-R integration provides an optimistic approximation, it rules out many of the less competitive candidates and narrows down the parameter search range quickly. With a small-scale population, comprising only 1,000 candidates evolved over just 50 generations, the average fitness of the top candidates is seen to converge. It took about 6 hours to complete the optimization with 50 Intel\textsuperscript{\textregistered} Xeon\textsuperscript{\textregistered} 2.2-2.3 GHz CPU cores. Another reliable tracking code, \textsc{elegant}~\cite{Borland:2000gvh}, was then used to check the DA of the candidates in the last generation only. Among them, some elite candidates were selected for more extensive simulation studies to check their final performance.
\begin{figure}[!ht] \centering \includegraphics[width=1.\columnwidth]{zone_obj.png} \caption{\label{fig:zone_obj} (Colored) The objective function evaluated in 9 zones for the $\delta=0$ slice, obtained for a specific set of sextupole settings of the NSLS-II storage ring. Blank points represent particles lost ($|x,y|>1$ m) within 1 turn of tracking; the maximum allowed number of lost particles is used as an optimization constraint. The black line is the dynamic aperture obtained by a multi-turn (1,024) tracking simulation with the code \textsc{elegant}. The one-turn F-R integrations give a more optimistic result than the multi-turn tracking simulation. As an early indicator of chaos, however, it provides a reasonable criterion for the optimizer.} \end{figure}

The DA profiles of the top 100 candidates in the last generation are illustrated in Fig.~\ref{fig:moga}. Although the settings of the six sextupole families are very different, their DAs satisfy the minimum requirement for top-off injection. This observation confirms that short-term F-R integration can indeed be used for DA optimization. Among these candidates, one from the elite cluster was selected for a more detailed frequency map analysis (FMA) to verify its nonlinear dynamics performance. The FMA results are summarized in Sect.~\ref{subsect:fma}.
\begin{figure}[!ht] \centering \includegraphics[width=1.\columnwidth]{moga_vs_current.png} \caption{\label{fig:moga} (Colored) DA (measured by area) of the top 100 candidates from the $50^{\text{th}}$ generation of the evolved population obtained with the MOGA optimizer. The light yellow box is the required aperture for off-axis top-off injection.} \end{figure}

In this specific example of a dedicated light source machine (the NSLS-II storage ring), the longitudinal synchrotron oscillation has not been included. It is straightforward to include it if needed, by extending Eq.~\ref{eq:delta} to the 6-dimensional phase space; this becomes critically important when the betatron-synchrotron coupling resonances matter, e.g.\ in the case of collider rings.

\subsection{Comparison with frequency map analysis}\label{subsect:fma}
The frequency map analysis (FMA) is widely used to evaluate the performance of a nonlinear lattice~\cite{Robin:2000, Laskar:2003, Papaphilippou:2014, Todesco:1996}, and has also been applied directly to optimizing the DA of a light source ring~\cite{Steier:2010}. By comparing the tune diffusion rates determined from two pieces of turn-by-turn simulation or measurement data, the resonances of the lattice can be visualized; a schematic computation is sketched below.
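Here (our illustration) the tune of the leading and trailing halves of the turn-by-turn data is estimated with a Hanning-windowed FFT, a crude stand-in for the NAFF algorithm used in practice, and the diffusion rate is formed from the tune change:
\begin{verbatim}
import numpy as np

def tune(x, px):
    # fractional tune from complex turn-by-turn data via a Hanning-
    # windowed FFT (a crude stand-in for NAFF)
    sig = (x - x.mean()) - 1j*(px - px.mean())
    sig = sig*np.hanning(len(sig))
    return np.argmax(np.abs(np.fft.fft(sig)))/len(sig)

def diffusion_rate(x, px, y, py):
    # FMA-style diffusion: tune change between the two halves of the data
    n = len(x)//2
    dnx = tune(x[n:], px[n:]) - tune(x[:n], px[:n])
    dny = tune(y[n:], py[n:]) - tune(y[:n], py[:n])
    return np.log10(np.hypot(dnx, dny) + 1e-300)
\end{verbatim}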
In our example, one elite solution was selected from the last generation of candidates for a detailed FMA characterization of its nonlinear dynamics performance. In the meantime, a multi-turn (1,024 turns) F-R analysis was conducted for comparison with the FMA results. The sextupole settings of the current NSLS-II lattice and of the selected elite solution are listed in Tab.~\ref{tab:sextK2} for comparison.
\begin{table}[!ht] \centering \caption{\label{tab:sextK2}Comparison of two sextupole settings} \begin{tabular}{p{2.2cm} | p{1.2cm} p{2.2cm} p{2.2cm}} \hline Sextupole & unit & $K_2$ (current) & $K_2$ (F-R) \\ \hline\hline SH1 & $\textrm{m}^{-3}$ & 19.8329 & 19.8495 \\ SH3 & $\textrm{m}^{-3}$ & -5.8551 & -0.4017 \\ SH4 & $\textrm{m}^{-3}$ &-15.8209 & -22.0160 \\ SL3 & $\textrm{m}^{-3}$ &-29.4609 & -29.0057 \\ SL2 & $\textrm{m}^{-3}$ & 35.6779 & 27.9185 \\ SL1 & $\textrm{m}^{-3}$ &-13.2716 & -2.6051 \\ \hline \end{tabular} \end{table}

Figure~\ref{fig:fmafb} illustrates the on-momentum DA in the transverse $x$-$y$ plane, and Fig.~\ref{fig:fmafb_dp} shows the off-momentum acceptance in the $x$-$\delta$ plane. The FMA results yield more copious and finer resonance patterns than the F-R results. Even though the accuracy of the numerical analysis of fundamental frequencies (NAFF) can theoretically scale as $1/N^4$ ($N$ is the total number of sampling data) with a Hanning window~\cite{Laskar:2002bk}, this is still relatively slow compared to the exponential growth of the reversal-method indicator. Therefore, a much smaller number of F-R tracking turns is needed to drive the optimizer; in the previous example, only one-turn F-R integration was used to speed up the convergence.
\begin{figure}[!ht] \centering \includegraphics[width=1.\columnwidth]{fma_vs_fb.png} \caption{\label{fig:fmafb} (Colored) Top: FMA in the $x$-$y$ plane for 1,024 turns of data (512 leading and 512 trailing turns) with the code \textsc{elegant}. Bottom: F-R analysis for 1,024 turns. Using the FMA, some unusual diffusion rates (the yellow stripes near $x=0$) can be observed.} \end{figure} \begin{figure}[!ht] \centering \includegraphics[width=1.\columnwidth]{fma_vs_fb_dp.png} \caption{\label{fig:fmafb_dp} (Colored) Top: FMA in the $x$-$\delta$ plane for 1,024 turns of data (512 leading and 512 trailing turns) with the code \textsc{elegant}. Bottom: F-R analysis for 1,024 turns.} \end{figure}

Controlling the higher-order chromaticities and the amplitude-dependent tune shifts to avoid crossing destructive resonances is critical in DA optimization. This can be achieved by minimizing some specific nonlinear driving terms~\cite{Dragt:2011vea, Chao:2002st}: for example, $C_{2200,0}, C_{0022,0}, C_{1111,0}$ are the first-order amplitude-dependent tune shift coefficients, and $C_{1100,n}, C_{0011,n}$ with $n\ge2$ are the higher-order chromaticity coefficients. These terms can be used either as objective functions or as explicit constraints. In the F-R integration method, no explicit constraints are used to limit them. The final tracking simulation of the selected solution shows, however, that both the amplitude-dependent tune shifts (Fig.~\ref{fig:tswa}) and the higher-order chromaticities (Fig.~\ref{fig:chrom}) are automatically suppressed. The on-momentum and two off-momentum ($\pm2.5\%$) DAs, computed with the code \textsc{elegant}, are shown in Fig.~\ref{fig:on_off_DA}.
\begin{figure}[!ht] \centering \includegraphics[width=1.\columnwidth]{tswa.png} \caption{\label{fig:tswa} Tune shift with the initial horizontal coordinate for the selected candidate. Although the vertical tune rises faster at larger horizontal amplitudes, the tune variations are within $\pm0.03$ over the range $x\in[-15,15]\;\textrm{mm}$.} \end{figure} \begin{figure}[!ht] \centering \includegraphics[width=1.\columnwidth]{chrom.png} \caption{\label{fig:chrom} Tune variation with the momentum offset, i.e.\ the chromaticity, for the selected candidate. The linear chromaticities were tuned to $+2$ in both the $x$ and $y$ planes.} \end{figure} \begin{figure}[!ht] \centering \includegraphics[width=1.\columnwidth]{on_off_DA.png} \caption{\label{fig:on_off_DA} (Colored) On-momentum and two off-momentum ($\delta=\pm2.5\%$) DAs for the selected candidate.} \end{figure}

The optimization was carried out on an error-free model; systematic and random magnetic field errors and misalignments were then included to confirm the robustness of the solution. An online beam test on the NSLS-II storage ring was also carried out, which confirmed an off-axis top-off injection efficiency of 95-100\%, comparable with our current operation lattice. The beam lifetime at a 400 mA beam current is longer than 5.5 hours, with a diffraction-limited vertical beam emittance of 8 pm, whereas the current operation lattice was observed to reach 4.5 hours under similar conditions.

\subsection{MBA lattice for diffraction-limited light source}
The F-R integration method has also been tested on a multi-bend-achromat (MBA) structure, which could potentially be used as a diffraction-limited light source storage ring lattice in the future. The horizontal emittance of the test MBA lattice is 78 pm at a beam energy of 2 GeV. The linear lattice is shown in Fig.~\ref{fig:hals_optics}, in which most sextupoles are chromatic sextupoles. The MOGA result showing the top 100 candidates' apertures is illustrated in Fig.~\ref{fig:hals}. This preliminary result confirms that the F-R integration can also be applied to a more complicated nonlinear lattice, and that the approach should be general enough to optimize other lattices.
\begin{figure}[!ht] \centering \includegraphics[width=1.\columnwidth]{hals_optics.png} \caption{\label{fig:hals_optics} (Colored) Linear optics and magnet layout for one cell of the test MBA lattice.} \end{figure} \begin{figure}[!ht] \centering \includegraphics[width=1.\columnwidth]{hals.png} \caption{\label{fig:hals} (Colored) On-momentum DA of the top 100 candidates for the test MBA lattice.} \end{figure}

\section{\label{sect:summary}Summary}
An indicator of chaos obtained with forward-reversal integration has been used for the optimization of the dynamic aperture of storage rings. The indicator, intrinsically but empirically associated with the Lyapunov exponent, gives an early indication of the chaos of beam motion in storage rings. Although the indicator cannot give the exact dynamic aperture profile from a short-term tracking simulation, the concrete correlation between them, together with a large MOGA candidate pool, yields some optimal lattice solutions. The NSLS-II storage ring and a test MBA lattice were used as examples to illustrate the application of this method. The computation of the difference of F-R integrations has been implemented in the \textsc{elegant}~\cite{Borland:2000gvh} code since version 2019.4.0.
Besides the F-R integration, the \textsc{elegant} code also provides another option for users: computing the change in the linear actions $J_{x,y}$ from two forward-only trackings with a small difference in their initial conditions. By properly choosing the small changes, based on the machine precision, the signs and absolute values of the initial conditions, and the rounding method of the computer operating system, one should be able to obtain a similar result as with the F-R integration. However, the implementation of this option is more complicated than that of the F-R integration. The fundamental principle of both methods is to numerically characterize the sensitivity of a chaotic motion to its initial conditions by using round-off errors.

\section*{Acknowledgements}
We would like to thank C. Mitchell and R. Ryne (LBL) for the stimulating and collaborative discussions, J. Qiang (LBL) for providing the \textsc{impactz} code, Z. Bai (USTC) for providing the test MBA lattice, M. Giovannozzi (CERN) for fruitful discussions and constructive suggestions, M. Borland (ANL) for implementing this method in the \textsc{elegant} code, and I. Morozov (BINP) for pointing out a numerical error in the manuscript. This research used resources of the National Synchrotron Light Source II, a U.S. Department of Energy (DOE) Office of Science User Facility operated for the DOE Office of Science by Brookhaven National Laboratory (BNL) under Contract No. DE-SC0012704, and the computer resources of the National Energy Research Scientific Computing Center. This work was also supported by (1) the Accelerator Stewardship program under the Office of High Energy Physics; (2) Lawrence Berkeley National Laboratory, operated for the DOE Office of Science under Contract No. DE-AC02-05CH11231; (3) BNL's Laboratory Directed Research and Development program ``NSLS-II High Brightness Upgrade and Design Studies'' No. 17-015; and (4) a DOE SBIR grant under Contract No. DE-SC0019538. One author (KH) acknowledges the support of the U.S. DOE Early Career Research Program under the Office of High Energy Physics.
\section{Introduction}
The Anti-de Sitter/Conformal Field Theory (AdS/CFT) duality \cite{Maldacena:1997re,Gubser:1998bc,Witten:1998qj,Aharony:1999ti} states that the gravity theory of an AdS space-time can be described by a conformal field theory on the boundary. This not only offers a new way to calculate physical quantities on the field theory side, but also provides new ideas for understanding the nature of space-time. In particular, according to the holographic entanglement entropy \cite{Ryu:2006bv}, there is a basic connection between quantum information theory and gravitational physics. However, in view of the thermofield double state (TFD state) of the eternal black hole, it has been shown that entanglement entropy cannot provide all the information in the evolution of the AdS wormhole \cite{Maldacena:2001kr,Hartman:2013qma}. The Einstein-Rosen bridge (ERB) connects the two sides of the Penrose diagram of the eternal AdS black hole and, classically, it grows forever. On the other hand, the dual TFD state on the boundary reaches its thermal equilibrium very quickly. How, then, can one describe the continued growth of the ERB long after the quantum states on the two sides have stopped evolving in the dual theory? To solve this problem, Susskind and his collaborators \cite{Susskind:2013aaa,Susskind:2014rva,Susskind:2016tae,Stanford:2014jda} proposed a new concept, the quantum computational complexity of a black hole, which can describe the quantum evolution of the boundary state after it reaches thermal equilibrium. This concept provides useful tools for studying problems of quantum complexity \cite{Brown:2015bva}. Note that Maldacena and Susskind have established a connection between Einstein-Podolsky-Rosen (EPR) pairs in quantum mechanics and the Einstein-Rosen bridge in gravity (the so-called ER = EPR) \cite{Maldacena:2013xja,Susskind:2014yaa}. Based on this conjecture, Alice on one side of the ERB can establish communication with Bob on the other side, but how difficult is it? Quantum computational complexity can be understood as a candidate quantity to characterize the difficulty of this task. In quantum circuits \cite{Hayden:2007cs}, complexity is usually defined as the minimal number of gates needed to implement the unitary operation \cite{Susskind:2014rva}. Susskind related computational complexity to the distance from the layered stretched horizon in \cite{Susskind:2013aaa}, and further conjectured that the length of the ERB is proportional to the complexity of the quantum state of the dual CFT. Inspired by the work of Hartman and Maldacena \cite{Hartman:2013qma}, Susskind and Stanford proposed a new version in which the complexity is dual to the volume of the maximal spatial slice crossing the ERB, called the Complexity-Volume (CV) duality \cite{Stanford:2014jda},
\begin{equation}
\mathcal{C}\sim\frac{V}{G l_{AdS}},
\end{equation}
where $l_{AdS}$ is a length scale that has to be chosen appropriately for the configuration. While this proposal captures the linear growth at late times, it has the minor problem that a length scale must be introduced by hand.
In a recent work \cite{Brown:2015bva} (see \cite{Brown:2015lvg} for details), Susskind further proposed an alternative conjecture, the so-called Complexity-Action (CA) duality, stating that the quantum complexity of a holographic state is dual to the action of a certain Wheeler-DeWitt (WDW) patch in the AdS bulk,
\begin{equation}
\mathcal{C}=\frac{A}{\pi\hbar},
\end{equation}
where $A$ is the action of the Wheeler-DeWitt patch. This proposal solves the length scale problem of the CV duality and has the practical advantage that the WDW patch is easier to work with than the maximal volume. It was noted that the CV duality and the CA duality share the following property \cite{Brown:2015lvg}: \textit{the rate of complexity growth is bounded by the product of entropy and temperature, $\frac{d \mathcal{C}}{dt}\sim T S$}. There have been many studies of the CV and CA dualities, such as the divergence structure \cite{Carmi:2016wjl,Kim:2017lrw}, the complexity growth rate \cite{Lehner:2016vdi,Miao:2017quj,Carmi:2017jqz,An:2018xhv,Cai:2017sjv,Jiang:2019qea,Ghaffarnejad:2018prc,roy18}, and generalizations beyond Einstein gravity \cite{Cai:2016xho,Jiang:2018pfk,Cano:2018aqi,An:2018dbz,Jiang:2019fpz} (see also \cite{Fan:2019mbp,Couch:2016exn,Fan:2018wnv}). For example, some of us have tried to relate complexity to the accelerating expansion of our universe \cite{Ge:2017rak}. There is also a study of the complexity of disk-shaped subregions in various (2+1)-dimensional gapped systems with a gravity dual \cite{duwu2018}. Given that many works in the literature \cite{Kastor:2009wy,Kubiznak:2014zwa,Dolan:2012jh,Frassino:2015oca,Kubiznak:2016qmn,Johnson:2014yja} have taken the cosmological constant as a pressure, an improved version of the CV conjecture, called ``complexity=volume 2.0" (CV 2.0), was proposed in \cite{Couch:2016exn},
\begin{equation}
\mathcal{C}\sim \frac{1}{\hbar}P\times (\text{Spacetime Volume}) .
\end{equation}
In the late time regime, it was proposed that
\begin{equation}
\dot{\mathcal{C}}\sim\frac{PV}{\hbar}\label{c10} .
\end{equation}
This equation relates complexity to the pressure and the thermodynamic volume. The authors of \cite{Couch:2016exn} claimed that the CA duality would violate the Lloyd bound in some cases of charged black holes, while CV 2.0 would not, which supports the rationality of this conjecture. Subsequently, CA 2.0 was proposed \cite{Fan:2018wnv},
\begin{equation}\label{scope}
\mathcal{C}=\frac{A_{\Lambda}}{\pi\hbar} ,
\end{equation}
where $A_{\Lambda}$ is part of the non-derivative action evaluated on the WDW patch. The scope of equation (\ref{scope}) is not limited to the late time of the black hole evolution; in particular, $A_{\Lambda}$ reduces to $PV$ in the stationary limit, recovering CV 2.0. In a very recent paper, a new CV duality, $\dot{\mathcal{C}}=2P\Delta V$, was proposed in \cite{Liu:2019mxz}. The rationality of the various versions of complexity, and the existence of more reasonable conjectures, deserve further study.

From standard thermodynamics, we know the relation between $PV$ and the grand potential $\Omega$,
\begin{equation}\label{pvm}
PV=-\Omega.
\end{equation}
This relation, together with (\ref{c10}), stimulates us to build deeper connections among the complexity growth rate, thermodynamic quantities, and statistical physics. Moreover, a comparison between ordinary thermodynamics and black hole thermodynamics is shown in Table~\ref{tableone}. When the system is in thermodynamic equilibrium, $\Omega$ is at its minimum.
Thermodynamic stability requires $d \Omega \leq 0$ \footnote{This in turn implies $d \dot{\mathcal{C}}\geq 0$. The variation of the complexity growth rate then has a property similar to entropy. Actually, the second law of complexity states \cite{Brown2018The}: \textit{If the computational complexity is less than maximum, then with overwhelming likelihood it will increase, both into the future and into the past.}}.
\begin{table} \centering \begin{tabular}{|c|c|c|c|} \hline &Thermodynamics&Neutral Black Hole&Charged Black Hole\\ \hline First Law&dH=TdS+VdP&dM=TdS+VdP&dM=TdS+VdP+$\Phi$dQ\\ \hline Enthalpy&H=U+PV&H=M=U+PV&H=M=U+PV\\ \hline Free energy&F=U-TS&F=U-TS&F=U-TS\\ \hline Gibbs free energy&G=H-TS&G=M-TS&G=M-TS\\ \hline \end{tabular} \caption{Ordinary thermodynamic relations and black hole thermodynamic relations}\label{tableone} \end{table}
For canonical ensembles, the grand potential reduces to the free energy $F$. In ordinary thermodynamics, the principle of maximum work states: \textit{For all thermodynamic processes between the same initial and final state, the delivery of work is a maximum for a reversible process, obeying $dW\leq -dF$.} Quantum complexity is a kind of computational resource. The quantum computational process can be regarded as a thermodynamic process in which work is delivered; less complexity indicates that less time or work is required. Equations (\ref{c10}) and (\ref{pvm}), together with the principle of maximum work, then indicate that complexity may be related to the free energy of the system. Moreover, this could lead to $d \dot{\mathcal{C}} \leq -dF$.

On the other hand, by studying the thermodynamics of black holes, we can understand the thermodynamic behavior of strongly coupled field theory systems at finite temperature, where traditional quantum field theory methods are hard to apply. Phase transitions are an important part of black hole physics: the famous Hawking-Page phase transition \cite{Haking:1983} corresponds to the confinement/deconfinement phase transition in the dual field theory. As complexity can be related to the grand potential, it is interesting to study black hole phase transitions by using complexity as a probe. We can also test the rationality of this version of the conjecture by calculating the time evolution of the complexity during the phase transition process; conversely, one may reconstruct the bulk space-time from the behavior of the boundary complexity.

In this paper, our main purpose is to examine the universality of CV 2.0 by establishing a connection between this conjecture, the grand potential, and the grand partition function. The reason for connecting CV 2.0 with the partition function is largely that the complexity of the TFD state of various extended Sachdev-Ye-Kitaev (SYK) models calls for further investigation. Being one of the simplest strongly interacting systems with a gravity dual, the SYK model has many appealing features, including its thermodynamic and transport properties. These properties suggest that the SYK models are connected holographically to black holes with nearly $AdS_2$ horizons. The operator complexity of the SYK model has been studied in \cite{rqyang19}, where it was concluded that the complexity grows linearly. We are going to investigate the complexity of the corresponding TFD state of various deformations of the SYK model by exploring the complexity/partition function relation. This may provide additional evidence supporting the SYK model/gravity duality.
The structure of this paper is organized as follows. In section 2, we establish the connection between complexity, the grand potential, and the corresponding partition function; we take various deformations of the SYK model as concrete examples and calculate the complexity growth rate of the corresponding TFD state. In section 3, we extend our discussion to Schwarzschild-AdS and Reissner-Nordstrom-AdS black holes. In section 4, we relate the complexity growth rate to black hole phase transitions, since the grand potential $\Omega$ describes phase transitions. In section 5, we investigate whether our proposal violates the Lloyd bound. Conclusions and discussions are provided in the last section.

\section{Complexity and partition function}
In standard statistical physics, the grand thermodynamic potential is closely related to the grand partition function $\mathcal{Z}$ via \cite{Pathria1996Statistical}
\begin{equation}\label{partition}
\Omega=-k T \ln \mathcal{Z}.
\end{equation}
From the ansatz $\dot{\mathcal{C}}\sim PV/\hbar\sim -\Omega/\hbar$, we have
\begin{equation}
\dot{\mathcal{C}}= \frac{k T}{\hbar} \ln \mathcal{Z}.
\end{equation}
We refer to this ansatz as the ``complexity growth rate/partition function relation". An alternative approach to obtaining the relation between $\mathcal{Z}$ and $PV$ is as follows. The grand partition function is given by
\begin{equation}
\mathcal{Z}= \sum_{r,s}e^{-\alpha N_r-\beta E_s},
\end{equation}
where $\alpha=\frac{\mu}{kT}$, $\beta=\frac{1}{kT}$, and $N_r$, $E_s$ and $\mu$ denote the particle number, energy and chemical potential of the system, respectively. Note that
\begin{equation} \label{twofour}
d\ln\mathcal{Z}= -\bar{N}d\alpha-\bar{E}d\beta-\frac{\beta}{\mathcal{N}}\sum_{r,s}\langle n_{r,s}\rangle dE_s,
\end{equation}
where the averaged particle number and energy are given by
\bea \bar{N}&\equiv&-\frac{\partial}{\partial \alpha}\ln \mathcal{Z}= \frac{\sum_{r,s}N_r e^{(-\alpha N_r-\beta E_s)}}{\sum_{r,s}e^{(-\alpha N_r-\beta E_s)}},\\ \bar{E}&\equiv& -\frac{\partial}{\partial \beta}\ln \mathcal{Z}. \eea
Equation (\ref{twofour}) can be recast as
\be d(\ln \mathcal{Z}+\alpha \bar{N}+\beta \bar{E})=\beta\bigg(\frac{\alpha}{\beta}d\bar{N}+d\bar{E}-\frac{1}{\mathcal{N}}\sum_{r,s}\langle n_{r,s}\rangle dE_s\bigg). \ee
In comparison with the first law of thermodynamics,
\be \delta Q=d\bar{E}+\delta W-\mu d\bar{N}, \ee
we arrive at
\be \beta \delta Q=d(\ln \mathcal{Z}+\alpha \bar{N}+\beta \bar{E})=\frac{dS}{k}. \ee
Therefore, we obtain
\be \ln \mathcal{Z}=\frac{S}{k}-\alpha \bar{N}-\beta \bar{E}. \ee
By further using the relation $G=\bar{E}-TS+PV$, together with $G=\mu\bar{N}$, we finally obtain
\begin{equation}
\ln\mathcal{Z}=\frac{PV}{kT}\label{commutative2} .
\end{equation}
This is a fundamental relation between thermodynamics and statistical physics. Connecting quantum complexity to partition functions through (\ref{c10}) and (\ref{commutative2}) goes beyond the original ``CV 2.0" conjecture. One may call it the ``Complexity/Grand potential/Partition function" relation, or simply ``CV 3.0",
\begin{equation}
\ln\mathcal{Z}=\frac{PV}{kT}\sim\frac{\hbar\,\dot{\mathcal{C}}}{kT}\label{commutative3} .
\end{equation}
This formula relates complexity closely to the microscopic physics of the SYK model and black holes. We will evaluate it for various deformations of the SYK model and for black holes. Hereafter, we take $\hbar=k=1$.
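As an elementary sanity check of Eq.~(\ref{commutative2}) (our addition, not part of the holographic argument), the relation can be verified symbolically for a classical ideal gas, whose grand partition function is $\ln\mathcal{Z}=e^{\mu/kT}V/\lambda^3$, with $\lambda$ the thermal de Broglie wavelength:
\begin{verbatim}
import sympy as sp

T, V, m, h = sp.symbols('T V m h', positive=True)
mu = sp.Symbol('mu', real=True)
lam = h/sp.sqrt(2*sp.pi*m*T)      # thermal de Broglie wavelength (k = 1)
lnZ = sp.exp(mu/T)*V/lam**3       # classical ideal gas

P = T*sp.diff(lnZ, V)             # pressure: P = kT d(lnZ)/dV
Nbar = T*sp.diff(lnZ, mu)         # mean particle number: kT d(lnZ)/dmu

print(sp.simplify(P*V/T - lnZ))   # 0: ln Z = PV/kT, Eq. (commutative2)
print(sp.simplify(P*V - Nbar*T))  # 0: recovers the ideal gas law PV = N kT
\end{verbatim}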
\subsection{The complexity/partition function relation in the SYK model}
The SYK model is a quantum many-body model with many beautiful structures and properties similar to those of a black hole. It is solvable in the large $N$ limit, and the low-energy limit of the SYK model leads to a nonconformal contribution to four-point functions captured by a Schwarzian derivative. In this section, we examine the complexity/partition function relation for the SYK model, utilizing the partition function given in \cite{kitaev15,Maldacena:2016hyu}. The original SYK model has been studied in \cite{kitaev15,Maldacena:2016hyu,davison16}. It is a quantum-mechanical model of $N$ Majorana fermions with random interactions involving $q$ of these fermions at a time, where $q$ is an even number. The Hamiltonian is \cite{Maldacena:2016hyu}
\begin{equation}
H=(i)^{q/2}\sum_{1\leq i_1<i_2...i_{q}\leq N}j_{i_1 i_2...i_q}\chi_{i_1}\chi_{i_2}\cdots \chi_{i_q}.
\end{equation}
Each coefficient is a real variable drawn from a random Gaussian distribution satisfying $\langle j^2_{i_1\ldots i_q}\rangle=J^2 (q-1)!/N^{q-1}$. After writing the original partition function of the theory as a functional integral with a collective action based on the Luttinger-Ward analysis \cite{georges01}, one can obtain the free energy and the entropy. The general expression of the free energy, in a low temperature expansion, has the form \cite{kitaev15}
\begin{equation}
\log \mathcal{Z}=-\beta E_{0}+S_{0}+\frac{c}{2\beta}+\cdots,
\end{equation}
where the ground state energy, entropy and specific heat are all proportional to $N$. The zero temperature entropy is given for general $q$ by $\frac{S_{0}}{N}\sim \frac{1}{2}\log 2-\frac{\pi^{2}}{4q^{2}}+\cdots$. From the relation $\dot{\mathcal{C}}\sim T \ln \zz$, the complexity growth rate is then given by
\begin{equation} \label{rate}
\dot{\mathcal{C}} \sim -E_0+\frac{S_0}{\beta}+\frac{c}{2\beta^2}+...
\end{equation}
The first two terms in (\ref{rate}) are consistent with the complexity of charged black holes obtained in \cite{Susskind:2014rva,Brown:2015lvg}, while the third term can be considered a higher order correction. Equation (\ref{rate}) reflects that $TS_0$ competes with the ground state energy $E_0$. This also agrees with the behavior one would expect from a quantum circuit model of complexity \cite{Susskind:2014rva,Hayden:2007cs}: the rate of quantum computation, measured in gates per unit time, is proportional to the product $T S$. The entropy appears because it represents the width of the circuit, and the temperature is an obvious choice for the local rate at which a particular qubit interacts.

\subsection{The complex SYK model}
We can extend our discussion to the case of complex SYK fermions. Consider the zero-dimensional SYK model with complex fermions $f_i$, labeled by $i=1,\ldots,N$. The Hamiltonian is \cite{davison16}
\be H_0=\sum_{1\leq i_1<i_2...i_{q/2}\leq N}J_{i_1 i_2...i_q}f^{\dagger}_{i_1}f^{\dagger}_{i_2}\cdot\cdot\cdot f^{\dagger}_{i_{q/2}}f_{i_{q/2+1}}\cdot\cdot\cdot f_{i_{q-1}}f_{i_{q}}. \ee
The grand potential is given by \cite{davison16}
\be \Omega=...-J^2\int^{1/T}_0 d\tau [G(\tau)]^{q/2}[G(1/T-\tau)]^{q/2}. \ee
The thermodynamics of the zero-dimensional complex SYK model was discussed in \cite{davison16}.
The complete grand potential, including the contribution of the ground state energy, is given by
\be \Omega=E_0-\mu_0 \mathcal{Q}-T \mathcal{G}+..., \ee
where $\mathcal{Q}$ is the charge density and $\mathcal{G}$ is a quantity related to the entropy. That is to say,
\bea \mathcal{Q}&=&-\frac{1}{2\pi}\frac{d \mathcal{G}}{d\mathcal{E}},\\ \mathcal{S}&=&\mathcal{G}+2\pi \mathcal{E} \mathcal{Q}, \eea
where $\mathcal{S}$ is the entropy and $\mathcal{E}$ is a parameter controlling the particle-hole asymmetry. Subtracting the ground state energy, we simply obtain
\be \dot{\mathcal{C}}\sim T \mathcal{G}. \ee
In the $\mathcal{E}\rightarrow 0$ limit, this becomes $\dot{\mathcal{C}}\sim T \mathcal{S}$. This result also agrees with \cite{Brown:2015lvg}.

\subsection{Complexity growth rate and the higher dimensional SYK model}
Higher dimensional extensions of the original SYK model have been investigated widely, because such models can exhibit interesting quantum critical properties, such as linear-in-$T$ resistivity \cite{song2017,Chowdhury:2018sho}, a many-body-localization to metal phase transition \cite{yao2017,whcai2018}, and so on. Recently, Patel et al. \cite{Patel2018} and Chowdhury et al. \cite{Chowdhury:2018sho} constructed a (2+1)-dimensional strongly correlated solvable model, consisting of coupled SYK islands, which yields linear-in-$T$ and linear-in-$B$ behaviors. We are going to examine the complexity growth rate of this model. The (2+1)-dimensional SYK model of \cite{Chowdhury:2018sho} has the Hamiltonian
\begin{equation}
H=H_{c}+H_{f}+H_{cf},
\end{equation}
with
\begin{equation}
H_{c}=\sum_{\boldsymbol r,\boldsymbol r^{'}} \sum_{l}( -t^{c}_{\boldsymbol r,\boldsymbol r^{'}}-\mu_{c}\delta_{\boldsymbol r,\boldsymbol r^{'}} ) c^{\dagger}_{\boldsymbol r l}c_{\boldsymbol r^{'} l}+\frac{1}{(2N)^{3/2}}\sum_{\boldsymbol r}\sum_{ijkl}u^{c}_{ijkl}c^{\dagger}_{\boldsymbol r i}c^{\dagger}_{\boldsymbol r j}c_{\boldsymbol r k}c_{\boldsymbol r l},
\end{equation}
\begin{equation}
H_{f}=\sum_{\boldsymbol r,\boldsymbol r^{'}} \sum_{l}( -t^{f}_{\boldsymbol r,\boldsymbol r^{'}}-\mu_{f}\delta_{\boldsymbol r,\boldsymbol r^{'}} ) f^{\dagger}_{\boldsymbol r l}f_{\boldsymbol r^{'} l}+\frac{1}{(2N)^{3/2}}\sum_{\boldsymbol r}\sum_{ijkl}u^{f}_{ijkl}f^{\dagger}_{\boldsymbol r i}f^{\dagger}_{\boldsymbol r j}f_{\boldsymbol r k}f_{\boldsymbol r l}.
\end{equation}
The inter-band interaction $H_{cf}$ is chosen to be
\begin{equation}
H_{cf}=\frac{1}{N^{3/2}}\sum_{\boldsymbol r}\sum_{ijkl}V_{ijkl}c^{\dagger}_{\boldsymbol r i}f^{\dagger}_{\boldsymbol r j}c_{\boldsymbol r k}f_{\boldsymbol r l},
\end{equation}
where the coefficients $V_{ijkl}$ are chosen to be identical at every site, with zero means $\overline{u^{f}_{ijkl}}=\overline{V_{ijkl}}=0$ and with the variances of the couplings satisfying $\overline{(u^{f}_{ijkl})^{2}}=u^{2}_{f}$ and $\overline{(V_{ijkl})^{2}}=u^{2}_{cf}$. This model can be regarded as two independent subsystems: the conducting $c$ fermions with a hopping $t^{c}_{\boldsymbol r,\boldsymbol r^{'}}$, and the local, immobile $f$ fermions with an SYK interaction at each site. As for the thermodynamic properties of the intermediate non-Fermi liquid regime, one can evaluate the entropy density through $S=-\frac{\partial F }{\partial T}$.
This gives three contributions to the entropy density $\mathcal{S}=\frac{S}{2NV}$,
\begin{equation}
\mathcal{S}(T)=\mathcal{S}_{c}(T)+\mathcal{S}_{f}(T)+\mathcal{S}_{int}(T),
\end{equation}
with
\begin{equation}
\mathcal{S}_{f}(T)=\mathcal{S}_{0,q}+\gamma_{q}T,
\end{equation}
\begin{equation}
\mathcal{S}_{c}(T)\sim T^{1/z}\sim T^{4\Delta (q)},
\end{equation}
\begin{equation}
\mathcal{S}_{int}(T)\sim T^{1+4\Delta (q)},
\end{equation}
where $\Delta=\frac{1}{q}$ and $\gamma_q$ is a constant. Here $\mathcal{S}_{f}(T)$ is the entropy of a single SYK island, $\mathcal{S}_{c}(T)$ comes from the $c$-fermions, and $\mathcal{S}_{int}(T)$ originates from the inter-species interaction term $H_{cf}$. The complexity growth rates are then
\begin{equation}
\dot{\mathcal{C}}_{f}\sim \displaystyle{ \int \mathcal{S}_{f} dT }=\displaystyle{ \int (\mathcal{S}_{0,q}+\gamma_{q}T) dT }=\mathcal{S}_{0,q}T+\frac{1}{2}\gamma_{q}T^{2}+\cdots,
\end{equation}
\begin{equation}
\dot{\mathcal{C}}_{c}\sim \displaystyle{ \int \mathcal{S}_{c} dT }=\displaystyle{ \int T^{4\Delta (q)} dT }=\frac{1}{4\Delta+1}T^{4\Delta+1}+\cdots,
\end{equation}
\begin{equation}
\dot{\mathcal{C}}_{int}\sim \displaystyle{ \int \mathcal{S}_{int} dT }=\displaystyle{ \int T^{1+4\Delta (q)} dT }=\frac{1}{4\Delta+2}T^{4\Delta+2}+\cdots.
\end{equation}
The complexity growth rate of the $f$ fermions obeys the relation $\dot{\mathcal{C}} \sim T \mathcal{S}_{0,q}$. However, the complexity growth rates of the $c$-fermions and of the interaction term do not take this form, because the $c$-fermions are not governed by the SYK interaction. Therefore the $c$-fermion subsystem does not yield a gravity dual; this, in turn, further indicates that the SYK model itself has a gravity dual.

\subsection{Complexity growth rate of the thermofield double state of SYK ``wormholes"}
A pair of SYK islands of Majorana fermions with identical two-body interactions, coupled by a one-body hopping, has been used to describe eternal traversable wormholes in a dual gravity theory \cite{maldacena201810}. The configuration contains negative null energy generated by quantum fields under the influence of an external coupling \cite{maldacena201810}, and the dynamics of the two coupled SYK systems looks like that of a traversable wormhole. The Hamiltonian takes the form \cite{maldacena201810}
\be H_{\rm total}=H_{\rm L, SYK}+H_{\rm R, SYK}+H_{\rm int},~~~H_{\rm int}=i\mu\sum_{j}\chi^j_L\chi^j_R. \ee
The system develops an approximate conformal symmetry at energy scales below $\mathcal{J}$, and the coupling $\mu$ acts as a perturbation on the approximately conformal system. The thermofield double state is a pure state of the combined system in which the left and right systems have large left-right correlators. At small coupling $\mu$, the ground state is very close to the thermofield double state $\rm |TFD\rangle$ of the decoupled systems. At higher temperature, the partition function of the coupled system is given by \cite{maldacena201810}
\be \log \mathcal{Z}=2S_0+\frac{(2\pi)^2}{\beta}+\eta^2 \beta^{2-4\Delta}\int^1_0 dx \frac{\pi}{\sin x\pi}+... \ee
where $S_0$ is the ground state entropy of each SYK model and $\eta$ is a parameter. To leading order, the complexity growth rate is given by
\be \mathcal{\dot{C}}\sim 2S_0 T+\mathcal{O}(T^2). \ee
This again agrees with our original proposal.
However, there is a factor difference from the result obtained on the gravity side, the complexity growth rate of JT gravity \cite{JTcomplexity},
\be \mathcal{\dot{C}}\sim 4 T S_0+\mathcal{O}(T^2), \ee
but our result agrees with \cite{cai2019}.

\section{The complexity/partition function relation for AdS black holes}
The relation $\ln \mathcal{Z}\sim PV/T \sim \dot{\mathcal{C}}/T$ closely relates the microscopic physics (i.e.\ the grand partition function) to the complexity growth rate. The origin of the microscopic states of the Schwarzschild black hole still remains elusive. Within the semiclassical regime, we can think of the partition function of the bulk theory as a path integral over metrics. Given the Euclidean saddle points of the bulk theory, the partition function is
\be \mathcal{Z}=e^{-I_{E}[g_{*}]}, \ee
where $I_{E}[g_{*}]$ is the Euclidean action at the saddle point. In the following, we use the notation of \cite{hartnoll0903}. The bulk action for the Schwarzschild-AdS black hole, with the Gibbons-Hawking boundary term, is given by
\be I_{E}=-\frac{1}{2\kappa^2}\int_{M} d^{d+1}x\sqrt{g}\bigg(R+\frac{d(d-1)}{L^2}\bigg)+\frac{1}{2\kappa^2}\int_{\partial M}d^{d}x\sqrt{\gamma}\bigg(-2K+\frac{2(d-1)}{L}\bigg), \ee
where $\gamma$ is the induced metric on the boundary and $K$ is the trace of the extrinsic curvature. One saddle is obtained by analytic continuation of the Schwarzschild-AdS metric, setting $\tau = it$. That is,
\bea ds^2_{*}&=&\frac{L^2}{r^2}\bigg[-f(r)dt^2+\frac{dr^2}{f(r)}+dx^i dx^i\bigg],\\ f(r)&=&1-\bigg(\frac{r}{r_{+}}\bigg)^d. \eea
The Hawking temperature of the black hole is
\be T=\frac{d}{4\pi r_{+}}. \ee
The corresponding entropy is then given by
\be S=\frac{(4\pi)^d L^{d-1}}{2\kappa^2 d^{d-1}}V_{d-1}T^{d-1}. \ee
After some calculation, one can evaluate the action of the Euclidean Schwarzschild-AdS black hole,
\be I_{E}=-\frac{(4\pi)^d L^{d-1}}{2\kappa^2 d^d}V_{d-1}T^{d-1}. \ee
The complexity growth rate is then given by
\be \dot{\mathcal{C}}\sim -TI_{E}=\frac{(4\pi)^d L^{d-1}}{2\kappa^2 d^{d}}V_{d-1}T^d . \ee
We can also obtain the free energy from the Euclidean action,
\be F=-T \ln \zz=-\frac{(4\pi)^d L^{d-1}}{2\kappa^2 d^d}V_{d-1}T^d. \ee
We conclude that $F=-\dot{\mathcal{C}}$, with $\dot{\mathcal{C}}=TS/d$. For RN-AdS black holes, the corresponding bulk action is the Einstein-Maxwell theory,
\be I_{E}=\int d^{d+1}x\sqrt{g}\bigg[\frac{1}{2\kappa^2}\bigg(R+\frac{d(d-1)}{L^2}\bigg)-\frac{1}{4 g^2}F^2\bigg], \ee
where $F=dA$ is the electromagnetic field strength. We work in the grand canonical ensemble with $\mu$ fixed and use the notation $\Omega=-T\ln \mathcal{Z}$, where $\zz$ is the partition function defined by the gravitational integral. The metric of the RN-AdS black hole is given by
\bea ds^2&=&\frac{L^2}{r^2}\bigg[-f(r)dt^2+\frac{dr^2}{f(r)}+dx^i dx^i\bigg],\\ f(r)&=&1-\bigg(1+\frac{r^2_{+}\mu^2}{\gamma^2}\bigg)\bigg(\frac{r}{r_{+}}\bigg)^d+\frac{r^2_{+}\mu^2}{\gamma^2}\bigg(\frac{r}{r_{+}}\bigg)^{2(d-1)}. \eea
The corresponding Hawking temperature is given by
\be T=\frac{1}{4\pi r_{+}}\bigg(d-\frac{(d-2)r^2_{+}\mu^2}{\gamma^2}\bigg), ~~\gamma^2=\frac{(d-1)g^2L^2}{(d-2)\kappa^2}. \ee
The grand potential is obtained as
\be \Omega=-\frac{L^{d-1}}{2\kappa^2 r^d_{+}}\bigg(1+\frac{r^2_{+}\mu^2}{\gamma^2}\bigg)V_{d-1}. \ee
From the complexity/grand potential relation, the complexity growth rate can then be obtained as
\be \dot{\mathcal{C}}=-\Omega. \ee
The most remarkable feature of the holographic principle is the Bekenstein-Hawking area law for black hole entropy,
\be S = \frac{A}{4}.
\ee In thermodynamics, the well-known relation is \begin{equation} S=-\bigg(\frac{\partial \Omega}{\partial T}\bigg)_{\mu}. \end{equation} Combined with our ansatz $\dot{\mathcal{C}}=-\Omega$, this strongly indicates that $ S=\frac{\partial \dot{\mathcal{C}}}{\partial T}$, so one may evaluate the complexity growth rate via $\dot{\mathcal{C}}=\int S\, dT$. Actually, the relation $\ln \mathcal{Z}\sim pV/T \sim \dot{\mathcal{C}}/T$ is deeply connected with the CA conjecture. In Euclidean coordinates, the action $I_{E}$ is related to the partition function via \begin{equation} I_{E}=-\ln \mathcal{Z}.\label{12} \end{equation} Therefore, we have \be \cc=-T I_{E}. \ee That is to say, complexity is closely related to the action in Euclidean spacetime. The difference between the original CA conjecture and the formula obtained in (\ref{12}) is a minus sign. This means that the complexity growth rate can be positive or negative, which actually relates to the stability of black holes. For thermodynamically stable black holes, the minimal, negative free energy implies that the complexity growth rate is positive; for thermodynamically unstable black holes, the positive free energy indicates that the complexity growth rate is negative, signaling that a phase transition would happen. \section{Complexity growth rate and black hole phase transition} \subsection{Schwarzschild-AdS Black Hole} In this section we focus on the relationship between complexity and AdS black hole phase transitions, starting with the 4-dimensional Schwarzschild-AdS black holes. For such black holes, $P=\frac{-{\Lambda}}{8\pi}$, $\Lambda=-\frac{3}{l^{2}}$, $V=\frac {4\pi r^{3}_{+}}{3}$, and the standard Schwarzschild-AdS black hole metric is written as \begin{equation} ds^{2}=-\bigg(1-\frac{2M}{r}+\frac{r^{2}}{l^{2}}\bigg)dt^{2}+\bigg(1-\frac{2M}{r}+\frac{r^{2}}{l^{2}}\bigg)^{-1}dr^{2}+r^{2}d\Omega^{2} , \end{equation} \begin{figure}[htbp] \centering \includegraphics[width=0.8\linewidth,height=0.4\textheight]{car} \caption{\ $\dot{\mathcal{C}}$\ as a function of the black hole horizon radius for fixed $l=1$ and temperatures $ T=\frac{\sqrt{\frac{1}{2}}}{2\pi l} $ (blue line), $T=\frac{\sqrt{3}}{2\pi l}$ (orange line), $ T=\frac{1}{2\pi l} $ (green line) and $ T=\frac{3}{4\pi l} $ (red line), from top to bottom.} \label{fig:car} \end{figure} where $M$, given by $M=\frac{r_{+}}{2}(1+\frac{r^{2}_{+}}{l^{2}})$, is the black hole mass and $\mathit{l}$ is the radius of curvature of the AdS space-time. The Hawking temperature at the horizon and the entropy are given by \begin{equation} T=\frac{{f}^{\prime }(r_{+})}{4\pi}=\frac{1}{4\pi r_{+}}\bigg(1+\frac{3r^{2}_{+}}{l^{2}}\bigg)\label{commutative8}\ ,~~~ \ S=\pi r_{+}^{2}. \end{equation} The form of $M$ shows that for any positive mass there is only one horizon. As a consequence, this kind of black hole does not admit any extremal configuration in which $M$ has a minimum. From formula (\ref{commutative8}) one can see that $T$ has a minimum value of $\frac{\sqrt{3}}{2\pi l}$ at $r_{+}=\frac{l}{\sqrt{3}}$. For $T<T_{min}$, there are no black holes but a pure radiation phase; the background heat bath is too cold to admit nucleation of black holes. For $T=T_{min}$, a single black hole is formed with radius $ r_{min}=\frac{l}{\sqrt{3}}$.
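Before turning to the $T>T_{min}$ branch, the location and value of this minimum can be verified symbolically. A minimal sketch using Python's sympy (our notation: $r$ stands for $r_{+}$):

```python
import sympy as sp

r, l = sp.symbols('r l', positive=True)  # r plays the role of the horizon radius r_+

# Hawking temperature of the 4d Schwarzschild-AdS black hole, Eq. (commutative8)
T = (1 + 3*r**2/l**2) / (4*sp.pi*r)

r_min = sp.solve(sp.diff(T, r), r)[0]
print(r_min)                          # sqrt(3)*l/3, i.e. l/sqrt(3)
print(sp.simplify(T.subs(r, r_min)))  # sqrt(3)/(2*pi*l), the quoted T_min
```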
For $T>T_{min}$, a pair of black holes (large/small) exists with radii given by \begin{equation} r_{l,s}=\frac{T}{2\pi T^{2}_{min}}\bigg(1\pm\sqrt{1-\frac{T^{2}_{min}}{T^{2}}}\bigg)\ ,~~~ \ r_{s}<r_{min} , \ r_{l}>r_{min} . \end{equation} One can compute the difference of the Euclidean action between the black hole metric and that of anti-de Sitter space; in this case the contribution of the surface term vanishes. The action equals the difference in four-volumes of the two metrics and is given by \cite{Haking:1983} \begin{equation}\label{action} I=\frac{\pi r_{+}^{2}(l^{2}-r_{+}^{2})}{l^{2}+3r_{+}^{2}}. \end{equation} We now notice that \begin{equation} \dot{\mathcal{C}}\sim T \ln \mathcal{Z}=-TI_{E}=\frac{r^{3}_{+}}{4 l^{2}}-\frac{r_{+}}{4}\label{c1}. \end{equation} When the temperature is less than $T_{min}$, the maximum value of the complexity growth rate is at the origin $(r_{+}=0)$; when the temperature equals $T_{min}$, the function reaches its inflection point at $r_{+} =\frac{1}{2 \pi T_{min}}=\frac{l}{\sqrt{3}}$. Above this temperature there are two black holes: the small black hole corresponds to a local minimum of $\dot{\mathcal{C}}$, while the larger one, corresponding to a local maximum, is locally stable. With increasing $T$, the local maximum of $\dot{\mathcal{C}}$ turns positive once the temperature exceeds the Hawking--Page phase transition temperature $T=\frac{1}{\pi l}\equiv T_{HP}$. \subsection{Reissner-Nordstrom AdS Black Holes} It has long been known that charged AdS black holes have almost the same thermodynamic properties as a van der Waals gas (for example, the same $P$-$V$ criticality) \cite{Caldarelli:1999xj}. The standard 4-dimensional RN-AdS black hole metric is \begin{equation} ds^{2}=-\bigg(1-\frac{2M}{r}+\frac{r^{2}}{l^{2}}+\frac{Q^{2}}{r^{2}}\bigg)dt^{2}+\bigg(1-\frac{2M}{r}+\frac{r^{2}}{l^{2}}+\frac{Q^{2}}{r^{2}}\bigg)^{-1}dr^{2}+r^{2}d\Omega^{2}. \end{equation} The Hawking temperature at the horizon and the entropy are given by \begin{equation} T=\frac{{f}^{\prime }(r_{+})}{4\pi}=\frac{1}{4\pi r_{+}}\bigg(1+\frac{3r^{2}_{+}}{l^{2}}-\frac{Q^{2}}{r^{2}_{+}}\bigg)\ ,~~ \ S=\pi r_{+}^{2}. \end{equation} In order to obtain the partition function of the system, we calculate its Euclidean action. For a fixed charge $Q$, one considers the surface integral \begin{equation}\label{I_{s}} I_{s}=-\frac{1}{8\pi}\int_{\partial M}d^{3}x\sqrt{h}K-\frac{1}{4\pi}\int_{\partial M}d^{3}x\sqrt{h}n_{a}F^{ab}A_{b}, \end{equation} where the first term is the standard Gibbons-Hawking term and the second term is needed to impose fixed $Q$ as a boundary condition at infinity. The total action is then given by \begin{equation} I=I_{EM}+I_{s}+I_{c}, \end{equation} where $I_{EM}$ is given by $I_{EM}=-\frac{1}{16\pi}\int_{M}\sqrt{g}(R-F^{2}+\frac{6}{l^{2}})$, and $I_{c}$ represents the invariant counterterms needed to cure the infrared divergences \cite{Emparan:1999pm,Mann:1999pc}. The total action was first calculated in \cite{Chamblin:1999hg,Caldarelli:1999xj} and reads \begin{equation}\label{I} I=\frac{\beta}{4 l^2}\bigg(l^2r_{+}-r^3_{+}+\frac{3l^2Q^2}{r_{+}}\bigg) . \end{equation} \begin{figure}[htbp] \centering \includegraphics[width=0.7\linewidth, height=0.33\textheight]{C-T} \caption{\ $\dot{\mathcal{C}}$\ as a function of temperature for fixed $Q = 1$. The blue line corresponds to $P/P_{c} = 0.55$, and the green line corresponds to the critical pressure $P = P_{c} \approx 0.0033$.
Obviously, for $T < T_{c} \approx 0.043$ there is a (small black hole)-(large black hole) first-order phase transition.} \label{fig:C-T} \end{figure} In this case the complexity growth rate is \begin{equation} \dot{\mathcal{C}}\sim T \ln \mathcal{Z}=-TI_{E}=\frac{r^{3}_{+}}{4 l^{2}}-\frac{r_{+}}{4}-\frac{3Q^{2}}{4r_{+}}. \end{equation} Previous work on the critical behaviour of the RN-AdS black hole in the non-extended phase space demonstrates that in the canonical (fixed charge) ensemble, for $Q<Q_{c}$, there exists a first-order phase transition in the system \cite{Chamblin:1999hg,Chamblin:1999tk}. The critical point of the RN-AdS black hole is given by $T_{c}=\frac{\sqrt{6}}{18\pi Q}$, $P_{c}=\frac{1}{96\pi Q^{2}}$ \cite{Kubiznak:2012wp}. We then consider the phase transition of the charged AdS black hole system in the extended phase space, treating the black hole charge $Q$ as a fixed external parameter rather than a thermodynamic variable. The behaviour of $\dot{\mathcal{C}}$ is depicted in Fig.~\ref{fig:C-T}. Since $\dot{\mathcal{C}}$ displays the characteristic ``swallowtail'' behaviour, there is a first-order transition in the system for $T<T_{c}$. \section{Relations to the Lloyd Bound} \subsection{Neutral static black holes} According to the definition of quantum complexity, any physical process that produces states already limits the growth of complexity. Inspired by the Margolus-Levitin theorem \cite{Margolus:1997ih}, Lloyd conjectured that the orthogonality time $\tau_{\perp}$ is bounded below by \begin{equation} \tau_{\perp}\geq \frac{h}{4E}, \end{equation} where $E$ is the average energy of the state. Taking the reciprocal of both sides and interpreting the left-hand side as the rate of complexity, we conclude that the rate of complexity is limited by the energy of the system, \begin{equation} \dot{\mathcal{C}}\le\frac{2E}{\pi\hbar}, \end{equation} which is the Lloyd bound. In calculations with Schwarzschild-AdS black holes, $E$ will be the mass $M$ of the black hole. \begin{figure}[htbp] \centering \includegraphics[width=0.7\linewidth, height=0.3\textheight]{ads-s2} \caption[Plot of $\frac{PV}{\hbar}/\frac{2M}{\pi \hbar}$ as a function of $r_{+}$]{Plot of $\frac{PV}{\hbar}/\frac{2M}{\pi \hbar}$ (blue lines) and $\dot{\mathcal{C}}/\frac{2M}{\pi \hbar}$ (yellow lines) as functions of the black hole radius $r_{+}$; we have set $l=1$, $\hbar=1$.} \label{fig:ads-s} \end{figure} We found that CV2.0 does not strictly obey the Lloyd bound. In Fig.~\ref{fig:ads-s}, we plot $\frac{PV}{\hbar}/\frac{2M}{\pi \hbar}$ and $\dot{\mathcal{C}}/\frac{2M}{\pi \hbar}$ as functions of the black hole radius $r_{+}$. We can see that in this case the complexity rate given by (\ref{c1}) always satisfies the Lloyd bound. We have in fact tested Schwarzschild-AdS black holes of all sizes, $r_{+}\sim l_{AdS}$, $r_{+} \ll l_{AdS}$ and $r_{+} \gg l_{AdS}$; they are always consistent with the Lloyd bound. \subsection{Charged black holes} As argued in \cite{Brown:2015lvg}, the existence of conserved charges slows down the growth of complexity at late times. The thermofield double state then includes a chemical potential $\mu$: \begin{equation} \left| TFD_{\mu}\right\rangle=\frac{1}{\sqrt{Z}}\sum _{n}e^{-\beta(E_{n}+\mu Q_{n})/2} \left|E_{n}Q_{n}\right\rangle _{L}\left|E_{n}-Q_{n}\right\rangle _{R}.
\end{equation} This state evolves in time with the Hamiltonian $H_{L}+\mu Q_{L}$ on the left and $H_{R}-\mu Q_{R}$ on the right: \begin{equation} \left|\psi(t_{L},t_{R})\right\rangle=e^{-i(H_{L}+\mu Q_{L})t_{L}}e^{-i(H_{R}-\mu Q_{R})t_{R}}\left|TFD_{\mu}\right\rangle , \end{equation} where $H_{L}$ and $H_{R}$ are the $\mu = 0$ Hamiltonians. According to the same argument that leads to the $\mu = 0$ bound, the complexity bound becomes \cite{Brown:2015lvg} \begin{equation} \dot{\mathcal{C}}\le\frac{2}{\pi\hbar}\left[(M-\mu Q)-(M-\mu Q)_{gs}\right]\label{commutative11} , \end{equation} where $(M-\mu Q)_{gs}$ is the ground-state value of $(M-\mu Q)$, corresponding to either empty AdS spacetime or an extremal black hole. In Fig.~\ref{01}, we plot the complexity growth rate as a function of the black hole horizon radius for three size regimes; all of the curves lie below the bound given in (\ref{commutative11}). \begin{figure}[htbp] \centering \subfigure{} \begin{minipage}{4.7cm} \centering \includegraphics[scale=0.26]{ads-rn-r-l} \end{minipage} \subfigure{} \begin{minipage}{4.7cm} \centering \includegraphics[scale=0.26]{ads-rn-r-l1} \end{minipage} \subfigure{} \begin{minipage}{4.7cm} \centering \includegraphics[scale=0.26]{ads-rn-r-l2} \end{minipage} \caption{$\dot{\mathcal{C}}$ (yellow lines) and the Lloyd bound (blue lines) as functions of $r_{+}$ for different sizes: the left panel shows the case $r_{+}\sim l_{AdS}$, the middle panel the case $r_{+} \gg l_{AdS}$, and the right panel the case $r_{+} \ll l_{AdS}$; $l=1$, $Q=0.4$.} \label{01} \end{figure} \subsection{Einstein scalar theory: case 1} In \cite{Liu:2019mxz}, the authors found that for black holes in Einstein-scalar theory the Lloyd bound can be violated, because the volume of the black hole singularity becomes negative. It is of interest to check whether such black holes still violate the Lloyd bound in our setup. The Lagrangian of the Einstein-scalar theory takes the general form \begin{equation}\label{L} \mathcal{L}=\sqrt{-g}\left(R-\frac{1}{2}(\partial\phi)^{2}-V(\phi)\right). \end{equation} We begin with the $D = 4$ example, in which the potential is \cite{Zloshchastiev:2004ny} \begin{equation}\label{V1} V(\phi)=-2g^{2}\left((\cosh\phi+2)-2\beta^{2}(2\phi+\phi\cosh\phi-3\sinh\phi)\right), \end{equation} where the parameter $\beta$ is a fixed dimensionless quantity. The theory admits an asymptotically AdS black hole, given by \cite{Zloshchastiev:2004ny} \begin{equation*} ds^{2}=-fdt^{2}+\frac{dr^{2}}{f}+r(r+q)d\Omega^{2}_{2,k},\ ~~~ \ e^{\phi}=1+\frac{q}{r}, \end{equation*} \begin{equation} f=g^{2}r^{2}+k-\frac{1}{2}g^{2}\beta^{2}q^{2}+g^{2}(1-\beta^{2})qr+g^{2}\beta^{2}r^{2}(1+\frac{q}{r})\log(1+\frac{q}{r}). \end{equation} The solution contains only one integration constant $q$, parameterizing the mass \begin{equation} M=\frac{1}{12}g^{2}\beta^{2}q^{3}, \end{equation} and the thermodynamical variables are \cite{Liu:2019mxz} \begin{equation} T=\frac{f^{\prime}(r_{+})}{4\pi},\ ~~~ \ S=\pi r_{+}(r_{+}+q), \ ~~~ \ P=\frac{3g^{2}}{8\pi}, \end{equation} \begin{equation*} V=\frac{2}{3}\pi r_{+}^{3}(1+\frac{q}{r_{+}})(2+\frac{q}{r_{+}})\left(1+\beta^{2}\log(1+\frac{q}{r_{+}})\right)-\frac{1}{9}\pi\beta^{2}q(q^{2}+12qr_{+}+12r_{+}^{2}). \end{equation*} Combining the above physical quantities, we calculate the complexity growth rate as \begin{equation} \dot{\mathcal{C}}=\frac{1}{12}g^{2}\left(-\beta^{2}q^{3}+3r_{+}(q+r_{+}) \left(q-2\beta^{2}q+2r_{+}+\beta^{2}(q+2r_{+})\log[\frac{q+r_{+}}{r_{+}}] \right)\right).
\end{equation} Note that, since $T>0$, $S>0$ and $M>0$, one finds $\dot{\mathcal{C}}=TS-M<2M$, so the Lloyd bound is satisfied in this case. \subsection{Einstein scalar theory: case 2} We now consider another scalar potential, given in \cite{feng13}: \begin{equation*} \begin{split} V(\phi)=&-\frac{1}{2}(D-2)g^{2}e^{\frac{\mu-1}{\nu} \Phi }\\ &\times\left[(\mu-1)((D-2)\mu-1)e^{\frac{2}{\nu}\Phi}-2(D-2)(\mu^{2}-1)e^{\frac{1}{\nu}\Phi}+(\mu+1) ((D-2)\mu+1)\right] \\ &-\frac{(D-3)^{2}}{2(3D-7)}(\mu+1)\alpha e^{-\frac{1}{\nu}(4+\frac{\mu+1}{D-3})\Phi}(e^{\frac{1}{\nu}\Phi}-1)^{3+\frac{2}{D-3}}\times \bigg[(3D-7)e^{\frac{1}{\nu}\Phi} \\ & _{2}F_{1} [2,1+\frac{(D-2)(\mu+1)}{D-3};3+\frac{2}{D-2};1-e^{\frac{1}{\nu}\Phi}] \\ & -((3D-7)+(D-2)(\mu-1)) _{2}F_{1} [3,2+\frac{(D-2)(\mu+1)}{D-3};4+\frac{2}{D-2};1-e^{\frac{1}{\nu}\Phi}]\bigg]. \end{split} \end{equation*} The metric of the solution is given by \cite{feng13} \begin{equation} ds^{2}=-\frac{f}{H^{1+\mu}}dt^{2}+H^{\frac{1+\mu}{D-3}}\left(\frac{dr^{2}}{f}+r^{2}d\Omega^{2}_{D-2}\right),\ ~~~ \ H=1+\frac{q}{r^{D-3}}, \end{equation} \begin{equation*} \begin{split} f=&g^{2}r^{2}H^{\frac{(D-2)(\mu+1)}{D-3}}+kH-\beta g^{2}r^{2}(H-1)^{\frac{D-1}{D-3}}\\ &\times_{2}F_{1}(1,\frac{(D-2)(\mu+1)}{D-3};\frac{2(D-2)}{D-3};1-\frac{1}{H}). \end{split} \end{equation*} The solution contains one integration constant $q$, parameterizing the mass of the solution, given by \begin{equation} M=\frac{(D-2)\Omega_{D-2}q(\beta g^{2}q^{\frac{2}{D-3}}+k\mu)}{16\pi}, \end{equation} and the thermodynamical variables are given by \cite{Liu:2019mxz} \begin{equation*} T=\frac{H_{+}^{-\frac{(D-2)(\mu+1)}{2(D-3)}}}{4\pi}f^{\prime}_{+},\ ~~~ \ S=\frac{\Omega_{D-2}}{4}r_{+}^{D-2}H_{+}^{\frac{(D-2)(\mu+1)}{2(D-3)}},\ ~~~ \ p=\frac{(D-1)(D-2)g^{2}}{16\pi}, \end{equation*} \begin{equation} \begin{split} &V=\frac{\Omega_{D-2}r^{D-1}((1-\mu)H+\mu+1)H^{\frac{(D-2)(\mu+1)}{2(D-3)}}}{2(D-1)}+\frac{\Omega_{D-2}\beta q^{\frac{D-1}{D-3}}}{2(D-1)H} \\ &\left(2H-((1-\mu)(H-1)+2)_{2}F_{1}[1,\frac{(1+\mu)(D-2)}{D-3}; \frac{2(D-2)}{D-3};1-H^{-1}]\right). \end{split} \end{equation} For simplicity, we set $D=4$, $\beta=g=k=1$. In the case of $\mu=-1$, we find the complexity growth rate \begin{equation} \dot{\mathcal{C}}= \frac{(2r_{+}^{3}-q^{3}+q)\Omega_{2}}{16\pi}, \end{equation} while for $\mu=1$ it takes the form \begin{equation} \dot{\mathcal{C}}=\frac{(2r_{+}^{3}-5q^{3}+q(4r_{+}^{2}-3))\Omega_{2}}{16\pi}. \end{equation} Note that in this case too the Lloyd bound is satisfied, because $\dot{\mathcal{C}}=TS-M<2M$. From these results, we may conclude that the complexity/partition function relation satisfies the Lloyd bound under various conditions. In view of the principle of maximal work, however, the Lloyd bound seems not to be saturated by the complexity/partition function relation. For example, for Schwarzschild-AdS black holes with planar horizons the complexity growth rate is $\dot{\mathcal{C}}=M/2$, which differs by a factor from the Lloyd bound. \section{Discussion and Conclusion} In summary, by examining CV 2.0 (i.e. $\mathcal{\dot{C}}\sim PV$) and using a fundamental relation between thermodynamics and statistical physics, $ \ln \mathcal{Z}=PV/kT=-\Omega/kT$, we obtain a relation between the complexity, the grand potential and the partition function. For canonical ensembles, the grand potential reduces to the free energy of the system.
In order to illustrate the validity of our proposal $\mathcal{\dot{C}}\sim T \ln \mathcal{Z}$, we have studied the complexity of the TFD state for various deformations of the SYK model. For the original SYK model and its extension to complex fermions, the relation $\mathcal{\dot{C}}\sim T S$ is well respected. For the $(2+1)$-dimensional SYK model with two bands, the $f$ fermions form the bulk geometry while the conducting $c$ fermions live on the boundary; it turns out that only the $f$-fermions, which involve the SYK interactions, obey the relation $\mathcal{\dot{C}}\sim T S$. We further studied the complexity growth rate of the TFD state of the SYK ``wormholes'' and found that the result agrees with \cite{JTcomplexity} at the qualitative level. We then applied the complexity/partition function relation to AdS black holes. For both Schwarzschild-AdS and RN-AdS black holes, the relation $\mathcal{\dot{C}}\sim -F$ holds. The connections between complexity and phase transitions were then discussed; it seems that the quantum complexity can be regarded as an order parameter for phase transitions. We have also checked whether our proposal violates the Lloyd bound. The results show that both the original Lloyd bound and the generalized Lloyd bound are satisfied for Schwarzschild-AdS and RN-AdS black holes. A quantum computational process can be considered as a way of increasing complexity, and work must be done during this process. In thermodynamics, there is a well-known principle, the principle of maximum work: $dW\leq -dF$. In \cite{huang2017}, it was proposed that for general two-horizon black holes, the complexity growth rate in the WDW patch can be expressed as $\mathcal{\dot{C}}={H}_{+}-{H}_{-}$, that is to say, the difference between the enthalpies associated with the outer and inner horizons. Later, a new CV conjecture was proposed \cite{Liu:2019mxz}, \be \mathcal{\dot{C}}=2P \Delta V, \ee with $\Delta V=V^{+}-V^{-}$, where $V^{\pm}$ are the thermodynamical volumes associated with the outer and inner horizons. This is analogous to the mechanical work in ordinary thermodynamics, i.e. $\Delta W=P \Delta V$. From the principle of maximum work, we can write down the analogous relation $\mathcal{\dot{C}}= 2 P \Delta V=2 \Delta W\leq -2 \Delta F$. For CV 2.0, this relation simply reduces to $\mathcal{\dot{C}}\leq -\Delta F$. In this work, we have assumed $\mathcal{\dot{C}}\sim -\Omega$ and examined this relation under several conditions. The results show that the Lloyd bound is not violated, but cannot be saturated exactly. Much more can be learned about the relation between complexity, thermodynamics and statistical mechanics; we defer further study of these connections to future work. \section*{Acknowledgement} We would like to thank Hong L$\rm \ddot{u}$, Song He and Runqiu Yang for helpful discussions. This work is partly supported by NSFC (No.11875184 $\&$ No.11805117).
\section{Introduction} The quantum vacuum is not empty. Seen up close, it is crowded with all sorts of virtual particles continuously popping in and out of existence. One of the most outstanding manifestations of vacuum fluctuations is the Casimir effect~\cite{Casimir,Miltonbook}, which has recently aroused great interest in a large class of domains, ranging from quantum computing~\cite{Benenti} to biology~\cite{Andersen}. As is well known, the Casimir effect originates from alterations of the zero-point energy induced by boundary conditions. The ensuing attractive force is obtained by differentiating the vacuum energy with respect to the separation between the boundaries. In passing, we mention that an alternative derivation was proposed in Ref.~\cite{Jaffe}, where the Casimir effect was addressed by considering relativistic van der Waals forces between metallic plates. Besides its intrinsic interest within standard Quantum Field Theory (QFT), the Casimir effect provides a useful test bench for physics beyond the Standard Model~\cite{Blasone:2018obn} and gravity theories~\cite{Sorge,Petruz,Buoninf}. In Refs.~\cite{Blasone:2018obn}, for instance, it was analyzed in connection with the unitary inequivalence between mass and flavor Fock spaces for mixed fields. Similarly, in Refs.~\cite{Petruz} and~\cite{Buoninf} the computation of the Casimir energy density and pressure was exploited to fix some constraints on the characteristic free parameters appearing in the Standard Model Extension and in extended theories of gravity, respectively. Recently, extensive studies were also carried out in the context of the Generalized Uncertainty Principle (GUP)~\cite{Nouicer,Harbach,Panella,Panella2,Dorsch}, where non-trivial corrections were shown to arise due to the existence of a minimal length at the Planck scale. The present contribution fits in the last of the above lines of research, since it aims to investigate the connection between the Casimir effect and models which inherently embed this fundamental scale. The concept of minimal length naturally emerges in quantum gravity theories in the form of an effective minimal uncertainty in position $\Delta x_{\mathrm{min}}>0$. Several different theoretical arguments (many of them in the form of Gedanken experiments) show the impossibility of measuring arbitrarily short distances, due to the very existence of gravity. This naturally leads to a modification of the Heisenberg position-momentum uncertainty principle (HUP). In a one-dimensional setting, studies of string theory, loop quantum gravity, deformed special relativity and black hole physics~\cite{VenezGrossMende,MM, MM1bis,MM2,FS,Adler2,CGS,SC2013} have converged on the idea that a proper generalization of the HUP would be \begin{equation} \Delta x\, \Delta p \geq \frac{\hslash}{2} \left[1 +\beta \left(\frac{\Delta p\, c}{E_p}\right)^2 \right], \label{gup} \end{equation} where $E_p$ is the Planck energy; here we retain only the leading-order correction in the dimensionless parameter $\beta>0$. Of course, in the limit $\beta\rightarrow0$, the HUP of ordinary quantum mechanics is recovered, as it should be. Let us also remark that the deformation parameter $\beta$ is not fixed by the theory: in principle, it can be either constrained via experiments~\cite{Brau:1999uv} or estimated by computational techniques in different contexts~\cite{Theor}, which yield $\beta\sim\mathcal{O}(1)$ (for a recent overview on the various attempts to fix $\beta$, see Ref.~\cite{ScardRev}).
However, given the high-energy scale at which modifications of the HUP should become relevant, the natural arenas for testing GUP effects are undoubtedly Hawking~\cite{ONG} and Unruh~\cite{SBLC} radiation. Note that, for mirror-symmetric states (with $\langle \hat{p} \rangle = 0$), Eq.~\eqref{gup} can be equivalently rephrased in terms of the generalized commutation relation \begin{equation} \left[\hat{x},\hat{p}\right] \,=\, i\hslash \left[ 1 +\beta \left(\frac{\hat{p}\,c}{E_p} \right)^2 \right], \label{gupcomm} \end{equation} since $\Delta x\, \Delta p \geq (1/2)\left|\langle [\hat{x},\hat{p}] \rangle\right|$. Vice-versa, the above relation implies the inequality~(\ref{gup}) for any state. Moreover, in $n$ spatial dimensions, the commutator~\eqref{gupcomm} can be cast in different forms, among which the most common is \begin{equation} \label{moregeneralform} \left[\hat{x}_i,\hat{p}_j\right] =i\hslash\left[f(\hat p^2)\,\delta_{ij}\,+\,g(\hat p^2)\,\hat p_i\,\hat p_j\right],\,\quad i,j=1,\dots,n \end{equation} with $f(\hat p^2)=1+\beta \left(\frac{\hat{p}\,c}{E_p} \right)^2$ and $g(\hat p^2)=0$ (note that in $n$ dimensions, the functions $f(\hat p^2)$ and $g(\hat p^2)$ can be chosen in different ways~\cite{Panella,Panella2}; in any case, they are not completely arbitrary, being related via the requirement of translational and/or rotational symmetry of the commutator~\cite{Sc1}). Working in the outlined scenario, in the present paper we compute GUP-corrections to the Casimir energy for three different geometries: the parallel-plate configuration and the spherical and cylindrical shells. For the first case, we first follow a field theoretical treatment~\cite{Nouicer,Harbach,Panella,Panella2} and then give a heuristic derivation. The two approaches are found to be consistent as concerns the dependence of the corrective term on the inverse fifth power of the distance between the plates. On the other hand, to the best of our knowledge, this is the first time that the Casimir effect for spherical and cylindrical geometries is addressed in the context of the GUP. Therefore, we can only compare our results with the ones existing in the literature in the limit of vanishing $\beta$. The remainder of the work is organized as follows: in Section~\ref{CaQFT} we briefly review the standard calculation of the Casimir vacuum energy for the parallel-plate geometry. The obtained result is then extended to the context of the GUP by quantizing the field in the formalism of maximally localized states. Motivated by the utility of heuristic procedures which help to develop physical intuition, in Section~\ref{CaHe} we consider a similar derivation of the Casimir effect from both the HUP and the GUP, based on simple quantum and thermodynamic arguments. In this regard, we also clarify the approach of Ref.~\cite{Gine}, where the Casimir effect is deduced from the HUP by naively introducing an effective radius $r_e$. The above reasoning is then applied to the cases of spherical and cylindrical shells in Sec.~\ref{sphcylshel}. Finally, conclusions and perspectives are given in Section~\ref{DandC}. \section{Casimir effect for parallel plates: QFT approach} \label{CaQFT} In the framework of canonical QFT, the Casimir effect can be derived via different approaches and in a wide range of contexts. Referring to the original treatment by Casimir, in what follows we sketch the main steps leading to the relation between the zero-point energy $\Delta E$ and the distance $d$ between the plates.
\subsection{Casimir effect in standard QED} We consider the simplest three-dimensional geometry of two parallel plates separated by a distance $d$ along the $x$-axis. Let $L$ be the side of the plates (with $L\gg d$) and $S=L^2$ their surface area. The Casimir effect arises from the vacuum fluctuations of any quantum field in the presence of such boundary conditions on the field modes. Consider, for example, the electromagnetic field $\hat{\mathbf{A}}\hspace{0.2mm}(t,\mathbf{x})$ in the Coulomb gauge $\mathbf{\nabla}\cdot \mathbf{A}=0$, \begin{equation} \label{elefield} \hat{\mathbf{A}}(t,\mathbf{x})\,=\,\sum_{\lambda=1,2}\int\frac{d^3p}{{(2\pi)}^3}\sqrt{\frac{{(2\pi)}^4\,\hslash c^2}{\omega_{p}}}\left[\epsilon_{\mathbf{p},\lambda}\,\hat a_{\mathbf{p},\lambda}\,\psi_{\mathbf{p}}(t,\mathbf{x})\,+\,\mathrm{h.c.}\right], \end{equation} where $\epsilon_{\mathbf{p},\lambda}$ are the polarization vectors satisfying the relation $\epsilon_{\mathbf{p},\lambda}\,\epsilon^*_{\mathbf{p},\lambda'}=\delta_{\lambda\lambda'}$ and $\psi_{\mathbf{p}}(t,\mathbf{x})$ are the plane waves (i.e. the standard position representation of momentum eigenstates) of frequency $\omega_{p}=cp/\hslash$. The ladder operators $\hat a_{\mathbf{p},\lambda}$ in Eq.~\eqref{elefield} obey the canonical commutation relations. Now, the vacuum energy responsible for the attractive force between the plates can be obtained by subtracting the infinite vacuum energy of the electromagnetic field in free space from the corresponding infinite energy between the perfectly conducting boundaries. Mathematically speaking, we have \begin{equation} \label{Casimirenergy} \Delta E(d)\,\equiv\,E(d)\,-\,E_0\,=\,\langle0|\hat{H}(d)\,-\,\hat{H}|0\rangle\,, \end{equation} where the Hamiltonian is $\hat H=\frac{1}{8\pi}\int d^3x\,\big[{\big(\partial_0\hat{\mathbf{A}}\big)}^2-\hat{\mathbf{A}}\cdot\mathbf{\nabla}^2\hat{\mathbf{A}}\big]$. By using this relation, one can show that~\cite{Panella} \begin{equation} \label{eps} \Delta E(d)=c\hspace{0.3mm}S\hspace{-0.8mm}\int \frac{d^2p_{\perp}}{{(2\pi\hslash)}^2}\left[\frac{|\mathbf{p}_\perp|}{2}+\sum_{n=1}^{\infty}\sqrt{{|\mathbf{p}_\perp|}^2+\frac{n^2\pi^2\hslash^2}{d^2}}-\int_{0}^{\infty}dn\,\sqrt{{|\mathbf{p}_\perp|}^2+\frac{n^2\pi^2\hslash^2}{d^2}}\right]\hspace{-0.2mm}, \end{equation} where $\mathbf{p}_{\perp}=(p_y,p_z)$ is the transverse momentum and we have exploited the fact that the condition of vanishing field on the plates only allows for a discrete set of values of the momentum along the $x$-axis, i.e. $p_x=\frac{n\pi\hslash}{d},$ with $n$ an integer. Note that, in the above calculations, we have neglected surface corrections. The integral in Eq.~\eqref{eps} is divergent for large values of the momentum. A possible trick to remove this infinity is to introduce an ultraviolet momentum cutoff $p_{\mathrm{max}}\sim \hslash/d$ and remove the regularization only at the end of the calculation\footnote{Note that other common regularization techniques are the zeta function regularization and the point splitting~\cite{Bordag}.}. By following this procedure and applying the asymptotic Euler--MacLaurin summation formula, we obtain the well-known expression for the energy shift \begin{equation} \label{Casenedens} \Delta E(d)\,=\,-\frac{\pi^2}{720}\frac{\hslash\hspace{0.2mm}c\hspace{0.2mm}S}{d^3}\,.
\end{equation} For later convenience, we also write down the formula for the Casimir energy in one spatial dimension, \begin{equation} \label{casundim} \Delta E(d)\,=\,-\frac{\pi}{12}\frac{\hslash\hspace{0.2mm}c}{d}\,. \end{equation} \subsection{Casimir effect in minimal length QED} Let us now investigate the Casimir effect in the presence of the generalized commutator~\eqref{moregeneralform}. Note that, in this case, a minimization of the generalized uncertainty relation with respect to $\Delta p_i$ gives the nonzero minimal length $(\Delta x_i)_{\mathrm{min}}=\sqrt{\beta}\hspace{0.2mm}\ell_p$, where $\ell_p=\hslash c/E_p$ denotes the Planck length. A quantum theoretical framework which implements the appearance of a nonzero minimal uncertainty in position has been described in Ref.~\cite{MM1bis}. Unlike in standard quantum mechanics, in this context we do not have localized functions in $\mathbf{x}$-space, so we have to introduce the so-called quasi-position representation. This consists in projecting the state of the system onto the set of \emph{maximally localized states}, which, by definition, are characterized by the minimal position uncertainty $(\Delta x)_{\mathrm{min}}$. In the momentum representation, the general (i.e. time-dependent) maximally localized state around the average position $\textbf x$ takes the form \begin{equation} \label{maxlocstat} \widetilde{\psi}_{\mathbf{p}}(t,\mathbf x)\,=\,\frac{1}{{(\sqrt{2\pi\hslash})}^3}\,e^{-i\left[\widetilde\omega_p\,t\,-\,\widetilde{\mathbf{p}}\cdot\mathbf{x}/\hslash\right]}, \end{equation} where \begin{equation} \label{defpom} \widetilde\omega_p\,=\,\frac{E_p}{\hslash\sqrt{\beta}}\arctan\left(\frac{cp\sqrt{\beta}}{E_p}\right)\hspace{-0.3mm},\quad\widetilde{\mathbf p}_i\,=\,\left[\frac{E_p}{cp\sqrt{\beta}}\arctan\left(\frac{cp\sqrt{\beta}}{E_p}\right)\right]\mathbf p_i\,. \end{equation} Note that, for $\beta\rightarrow 0$, the quasi-position representation reduces to the standard plane-wave formalism, since $\widetilde\omega_p\rightarrow\omega_p$, $\widetilde{\mathbf p}_i\rightarrow \mathbf{p}_i$ and $(\Delta x_i)_{\mathrm{min}}=0$. In terms of the maximally localized states, the electromagnetic field reads \begin{equation} \label{newfieldexp} \hat{\mathbf{A}}(t,\mathbf{x})\,=\,\sum_{\lambda=1,2}\int\frac{d^3p}{{(2\pi)}^3\left(1+\frac{c^2p^2\beta}{E^2_p}\right)}\sqrt{\frac{{(2\pi)}^4\,\hslash^2 c^2\sqrt{\beta}}{E_p\arctan\left(\frac{cp\sqrt{\beta}}{E_p}\right)}}\left[\epsilon_{\mathbf{p},\lambda}\,\hat a_{\mathbf{p},\lambda}\,\widetilde\psi_{\mathbf{p}}(t,\mathbf{x})\,+\,\mathrm{h.c.}\right], \end{equation} where the factor $\left(1+c^2p^2\beta/E_p^2\right)^{-1}$ arises from the modified completeness relation for the momentum eigenstates $|\mathbf{p}\rangle$. In order to derive GUP-corrections to the Casimir effect, let us insert the field expansion~\eqref{newfieldexp} into the Hamiltonian. The calculation of the energy shift~\eqref{eps} proceeds as usual~\cite{Panella2}. The remarkable difference, however, is that in this case we do not need to introduce any restriction on the momentum scale by hand. Indeed, from Eq.~\eqref{defpom}, the natural cutoff $\widetilde{p}_{\mathrm{max}}=\pi E_p/(2c\sqrt{\beta})$ follows, leading to $n_{\mathrm{max}}=E_pd/(2\hslash c\sqrt{\beta})$.
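Both the minimal length and the natural cutoff quoted above follow from elementary manipulations of Eqs.~\eqref{gup} and~\eqref{defpom}, which can be verified symbolically. A minimal sketch using Python's sympy (the symbol names are ours):

```python
import sympy as sp

p, beta, hbar, c, Ep = sp.symbols('p beta hbar c E_p', positive=True)

# Minimal length: minimize the saturated GUP,
# Delta_x = (hbar / (2 Delta_p)) * (1 + beta * (Delta_p * c / E_p)**2),
# with p standing for Delta_p.
dx = hbar/(2*p) * (1 + beta*(p*c/Ep)**2)
p_star = sp.solve(sp.diff(dx, p), p)[0]
print(sp.simplify(dx.subs(p, p_star)))  # sqrt(beta)*c*hbar/E_p, i.e. sqrt(beta)*l_p

# Natural UV cutoff: the quasi-momentum of Eq. (defpom) saturates as p -> infinity
p_tilde = Ep/(c*sp.sqrt(beta)) * sp.atan(c*p*sp.sqrt(beta)/Ep)
print(sp.limit(p_tilde, p, sp.oo))      # pi*E_p/(2*c*sqrt(beta))
```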
Accordingly, the Casimir energy takes the form~\cite{Panella2} \begin{equation} \label{veinfs} \Delta E(d)\,=\,-\frac{\pi^2}{720}\frac{\hslash\hspace{0.2mm}c\hspace{0.2mm}S}{d^3}\left[1\,+\,\frac{2\pi^2\hspace{0.2mm}\beta}{3}{\left(\frac{\hslash c}{E_pd}\right)}^2\right], \end{equation} to be compared with the standard QED expression~\eqref{Casenedens}. Note that the GUP term is attractive, since it increases the modulus of the energy. In Fig.~\ref{figura1}, the Casimir energy per unit surface area is plotted as a function of the distance between the plates for three values of the deformation parameter $\beta$. For large enough distances, the different curves overlap, since the effects of the minimal length become negligible. \begin{figure}[t] \centering \includegraphics[width=12cm]{newplot.pdf} \renewcommand{\baselinestretch}{0.8} \caption{Casimir energy per unit surface versus the distance between the plates for different values of $\beta$ (quantities are in Planck units). Note that the green (dash-dotted) line starts from the minimal distance $d=\sqrt{\beta}\hspace{0.2mm}\ell_p\simeq3\hspace{0.3mm} \ell_p$.} \label{figura1} \end{figure} \section{Casimir effect for parallel plates: heuristic approach} \label{CaHe} Although the heuristic arguments we are going to discuss do not give the exact expression for the Casimir force, they allow us to better understand the origin of this effect, as well as the nature of GUP-corrections to the standard formula~\eqref{Casenedens}. Thus, following the guidelines of the previous Section, we provide a heuristic computation of the Casimir energy by using the Heisenberg Uncertainty Principle first and then working in the framework of minimal length theories based on the Generalized Uncertainty Principle~\eqref{gup}. A comparison with the corresponding QFT results is finally discussed. \begin{figure}[t] \centering \includegraphics[width=7.5cm]{Heur.pdf} \renewcommand{\baselinestretch}{0.8} \caption{Heuristic derivation of the zero-point energy for two parallel plates at distance $d$. The sphere of radius $R$ represents the whole space. The only photons which are allowed to impact on $P_0$ are those in the shadowed volume.} \label{figuragine} \end{figure} \subsection{Casimir effect from Heisenberg uncertainty principle} \label{CEFHP} In Ref.~\cite{Gine}, the Casimir effect is derived from the idea that the contribution to the vacuum energy at a point $P_0$ of a plate is affected by the presence of the other boundary. Specifically, the author considers virtual photons produced by vacuum fluctuations somewhere in space and arriving at $P_0$. In order to compute the total Casimir energy $\Delta E$, one has to take into account all the points on the surface $S$ of the plate. Therefore, from the HUP \begin{equation} \Delta x\Delta E\simeq \frac{\hslash c}{2}\,, \end{equation} ($p=E/c$ for photons), the total contribution to the energy fluctuation $\Delta E$ is given by those photons in a volume $S\Delta x$ around the plate, where $\Delta x$ is the position uncertainty of the single particle. Note that, if we had only one plate, $\Delta x$ would be infinite, since photons may be created at any point of space. However, this is no longer true in the presence of both boundaries. In that case, indeed, virtual particles originating from behind the second plate cannot reach $P_0$. Thus, the additional plate acts as a sort of shield.
The above situation can be depicted as follows: consider a sphere of radius $R$ centered at the point $P_0$ and enclosing both plates (see Fig.~\ref{figuragine}). In the single-plate configuration, the effective volume $S\Delta x$ corresponding to the entire space can be thought of as the total volume of the sphere $V_T=4/3\pi R^3$, with $R\rightarrow\infty$. Clearly, such a volume will be reduced by including the second plate, which will prevent particles that pop out at $P$ from impacting on $P_0$ (with reference to Fig.~\ref{figuragine}, the effective volume is represented by the shadowed region). As a result, we can write $S\Delta x=V_T-V_C$, where $V_C$ is the volume shielded by the second plate. In the case of infinite boundaries, or better when $L/(2d)\rightarrow\infty$, one can show that $V_C=2/3\pi R^3$, yielding~\cite{Gine} \begin{equation} \label{adx} S\Delta x\,\simeq\,\frac{2}{3}\pi R^3\,. \end{equation} In the above treatment, no length scale has been considered, hence the volume $S\Delta x$ diverges as the radius $R$ increases. To cure such a pathological behavior, in Ref.~\cite{Gine} the author introduces a cutoff $r_e$ representing the effective distance beyond which photons have a negligible probability of reaching the plate. In this way, Eq.~\eqref{adx} can be rewritten as \begin{equation} \label{adxrenorm} S\Delta x\,\simeq\,\frac{2}{3}\pi r_e^3\,, \end{equation} which indeed has a finite value. Combining this relation with the HUP, we then obtain \begin{equation} \label{casre} |\Delta E(r_e)|\,=\,\frac{3}{4}\frac{S\hspace{0.2mm}\hslash\hspace{0.2mm}c}{\pi r_e^3}\,, \end{equation} which implies \begin{equation} \label{resimd} r_e\simeq d\,, \end{equation} from comparison with the exact expression~\eqref{Casenedens} for the Casimir energy. Strictly speaking, we would have \begin{equation} \label{setting} r_e=\frac{\sqrt[3]{540}\hspace{0.2mm}}{\pi}\hspace{0.4mm}d\simeq2.6\,d\,. \end{equation} Although the above derivation is straightforward and very intuitive, the discussion on the physical origin of the length cutoff $r_e$ appears to be rather obscure at some points. Therefore, in order to clarify the meaning of Eq.~\eqref{resimd}, let us focus on the computation of the Casimir effect in a simplified one-dimensional system: similar reasoning can be readily extended to three dimensions. From the Heisenberg uncertainty relation, it is well known that large energy fluctuations live for a very short time and, thus, hard virtual photons of energy $\Delta E$ can only travel short distances of order $\hslash c/\Delta E$. As a consequence, the further away from a plate these particles are created, the more negligible their contribution to the energy around that plate will be. Let us apply these considerations to the apparatus in Fig.~\ref{figura}. It is easy to see that virtual photons popping out in the strip of width $d$ on the right side of the right plate do not contribute to the Casimir effect, since their pressure is balanced by those photons originating between the plates. By contrast, photons coming from a distance greater than $d$ in the right region do not experience any compensation, because their symmetric ``partners'' on the left side are screened by the first plate. The overall result is a net force acting on the right plate from right to left. Of course, this argument can be symmetrically applied to the left boundary and provides a qualitative explanation for the origin of the attractive Casimir force.
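Incidentally, the precise coefficient in Eq.~\eqref{setting} follows from equating the moduli of Eqs.~\eqref{casre} and~\eqref{Casenedens}; a one-line symbolic check (a sketch using Python's sympy):

```python
import sympy as sp

d, r_e, hbar, c, S = sp.symbols('d r_e hbar c S', positive=True)

dE_heur  = sp.Rational(3, 4)*S*hbar*c/(sp.pi*r_e**3)  # heuristic estimate, Eq. (casre)
dE_exact = sp.pi**2/720*hbar*c*S/d**3                 # modulus of the QFT result, Eq. (Casenedens)

sol = sp.solve(sp.Eq(dE_heur, dE_exact), r_e)[0]
print(sol, sp.N(sol/d))   # 3*20**(1/3)*d/pi, i.e. r_e ~ 2.59 d
```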
\begin{figure}[t] \centering \includegraphics[width=9.5cm]{Casimir.pdf} \renewcommand{\baselinestretch}{0.8} \caption{Setup for the heuristic derivation of the Casimir effect: two infinite parallel plates (bold lines) at distance $d$. The effective radius beyond which the creation of virtual photons does not give a significant contribution to the Casimir energy is denoted by $r_e$.} \label{figura} \end{figure} Now, consider a point at a distance $x_0>d$ from one of the plates, as in Fig.~\ref{figura}. Virtual photons originate from quantum fluctuations in a small region around that point. Such a region, however, cannot be smaller than the (reduced) Compton length of the electron, $\lambda_C=\hslash/(m_ec)$, otherwise the energy amplitude of the fluctuation would exceed the threshold $E\simeq m_ec^2$ for the production of electron-positron pairs. Besides, photons produced at $x_0$ can impact on the plate (and therefore contribute to the Casimir force) only if their energy $E$ is such that $0<E<E_0$, where $E_0=\hslash c/x_0$. Particles of higher energy $E>E_0$, indeed, would recombine before reaching the plate, since the distance they travel is $x=\hslash c/E<\hslash c/E_0=x_0$. We can now assume that photons coming from $x_0$ originate from fluctuations of energy $E$ with a probability given by a Boltzmann-like factor $f(E)=e^{-E/(m_ec^2)}$. Thus, the total linear energy density (i.e. the energy per unit length) arriving on the plate will be \begin{equation} \label{dens} |\Delta\varepsilon(E_0)|\,=\,\int_0^{E_0}\frac{dE}{\lambda_C}\frac{E}{m_ec^2}\,f(E)\,=\,\frac{1}{\hslash c}\int_0^{E_0}dE\,E\,e^{-E/(m_ec^2)}\,, \end{equation} where, since we are dealing with the electromagnetic field, we have introduced the natural threshold of the electron mass/energy $m_ec^2$. In terms of the distance $x_0$, the above integral becomes \begin{equation} \label{densbis} |\Delta\varepsilon(x_0)|\,=\,\hslash c\int_{x_0}^{\infty}\frac{dx}{x^3}\,e^{-\hslash/(m_e\hspace{0.1mm}c\hspace{0.1mm}x)}\,. \end{equation} Finally, in order to get the contribution to the Casimir energy from all the photons which impact on the plate, we integrate over all the points $x_0$ such that $d<x_0<\infty$, obtaining \begin{equation} \label{denster} |\Delta E(d)|\,=\,\int_d^{\infty}dx_0\,\Delta\varepsilon(x_0)\,. \end{equation} The integrals in Eqs.~\eqref{densbis} and~\eqref{denster} can be easily evaluated by observing that, for $x$ large enough, the Boltzmann factor $e^{-\hslash/(m_ecx)}$ becomes approximately of order unity. This yields \begin{equation} |\Delta\varepsilon(x_0)|\,\simeq\,\frac{1}{2}\frac{\hslash c}{x_0^2}\,, \end{equation} and, hence, \begin{equation} \label{energiacasimir} |\Delta E(d)|\,\simeq\,\frac{1}{2}\frac{\hslash c}{d}\,, \end{equation} which is in good agreement with the QFT prediction~\eqref{casundim}. Similarly, one can show that the generalization to three dimensions leads to a result consistent with Eq.~\eqref{Casenedens}. The physical relevance of the above discussion becomes clearer if we observe that probability distributions like those in Eq.~\eqref{dens} or~\eqref{densbis} allow us to naturally interpret the effective radius $r_e$ in Eq.~\eqref{adxrenorm} as the distance from the plate within which the photons providing the bulk of the Casimir energy are created. More rigorously, we can define $r_e>d$ as the distance within which photons carrying the fraction $\gamma$ $(0<\gamma<1)$ of the total Casimir energy are created.
In other terms, we can write \begin{equation} \frac{\hslash c}{2}\int_d^{r_e}\frac{dx}{x^2}\,=\,\gamma\Delta E(d)\,, \end{equation} from which \begin{equation} \label{25} r_e\,=\,\frac{d}{1-\gamma}\,. \end{equation} Thus, setting $r_e\simeq2.6\hspace{0.5mm}d$ (as in Eq.~\eqref{setting}) amounts to considering a fraction $\gamma\simeq0.62$ of the total energy responsible for the Casimir effect. The above picture is quite rough, since it relies on the adoption of a Boltzmann-like distribution for the energy of quantum vacuum fluctuations. As a result, it underestimates the fraction of photons produced within the distance $r_e$ from the plate. Considerable improvements can be achieved by employing more realistic functions $f(E)$ in Eq.~\eqref{dens}. For further details on this topic, see Ref.~\cite{Few} and references therein. \subsection{Casimir effect from Generalized uncertainty principle} Let us now extend the above arguments to the context of the GUP. As in the previous Subsection, we shall focus for simplicity on the one-dimensional case; the generalization to three dimensions proceeds in a very similar fashion. We start from the modified uncertainty relation~\eqref{gup}, here recast in the form \begin{equation} \label{modforphot} \Delta x\Delta E\,\simeq\,\frac{\hslash c}{2}\left[1\,+\,\beta{\left(\frac{\Delta E}{E_p}\right)}^2\right]. \end{equation} Solving with respect to $\Delta E$, we obtain \begin{equation} \Delta E\,=\,\frac{\Delta x\hspace{0.2mm}E_p^2}{\hslash\hspace{0.2mm} c\hspace{0.2mm}\beta}\left[1\pm\sqrt{1\,-\,\beta{\left(\frac{\hslash c}{\Delta x\hspace{0.2mm}E_p}\right)}^2}\right], \end{equation} where the only solution to be considered is the one with the negative sign, as it reduces to the standard result for vanishing $\beta$ (conversely, the solution with the positive sign has no evident physical meaning). After expanding to first order in $\beta$, it follows that \begin{equation} \label{28} \Delta E\,=\,\frac{\hslash c}{2\Delta x}\left[1+\frac{\beta}{4}{\left(\frac{\hslash c}{\Delta x E_p}\right)}^2\right]. \end{equation} If we now neglect those photons coming from distances greater than the effective radius $r_e$, it is natural to assume the position uncertainty $\Delta x$ of the single photon to be of the order of $r_e$ and, thus, of $d$, according to Eq.~\eqref{setting}. Then, by replacing $\Delta x\simeq 2.6\hspace{0.5mm}d$ in Eq.~\eqref{28}, the contribution to the Casimir energy at a given point reads \begin{equation} |\Delta E(d)|\,\simeq\,0.2\hspace{0.4mm}\frac{\hslash c}{d}\left[1\,+\,0.04\hspace{0.3mm}\beta{\left(\frac{\hslash c}{E_p\hspace{0.2mm} d}\right)}^2\right], \label{betacorrec} \end{equation} which indeed agrees with Eq.~\eqref{casundim} in the limit $\beta\rightarrow0$. The above considerations can now be generalized to three dimensions by taking into account the contribution to the zero-point energy at every point of the plates of area $S$. In doing so, straightforward calculations lead to \begin{equation} \label{finres} |\Delta E(d)|\,\simeq\,0.03\hspace{0.4mm}\frac{\hslash\hspace{0.2mm}c\hspace{0.2mm}S}{d^3}\left[1\,+\,0.04\hspace{0.3mm}\beta{\left(\frac{\hslash c}{E_p\hspace{0.2mm}d}\right)}^2\right], \end{equation} which is to be compared with Eq.~\eqref{veinfs}. In spite of the roughness of our assumptions, one can see that the obtained expression agrees with the field theoretical result as concerns the dependence of the GUP correction on the inverse fifth power of the distance between the plates.
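The numerical coefficients quoted in Eq.~\eqref{betacorrec} can be reproduced by substituting $\Delta x\simeq 2.6\,d$ into Eq.~\eqref{28}; a short symbolic sketch (using Python's sympy, with our own symbol names):

```python
import sympy as sp

d, beta, hbar, c, Ep = sp.symbols('d beta hbar c E_p', positive=True)
dx = sp.Rational(13, 5)*d   # Delta x ~ r_e ~ 2.6 d, from Eq. (setting)

# GUP-corrected energy fluctuation, Eq. (28), evaluated at Delta x = 2.6 d
dE = hbar*c/(2*dx)*(1 + beta/4*(hbar*c/(dx*Ep))**2)

# Pull out the overall hbar*c/d scale to read off the two coefficients
print(sp.expand(dE*d/(hbar*c)))
# -> 5/26 + 125*beta*c**2*hbar**2/(17576*E_p**2*d**2), i.e. roughly
#    0.19*(hbar c/d)*[1 + 0.037*beta*(hbar c/(E_p d))**2], cf. Eq. (betacorrec)
```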
We also notice that the exact numerical coefficient can be recovered by including a proper factor which accounts for the extension of the GUP~\eqref{modforphot} to a higher-dimensional system. \section{Casimir effect for spherical and cylindrical shells: HUP vs GUP approaches} \label{sphcylshel} In this Section, we apply our heuristic approach to configurations other than parallel plates. Specifically, we compute the Casimir energy for a spherical and a cylindrical shell. It is nevertheless worth mentioning that the following analysis may serve as a basis for the study of more sophisticated geometries too. \begin{figure}[t] \centering \includegraphics[width=8.1cm]{Sphere.pdf} \renewcommand{\baselinestretch}{0.8} \caption{Heuristic derivation of the zero-point energy for a spherical shell of radius $a$. The outer sphere of radius $R$ represents the whole space. The only photons which are allowed to impact on $P_0$ are those in the shadowed volume.} \label{sphereb} \end{figure} \subsection{Casimir effect for a spherical shell} \label{CEFAS} Following the same reasoning as for parallel plates, let us consider a spherical shell of (finite) radius $a$ enclosed in a larger sphere of (infinite) radius $R$ representing the whole space (see Fig.~\ref{sphereb}). By taking a point $P_0$ on the surface of the shell, one can easily understand that the only photons which are allowed to impact on $P_0$ are those originating from vacuum fluctuations in the shadowed volume. In light of this, we can write the effective volume $S\Delta x$ as the sum of the volume of the upper (external) hemisphere and that of the shell, i.e. \begin{equation} \label{effectivevolume} S\Delta x\,=\,\frac{2}{3}\pi R^3\,+\,\frac{4}{3}\pi a^3\,, \end{equation} where $S=4\pi a^2$ is the surface area of the shell. For the internal consistency of our formalism, we require $0<a\le R/\sqrt[3]{2}$. Drawing a comparison with the parallel-plate system, the $a\rightarrow 0$ limit corresponds to the case in which the two plates are stuck together. In this case, the spherical shell degenerates into a single point, since $P_0$ merges with its antipode. Consequently, the effective volume $S\Delta x$ will be \emph{twice} the volume of the hemisphere of radius $R\rightarrow\infty$, i.e. it will cover all the space\footnote{In order to compute the effective volume relative to $P_0$ for $a\rightarrow 0$, we must also take into account the symmetric contribution of those photons which impact on the antipode of $P_0$.}. On the other hand, for $a=R/\sqrt[3]{2}\rightarrow\infty$, the point $P_0$ is not affected by the presence of the walls of the shell. This amounts to the case where the two parallel plates are infinitely far apart from each other. Again, the effective volume will be equal to the whole space, i.e. $S\Delta x=4\pi R^3/3\rightarrow\infty$ from Eq.~\eqref{effectivevolume}. Now, by using Eq.~\eqref{effectivevolume}, the position uncertainty $\Delta x$ reads \begin{equation} \label{posuncer} \Delta x\,=\,\frac{R^3+2a^3}{6\hspace{0.2mm}a^2}\,, \end{equation} which still diverges as $R$ increases. As for the parallel plates, however, we can reasonably neglect photons coming from distances greater than the effective radius $R\sim r_e$. By combining Eq.~\eqref{posuncer} with the HUP, it follows that \begin{equation} |\Delta E (a, r_e)|\,=\,\frac{3\hspace{0.4mm}\hslash c\hspace{0.6mm}a^2}{r_e^3\,+\,2a^3}\,. \end{equation} If we now assume $r_e$ to be of the order of the size of the system as in Eq.~\eqref{setting}, i.e.
$r_e\simeq 2.6\, (2\hspace{0.2mm}a)$, we finally obtain \begin{equation} \label{finalsphere} |\Delta E (a)|\,=\,0.02\,\frac{\hslash c}{a}\,, \end{equation} which matches the QFT result of Refs.~\cite{Boyer,Milton} up to a factor $1/2$ (as discussed after Eq.~\eqref{25}, one may improve the agreement by refining the considerations on the photon energy distribution, as in Eq.~\eqref{dens}). Note that we cannot infer any kind of information on the sign of $\Delta E$, and, thus, on the nature of the Casimir force (whether it is attractive or repulsive), since our heuristic calculations only allow us to derive the absolute value of the energy shift. In this regard, we emphasize that the issue of the sign of the zero-point force for a spherical shell is quite controversial. In Refs.~\cite{Boyer,Milton}, for instance, it is argued that a conducting sphere would tend to expand due to the effects of vacuum fluctuations. On the other hand, if one roughly approximates the sphere as two parallel plates of area $S=\pi a^2$ and separation $d\simeq a$ and considers the standard expression~\eqref{Casenedens} for the energy, the opposite sign for the Casimir force would be obtained\footnote{Note that the idea of describing a sphere as two (circular) parallel plates was originally proposed by Casimir to calculate the fine-structure constant $\alpha$.}~\cite{Casimirsphere}. A similar result is claimed to be valid for two slightly separated hemispheres (that is, a spherical shell sliced with a very thin knife) and, more generally, for any symmetric configuration (see the Kenneth--Klich no-go theorem~\cite{KK}). \medskip Let us now investigate to what extent the zero-point Casimir energy~\eqref{finalsphere} gets modified by the Generalized Uncertainty Principle~\eqref{modforphot}. To this aim, by replacing Eq.~\eqref{posuncer} into~\eqref{28}, we get \begin{equation} |\Delta E(a, r_e)|\,=\,\frac{3\hspace{0.3mm}\hslash c\hspace{0.6mm}a^2}{r_e^3\,+\,2a^3}\left\{1\,+\,\beta{\left[\frac{3\hspace{0.3mm}\hslash c\hspace{0.6mm}a^2}{E_p\left(r_e^3\,+\,2a^3\right)}\right]}^2\right\}, \end{equation} where we have implicitly made use of the length cutoff $R\sim r_e$. By setting $r_e\simeq 2.6\hspace{0.5mm}(2\hspace{0.2mm}a)$ as before, we find \begin{equation} |\Delta E(a)|\,=\,0.02\,\frac{\hslash c}{a}\left[1\,+\,0.0004\hspace{0.2mm}\beta{\left(\frac{\hslash c}{E_p\hspace{0.3mm}a}\right)}^2\right]. \end{equation} Hence, on the basis of purely heuristic arguments, we obtain a GUP term scaling as the inverse cube of the radius of the spherical shell. As expected, the larger the sphere, the smaller the correction to the standard (HUP) result. Unlike the parallel-plate configuration, however, a full-fledged field theoretical calculation in the GUP framework has not yet been carried out, thus preventing us from making any comparison. \subsection{Casimir effect for a cylindrical shell} We now compute the zero-point energy for a cylindrical shell of radius $a$ and height $H$. As depicted in Fig.~\ref{cylinderfig}, we assume $H>a$; however, similar considerations hold true for any size of the system. \begin{figure}[t] \centering \includegraphics[width=8.1cm]{Cylinder.pdf} \renewcommand{\baselinestretch}{0.8} \caption{Heuristic derivation of the zero-point energy for a cylindrical shell of radius $a$ and height $H>a$. The sphere of radius $R$ represents the whole space.
As for the spherical configuration, the only photons which can reach $P_0$ are those in the shadowed volume.} \label{cylinderfig} \end{figure} Let us consider a point $P_0$ on the lateral surface of the cylinder. In this case, we can write the effective volume as \begin{equation} \label{cyl} S\Delta x\,=\,\frac{2}{3}\pi {r_e}^3\,+\,\pi a^2\hspace{0mm} H\,, \end{equation} where now $S=2\pi\hspace{0.2mm}a\hspace{0.2mm} H$ is the lateral surface of the cylinder and we have already implemented the cutoff $R\sim r_e$ on the radius of the surrounding sphere. Again, as $a$ and $H$ increase, the cylindrical shell tends to cover the whole space, while for vanishing $a$ and $H$ it collapses into a single point. In both cases, the effective volume will be equal to the entire space (see the discussion after Eq.~\eqref{effectivevolume}). By inverting Eq.~\eqref{cyl} with respect to the position uncertainty of the photons, we get \begin{equation} \label{dxcyl} \Delta x\,=\,\hspace{0.2mm}\frac{2\hspace{0.2mm}{r_e}^3\,+\,3\hspace{0.2mm}a^2\hspace{0mm} H}{6\hspace{0.2mm}a\hspace{0.2mm} H}\,, \end{equation} which leads to the following expression for the standard (HUP) energy uncertainty: \begin{equation} |\Delta E (a, H, r_e)|\,=\, \frac{3\hspace{0.4mm}\hslash c\hspace{0.6mm}a\hspace{0.3mm}H}{2\hspace{0.2mm}{r_e}^3\,+\,3\hspace{0.2mm}a^2\hspace{0mm} H}\,. \end{equation} Let us now observe that, since we are neglecting vacuum fluctuations from distances greater than $r_e$, it is reasonable to assume $H\simeq 2\hspace{0.3mm}r_e$ (in other terms, the only photons which can impact on $P_0$ moving from the inside to the outside of the cylinder are those in the volume $V\simeq \pi a^2\hspace{0.2mm}(2\hspace{0.2mm}r_e)$). By setting $r_e\simeq 2.6\, (2\hspace{0.2mm}a)$ as before, we then obtain \begin{equation} \label{finalcyl} |\Delta \varepsilon (a)|\,\equiv\,\frac{|\Delta E (a)|}{H}\,=\,0.01\,\frac{\hslash c}{a^2}\,, \end{equation} where we have computed the energy per unit length in order to compare our expression with the QFT result of Ref.~\cite{DeRaad:1981hb}; the two outcomes are in good agreement with each other. \medskip As usual, corrections induced by the GUP can be estimated by inserting Eq.~\eqref{dxcyl} into Eq.~\eqref{28}. Straightforward calculations yield \begin{equation} |\Delta \varepsilon(a, r_e)|\,=\,\frac{3\hspace{0.4mm}\hslash c\hspace{0.6mm}a}{2\hspace{0.3mm}r_e^3\,+\,6\hspace{0.2mm}a^2\hspace{0.3mm} r_e}\left\{1\,+\,\beta {\left[\frac{6\hspace{0.3mm}\hslash c\hspace{0.6mm}a\hspace{0.6mm}{r_e}}{E_p\left(2\hspace{0.3mm}r_e^3\,+\,6\hspace{0.2mm}a^2\hspace{0.4mm}r_e\right)}\right]}^2\right\}, \end{equation} which, for $r_e\simeq 2.6\hspace{0.5mm}(2\hspace{0.2mm}a)$, becomes \begin{equation} |\Delta \varepsilon(a)|\,=\,0.01\,\frac{\hslash c}{a^2}\left[1\,+\,0.01\hspace{0.2mm}\beta{\left(\frac{\hslash c}{E_p\hspace{0.3mm}a}\right)}^2\right]. \end{equation} As for the sphere, a QFT treatment of the Casimir effect with GUP for a cylindrical shell is missing so far. \section{Discussion and Conclusions} \label{DandC} We have computed the corrections to the Casimir energy in the framework of minimal length theories based on a generalized uncertainty principle with only a quadratic term in the momentum. Calculations have been carried out for three different systems: the parallel plates and the spherical and cylindrical shells.
\section{Discussion and Conclusions} \label{DandC} We have computed the corrections to the Casimir energy in the framework of minimal length theories based on a generalized uncertainty principle with only a quadratic term in the momentum. Calculations have been carried out for three different systems: the parallel plates and the spherical and cylindrical shells. For the first geometry, the result derived via heuristic arguments has been compared with the more rigorous field-theoretical expression, showing in both cases a dependence of the GUP correction on the inverse fifth power of the distance between the plates. On the other hand, the absence of a QFT treatment of the Casimir effect with the GUP for non-planar configurations does not allow any comparison for the sphere and the cylinder. Nevertheless, such calculations, along with a possible extension of our formalism to arbitrary $D$-dimensional systems (see for example Ref.~\cite{Milton}), will be investigated in more detail in future works. Finally, some remarks are in order here. First, we point out that, even though direct observations of GUP effects on the Casimir force are extremely challenging, current experiments~\cite{Bressi} might enable us to fix an upper bound on the minimal length $(\Delta x)_{\mathrm{min}}=\sqrt{\beta}\hspace{0.2mm}\ell_p$, and, thus, on the parameter $\beta$. Furthermore, we emphasize that a similar analysis of GUP-induced corrections has been proposed in Ref.~\cite{SBLC} in the framework of the Unruh vacuum radiation for accelerated observers. In that case, the existence of a nonzero minimal length manifests itself in the form of (in principle) non-thermal corrections to the Unruh spectrum, which however can be reinterpreted as a shift of the usual Unruh temperature for small deformations of the commutator. In passing, we mention that deviations of the Unruh effect from the well-known behavior (and, more generally, non-inertial corrections to standard predictions of QFT) have recently been pointed out in other contexts as well~\cite{othersect,other2,other3}. In light of these considerations, it is clear that the study of all these unconventional aspects of fundamental quantum phenomena represents a fertile but still largely uncharted field of research, since it allows us to test QFT in the Planck-scale regime at both the theoretical and experimental levels. More work is inevitably required along these directions. \section{Acknowledgements} The authors would like to thank the anonymous Referee for his/her comments, which improved the quality of the manuscript.
\section{Introduction} \label{sec:introduction} Enhancing discrete choice models with neural nets and deep learning optimization algorithms is an active domain of research that has shown promising results \citep{sifringer2020enhancing,borysov2019generate,wong2020partite}. In recent years, experimental use cases of deep learning methods in discrete choice modelling have been explored, such as automatic utility discovery \citep{sifringer2020enhancing}, variational inference optimization \citep{bansal2019bayesian} and remapping explanatory variables into transferable embeddings for travel behaviour modelling \citep{pereira2019rethinking}. This paper provides a perspective on how a \textit{residual neural network} formulation accounts for unobserved choice heterogeneity in discrete choice models. While the proposed model we have developed has its roots in the Mother Logit model, it is not a Random Utility Maximization (RUM) consistent model. Likewise, many non-RUM-compatible models used in discrete choice modelling are still very useful \citep{hess2018revisiting}. The increase in popularity of DNNs can be attributed to the general notion that these novel modelling strategies emulate behavioural actions and behaviour formation through neurological adaptations observed in the human brain \citep{bengio2015towards}. This is referred to as `biological plausibility' in the deep learning literature and is an efficient way of generating and representing decision models \citep{friston2007free}. The similarity between behaviour theory and DNNs has led to many interesting and useful applications in travel behaviour modelling and travel demand forecasting \citep{cantarella2005multilayer,lee2018comparison,wong2018discriminative,wang2019multitask}. Intuitively, DNNs are made up of several layers of linear and non-linear operations, called activation functions, which make estimation from noisy and complex data feasible. However, machine learning methods have their drawbacks. Even though these methods have been studied in travel mode choice prediction for over a decade \citep{karlaftis2011statistical}, their usefulness has been limited to prediction tasks, lacking the explainability of models. Early research in machine learning for travel behaviour modelling primarily used prediction accuracy as a comparison tool and found that neural networks appear to lack consistency with economic principles \citep{hensher2000comparison}. It is argued that DNNs may not be suitable for econometric interpretation and would lead to incorrect assumptions about the stochastic nature of decision-making behaviour. More recent studies have compared the performance of discrete choice and machine learning models in prediction. Variable importance analysis has shown that, in most cases, DNNs outperform discrete choice models \citep{omrani2013prediction,hagenauer2017comparative,wang2018machine}. It has been observed in machine learning models that increasing the number of layers beyond a specific limit degrades the model due to overfitting, unreachable optimal solutions, and model identification problems \citep{glorot11deep,he2016residual}. Even in cases showing DNNs producing more accurate predictions\footnote{Assuming discrete classification probabilities.} than discrete choice models, the structural formulations are not consistent across studies. 
Another problem with DNNs, although of less immediate concern, is the inconsistency in meta-learning hyperparameter selection, data leakage and illogically estimated parameters \citep{hillel2019machine}. Although not covered in this study's scope, we can address these problems with regularization techniques such as batch normalization or dropout, or with adaptive gradient search methods such as Adam or AdaGrad \citep{kingma2014adam}. Moreover, the applicability of machine learning algorithms has not yet been justified in behavioural modelling applications and economic analysis beyond ad-hoc decision tree\footnote{Note: Methods used to select the subset of features in a decision tree result in categories that are sometimes arbitrary. Tree splitting rules are ultimately ad-hoc heuristics. However, comparative selection methods may still be useful if used to inform analysts about which metrics to use in specific choice scenarios.} learning approaches, which are not robust and are based on greedy heuristics that do not generalize well from training data \citep{witten2016data,brathwaite2017machine}. Lastly, training and optimizing a multi-layered discrete choice model to capture variations in taste heterogeneity has not yet provided the expected benefits beyond a few ``shallow'' layers \citep{wang2019multitask}. This paper proposes a tractable method of incorporating a \textit{data-driven neural network architecture} into a random utility choice model. We seek to improve choice modelling methodologies by incorporating algorithms that work well for deep learning and can be used in choice modelling while still permitting post-estimation welfare analysis. It extends the systematic utility function to include attributes of other alternatives in potentially non-linear ways to relax the independent and identically distributed (IID) assumptions. The model structure is similar to the existing Mother Logit family of models, which relax the independence of irrelevant alternatives (IIA) property to account for correlation between the IID error terms and the observed explanatory variables \citep{mcfadden1977application,timmermans1992mother}. Our strategy is inspired by the concept of Residual Neural Networks (\textit{ResNet}) in the deep learning literature -- adding skip connections between layers allows gradient backpropagation across multiple layers, addressing the vanishing gradient problem \citep{bengio2015towards}. Recent studies have shown that this strategy significantly improves the learning algorithm in deep neural network architectures with marginal or no loss in performance \citep{witten2016data,he2016residual}. We show that we can easily adapt the ResNet approach for discrete choice models and that it has similarities to the Mother Logit utility formulation. Our proposed methodology augments the utility function with a generic deep learning residual component that corrects for choice heterogeneity in the model. This allows one to leverage deep learning algorithms to estimate new choice models. We define this new choice model structure as a \textit{ResLogit} model. This paper aims to present a practical implementation of neural networks in choice modelling research that leverages the strengths of deep learning. 
While this paper deals with consistency with utility maximization methods, we acknowledge that there are numerous other methods in the deep learning literature for optimization through regularization, hyperparameter search and meta-algorithms that are comparable in performance to our ResLogit implementation. This study focuses on the methodological benefits of deep learning in discrete choice analysis. Our work contributes to the use of deep learning methodology in travel behaviour modelling. It is highly relevant in today's context of data-driven modelling and the use of Big Data for choice and behaviour modelling. In summary, the main contributions of this work are: \begin{itemize} \item We present the specification of the ResLogit model that uses a residual DNN error correction component in the choice utility in the form of a \textit{data-driven} choice model. \item We present the desirable properties of the ResLogit that enable parameter estimation tractability and interpretability due to the skipped connections between neural network layers and allow the econometric $\beta$-parameters to be estimated consistently. \item We analyze the role of residuals in econometric behaviour models and improve on previous attempts at integrating deep learning methods into discrete choice applications. \end{itemize} This paper is organized as follows: \Cref{sec:background} provides a primer on neural networks and an overview of discrete choice models. \Cref{sec:specification} presents the specification of our proposed ResLogit model. \Cref{sec:redblue} demonstrates our formulation on a classic red-bus, blue-bus example. \Cref{sec:casestudies} evaluates the methodology on a real-world travel dataset and discusses the results. Finally, \Cref{sec:conclusion} concludes our work and discusses future implications of incorporating deep learning techniques in discrete choice modelling. \section{Background} \label{sec:background} Logit models have traditionally been used to analyze relationships between observed behaviour and attributes associated with the choices and the decision-maker's characteristics \citep{ben1985discrete,ben1995discrete}. This framework has proved successful for decades because of its parsimonious, tractable, and flexible model formulation for representing rational behaviour assumptions. It assumes that the underlying decision processes are unknown to the observer, and that decision-makers select their preferred choice by ranking all potential alternatives and choosing the alternative with the maximum utility, following Random Utility Maximization (RUM) theory. The modeller is assumed to have incomplete information about the decision-maker's behaviour, and the model has to account for some uncertainty. An important feature of the Logit model is the IIA property, which is an outcome of the assumption that the error terms of the alternatives in an MNL model are IID \citep{mcfadden1978modeling}. When the error terms are correlated, the strict IID assumption may lead to incorrect forecasts and model misspecification. The Logit model imposes a random error term that represents behavioural uncertainty and accounts for the lack of information available to the analyst. This random error term is assumed to be uncorrelated with the attributes of the alternatives. Extensions to the Logit model such as the Nested Logit and Mixed Logit have been developed to account for error correlation when this assumption does not hold. 
\subsection{Representation of non-linearity and cross-effects in choice utilities} Model misspecification may arise when the error terms are correlated across non-chosen alternatives. Various studies in discrete choice modelling have accounted for heterogeneity across choice alternatives and decision-makers by incorporating attributes of non-chosen alternatives, known as \textit{cross-effects}. The assumption is that the additional function captures the part of the error term that is correlated with the non-chosen alternatives. There are several approaches to dealing with similarities and cross-effects between alternatives \citep{schuessler2007recent}: \begin{itemize} \item Segmentation into nests or classes, \item Analyzing the variance-covariance structure, and \item Incorporating similarity factors into the deterministic part of the utility. \end{itemize} The first group consists of extensions to the MNL model, such as the Nested Logit model, that partially relax the IID assumption by segmenting alternatives into subsets. Alternatives are similar within each group (correlated) but independent between groups (non-correlated). These models specify the correlation between alternatives by allowing attribute coefficients to vary between observations, class segments or individuals. Although this model formulation works well with simple stated preference choice scenarios where the analyst can control the survey questions and options, cognitive biases formed during the behaviour learning process, e.g. anchoring effects, are not fully captured \citep{tversky1981framing}. For instance, when a traveller makes a mode choice decision, there is a tendency to rely heavily on the information that they learn. The learning process may also evolve, resulting in spatio-temporal heterogeneity. The second group consists of the Generalized Extreme Value (GEV) model family (e.g. Mixed Logit) and Probit models, which allow for different (co-)variances among the error terms in the utility function \citep{mcfadden1977application,daganzo1977multinomial}. Multivariate distributed random error terms are introduced into the utility to capture potentially any correlation structure. This assumption works well with simple behavioural models and allows for tractable estimation. For more complex behavioural models, however, an arbitrarily defined error distribution does not necessarily reflect observed behaviour accurately. We can also derive individual-specific estimates from the individual's conditional distribution based on their choices \citep{hensher2003mixed}. Identification and computation of a large number of random distributions are still problematic in conventional discrete choice applications. Recent research efforts have also focused on Mixed Logit estimation using optimization techniques primarily used in machine learning. In particular, Bayesian variational inference optimization methods have shown promise \citep{bansal2020bayesian}. The third group consists of models that include an explicit measure of similarity among alternatives in the utility function. This group includes hybrid choice models and the integrated choice and latent variable (ICLV) family of models. Most notably, the Mother Logit model introduced by \citet{mcfadden1975independence} represents a generalization of the conventional MNL model, though not necessarily a RUM-consistent one, by allowing for the existence of cross-effects and other substitutions (reference dependence, decoy, anchoring bias, regret, etc.) 
in the utility to relax the IID assumption \citep{timmermans1992mother}. The Mother Logit formulation can approximate any discrete choice model in which the alternative's scale value is a function of all attributes of all choices \citep{timmermans1992mother}. Other choice model developments, such as the Random Regret Minimization (RRM) model \citep{chorus2010new}, which includes terms from foregone alternatives, can be reformulated as a Mother Logit model \citep{mai2017similarities}. The RRM model rests on the assumption that one or more alternatives may outperform the desired choice. This is translated into an anticipated regret function, and the analyst can formulate the non-linear utility as a function of attribute cross-effects of all the alternatives in the deterministic component plus a random error term. \citet{mai2017similarities} also presented a case of a Recursive Logit (RL) model based on the Mother Logit formulation. \citet{mai2017similarities} formulated the RL model utility functions as a route choice problem, which computes the sum of the outgoing link utility and the expected maximum utility to the destination node, accounting for these cross-effects in the link utility functions. When links overlap between different feasible route choice alternatives, the non-linear RL utility of a given route choice would include attributes from other route alternatives. A cross-effect represents a utility correction measuring the similarity or dissimilarity across all attributes of all alternatives \citep{timmermans1992mother}. A negative cross-effect indicates that an IIA model overestimates the utility of the alternative due to correlated attributes and alternatives (e.g. the red/blue bus problem \citep{mcfadden1973conditional}). Likewise, a positive cross-effect indicates that the utility is underestimated and a positive bias correction is required to account for the choice heterogeneity. The Mother Logit formulation implies that the model violates RUM regularity conditions \citep{timmermans1992mother}. Nevertheless, such model flexibility can accommodate behavioural anomalies incompatible with RUM-based models \citep{hess2018revisiting}. \subsection{Generalized approach to capture non-linearity and cross-effects in discrete choice models using DNNs} \label{sec:gen_approach} Passive data collected from sensors, devices and infrastructure that track decision-making actions over time can reveal learning behaviour and trends of the decision-makers. The general approach of representing decision-making uncertainties and learning processes as probabilistic error terms may be sufficient for obtaining satisfactory approximations. However, it is often difficult to identify the source of heterogeneity due to the complex interactions between influences from various attributes of non-chosen alternatives over a long period of interaction. Furthermore, it provides no useful indication of how to select the error term mixing distribution or how many mixing distributions are required to reach an acceptable estimation of the decision-making behaviour \citep{mcfadden2000mixed}. Combining the strengths of DNNs and discrete choice modelling has been explored over the past several years \citep{borysov2019generate,bansal2019bayesian,pereira2019rethinking,wong2020partite,badu2020composite}. These new hybrid models are designed to capture learning behaviour and trends from large datasets, independent of the subjective bias induced by stated preference survey questionnaires. 
The decision-making learning algorithm is assumed to contain non-linear cross-effects, which result in complex error distributions and a non-linear utility function. In practical choice modelling applications, the process by which the learning algorithm updates the model is unknown to the modeller. Therefore it is said to be a `black-box' model \citep{breiman2001statistical}. Non-linear activation functions in DNNs are assumed to represent taste variations and random heterogeneity in the choice model. For instance, a non-compensatory decision protocol distribution is often used to generalize decision rules in discrete choice, rather than to define fixed assumptions about the error distribution \citep{vythoulkas2003modeling}. Although neural networks have proved popular in recent years with their simple design and implementation, they rely on hyperparameter search or meta-learning processes which cannot be intuitively interpreted from a micro-economic perspective. Hyperparameters are the learning algorithm parameters that specify the learning procedure: $L_1$ and $L_2$ penalties, gradient step size, decay or initialization conditions. In some situations, hyperparameter tuning\footnote{Hyperparameter tuning refers to the specification of the \textit{learning algorithm}, not the model parameters, e.g. the $\beta$ parameters.} can yield state-of-the-art performance. \citet{lipton2018mythos} hypothesized about the lack of model interpretability, identifying that most machine learning-based systems may achieve high accuracy while failing to explain where the source of the difference lies. The MLP model is seen as a `black-box' model and will not be able to identify the beta parameters associated with the independent explanatory variables. Model identifiability may be problematic, as there can be multiple model specifications defined by the same set of parameters. \subsection{General formulation of a neural network model} We explain the necessary notation and formulation of an MLP network and the \textit{ResNet} architecture, and how we can integrate the residual functions into a choice model, which follows as a logically consistent extension of the traditional MNL that relaxes the IIA property. Each neuron in an MLP is a basic processing unit that performs a non-linear transform on the input \citep{lee2018comparison}. The goal is to approximate some function $y=f^*(\B{V})$ with $y=f(\B{V};\theta)$, where the input $\B{V}$ is a linearized function of a vector of observed variables $\B{x}$ and a vector of estimated parameters $\BS{\beta}$, denoted as $\B{V} = f(\BS\beta,\B{x})$. The function $f(\B{V};\theta)$ is a map from the linear components $\B{V}$ to a vector of discrete choice probabilities $y$. $\theta$ denotes the neural network parameters that result in the best approximation of $f^*$. During the training process, the model is estimated by a batched gradient descent algorithm given an objective function, i.e. maximum likelihood estimation\footnote{Batched gradient descent is the most used optimization approach in deep learning. For most machine learning problems, the data size is too large for quasi-Newton methods such as the BFGS/L-BFGS algorithm to perform in \emph{comparable time}. Furthermore, computing in batches allows for parallelized computation on GPUs.}\footnote{In general, the \emph{no free lunch theorem} in optimization states that no one solution works best for all problems.}. 
The MLP architecture can be represented mathematically as a series of chained functions: \begin{align} \label{eq:mlp1} \begin{aligned} \B{h}^{(1)} &= f^{(1)}(\B{V}) \\ \B{h}^{(2)} &= f^{(2)}(\B{h}^{(1)}) \\ &\ldots \\ \B{h}^{(M)} &= f^{(M)}(\B{h}^{(M-1)}) \\ y &= softmax(\B{h}^{(M)}) \end{aligned} \end{align} \noindent where $f^{(1)}$, $f^{(2)}$, ..., $f^{(M)}$ are the activation functions of the DNN and $M$ gives the depth of the model. For example, a 3-layer DNN results in the general form $f(\B{V})= f^{(3)}(f^{(2)}(f^{(1)}(\B{V})))$. $\B{h}^{(1)},\B{h}^{(2)},...,\B{h}^{(M)}$ are the intermediary non-linear outputs of each $m^{th}$ activation function, and the final layer is a \textit{softmax} function\footnote{This softmax function is equivalent to a conditional Logit in discrete choice problems.} whose output is a vector of discrete probabilities associated with each choice. The choice of activation functions is loosely guided by neuroscience observations and `biological plausibility', which refers to the similarity between behaviour theory and signal transmission in the nervous system \citep{goodfellow2016deep}. The activation function can be linear or non-linear. For example, using a sigmoid function $f(\B{V})=(1+e^{-\B{V}})^{-1}$ results in a probabilistic output between 0 and 1. In general, most DNN architectures suffer from non-identifiability due to the nature of the chain of non-linear activation functions -- a change in a $\beta$ parameter associated with an explanatory variable cannot be mapped directly to the output probabilities. The na\"{i}ve intuition is that the MLP can learn increasingly complex features by adding more layers, and that each layer returns an ``improved'' approximation of $f^*$. On the contrary, research has shown that model quality does not improve asymptotically with the number of layers; instead, it deteriorates as one increases the number of layers \citep{srivastava2015training,he2016residual}, contradicting the assumption that DNNs provide greater flexibility than conventional discrete choice models. Observations in the discrete choice literature affirm this technical limitation of using multiple deep layers to improve modelling accuracy \citep{alwosheel2018dataset,lee2018comparison}. \subsection{Formulating the neural network as a dynamical system} The \textit{ResNet} architecture was proposed by \citet{he2016residual} to overcome the limitations of the MLP model. We can interpret the model as a discretization of a dynamical system that exploits the use of identity shortcuts to enable the flow of information across layers without causing model degradation from repeated non-linear transformations \citep{he2016residual}. From an optimization perspective, the hypothesis is that it is easier to optimize ``a small change to the input rather than improving the entire layer of inputs at once'' \citep{he2016residual}. From a choice modelling perspective, this approach offers the attractive possibility of retaining the econometric variables while allowing the neural network function to approximate the underlying error variance. Furthermore, it has been proven that the \textit{ResNet} model architecture has no critical points other than the global minimum \citep{hardt2016identity}. 
The \textit{ResNet} model $y=f(\B{V})$ is defined as the following series of functions: \begin{align} \label{eq:resnet1} \begin{aligned} \B{h}^{(1)} &= f^{(1)}(\B{V}) + \B{V} \\ \B{h}^{(2)} &= f^{(2)}(\B{h}^{(1)}) + \B{h}^{(1)} \\ &\ldots \\ \B{h}^{(M)} &= f^{(M)}(\B{h}^{(M-1)}) + \B{h}^{(M-1)} \\ y &= softmax(\B{h}^{(M)}) \end{aligned} \end{align} The \textit{ResNet} uses a skip connection mechanism (eq. \ref{eq:resnet1}) that allows the gradient to propagate through the layers, preventing the vanishing gradient problem \citep{he2016residual}. The last line of \Cref{eq:resnet1} transforms the output of the final intermediate layer into a vector of probabilities using the \textit{softmax} function\footnote{For consistency with the literature, we denote \textit{softmax} in the context of neural networks, and Logit in the context of discrete choice. However, both functions are mathematically equivalent.}. We can further generalize the \textit{ResNet} blocks as a series of recursive functions: \begin{equation} \label{eq:resnet2} \B{h}^{(m)}=f^{(m)}(\B{h}^{(m-1)};\theta^{(m)}) + \B{h}^{(m-1)},\hspace{1em} \B{h}^{(0)} = \B{V}, \hspace{1em}\textrm{for}\hspace{1em}m=1,...,M \end{equation} where $\B{h}^{(0)}$ is the input after the initial linearization of the utility and $\B{h}^{(M)}$ is the output map before the \textit{softmax} function. Approximating the parameters of the neural network $\theta^{(1)},\theta^{(2)},...,\theta^{(M)}$ is equivalent to solving a series of linear discrete optimal control problems $U_m=f(V_m;\theta_m)+\varepsilon_m$. We can also interpret $\B{h}^{(1)},...,\B{h}^{(M)}$ as a series of non-linear utility components that capture the cross-effects induced by similarity or overlap with the non-chosen alternatives. If $f^{(m)}$ in eq. \ref{eq:resnet2} is large, it indicates the presence of cross-effects on the output probability. If this value is close to zero for all $m$ (non-linear cross-effects not present), the model collapses to a Logit model. 
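To make the difference between Eq.~\eqref{eq:mlp1} and Eqs.~\eqref{eq:resnet1}--\eqref{eq:resnet2} concrete, the following minimal Python/NumPy sketch implements both forward passes; the layer width, depth, activation and random parameters are purely illustrative assumptions, not the specification used later in this paper:
\begin{verbatim}
import numpy as np

def softmax(h):
    e = np.exp(h - h.max())          # numerically stable softmax
    return e / e.sum()

rng = np.random.default_rng(0)
J, M = 4, 8                          # alternatives and depth (illustrative)
V = rng.normal(size=J)               # linearized utilities V = f(beta, x)
theta = rng.normal(scale=0.1, size=(M, J, J))

# MLP: h_m = f_m(h_{m-1}); repeated non-linear transforms overwrite V
h = V.copy()
for m in range(M):
    h = np.tanh(theta[m] @ h)        # generic non-linear activation
p_mlp = softmax(h)

# ResNet: h_m = f_m(h_{m-1}) + h_{m-1}; the identity shortcut preserves
# the signal from V across all M layers
h = V.copy()
for m in range(M):
    h = np.tanh(theta[m] @ h) + h
p_resnet = softmax(h)
\end{verbatim}
The only structural difference between the two loops is the additive shortcut, which is what allows the input utilities to survive an arbitrary depth.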
\section{Specification of the ResLogit choice model} \label{sec:specification} Our proposed ResLogit choice model improves discrete choice estimation by incorporating a neural network based on the recent \textit{ResNet} architecture. \Cref{fig:comp3} shows a comparison between an MNL, an MLP and the proposed ResLogit model as simplified graphical models. The guiding principle of our ResLogit architecture is that it is much more efficient to model the unobserved heterogeneity using a neural network than to apply a neural network to the entire utility. \citet{sifringer2020enhancing} applied a similar concept for a Learning MNL (L-MNL) model, although using a fully connected neural network as a linear addition to the utility plus an unobserved error component. This ad-hoc approach divided the explanatory variables into two groups, where one group was used in the systematic linear utility and the other in the neural network capturing the average effects. In general, we specify the utility function as a sum of a deterministic component of observed characteristics and a neural network component that captures the unobserved heterogeneity in the choice process. Our approach's advantage is that the skip connections allow for a greater chance of identifiability in the estimation of each layer of the neural network. In contrast, the L-MNL model would still be vulnerable to the vanishing gradient problem. A utility $U_{int}$ is defined by a deterministic component $V_{int}$ and a random error component $\varepsilon_{int}$: \begin{equation} \label{eq:utility0} U_{int} = V_{int} + \varepsilon_{int} \end{equation} The deterministic component is a linear function of a vector of attributes $x_{nt}$ of a single alternative with a vector of estimated parameters $\boldsymbol\beta$. The most general expression of the Logit model, the Mother Logit model, introduces a random variable $g_{int}$ in the utility that is a function of all attributes of all choices\footnote{Note to readers that the subscript $i$ refers to the index of the alternative in this section and the following sections. It does not mean that $g_{int}$ contains only attributes from the $i^{th}$ alternative. We represent a function that depends solely on attributes of one alternative with an uppercase notation (e.g. $V$).}. Note that, in some cases, the random variable $g$ \emph{replaces} the deterministic part $V_{int}$ \citep{hess2018revisiting}. Our ResLogit model's utility takes the general expression of the Mother Logit model, with $g$ given by the output of the residual component. The utility $U_{int}$ of individual $n$ selecting choice $i$ in a choice task $t$, from a choice set of $J$ alternatives, with the residual component term is as follows: \begin{equation} \label{eq:utility1} U_{int} = V_{int} + g_{int} + \varepsilon_{int} \end{equation} \begin{figure}[!h] \centering \includegraphics[width=\textwidth]{figures/reslogit.pdf} \caption{Simplified graphical models. (a) A Multinomial Logit model. (b) An MLP network with 2 hidden layers. (c) The proposed ResLogit model with 2 residual layers. Here we show the models expressed as symbolic operators that compute each step from the input $x_{nt}$ to the output probabilities $y$. The graph operator $+$ computes $h^{(m)}=h^{(m-1)}+f(h^{(m-1)})$. We omit the ASC variables for brevity. (d) Representation of the L-MNL model used in \citet{sifringer2020enhancing}.} \label{fig:comp3} \end{figure} \noindent The utility is a linear function of the systematic observed component $V_{int}$, the residual component $g_{int}$, and an extreme value distributed error term $\varepsilon_{int}$ representing the remaining unobserved errors not captured by the neural network. $\B{V}_{nt}$ is a $J\times 1$ vector of utilities $V_{jnt}$ associated with each individual $n$ for choice task $t$: \begin{equation} \B{V}_{nt} = \bordermatrix[{[]}]{ & \cr & V_{1nt} \cr & V_{2nt} \cr & \vdots \cr & V_{Jnt} }_{J\times 1} \end{equation} \noindent and $\B{g}_{nt}$ is a $J\times 1$ vector of residual components $g_{jnt}$ associated with the respective utility $j$ that contains all attributes from all alternatives: \begin{equation} \B{g}_{nt} = \bordermatrix[{[]}]{ & \cr & g_{1nt} \cr & g_{2nt} \cr & \vdots \cr & g_{Jnt} }_{J\times 1} \end{equation} 
\noindent Eq. \ref{eq:utility1} leads to the choice probability $y_i=f_i(\B{V},\B{g})$ for $i \in \{1,...,J\}$: \begin{equation} \label{eq:condprob1} P(i) = y_i = \frac{\exp(V_{int} + g_{int})}{\sum_{j\in\{1,...,J\}}\exp(V_{jnt} + g_{jnt})}\hspace{1em}\forall i \in \{1,...,J\} \end{equation} \noindent where: \begin{equation} \label{eq:utility_logsum} \B{g}_{nt} = -\sum_{m=1}^M \ln\left(1+\exp(\theta^{(m)}\B{h}_{nt}^{(m-1)})\right) \end{equation} \begin{equation} \B{h}_{nt}^{(0)}=\B{V}_{nt} \end{equation} \noindent For any block $m$: \begin{equation} \B{h}_{nt}^{(m)} = \B{h}_{nt}^{(m-1)} - \ln\left(1+\exp(\theta^{(m)}\B{h}_{nt}^{(m-1)})\right), \hspace{1em}\textrm{for}\hspace{1em}m=1,...,M \end{equation} \noindent and $\theta^{(m)}$ is a $J\times J$ matrix of residual parameters: \begin{equation} \theta^{(m)} = \bordermatrix[{[]}]{ & & & & \cr & c_{11} & c_{12} & \dots & c_{1j'} \cr & c_{21} & c_{22} & & \vdots \cr & \vdots & & \ddots & \vdots \cr & c_{j1} & \dots & \dots & c_{jj'} }_{J\times J}\hspace{1em}\textrm{for}\hspace{1em}m=1,...,M \end{equation} \noindent where $c_{jj'}$ is the parameter matrix element for the $j^{th}$ row and $j'^{\textrm{ }th}$ column, and $\B{h}_{nt}^{(m)}$ is a $J\times 1$ vector of non-linear utility components for the $m^{th}$ residual layer: \begin{equation} \B{h}_{nt}^{(m)} = \bordermatrix[{[]}]{ & \cr & h_{1nt}^{(m)} \cr & h_{2nt}^{(m)} \cr & \vdots \cr & h_{Jnt}^{(m)} }_{J\times 1}\hspace{1em}\textrm{for}\hspace{1em}m=1,...,M \end{equation} The parameter matrices are defined such that the dimension of the residual output $\B{g}_{nt}$ matches the dimension of $\B{V}_{nt}$ for an element-wise additive operation. We can have several intermediate neural network layers of varying sizes within each residual layer, which is one of the conveniences of the neural network architecture. $\theta^{(m)}$ serves as the matrix of similarity or cross-effect factors in the utility function. The chosen alternative's utility is increased or decreased by its degree of similarity with the other non-chosen alternatives through this factor. From the MNL perspective, this corresponds to shifting the vector of utilities by $\B{g}_{nt}$. If the cross-effect factors are zero, i.e. $\theta^{(m)}=0$ for all $m$, then every utility is shifted by the same constant ($-M\ln 2$), which cancels in the choice probabilities, and the model falls back to an MNL model. Another observation is that the choice probability is conditional on the expectation of the output of the residual terms: \begin{equation} \B{Q}_{nt}^{(m)}=\frac{1}{1+\exp(\theta^{(m)}\B{h}_{nt}^{(m-1)})},\hspace{1em}\textrm{s.t.}\hspace{1em}\B{Q}_{nt}^{(m)}\geq 0,\hspace{1em}\textrm{for}\hspace{1em}m=1,...,M \end{equation} \noindent and if we assume that $\B{Q}_{nt}^{(m)}=\{Q_{jnt}^{(m)}\}$ for $j\in\{1,...,J\}$ is a vector of probabilities, we can rewrite the ResLogit formulation in \Cref{eq:condprob1} as a conditional choice probability: \begin{equation} P(i) = y_i = \frac{\left(\prod_m Q_{int}^{(m)}\right) \exp(V_{int})}{\sum_{j\in\{1,...,J\}} \left(\prod_m Q_{jnt}^{(m)}\right) \exp(V_{jnt})}\,,\hspace{1em}\forall i \in \{1,...,J\} \end{equation} A minimal numerical sketch of this forward computation is given below. 
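As an illustration, the following Python/NumPy sketch computes $\B{g}_{nt}$ and the ResLogit probabilities of Eq.~\eqref{eq:condprob1} for one observation; the utilities and residual parameters are randomly generated placeholders, not estimates:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
J, M = 3, 4                           # alternatives, residual layers
V = rng.normal(size=J)                # systematic utilities V_nt
theta = rng.normal(scale=0.1, size=(M, J, J))

# Recursion: h^(m) = h^(m-1) - ln(1 + exp(theta^(m) h^(m-1))), h^(0) = V
h, g = V.copy(), np.zeros(J)
for m in range(M):
    step = np.log1p(np.exp(theta[m] @ h))   # logsum term of layer m
    g -= step                               # g_nt accumulates -logsum terms
    h -= step                               # skip connection keeps h = V + g

p = np.exp(V + g) / np.exp(V + g).sum()     # ResLogit choice probabilities
assert np.allclose(h, V + g)                # consistency of the recursion
\end{verbatim}
The final assertion makes explicit that the layer recursion and the summed residual expression in Eq.~\eqref{eq:utility_logsum} are two views of the same quantity.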
The residual component (\Cref{eq:utility_logsum}) derives from the entropy, or expected surplus, function of the respective residual layers, and the corresponding logsum term is the log of the Logit probability denominator. Behaviour modelling uses entropy to measure the variation or accessibility of a specific choice \citep{erlander2010cost}. For example, \citet{mattsson2002probabilistic} characterized such a formulation as maximization of the sum of the expected utility and a weighted entropy. \citet{anas1983discrete} postulated that the entropy principle in choice models corresponds to how much information-seeking behaviour is used to find the ``best'' utility specification. \citet{fosgerau2017discrete} and \citet{matejka2015rational} also illustrated the affinity to generalized bounded rationality and the duality between discrete choice and rational inattention behaviour. Consequently, information cost acts as a barrier between prior beliefs and decision-making actions, which results in choice heterogeneity. An agent optimizes his or her desired outcome by minimizing this information cost \citep{matejka2015rational}. Our ResLogit model aims to extend this concept by allowing for a data-driven surplus expression in the utility function (presented in \Cref{eq:utility1}) to emulate the decision-makers' learning process. \subsection{Depth of the neural network} \label{subsec:depth} Increasing the depth of the neural network increases the number of additive residual terms in the utility function. The residual layers represent the underlying unobserved behaviour distribution that is not captured by the explanatory variables. This mathematical formulation allows the model to reflect individual taste heterogeneities in the non-linear residual function. Unlike a typical MLP model or the recently developed Learning-MNL model \citep{sifringer2020enhancing}, training a ResLogit model does not suffer from the vanishing gradient problem. This eliminates the singularities caused by model non-identifiability. The key implication of this property for choice modelling is that we can operationalize the learning behaviour as a function in the utility while retaining the same econometric parameters in the structural equation. \subsection{Estimation approach} \label{subsec:estimation_approach} The estimation procedure is a data-driven first-order stochastic gradient descent (SGD) learning algorithm, and we evaluate the performance on an out-of-sample validation set. In data-driven optimization, we maximize a performance measure (e.g. out-of-sample performance) by indirectly optimizing a different surrogate objective function (e.g. maximizing the log-likelihood of the training data). We typically assume that the out-of-sample dataset is independent and identically distributed with respect to the training dataset. In contrast, pure optimization of discrete choice models directly maximizes the likelihood objective function, which is a goal in itself. This method of estimating a large number of parameters has proven efficient in machine learning. In some cases, a surrogate objective function approach may result in a faster and better solution \citep{goodfellow2016deep}. Other pre-conditioning methods or extensions can also be implemented in the surrogate objective function, allowing it to reach multiple local optimum points and providing a regularizing effect. For example, such pre-conditioning includes adding momentum, adaptive learning rate methods or gradient noise normalization; see \citet{ruder2016overview} for an overview of such methods. Another important difference is that in data-driven optimization the final convergence criteria are based on the performance measure, not on the surrogate objective function. This approach enables the algorithm to terminate when overfitting begins to occur (early-stopping criteria). 
The estimation reaches convergence when the objective function no longer improves. For this reason, a data-driven approach is more suitable for estimating our ResLogit model, since a pure optimization approach would run into model non-identifiability issues due to the large number of estimated parameters. \subsubsection{Objective function and parameter updates} \label{subsec:objective_func} The set of optimal parameters $\theta$ and $\BS{\beta}$ is estimated by maximizing the log-likelihood, where the log-likelihood is as follows: \begin{equation} LL(\theta,\BS{\beta}) = \sum_{n=1}^N\ln P(i_n|\B{x}_n;\theta,\BS{\beta}). \end{equation} \noindent Defining the mini-batch loss $\mathcal{J}_{\mathcal{B}}$ as the negative mean log-likelihood over a batch, the mini-batch SGD algorithm performs the following update rule on each iteration $t$: \begin{align} \theta_{t+1} &= \theta_{t} - \eta_t \nabla_{\theta} \mathcal{J}_{\mathcal{B}}(\theta,\BS{\beta}),\\ \BS{\beta}_{t+1} &= \BS{\beta}_{t} - \eta_t \nabla_{\BS{\beta}} \mathcal{J}_{\mathcal{B}}(\theta,\BS{\beta}), \end{align} \noindent where: \begin{align} \nabla_{\theta}\mathcal{J}_{\mathcal{B}}(\theta,\BS{\beta}) = -\frac{1}{K} \sum_{n'\in\mathcal{B}} \nabla_{\theta}LL_{n'}(\theta,\BS{\beta}),\\ \nabla_{\BS{\beta}}\mathcal{J}_{\mathcal{B}}(\theta,\BS{\beta}) = -\frac{1}{K} \sum_{n'\in\mathcal{B}} \nabla_{\BS{\beta}}LL_{n'}(\theta,\BS{\beta}), \end{align} \noindent and $K$ is the batch size, $\mathcal{B}$ is a batch of observations sampled from $\B{x}_n$, $n'$ denotes an observation in the batch and $\eta_t$ is the learning rate. We can regard $\nabla\mathcal{J}_{\mathcal{B}}(\theta,\BS{\beta})$ as a noisy estimate of the full-sample gradient of $LL(\theta,\BS{\beta})$ (up to sign). We sample from the training set and adjust the $\BS{\beta}$ and $\theta$ parameters to reduce the training error; we then monitor the error on the validation set by sampling from the validation dataset. The goal of the optimization is to reduce the validation error while also reducing the difference between the training and validation error. This can also be achieved by taking the model at the maximum log-likelihood of the validation dataset, under the assumption that the estimation on the training dataset is asymptotic as the number of iterations over the samples $N\rightarrow \infty$. A schematic implementation of one mini-batch update loop is sketched below. 
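For concreteness, a minimal Python sketch of the update rule follows; the \texttt{grad\_LL} routine is a hypothetical stand-in for the backpropagated log-likelihood gradients, and all names are illustrative:
\begin{verbatim}
import numpy as np

def sgd_epoch(params, data, grad_LL, eta=0.01, K=64, rng=None):
    """One epoch of mini-batch SGD on the negative log-likelihood loss.

    params : dict of arrays, e.g. {'beta': ..., 'theta': ...}
    grad_LL: function(params, batch) -> dict of per-batch mean gradients
             of the log-likelihood (stand-in for backpropagation).
    """
    rng = rng or np.random.default_rng()
    idx = rng.permutation(len(data))
    for start in range(0, len(data), K):
        batch = data[idx[start:start + K]]
        grads = grad_LL(params, batch)     # mean dLL/dparam over the batch
        for name in params:                # loss = -LL, so descend on -grads,
            params[name] += eta * grads[name]   # i.e. ascend the LL
    return params
\end{verbatim}
In practice, early stopping would wrap such epochs, retaining the parameters observed at the minimum validation loss.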
The derivatives of the estimated parameters are computed using backpropagation \citep{goodfellow2016deep}. Given the ResLogit formulation and taking the backpropagation from the output log-likelihood, the derivative of the log-likelihood with respect to $\BS{\beta}$ is: \begin{equation} \frac{\partial LL}{\partial \BS{\beta}} = \frac{\partial LL}{\partial \B{V}}\frac{\partial \B{V}}{\partial \BS{\beta}} + \frac{\partial LL}{\partial \B{h}^{(M)}}\frac{\partial \B{h}^{(M)}}{\partial \BS{\beta}} + \frac{\partial LL}{\partial \B{h}^{(M-1)}}\frac{\partial \B{h}^{(M-1)}}{\partial \BS{\beta}} + \ldots + \frac{\partial LL}{\partial \B{h}^{(1)}}\frac{\partial \B{h}^{(1)}}{\partial \BS{\beta}} \label{eq:bprop} \end{equation} Eq. \ref{eq:bprop} shows that, by the nature of the residual connections, the derivative through each residual layer is computed independently. This prevents the phenomenon known as the vanishing gradient. If any one of the gradients is computed to be zero, it does not zero out the total backpropagated value, and the $\BS{\beta}$ parameters can still be updated. This allows the ResLogit to converge to an optimal MNL solution, even with non-identifiable residual layers. In contrast, with a fully connected MLP model, the gradient formulation is the result of a chain rule: \begin{equation} \label{eq:chainrule} \frac{\partial LL}{\partial \BS{\beta}} = \frac{\partial LL}{\partial \B{h}^{(M)}}\frac{\partial \B{h}^{(M)}}{\partial \B{h}^{(M-1)}}...\frac{\partial \B{h}^{(1)}}{\partial \B{V}}\frac{\partial \B{V}}{\partial \BS{\beta}} \end{equation} In Eq. \ref{eq:chainrule}, if any of the intermediate derivatives is zero, then the total derivative is zero, the model fails to learn and update $\BS{\beta}$, and the result is model non-identifiability. The number of residual parameters scales with the number of alternatives in the choice set. Each element in the matrix corresponds to the cross-effects of the other alternatives on the chosen alternative. The diagonal elements in the matrix are the cross-effects of an alternative with itself, i.e. a scale factor adjustment. If this residual matrix is an identity matrix, there are no cross-effects induced between alternatives (IIA holds), and the model collapses into a standard MNL model. A small numerical sketch contrasting the two gradient formulations is given below. 
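The following toy Python sketch caricatures the difference between Eqs.~\eqref{eq:bprop} and \eqref{eq:chainrule}: a product of many small layer derivatives vanishes, while a sum of the same terms does not. The derivative magnitudes are arbitrary placeholders, not actual ResLogit gradients:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(42)
layer_derivs = rng.uniform(0.0, 0.5, size=16)  # |dh_m/dh_{m-1}| < 1 per layer

mlp_grad = np.prod(layer_derivs)      # chain rule: product of small factors
reslogit_grad = np.sum(layer_derivs)  # residual terms contribute additively

print(mlp_grad)       # vanishingly small: the MLP signal to beta dies out
print(reslogit_grad)  # O(1): beta still receives a usable gradient
\end{verbatim}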
\section{Red/Blue bus theoretical example} \label{sec:redblue} We show how a simple nesting structure can be obtained using the ResLogit formulation in a hypothetical scenario. Let us consider the red/blue bus problem, a classic example of a violation of the IIA property in choice models. The problem arises from the assumption that the error terms for the red and blue bus options are independent, whereas in reality they are correlated and share similar decision attributes. This means that a change in the utility of the red bus will influence the utility of the blue bus. Choice modellers often address this by using a Nested Logit model, which relaxes the IIA assumption through a conditional probability or logsum term. The choice scenarios are summarized in \Cref{tab:choice_scenarios}. \subsection{Scenario description} \label{subsec:redblue_scenario} In the first scenario (Scenario 1), assume that we have a vector of two choices in a choice task $t$, $\B{V}=\{ V_{car}, V_{bus}\}$, where each alternative has the same utility: $V_{car}=1$, $V_{bus}=1$. Under strict IID assumptions, the probability of choosing either bus or car is therefore $P_{car}=P_{bus}=0.5$. In the second scenario (Scenario 2), suppose that we now have a red bus $(V_{red\_bus})$ and a blue bus $(V_{blue\_bus})$ option in place of $V_{bus}$, i.e. $\B{V}=\{V_{car}, V_{red\_bus}, V_{blue\_bus}\}$. The utility of each alternative does not change, and all three alternatives have the same utility: $V_{car}=1$, $V_{red\_bus}=1$, $V_{blue\_bus}=1$. Behaviourally, splitting the bus into two identical options should not change the probability of choosing car, so the plausible probabilities are: $P_{car}=0.5$, $P_{red\_bus}=0.25$, and $P_{blue\_bus}=0.25$. However, an MNL model, which imposes IIA, would yield $P_{car}=0.33$, $P_{red\_bus}=0.33$, and $P_{blue\_bus}=0.33$, which does not seem plausible and illustrates the consequences of the IIA property under correlated alternatives. In the third scenario (Scenario 3), under our proposed ResLogit model, the correlation between the red and blue bus is corrected by a residual vector $\B{g}$, with residual parameter matrix $\theta^{(1)}$. Using a 1-layer ResLogit model and a residual vector function defined by $\B{g}=-\ln(1+\exp(\theta^{(1)}\B{V}))$, we simulate a choice scenario with the alternatives \textit{car}, \textit{red bus} and \textit{blue bus}. We assume that a value of $1$ represents a positive cross-effect, a value of $-1$ denotes a negative cross-effect and a value of $0$ represents no cross-effects (the IIA property holds). A negative cross-effect between the car and bus options may suggest that the alternatives are competing options (e.g. buses and cars sharing the same road segment). We assign a value of $1$ to elements $c^{(1)}_{32}$ and $c^{(1)}_{23}$ and a value of $-1$ to elements $c^{(1)}_{12}$, $c^{(1)}_{21}$, $c^{(1)}_{13}$ and $c^{(1)}_{31}$: \begin{equation} \label{eq:matrix01} \theta^{(1)} = \bordermatrix[{[]}]{ & & & \cr & c_{11} & c_{12} & c_{13} \cr & c_{21} & c_{22} & c_{23} \cr & c_{31} & c_{32} & c_{33} } = \bordermatrix[{[]}]{ & & & \cr & 0 & -1 & -1 \cr & -1 & 0 & 1 \cr & -1 & 1 & 0 }. \end{equation} Given a $3\times 1$ vector of utilities $\B{V}=\begin{bmatrix}1&1&1\end{bmatrix}^{\top}$, the residual vector $\B{g}$ is: \begin{align} \B{g} &= -\ln\left( 1 + \exp(\theta^{(1)}\B{V}) \right), \\ &= -\ln\left( 1 + \exp\Big( \begin{bmatrix} 0 & -1 & -1 \\ -1 & 0 & 1 \\ -1 & 1 & 0 \end{bmatrix}\cdot \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix} \Big) \right), \\ &= \begin{bmatrix} -0.127 \\ -0.693 \\ -0.693 \end{bmatrix}, \end{align} \noindent giving the choice probabilities as: \begin{align} \begin{split} \label{eq:redblue_result} P(i) = \frac{\exp(V_i + g_i)}{\sum_{j\in C} \exp(V_j + g_j)} \hspace{1em} \textrm{for} \hspace{1em} i \in \textrm{\textit{car, red bus, blue bus}}\\ P(\textrm{\textit{car}}) = 0.468; \hspace{1em} P(\textrm{\textit{red bus}}) = 0.265; \hspace{1em}P(\textrm{\textit{blue bus}}) = 0.265 \end{split} \end{align} The probabilities in \Cref{eq:redblue_result} show that, with the addition of the residual matrix to account for the cross-effects, we have moved the choice probabilities of the car and red/blue bus options toward the plausible relaxed-IID values without changing the underlying utilities. Now, if we assume no cross-effects between the car and bus alternatives (car and buses do not share the same road segment), we update \Cref{eq:matrix01} with values of $0$ for the parameters $c^{(1)}_{12}$, $c^{(1)}_{21}$, $c^{(1)}_{13}$ and $c^{(1)}_{31}$: \begin{equation} \label{eq:matrix02} \theta^{(1)} = \bordermatrix[{[]}]{ & & & \cr & c_{11} & c_{12} & c_{13} \cr & c_{21} & c_{22} & c_{23} \cr & c_{31} & c_{32} & c_{33} } = \bordermatrix[{[]}]{ & & & \cr & 0 & 0 & 0 \cr & 0 & 0 & 1 \cr & 0 & 1 & 0 }. 
\end{equation} \noindent The resulting residual vector would be: \begin{equation} \B{g} =\begin{bmatrix} -0.693 \\ -1.313 \\ -1.313 \end{bmatrix}, \end{equation} \noindent giving the choice probabilities as: \begin{align} \begin{split} \label{eq:redblue_result2} P(i) = \frac{\exp(V_i + g_i)}{\sum_{j\in C} \exp(V_j + g_j)}, \hspace{1em} \textrm{for} \hspace{1em} i \in \textrm{\textit{car, red bus, blue bus}}\\ P(\textrm{\textit{car}}) = 0.482; \hspace{1em} P(\textrm{\textit{red bus}}) = 0.259; \hspace{1em}P(\textrm{\textit{blue bus}}) = 0.259 \end{split} \end{align} \begin{table}[!t] \caption{Illustration of the red/blue bus choice scenarios showing the effect of the residual correction factors of a 1-layer model.} \label{tab:choice_scenarios} \centering \begin{tabu}{X[1] X[1,r] X[1,r] X[1.5,r] X[1,r]} \toprule Choice & $V_i$ & $g_i$ & $\exp(V_i + g_i)$ & $P(i)$ \\ \midrule \multicolumn{5}{l}{Scenario 1} \\ car & 1 & - & 2.718 & 0.5 \\ bus & 1 & - & 2.718 & 0.5 \\ \midrule \multicolumn{5}{l}{Scenario 2} \\ car & 1 & - & 2.718 & 0.33 \\ red bus & 1 & - & 2.718 & 0.33 \\ blue bus & 1 & - & 2.718 & 0.33 \\ \midrule \multicolumn{5}{l}{Scenario 3 (competing car/bus)} \\ car & 1 & -0.127 & 2.394 & 0.468 \\ red bus & 1 & -0.693 & 1.359 & 0.265 \\ blue bus & 1 & -0.693 & 1.359 & 0.265 \\\\[-1em] \multicolumn{5}{l}{Scenario 3 (non-competing car/bus)} \\ car & 1 & -0.693 & 1.359 & 0.482 \\ red bus & 1 & -1.313 & 0.731 & 0.259 \\ blue bus & 1 & -1.313 & 0.731 & 0.259 \\ \bottomrule \end{tabu} \end{table} In principle, the nests between the car and the bus options are not pre-specified \textit{a priori} by the modeller. The parameter matrix is estimated from data and defines the nesting structure or error term correlation of the choice alternatives. The first observation from the hypothetical example above is that a positive cross-effect residual parameter between the two bus alternatives ($c_{bus,bus}=1$) together with a zero cross-effect residual parameter between the car and bus alternatives ($c_{bus,car}=0$) results in a nesting structure that reflects the relaxed-IID probabilities. The second observation stems from the correlations between the error terms of competing alternatives. If the residual parameters are negative, they account for competing alternatives (e.g. buses and cars share the same road segment from the origin to the destination), resulting in a slightly different outcome than in a non-competing scenario. The calculations above can be reproduced with the short sketch that follows. 
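A minimal Python/NumPy sketch reproducing the Scenario 3 numbers (our own check, not part of the estimation code released with this paper):
\begin{verbatim}
import numpy as np

V = np.ones(3)                                  # car, red bus, blue bus
competing = np.array([[ 0, -1, -1],
                      [-1,  0,  1],
                      [-1,  1,  0]])            # theta of Eq. (matrix01)
non_competing = np.array([[0, 0, 0],
                          [0, 0, 1],
                          [0, 1, 0]])           # theta of Eq. (matrix02)

for theta in (competing, non_competing):
    g = -np.log1p(np.exp(theta @ V))            # 1-layer residual vector
    P = np.exp(V + g) / np.exp(V + g).sum()
    print(np.round(g, 3), np.round(P, 3))
# -> g = [-0.127 -0.693 -0.693], P = [0.468 0.266 0.266]
# -> g = [-0.693 -1.313 -1.313], P = [0.482 0.259 0.259]
# matching Table (choice scenarios) up to rounding
\end{verbatim}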
\section{Case study} \label{sec:casestudies} This study evaluates our proposed ResLogit model's effects and performance on three criteria: model depth, model degradation, and predictive performance compared with an MLP neural network. We also evaluate the residual effects on the econometric parameters by comparing the beta and standard error values with a baseline MNL model without the residual layers. We evaluate the ResLogit model's performance using individual characteristics and attributes in a revealed preference (RP) travel survey dataset, using out-of-sample accuracy at the minimum validation loss point on the validation curve. We computed the accuracy using a 30\% hold-out validation set from our dataset. To address the adverse impact of model degradation from increasing layers, we compared the degradation effects between our ResLogit and a vanilla MLP model with identical model hyperparameters. We show the effects of increasing the number of layers in the ResLogit and MLP models on estimation accuracy and model identifiability. \subsection{Data and model description} \label{subsec:mtltrajet} We used the 2016 \textit{Mtl Trajet} RP dataset, collected from users' smartphones through a mobile application \citep{yazdizadeh2017generic}. A list of the explanatory variables and the choice set used for this mode choice prediction analysis is shown in \Cref{tab:mtltrajet}. The respondents' travel diary includes mode choice, activity choice, trip attributes (e.g. trip length, start/end time, location) and GPS trajectories. The travel survey was conducted over four months, from September to December 2016. In total, there were 60,365 unique trips made during the period. To evaluate out-of-sample performance, we divide the dataset into two sets using a 70:30 training/validation split ($N_{training}=42,256$ samples, $N_{validation}=18,109$ samples). We developed the model estimation algorithm using open-source deep learning libraries in Python. The code for our experiments is available on our Github page\footnote{\url{https://github.com/LiTrans/reslogit-example}}. \begin{table}[!t] \centering \caption{Descriptive variables of the dataset.} \label{tab:mtltrajet} \begin{tabu}{X[0.4,l] X[0.8,l] X[0.45,l] X[0.15,r] X[0.25,r]} \toprule variable & description & type & mean & std dev \\ \midrule weekend & trip on weekend & dummy variable & 0.205 & 0.001 \\ hour\_8\_10 & trip between 8am and 10am & dummy variable & 0.163 & 0.0015 \\ hour\_11\_13 & trip between 11am and 1pm & dummy variable & 0.147 & 0.001 \\ hour\_14\_16 & trip between 2pm and 4pm & dummy variable & 0.209 & 0.002 \\ hour\_17\_19 & trip between 5pm and 7pm & dummy variable & 0.249 & 0.002 \\ hour\_20\_22 & trip between 8pm and 10pm & dummy variable & 0.095 & 0.001 \\ hour\_23\_1 & trip between 11pm and 1am & dummy variable & 0.03 & 6e-4 \\ hour\_2\_4 & trip between 2am and 4am & dummy variable & 0.006 & 3e-4 \\ hour\_5\_7 & trip between 5am and 7am & dummy variable & 0.101 & 0.005 \\ num\_coord & number of trajectory links & continuous & 109.8 & 131.23 \\ trip\_dist & trip distance (km) & continuous & 8.366 & 10.42 \\ trip\_duration & trip duration (min) & continuous & 24.04 & 20.97 \\ trip\_avgspeed & trip average speed (km/h) & continuous & 22.503 & 18.815 \\[1mm] activity & trip activity type: \{1: education, 2: health, 3: leisure, 4: meal, 5: errands, 6: shopping, 7: home, 8: work, 9: meeting\} & categorical \\[1mm] \midrule choice alternatives & \multicolumn{4}{l}{1: Auto, 2: Bike, 3: Public Transit, 4: Walk, 5: Auto+Transit, } \\ & \multicolumn{4}{l}{6: Other mode, 7: Other combination} \\ \bottomrule \end{tabu} \end{table} We iterated over the experiment by varying the depth of the ResLogit and MLP neural networks using 2, 4, 8 and 16 hidden layers $(M=\{2,4,8,16\})$. Note that our study only shows a relative comparison between models with a similar number of layers and identical neural network hyperparameters. The objective of this experiment is to show the effectiveness of the ResLogit approach as a way of incorporating deep learning methods into discrete choice models, relative to a conventional MLP neural network. This experiment considers three specific objectives: \begin{enumerate} \item Effects of the number of residual layers on the model $\beta$ parameters. \item Model validation accuracy and maximum log-likelihood estimation comparison. \item Comparison of the estimated $\beta$ parameters between the ResLogit model and the MNL model. \end{enumerate} The model estimation process begins with a baseline MNL estimation. 
Next, the MLP models were estimated (four models, one each for 2, 4, 8 and 16 hidden layers), labelled MLP-2, MLP-4, MLP-8 and MLP-16, respectively. We performed the same training process on the ResLogit models (RL-2, RL-4, RL-8, RL-16). We used the mini-batch SGD learning algorithm with a mini-batch size of 64 (i.e. the gradient is computed over a sample of 64 observations from the training dataset) to train our models, applying an RMSprop optimization step \citep{goodfellow2016deep}. The ResLogit model residual parameters are initialized with an identity matrix. Once the models have been trained, we take the best-specified model at the minimum validation loss point and compute the prediction accuracy on the validation dataset using those model parameter values. \subsection{Analysis of model results} \Cref{fig:curves_resnet_vs_mlp} and \Cref{fig:loss_curves_resnet_vs_mlp} report the validation results of the MLP and ResLogit models with a baseline comparison to an MNL model (red line). A condensed version of the estimated $\beta$ parameters of the MNL and ResLogit models is presented in \Cref{tab:mnl_residual}, where we compare our best estimated ResLogit structure (RL-16) with the MNL model. \Cref{fig:resmat} shows the parameters of the first four residual layers. \subsubsection{Performance measure on out-of-sample data} \Cref{fig:curves_resnet_vs_mlp} shows the validation curves of the model log-likelihood. The x-axis represents the iteration step, and the y-axis reports the log-likelihood. The MNL curve indicates the baseline performance, with no augmentation to the utility or model. The plot on the left shows the comparison between the MNL and MLP models. This result indicates that the MLP model performs \textit{worse} than the MNL model. The only change between the MLP and ResLogit experiments is the model structure. Therefore the improvement is most likely attributable to the change in model structure, and not to other hyperparameters\footnote{It is also plausible that an MLP will do better than or equivalent to a Logit model, and sometimes an MLP can perform worse than a Logit model (on this particular class of problems, for example). This can be explained by the ``No Free Lunch'' theorem \citep{kawaguchi2017generalization}: ``If an algorithm performs well on a certain class of problems, then it necessarily pays for that with degraded performance on the set of all remaining problems.'' \citep[Theorem 1]{wolpert1997no}.}. MLP-2 also took twice as long to reach the maximum log-likelihood (400 iterations vs 200 for the MNL model). The MLP models (MLP-4, MLP-8 and MLP-16) produced significantly noisier output in the backpropagation step of SGD, which causes the ``spikes'' seen in the left plot. There were also identifiability problems with the MLP-4, MLP-8 and MLP-16 models. Since the MLP-4, MLP-8 and MLP-16 models were misspecified, they could not reach the same log-likelihood performance as the MNL model. This result shows that adding neural network layers does not guarantee better performance and that a simple MNL can potentially outperform a DNN, which is in line with our initial hypothesis. We observed that as we increase the depth of the ResLogit models (\Cref{fig:curves_resnet_vs_mlp}, right), the log-likelihood remains consistent and outperforms the baseline MNL. 
Although we are using the same number of parameters and the same learning algorithm, the ResLogit method generated correctly specified models while the MLP models were misspecified. Model specification testing is handled by out-of-sample validation analysis and econometric interpretation of the $\beta$ parameters (explained in the following sections). We note that, for experiment consistency, we did not implement any other form of regularization, e.g. $L_1$ or $L_2$ regularizers or Dropout techniques. An alternative approach to model selection for more complex data, where there are many unknown variables, is to use a statistical measure such as the Akaike Information Criterion (AIC). The AIC statistics calculated for the MNL, MLP-16 and RL-16 models are 32566, 34902 and 28086, respectively. \begin{figure}[!h] \centering \includegraphics[width=\textwidth]{figures/validation_curves_highdpi.pdf} \caption{Validation log-likelihood results of the model estimation.}\label{fig:curves_resnet_vs_mlp} \end{figure} \begin{figure}[!t] \centering \includegraphics[width=\textwidth]{figures/loss_curves_highdpi.pdf} \caption{Validation loss comparison between the MLP models and the ResLogit models.}\label{fig:loss_curves_resnet_vs_mlp} \end{figure} \Cref{fig:loss_curves_resnet_vs_mlp} shows the validation error curves for both models. The error is defined as (1 $-$ \textit{mean prediction accuracy}), where the \textit{mean prediction accuracy} is:
\begin{equation}
\mathcal{L}_n(i,i^*)=
\begin{cases}
1 & i=i^*\\
0 & i\neq i^*
\end{cases}
\qquad i,i^* \in \mathcal{D}_{validation}
\end{equation}
\begin{equation}
\textrm{\textit{mean prediction accuracy}} = \frac{1}{N_{validation}}\sum_{n=1}^{N_{validation}} \mathcal{L}_n(i,i^*)
\end{equation}
\noindent where $i$ is the actual choice, $i^*$ is the predicted choice, $\mathcal{L}_n(i,i^*)$ is the 0-1 loss function and $\mathcal{D}_{validation}$ is the validation dataset. The stability of convergence shows no strong overfitting bias during the estimation process. Among the MLP curves in the left plot, the model with the smallest error is the one with the fewest hidden layers, but only after iteration 400, with the MNL model having the second-lowest error. We can see that the error reaches a saturation point around 0.3 for MLP-2, with a negligible decrease from MLP-4 to MLP-16. This makes sense because the non-linear structure of the multi-layered neural network is susceptible to the vanishing gradient problem observed in this figure. The results are more pronounced when we compare the MLP with the ResLogit model (\Cref{fig:loss_curves_resnet_vs_mlp}, right). In the MLP plot, we observe that the learning gets trapped at a locally optimal point. The difference is minimal with two layers, as we expected, but the gap between the MLP and ResLogit models becomes more pronounced as the number of layers increases. In the right plot of \Cref{fig:loss_curves_resnet_vs_mlp}, the loss gets progressively smaller as we increase the number of residual layers, which is consistent and follows a logical pattern. Even with RL-2, the error drops significantly faster, and the model achieves a lower error than the MNL model as soon as the estimation starts. This suggests that neural networks are better suited to capturing the error distribution than to acting as a transformative operator on the explanatory variables.
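For reference, the error metric above reduces to the following short computation (a sketch; the array names are illustrative):
\begin{verbatim}
import numpy as np

def validation_error(actual, predicted):
    # mean prediction accuracy = mean of the 0-1 indicator over the
    # validation set; error = 1 - mean prediction accuracy
    return 1.0 - np.mean(np.asarray(actual) == np.asarray(predicted))
\end{verbatim}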
\subsection{Model coefficient estimates} \Cref{tab:mnl_residual} presents the coefficient estimates, standard errors and robust standard errors of the observed explanatory variables for the MNL and RL-16 models. The parameter estimates indicate the individuals' exhibited preferences for each attribute of each alternative. The results show that, in the ResLogit model, individuals exhibited a stronger preference for transit when the trip time is longer, relative to the MNL model. Individuals also prefer a longer route for transit compared to auto according to the ResLogit model. In contrast, the MNL estimates show that individuals prefer a longer route when taking auto over transit. There are specific indicators which are captured by the ResLogit model and not by the MNL model. For instance, on weekends, people in Montreal use their car more to do shopping, recreation, visit their parents in the suburbs, go to the cottage, etc. Therefore, the ResLogit model gives a positive sign for car on the weekend compared to other modes. Another example is that during the morning rush hour (8-10), people commute and there is a higher chance that they take auto+transit (due to the availability of a large amount of parking at stations) to reach their office. This fact is captured only by the ResLogit model.

Standard errors can be calculated through the Fisher Information Matrix, requiring only the Hessian of the log-likelihood, which assumes a correctly specified model. Additionally, the correct specification assumption can be relaxed by computing the robust sandwich estimator. We calculate the standard errors as a function of the negative inverse of the Hessian matrix $\mathcal{H}$, which gives the variance-covariance matrix of $\beta$, assuming those estimates are normally distributed. This value gives the Cramer-Rao bound:
\begin{equation}
\hat{\Sigma}_\beta^{CR} = - \hat{\mathcal{H}}^{-1}
\end{equation}
The Hessian matrix is the second-order derivative of the log-likelihood with respect to the model parameters. Taking the square root of the diagonal of this variance-covariance matrix, normalized by the size of the dataset, we obtain the standard errors. The robust standard error $\hat{\Sigma}_\beta^{Rob.}$ is calculated by:
\begin{equation}
\hat{\Sigma}_\beta^{Rob.} = (- \hat{\mathcal{H}}^{-1})\hat{B}(- \hat{\mathcal{H}}^{-1}),
\end{equation}
where $\hat{B} = \sum_{n=1}^N\left(\frac{\partial LL_n}{\partial\beta}\right)\left(\frac{\partial LL_n}{\partial\beta}\right)^{\top}$.

In terms of coefficient significance, the ResLogit model has more parameters with a nominal p-value $<$ 0.05 than the MNL model. The standard error and robust standard error estimates show that the ResLogit estimates are more reliable than those of the MNL model. For the extreme cases, the parameter estimate for trip distance for walking showed the smallest value compared to other modes in both models, as expected, indicating that the results are consistent. The robust standard errors also show that some parameters are not significant; for instance, \textit{meeting activity-bike} has a high standard error when accounting for model misspecification. This is logical, as travelling to meetings by bicycle is not common. The estimates for \textit{hour (20-22)-bike} also indicate that this parameter is not significant; we can say that the hours between 8 pm and 10 pm do not impact the preference for the \textit{bike} mode.
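As a companion to the formulas above, a minimal numpy sketch of both estimators follows; \texttt{hessian} (the Hessian of the log-likelihood at the estimate) and \texttt{scores} (the stacked per-observation gradients $\partial LL_n/\partial\beta$) are assumed inputs, and the normalization by the dataset size follows the convention stated above:
\begin{verbatim}
import numpy as np

def standard_errors(hessian, n_obs):
    cov = -np.linalg.inv(hessian)            # Cramer-Rao bound
    return np.sqrt(np.diag(cov) / n_obs)

def robust_standard_errors(hessian, scores, n_obs):
    cov = -np.linalg.inv(hessian)
    b = scores.T @ scores                    # sum of outer products of scores
    sandwich = cov @ b @ cov                 # robust sandwich estimator
    return np.sqrt(np.diag(sandwich) / n_obs)
\end{verbatim}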
We caution readers that we can give no general guarantees about the precision of the standard errors or the asymptotic behaviour of the model fit for heavily biased models \citep{goeman2018l1}, such as those with the $L_1$ or $L_2$ regularization used in neural networks and other machine learning methods. Our ResLogit formulation reduces this bias in the model through the addition of residual layers that account for the systematic errors. Therefore, the robust standard errors that we report are reliable, but they only provide an approximation of model specification correctness and of the variance of the estimates.

\subsection{Analyzing cross-effects from the residual matrices} The cross-effects of non-chosen alternatives are reflected in \Cref{fig:resmat}. The figure shows the parameters of the first four residual layers of RL-16. The matrices' values correspond to the level of dependency between the utility of one alternative and the utility function of a second alternative, and vice versa. As explained in \Cref{sec:redblue}, this matrix defines the underlying error term correlations between the choice alternatives. For example, the value of transit-bike in \Cref{fig:resmat} (a) is positive, at 1.55. This means that the attributes of the transit mode positively influence individuals who choose the bike mode: increasing the utility of transit increases the mode share of bike. However, the reverse may not be identical. The value for bike-transit in \Cref{fig:resmat} (a) is $-$0.26, indicating that increasing the utility of bike (e.g. more bike infrastructure) decreases the mode share for transit. We may relate this observation to the shared infrastructure between auto and bike. The non-zero values indicate the existence of non-linear cross-effects in the observed choices. This analysis provides an estimate of the cross-effect influence between modes of travel. Nonetheless, this experiment has shown how the ResLogit formulation uses the residual function to enhance model performance. \begin{table}[!t] \centering \caption{Comparison of a subset of parameter estimates between MNL and ResLogit model.} \label{tab:mnl_residual} \begin{tabu}{X[2,l]X[1.5,l]X[1,r]X[1,r]X[1,r]X[1,r] X[1,r]X[1,r]} \toprule && \multicolumn{3}{l}{MNL} & \multicolumn{3}{l}{ResLogit (16-layer)} \\ Parameter ($\beta_{mj}$)& Choice & parameter & std. err. & rob. std. err. & parameter & std. err. & rob. std. err.
\\ \midrule weekend & auto & -0.057$^*$ & 0.036 & 0.386 & 0.045$^*$ & 0.006 & 1.157 \\ & bike & -0.990$^*$ & 0.081 & 7.335 & -0.448$^*$ & 0.063 & 7.566 \\ & transit & -0.751$^*$ & 0.042 & 1.569 & -0.090$^*$ & 0.007 & 0.089 \\ hour\_8\_10 & walk & -0.841$^*$ & 0.070 & 7.986 & -1.459 & 0.013 & 0.063 \\ & auto+transit & -2.273$^*$ & 0.121 & 15.005 & 1.162 & 0.032 & 0.230 \\ hour\_11\_13 & bike & -0.854$^*$ & 0.073 & 47.886 & -1.210$^*$ & 0.071 & 15.565 \\ & auto+transit & -2.540$^*$ & 0.217 & 48.866 & 1.618 & 0.039 & 0.359 \\ hour\_17\_19 & auto & 0.058$^*$ & 0.029 & 0.186 & -0.586 & 0.004 & 0.001 \\ hour\_20\_22 & bike & -1.271$^*$ & 0.092 & 16.937 & -0.943$^*$ & 0.085 & 15.009 \\ trip\_dist & auto & 0.354 & 0.007 & 0.002 & -0.113 & 0.001 & 0.000 \\ & transit & 0.297 & 0.008 & 0.002 & 0.817 & 0.001 & 0.000 \\ & walk & -2.197 & 0.028 & 0.387 & -0.257 & 0.004 & 0.001 \\ trip\_time & auto & -0.627 & 0.005 & 0.000 & -0.397 & 0.001 & 0.000 \\ & transit & 0.870 & 0.005 & 0.000 & 0.303 & 0.001 & 0.000 \\ & walk & 0.863 & 0.009 & 0.007 & -0.752 & 0.002 & 0.000 \\ trip\_aspeed & auto & 0.988 & 0.005 & 0.001 & -0.024 & 0.001 & 0.000 \\ & walk & -1.738 & 0.014 & 0.058 & -1.900 & 0.002 & 0.000 \\ act\_edu & auto & -1.357$^*$ & 0.080 & 10.697 & -0.187 & 0.011 & 0.055 \\ & walk & -0.067$^*$ & 0.086 & 22.325 & -0.871 & 0.029 & 0.558 \\ act\_home & auto & -0.119$^*$ & 0.026 & 0.151 & 0.340 & 0.003 & 0.001 \\ & bike & -1.048$^*$ & 0.044 & 3.217 & -0.705$^*$ & 0.039 & 1.477 \\ & transit & 0.109$^*$ & 0.027 & 0.093 & 0.764 & 0.004 & 0.001 \\ act\_work & auto & -0.055$^*$ & 0.027 & 0.115 & 0.276 & 0.003 & 0.003 \\ & transit & -0.011$^*$ & 0.028 & 0.096 & 0.631 & 0.004 & 0.004 \\ & auto+transit & -1.853$^*$ & 0.073 & 4.028 & 0.851 & 0.028 & 0.114 \\ act\_meeting & bike & -2.776$^*$ & 0.259 & 154.812 & -1.803$^*$ & 0.174 & 106.564 \\ \midrule log-likelihood && \multicolumn{3}{l}{-16145} & \multicolumn{3}{l}{-13121} \\ sample size && \multicolumn{3}{l}{42,255} & \multicolumn{3}{l}{42,255} \\ \# of estimated parameters && \multicolumn{3}{l}{138} & \multicolumn{3}{l}{922} \\ max. validation accuracy && \multicolumn{3}{l}{72.01\%} & \multicolumn{3}{l}{76.73\%} \\ \bottomrule \multicolumn{8}{l}{$^*$: Not statistically significant at p-value $<$ 0.05.} \end{tabu} \end{table} \begin{figure}[!t] \centering \includegraphics[width=\textwidth]{figures/drawing.pdf} \caption{First 4 layers of weight matrices from the ResLogit model.} \label{fig:resmat} \end{figure} \subsection{Elasticity analysis} The aggregate point elasticity of $P_n(i)$ with respect to input $x_n$ is given by the following equation:
\begin{equation}
E_{x_n}(i) = \frac{dP_n(i)}{dx_n}\frac{x_n}{P_n(i)}
\end{equation}
The elasticity measures the impact of increasing or decreasing a variable on the demand for the respective choice. In this case, we use \textit{trip\_dist} as the variable and measure the impact on the market shares of the \textit{auto}, \textit{bike}, \textit{transit} and \textit{walk} choices. Similarly, we compute the arc elasticities of $P_n(i)$ with respect to $\hat{x}_n$ when we change \textit{trip\_dist} by $\Delta x_n$, where $\hat{x}_n = x_n + \Delta x_n$ (a numerical sketch of both calculations is given below). \Cref{tab:elasticity} shows the point elasticities obtained from the MNL, MLP and ResLogit (16-layer) models. The ResLogit model shows expected signs, similar to the MNL model. The \textit{walk} mode shows a smaller sensitivity to trip distance than in the MNL model, while the \textit{transit} mode shows a more significant impact of trip distance in the ResLogit model than in the MNL model.
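The sketch below illustrates, under assumed interfaces, how these point and arc elasticities can be computed numerically; \texttt{model.predict\_proba}, the column index \texttt{j} and the array names are illustrative placeholders rather than our actual code:
\begin{verbatim}
import numpy as np

def point_elasticity(model, X, j, i, eps=1e-4):
    # Numerical derivative dP(i)/dx_j, scaled by x_j / P(i)
    X_hi = X.copy(); X_hi[:, j] += eps
    p = model.predict_proba(X)[:, i]
    dp = (model.predict_proba(X_hi)[:, i] - p) / eps
    return np.mean(dp * X[:, j] / p)

def arc_elasticity(model, X, j, i, delta=0.5):
    # Relative change in P(i) when x_j changes by delta (e.g. +/-50%)
    X_new = X.copy(); X_new[:, j] *= (1.0 + delta)
    p0 = model.predict_proba(X)[:, i].mean()
    p1 = model.predict_proba(X_new)[:, i].mean()
    return ((p1 - p0) / p0) / delta
\end{verbatim}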
For the MLP model, the \textit{transit} mode shows a negative sign compared to the MNL model. Surprisingly, the ResLogit model shows a different sign for the \textit{auto} mode. Indeed, for the \textit{auto} mode, one should expect a negative elasticity. We analyze the two models' elasticities (presented in \Cref{fig:elasticities}) under different scenarios where we increase or decrease the overall trip distance, reflecting, for instance, a willingness to change modes to travel a longer or shorter distance, or the construction of new transit networks. We can see that the elasticities from the ResLogit model predict a non-linear relation between trip distance and the respective mode choice. This shows a clear distinction from the MLP model, where the relationship between trip distance and mode follows a relatively linear curve. We expect that elasticity is heterogeneous and that it will vary across different scenarios, given different unobserved trade-offs between mode choices. For the \textit{auto} mode, the ResLogit model predicts that with a decrease in trip distance of 50\%, the elasticity is positive (and negative otherwise), while increasing the trip distance results in greater sensitivity to trip distance. The \textit{bike} mode shows a positive elasticity when we increase the trip distance by 50\% but a negative elasticity when we decrease the trip distance by about 50\%. We can infer from this result that travellers are willing to switch from bikes to other modes or from other modes to bikes, considering other unobserved factors not captured in the data. This sign-switching phenomenon is interesting because it indicates a heterogeneous population that will react differently while also \textit{considering other alternatives}. This consideration of non-chosen alternatives shows that the ResLogit model behaves in line with the behavioural theory of the Mother Logit model, where attributes from non-chosen alternatives enter the utility of the chosen alternative. \begin{table}[!t] \centering \caption{Point elasticities with respect to \textit{trip\_dist}.} \label{tab:elasticity} \begin{tabu}{X[1] X[1] X[1] X[1]} \toprule Choice & MNL & MLP & ResLogit \\ \midrule Auto & 0.178 & 0.133 & -0.103 \\ Bike & -1.031 & -0.128 & -0.980 \\ Transit & 0.232 & -0.206 & 0.669\\ Walk & -1.54 & -0.207 & -0.769\\ \bottomrule \end{tabu} \end{table} \begin{figure}[!t] \centering \includegraphics[width=\textwidth]{figures/elasticities.pdf} \caption{Elasticity versus \% increase or decrease in trip distance, comparison between models.}\label{fig:elasticities} \end{figure} \subsection{Significance of model depth and utility formulation} The general notion is that increasing the complexity and non-linearity of the model should result in a greater model fit, given the higher degrees of freedom induced by the neural network. However, the MLP network model suffers from the vanishing gradient problem, which is exposed as the number of layers increases. There is a bottleneck effect at depth $M\geq 4$, beyond which the validation log-likelihood and loss no longer improve. In contrast, we do not see this detrimental effect in the ResLogit model, even at a depth of 16 layers. This study highlights how machine learning models may sometimes be worse off than a simple discrete choice model when the neural network formulation structure is not understood.
\subsubsection{Behaviour interpretation} As explained in \Cref{sec:gen_approach}, a decision-maker's learning process may develop over time through experiences, and the agent updates his or her underlying distribution. The ResLogit model captures this effect while retaining the value function of the observed component of the utility. We can use this approach of capturing uncertainties to account for heterogeneity in the choice process arising from inconsistency within travel mode choice. Besides the differences in optimal performance, it is also of practical interest to study the actual $\beta$ parameter solution vectors and observe how they differ from a standard MNL model that does not account for learning behaviour. \Cref{tab:mnl_residual} shows the differences in $\beta$ parameter estimates between the benchmark MNL and RL-16. The exact set of significant variables accounted for can be inferred from each reported estimate's standard error.

A conceptual step in discrete choice analysis is the ability to estimate $\beta$ (and its standard error) and derive economic indicators using data on observed choices and attributes. Here, our ResLogit approach follows the same procedure as conventional discrete choice methods. The unobserved attributes, expressed as $\varepsilon$ in MNL models, capture the error contribution to the utility. We observe that the ResLogit counterpart differs from the MNL model in most metrics. However, the ResLogit model's ability to ``explain away'' uncertainty yields greater parameter significance, as reported by the lower standard errors. The formulation of the ResLogit model, which adds the $g$ term, captures the cross-effects of the different mode choice alternatives to ensure that the decision is free from unobserved errors and endogeneity. Under regularity conditions, this residual component captures the unobserved error using a learning algorithm, similar to how, in real life, a traveller explores new route options or sticks to habitual choices. In general, the ResLogit framework allows the error term to be formulated within the utility.

\subsubsection{Sensitivity analysis} It is important to examine the differences in $\beta$ parameter responses when changing the neural network size. The emphasis of this analysis is on the significance of the $\beta$ values and their non-linear responsiveness when more residual layers are added to the choice model. \Cref{tab:variability} shows a sensitivity analysis of the $\beta$ parameters of trip time over time of departure. The table shows the variation between the trip time and time of departure beta parameters for each model. The values represent the degree of variability of each time of departure dummy variable on the utility of each mode alternative. We take the ratio $(\beta_{\textrm{trip time}}x_{\textrm{trip time}}) / (\beta_{\textrm{departure dummy}}x_{\textrm{departure dummy}})$, which gives the sensitivity of travel time over different departure time segments (a short sketch of this calculation follows). If the parameters for $\beta_{\textrm{trip time}}$ are not influenced by variation in departure time, the values will have a small standard deviation across departure times; the standard deviation therefore gives an indicator of the uniformity of the trip time sensitivity across different departure times. If the standard deviation is small, it indicates that the trip time heterogeneity is captured in the residual component, and $\beta_{\textrm{trip time}}$ represents a value that is closer to the true mean.
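As a small illustration, the ratio and its uniformity indicator amount to the following (a sketch with placeholder parameter arrays):
\begin{verbatim}
import numpy as np

def trip_time_sensitivity(beta_tt, x_tt, betas_dep, x_dep):
    # Ratios (beta_tt*x_tt)/(beta_dep*x_dep) over departure-time segments,
    # with their standard deviation as the uniformity indicator
    ratios = (beta_tt * x_tt) / (np.asarray(betas_dep) * x_dep)
    return ratios, ratios.std()
\end{verbatim}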
The attribute effects shown in \Cref{tab:variability} represent the mean preference on each individual's utility, after controlling for taste variability. This result indicates the effect of increasing the number of residual layers on the stability of the econometric parameters. As expected in the MNL model, the time of departure dummy variable influences the utility and choice of mode. This is a consistent result, as we cannot represent the variation over departure time as a single linear factor in the utility function. Modifying the MNL model by incorporating the residual layers reduces the variability and sensitivity to time of departure. As shown in the table, the average standard deviation of the trip time versus time of departure coefficients decreases as we increase the number of layers. This shows how the implied heterogeneity in the utility function can be explained away through the neural network component while retaining the properties of the observed utility component. Note that this estimation does not allow us to identify the relationship between the heterogeneity of departure time and the preference for different travel modes. One can use economic indicators to estimate this effect. The values reported for RL-2 to RL-16 may not be entirely stable, and further investigation is needed into how the model responds to changes in hyperparameters and regularization. However, we can conclude that this experiment shows the capability of our proposed ResLogit approach, particularly in: (a) allowing for a specific analysis of the underlying distribution and (b) exploring the attributes that represent the most significant degree of heterogeneity in the model, which may present an interesting subject for future research. \begin{table}[!t] \centering \caption{Sensitivity analysis of different travel modes over time of departure.
Values show the ratio of trip time to time of departure parameter contributions across hourly segments.} \label{tab:variability} \resizebox{0.74\textwidth}{!}{ \begin{tabu}{X[1,l] X[0.5,l] X[0.5,l] X[0.5,l] X[0.5,l]} \toprule Model & \multicolumn{4}{l}{trip time/time of departure variability} \\ \midrule MNL & Auto & Bike & PT & Walk\\ hour\_8\_10 & 3.07 & -0.26 & 26.36 & -1.03 \\ hour\_11\_13 & -3.34 & -0.24 & -12.08 & -5.43 \\ hour\_14\_16 & -2.07 & -0.33 & 5.88 & -2.61 \\ hour\_17\_19 & -10.81 & -0.45 & 3.26 & -1.75 \\ hour\_20\_22 & 2.42 & -0.16 & -3.49 & -1.31 \\ hour\_23\_1 & 0.76 & -0.14 & -1.26 & -0.54 \\ hour\_2\_4 & 1.88 & -0.09 & -0.52 & -0.43 \\ hour\_5\_7 & 4.86 & -0.16 & -14.26 & -0.92 \\\\[-0.75em] stddev & 4.66 & 0.11 & 11.74 & 1.54 \\ \midrule RL-2 \\ hour\_8\_10 & -0.18 & 3.63 & 15.37 & -0.60 \\ hour\_11\_13 & -0.16 & 1.73 & -5.99 & -1.33 \\ hour\_14\_16 & -0.17 & 2.19 & -14.66 & -0.78 \\ hour\_17\_19 & -0.23 & 3.76 & 9.25 & -0.54 \\ hour\_20\_22 & -0.19 & 1.77 & -6.62 & -0.83 \\ hour\_23\_1 & -0.29 & 1.65 & -2.07 & -0.64 \\ hour\_2\_4 & -0.17 & 1.68 & -0.89 & -0.94 \\ hour\_5\_7 & -0.17 & 2.29 & 45.38 & -0.72 \\\\[-0.75em] stddev & 0.04 & 0.81 & 17.62 & 0.23 \\ \midrule RL-4 \\ hour\_8\_10 & 0.08 & 0.19 & 0.84 & -1.24 \\ hour\_11\_13 & 0.06 & 0.23 & 1.01 & -1.80 \\ hour\_14\_16 & 0.08 & 0.22 & 1.07 & -1.47 \\ hour\_17\_19 & 0.10 & 0.24 & 1.08 & -1.58 \\ hour\_20\_22 & 0.08 & 0.19 & 0.97 & -1.49 \\ hour\_23\_1 & 0.09 & 0.16 & 0.94 & -1.09 \\ hour\_2\_4 & 0.07 & 0.20 & 1.88 & -1.50 \\ hour\_5\_7 & 0.08 & 0.22 & 0.88 & -1.66 \\\\[-0.75em] stddev & 0.01 & 0.03 & 0.31 & 0.21 \\ \midrule RL-8 \\ hour\_8\_10 & -0.26 & -1.35 & 0.39 & -1.39 \\ hour\_11\_13 & -0.26 & -2.89 & 1.42 & -1.15 \\ hour\_14\_16 & -0.24 & -1.59 & 0.48 & -1.36 \\ hour\_17\_19 & -0.27 & -1.58 & 0.41 & -1.61 \\ hour\_20\_22 & -0.25 & -2.01 & 0.45 & -1.85 \\ hour\_23\_1 & -0.37 & -1.46 & 0.34 & -1.12 \\ hour\_2\_4 & -0.26 & 3.87 & -2.94 & -0.75 \\ hour\_5\_7 & -0.25 & -2.14 & 0.38 & -1.53 \\\\[-0.75em] stddev & 0.04 & 1.95 & 1.20 & 0.32 \\ \midrule RL-16 \\ hour\_8\_10 & 0.75 & -0.93 & 0.40 & 0.52 \\ hour\_11\_13 & 0.69 & -0.67 & 0.49 & 0.42 \\ hour\_14\_16 & 0.71 & -0.81 & 0.43 & 0.46 \\ hour\_17\_19 & 0.68 & -1.22 & 0.40 & 0.48 \\ hour\_20\_22 & 0.70 & -0.86 & 0.39 & 0.50 \\ hour\_23\_1 & 0.62 & -0.87 & 0.43 & 0.55 \\ hour\_2\_4 & 0.51 & -0.56 & 1.04 & 0.48 \\ hour\_5\_7 & 0.69 & -0.91 & 0.42 & 0.50 \\\\[-0.75em] stddev & 0.07 & 0.18 & 0.21 & 0.04 \\ \bottomrule \end{tabu} } \end{table}

\section{Conclusion} \label{sec:conclusion} This paper has presented a data-driven, deep learning-based choice model that integrates a residual neural network architecture into a Logit model structure. The paper's methodological contribution is a new model that captures the learning process, using a neural network structure to account for cross-effects in the utility error term. We proposed an approach that combines a residual neural network with a Logit model. The study's first objective addresses the shortcomings in the integration of machine learning techniques and neural networks into discrete choice modelling. The second objective addresses the systematic error of biased model estimates in DNNs due to their lack of economic interpretability. Unlike earlier studies that only examined the performance of machine learning algorithms and their comparison with discrete choice models in out-of-sample prediction, this paper studies the impact of a residual function in the choice utility as a data-driven variant of the Mother Logit model.
The ResLogit model proposed in this paper frames the Mother Logit model's expansion function as a neural network, and the parameters within the neural network are estimated through a mini-batch stochastic gradient descent algorithm, with model selection based on the out-of-sample validation set. This data-driven approach also addresses model non-identifiability issues when estimating a large number of unknown parameters. A new direction towards a more flexible and general model is presented using the concept of residual modelling -- mapping the error term correlation to a residual function instead of using traditional neural networks. The skip-connection structure allows each residual layer to be estimated independently, without model identification problems due to exploding or vanishing gradients during backpropagation.

We illustrated the approach with a classic red/blue bus IIA violation example and demonstrated our methodology on a large-scale travel behaviour dataset. We examined the performance comparison with an MNL model and an MLP neural network across different numbers of layers. The results showed that the ResLogit model optimized quickly and efficiently, without degradation in model performance as the number of layers increased. In the context of model identifiability, the ResLogit model yielded a smaller standard error for each econometric model parameter than the baseline MNL model. We also demonstrated the sensitivity of trip time and time of departure variability over different model characteristics. We observed that incorporating residual layers reduced model sensitivity to cross-effects and choice heterogeneity. Our proposed ResLogit model improved discrete choice models' capabilities in terms of performance without sacrificing model interpretability. We note that our experimental results do not consider hyperparameter tuning or regularization steps, which may affect the reliability of our model validation results. This proof of concept illustrates how choice modellers can leverage deep learning methodologies and learning algorithms to enhance the current set of tools and models for discrete choice analysis.

Our future work will establish additional models and extensions to our proposed ResLogit methodology. More work has to be done on the interpretability of the model and on how to define clear guidelines so that researchers without advanced knowledge of machine learning can use these new modelling techniques. More comparative studies can also be done between different learning algorithms for Logit models. Further investigation is also required into the meta-learning side of deep learning in discrete choice modelling. For example, we do not yet know the optimal hyperparameter configuration, or how to efficiently identify a good set of hyperparameters without a tedious iterative search.
\section{Introduction} \begin{figure*} \centering \includegraphics[width=\linewidth]{K_vs_J} \caption{Colour--magnitude diagram of Galactic WR stars from the catalogue detected by \textit{Gaia} (red) and WR stars only observed at IR wavelengths (grey). Stars not observed by \textit{Gaia} have larger (>3) J$-$K colours, indicating significant extinction. Filled red circles are stars with the most reliable distances; these are limited to bright sources (K<12) with J$-$K<3.} \label{fig:K_vs_J} \end{figure*} Wolf-Rayet (WR) stars are the final stages of evolution for massive O stars ($>$25 $\mathrm{M_{\sun}}$, \citealt{2007ARA&A..45..177C}). With extremely fast and dense stellar winds, they play an important role in helping to ionize \hii regions and disperse natal gas left over from the star formation process. This feedback may drive and quench star formation. Additionally, WR stars are potential progenitors of long Gamma Ray Bursts \citep{2010A&A...518A..29L} and stripped envelope supernovae, although some may collapse directly to black holes \citep{2009A&A...502..611G}. The later stages of massive star evolution depend heavily on parameters such as initial mass and metallicity, which influence mass loss rates \citep{2005A&A...429..581M}. Such dependencies make modelling massive star evolution challenging. The accuracy of evolutionary models can be tested with observations, which in turn depend on reliable distances. Inaccurate distances can thus lead to an incorrect understanding of massive star evolution. The Milky Way contains a rich population of WR stars, whose total has been estimated at 1200$\pm$200 \citep{2015MNRAS.449.2436R}. Over half have been detected thus far\footnote{\url{http://pacrowther.staff.shef.ac.uk/WRcat/index.php}, v1.21}\addtocounter{footnote}{-1}\addtocounter{Hfootnote}{-1}. Of those, approximately half have been discovered via IR surveys (e.g. \citealt{2006MNRAS.372.1407C}, \citealt{2007MNRAS.376..248H}, \citealt{2009AJ....138..402S}), whilst the rest are optically visible. Until now, distances to WR stars have relied upon the small subset of the population which are thought to be members of clusters or associations (e.g. \citealt{1984A&AS...58..163L}). These stars, along with the WR population of the Magellanic Clouds (e.g. \citealt{1968MNRAS.140..409S} and \citealt{1990ApJS...73..685V}), have been used to calculate absolute magnitude calibrations (e.g. \citealt{2001NewAR..45..135V}, \citealt{2015MNRAS.447.2322R}). The calibrations were then applied to estimate distances to field stars. As there is some variation in absolute magnitudes within spectral subtypes, the resulting distances had large uncertainties (50\% according to \citealt{2001NewAR..45..135V}). Binarity is a key additional piece of the evolutionary puzzle for massive stars. \citet{2009AJ....137.3358M} estimate that 40-70\% of all massive stars are in binaries. Additionally, \citet{2012Sci...337..444S} suggest that 70\% of O stars will undergo interaction during their lifetimes. WR stars may form via Roche lobe overflow \citep{1967ZA.....65..251K} at the upper end of the stripped star regime \citep{2018A&A...615A..78G} and may be responsible for the high rate of observed Ibc supernovae, relative to the number of massive stars (\citealt{2013MNRAS.436..774E}, \citealt{2011MNRAS.412.1522S}). Binaries therefore have a major influence on the evolutionary trajectory of massive stars.
Studying the fractions of runaways can provide an insight into how massive binaries interact and verify models involving binary physics. Here, again, accurate distances are essential to determine how far a WR star has travelled over its lifetime. The second \textit{Gaia} data release (\citealt{2018AA...616A...1G}, \citealt{2016A&A...595A...1G}, hereafter referred to as DR2) offers parallaxes, proper motions and positions for over a billion stars in the Galaxy. A large fraction of the Galactic WR population have been detected in the \textit{Gaia} G band (330--1050 nm), and so \textit{Gaia} increases the number of WR stars with trigonometric parallaxes from just one (WR11 in Hipparcos, \citealt{2007A&A...474..653V}) to almost 400. In this work (Paper I) we present distances obtained using \textit{Gaia} data and discuss the resulting new insights into Wolf-Rayet absolute magnitudes, runaways and physical parameters. In Section~\ref{sec:dist}, we determine the most likely distances for Galactic WR stars using a Bayesian method and in Section~\ref{sec:absmag}, validate these using absolute magnitudes. We compare the new \textit{Gaia} distances to previous values in Section~\ref{sec:distdisc}. Distances from the Galactic midplane are discussed in Section~\ref{sec:hab} and used to identify potential runaways. Finally, we conclude with an overview and anticipate potential improvements from later \textit{Gaia} data releases. In Paper II (Rate, Crowther \& Parker, submitted), we will use these new distances and other \textit{Gaia} DR2 results to reevaluate WR membership of clusters and associations, and discuss the implications of the results on our understanding of massive star origins and evolution. Future studies will use our distances and extinctions to calculate updated WR line luminosity calibrations for application to unresolved extragalactic WR populations. \section{Distance determination methods}\label{sec:dist} \subsection{\textit{Gaia} DR2 catalogue} \label{ssec:gcat} The parallax and errors used to calculate distances were taken from the \textit{Gaia} DR2 catalogue \citep{2018AA...616A...1G}. The calculation also made use of $G$ band magnitudes, astrometric excess noise (to identify potentially spurious results) and \textit{Gaia} RA and Declination coordinates. A Python {\scriptsize{ASTROQUERY}} (\citealt{2013A&A...558A..33A}, \citealt{2018AJ....156..123A}) script downloaded data from the \textit{Gaia} archive \citep{2017A&C....21...22S} using the ADQL query in Appendix A of the online material (a minimal sketch of such a query is given at the end of this subsection). The script searched for stars which were within 1'' of the quoted WR coordinates. Almost all known WR stars are isolated enough for this constraint to be sufficient. The majority (370) of 415 successful search coordinates came from \citet{2001NewAR..45..135V}. However, 45 coordinates from the catalogue did not lead to correct \textit{Gaia} detections. In these instances, coordinates from {\scriptsize{SIMBAD}} were used instead (\citealt{2000A&AS..143....9W}, accessed on 23/05/2018). We checked the coordinates for accuracy using images from {\scriptsize{VPHAS+}} DR3 \citep{2014MNRAS.440.2036D}, {\scriptsize{IPHAS}} DR2 (\citealt{2014MNRAS.444.3230B}, \citealt{2005MNRAS.362..753D}) and 2MASS \citep{2006AJ....131.1163S}, to ensure they corresponded to isolated WR stars. The remaining 243 WR stars yielded no successful results with either coordinate set.
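For illustration, a positional query of this kind can be issued through the {\scriptsize{ASTROQUERY}} \textit{Gaia} module. This is a minimal sketch with placeholder coordinates, not the full ADQL query of Appendix A:
\begin{verbatim}
from astropy.coordinates import SkyCoord
import astropy.units as u
from astroquery.gaia import Gaia

# Cone search within 1 arcsec of a WR star's quoted coordinates
# (placeholder values; the catalogue coordinates were used in practice)
coord = SkyCoord(ra=10.8683, dec=64.7598, unit='deg', frame='icrs')
job = Gaia.cone_search_async(coord, radius=1 * u.arcsec)
results = job.get_results()  # parallax, errors, G magnitude, excess noise
\end{verbatim}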
Figure ~\ref{fig:K_vs_J} shows that most of these undetected stars ($>$230) have J--K $>$ 3 mag, indicating significant foreground dust extinction; they are therefore inaccessible to \textit{Gaia}. \begin{figure} \centering \includegraphics[width=\linewidth]{g_cutoff.pdf} \caption{Histogram of $G$ band magnitudes for \textit{Gaia} DR2 detected WR stars. The solid line (black) involves 187 WR stars with reliable absolute magnitudes (Section \ref{sec:absmag}) and the dashed line (red) involves the full sample of 383 WR stars.} \label{fig:g_hist} \end{figure} 383 stars (58\% of the total) from the Galactic WR catalogue\footnotemark have \textit{Gaia} parallaxes. Of those, 305 have positive parallaxes. Figure ~\ref{fig:g_hist} shows that both the total WR population and the sample containing only the results with reliable distances appear to be relatively complete up to $G\sim$13 mag. However, for results with robust absolute magnitudes, the distribution falls off more quickly beyond $G\sim$13 mag. This is because fainter magnitudes are preferentially removed due to their larger astrometric excess noise and increased incidence of negative parallaxes (which are more likely to produce unacceptable absolute magnitudes). \subsection{Bayesian methods} \label{sssec:bmeth} The conversion of \textit{Gaia} parallaxes to distances significantly modifies the shape of the original parallax ($\omega$) probability distribution, which means uncertainties do not transform symmetrically. This occurs unless the parallax errors ($\sigma_\omega$) are very small ($\sigma_\omega/\omega<0.1$, \citealt{2015PASP..127..994B}), which is not the case for most of our DR2 sources. Additionally, many sources have negative parallaxes, a consequence of the data processing algorithm fitting noisy observations \citep{2018A&A...616A...9L} and of the variation in parallax zero points (see Section \ref{sssec:lhood}). Obtaining the WR star distances should therefore be done carefully. Bayesian inference is the recommended way to transform parallaxes to distances \citep{2018A&A...616A...9L}. The end result is a probability distribution with correct uncertainties, reflecting the non-symmetric transformation of parallax to distance. Bayesian methods are also capable of elegantly accounting for unphysical parallaxes, and so there is no need to cut negative data from the sample \citep{2018A&A...616A...9L}. The technical details of the Bayesian method used, including equations and plots of the model \hii region and dust maps, are in Appendices B, C and D in the online material. \begin{figure} \centering \includegraphics[width=\linewidth]{uwu_plt.pdf} \caption{Weighted fit to the unit weight uncertainty factors from \citet{2018A&A...616A..17A}, used to increase the uncertainties $\sigma_\omega$, to account for underestimation in the \textit{Gaia} catalogue. The dotted line is the linear component of the fit, whilst the solid line is the total fit and the red crosses are the unit weight uncertainties of the external data.} \label{fig:uwu} \end{figure} \subsubsection{Likelihoods} \label{sssec:lhood} The likelihood can be constructed by assuming the parallax distribution is Gaussian, with a mean at the parallax measured by \textit{Gaia} and the parallax error as the standard deviation (\citealt{2018arXiv180407766H}, \citealt{2018A&A...616A...9L}, \citealt{2015PASP..127..994B}). The parallaxes quoted by \textit{Gaia} are not corrected for the global zero point.
As our sample of WR stars is spread over the sky and the zero point will therefore not be dominated by regional systematics, we choose to apply this global correction to the distance calculation \citep{2018A&A...616A..17A}. In light of the variation in measured zero points, and the fact that \citet{gaia_pres} states that the zero point is likely multivariate, with no general process currently available to calculate it, we choose to use the globally measured QSO zero point of $-$0.029 mas (\citealt{2018A&A...616A...2L}, \citealt{2018A&A...616A...9L}). One possible effect of this on the final distances is that if the full multivariate zero point could be used, some small negative parallaxes could be converted to positive values. We discuss further effects of this choice in Section \ref{ssec:compold}. Additionally, analysis from \citet{2018A&A...616A..17A} suggests that, when compared to external data, the errors of DR2 parallaxes in the catalogue are underestimated. This is because they are consistent with the internal uncertainties, and do not account for systematics. The underestimation varies with $G$ band magnitude and is particularly acute for results in the range $12<G<15$, which could be underestimated by 30-50\% \citep{2018AA...616A...1G}. To account for this, we calibrate the uncertainties of \textit{Gaia} parallaxes using parallaxes from previous surveys. \citet{2018A&A...616A..17A} provide in their Table 1 the unit weight error calculated using a variety of comparative surveys, together with the median $G$ band magnitude of these surveys. Using these data, we present the conversion curve shown in Figure~\ref{fig:uwu}. This is similar to the approach of \citet{gaia_pres}, although our model neglects the HST measurement (1.9 unit weight error at G=8 mag). We fit a combined Gaussian and straight line, which increases the size of the uncertainties as a function of the $G$ band magnitude. Details of the equation used for this fit and the impact of increasing the uncertainties on the distances are in Appendices B and E in the online material. These increased uncertainties were applied to our WR parallaxes, leading to a likelihood that is appropriate for the WR population. \subsubsection{Prior} \label{sssec:prior} The prior is a probability distribution of the expected distances for a given WR star. Previous work with \textit{Gaia} \citep{2018AJ....156...58B} has opted for a smooth, exponentially decreasing prior, with a single parameter that can be tuned based on Galactic latitude and longitude. This is designed to follow the distribution of all observed stars within the Milky Way and to provide a distance derived purely from a geometric model. Almost all WR stars are found at large (kiloparsec) distances and lie preferentially in the Galactic plane, so their observed distribution will be significantly affected by extinction. Previous priors do not properly account for this, which could be problematic for our sample. Instead, we build a prior using \hii regions and a dust model for extinction. \hii regions approximate the spatial distribution of massive stars. They are independent of previous WR distribution maps, avoiding any bias from previous incorrect results, and are well sampled across the Galaxy (as they are detectable at a broad range of wavelengths). To find the overall distribution, we considered the \hii region density along each line of sight (a numerical sketch of how such a prior combines with the parallax likelihood is given below).
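The sketch below illustrates numerically how a line-of-sight prior of this kind combines with the Gaussian parallax likelihood of Section~\ref{sssec:lhood} to yield a distance posterior. The flat prior is a placeholder for the \hii region and dust construction described in this section, the parallax values are taken from the WR4 entry of Table~\ref{table:final} purely for illustration, and an equal-tailed 68\% interval stands in for the credible interval used in practice:
\begin{verbatim}
import numpy as np

d = np.arange(1.0, 15001.0)              # distance grid [pc], 1 pc resolution
prior = np.where(d < 300.0, 0.0, 1.0)    # placeholder prior, zero below 300 pc

w, sigma_w = 0.258e-3, 0.051e-3          # corrected parallax, error [arcsec]
like = np.exp(-0.5 * ((w - 1.0 / d) / sigma_w) ** 2)  # Gaussian likelihood

post = prior * like
post /= np.trapz(post, d)                # normalise the posterior

d_peak = d[np.argmax(post)]              # most likely distance (peak)
cdf = np.cumsum(post) / np.sum(post)
lo, hi = d[np.searchsorted(cdf, [0.16, 0.84])]   # ~68% interval
\end{verbatim}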
Figure ~\ref{fig:mix_gauss} shows a mixture of Gaussians fitted to binned Galactic latitude and longitude distributions, which gave normalised numbers of \hii regions at a given latitude or longitude coordinate. These were then multiplied together to get a total number density along the line of sight. We apply a simple dust model \citep{2015MNRAS.447.2322R} to account for the effects of extinction. This consists of both molecular and atomic gas, to replicate the thin and thick disks. For the Sun, we chose a distance of 8.122 kpc \citep{2018A&A...615L..15G} to the Galactic Centre and a height of 20.8 pc \citep{2019MNRAS.482.1417B} above the plane. The resulting distribution is shown in the online supplementary material, in Appendix C. \begin{figure} \centering \includegraphics[width=\linewidth]{hii_numbers} \caption{A mixture of Gaussians showing the number of \hii regions over (a) Galactic latitude and (b) Galactic longitude, based on Figure 6 and data from \citet{2003A&A...397..213P}. The solid lines are the individual Gaussians and the black dotted line is the overall fit. The peak around l=75-90$^{\circ}$ is the Cygnus X region.} \label{fig:mix_gauss} \end{figure} The prior covered distances between 0 and 15 kpc, at a resolution of 1 pc. The probability is zero below 300 pc, as we do not expect to find any WR stars detected with \textit{Gaia} closer than this distance. The final form of the prior therefore varies from Gaussian-like in regions with a pronounced \hii region peak or low extinction, to exponential-like in regions with a less pronounced peak or high extinction. \begin{figure} \includegraphics[width=1.1\linewidth]{WR4} \caption{Posterior distribution for WR4, shown alongside the prior components and credible interval. The filled star is the most likely distance to WR4 (3.75$^{+0.89}_{-0.62}$ kpc, compared to 3.71$^{+0.65}_{-0.49}$ kpc from \citealt{2018AJ....156...58B}).} \label{fig:eg_distrib} \end{figure} \subsubsection{Posterior} \label{sssec:post} We then calculated the posterior distribution. Figure ~\ref{fig:eg_distrib} shows an example of this for WR4, together with the prior and its components. Use of the numerical dust model meant we could not differentiate the posterior and produce an analytical solution for the maximum likelihood. Instead, the peak of the distribution was taken as the most likely distance. Credible intervals (similar to those used in \citealt{2018AJ....156...58B}) give distances which, when used as integral limits, cover 68\% of the area below the curve. The one sigma errors are the differences between the peak and these distances. \begin{figure} \includegraphics[width=\linewidth]{err_plot.pdf} \caption{(a) Comparison between parallax error $|\sigma_\omega/\omega|$ and astrometric error noise (mas) for Galactic WR stars from \textit{Gaia} DR2, for which dotted lines indicate values of unity for each parameter to highlight data quality flags a, e, g, n; (b) Comparison between G band magnitudes and inferred distances (pc) for Galactic WR stars from \textit{Gaia} DR2, with the dotted line marking a distance of 2 kpc.} \label{fig:err_plt} \end{figure} \begin{table*} \renewcommand{\arraystretch}{1.5} \caption{Gaia DR2 astrometric, photometric and parallax properties for 383 Galactic WR stars, including WR11 using a parallax and photometry from Hipparcos \citep{2007A&A...474..653V}. The distance for WR11 was calculated in the same manner as WR stars with \textit{Gaia} results, except that the adjustments to calculate $\omega$ and $\sigma_{\omega}$ were not applied.
Stellar luminosities, updated from \citet{2019A&A...625A..57H} and \citet{2019A&A...621A..92S} according to our revised distances, are restricted to sources with no error flags. The full table is available in the online supplementary material.} \begin{tabular}{ l@{\hspace{2mm}} l@{\hspace{3mm}} l@{\hspace{3mm}} c@{\hspace{3mm}} c@{\hspace{3mm}} c@{\hspace{3mm}} c@{\hspace{3mm}} c@{\hspace{3mm}} r@{\hspace{2mm}} c@{\hspace{2mm}} c@{\hspace{3mm}} c@{\hspace{3mm}} l} \hline WR & Spectral & Alias & RA & Dec & $\omega \pm \sigma_w$ & $d$ & |z| & $G$ & $G_{BP}-G_{RP}$ & Excess & $\log L$ & Flags \\ Number & Type & & J2015 & J2015 & mas & kpc & pc & mag & mag & Noise & $L_{\odot}$ & \\ \hline WR1 & WN4b & HD 4004 & 00 43 28.39 & +64 45 35.4 & 0.314$\pm$0.040 & 3.15$\substack{+0.47 \\ -0.36}$ & 125$\substack{+15 \\ -12}$ & 9.79 & 1.05 & 0.00 & & g \\ WR3 & WN3ha & HD 9974 & 01 38 55.62 & +58 09 22.6 & 0.342$\pm$0.051 & 2.90$\substack{+0.52 \\ -0.39}$ & 188$\substack{+37 \\ -27}$ & 10.58 & 0.18 & 0.10 & 5.56 & g \\ WR4 & WC5+? & HD 16523 & 02 41 11.67 & +56 43 49.8 & 0.258$\pm$0.051 & 3.75$\substack{+0.89 \\ -0.62}$ & 174$\substack{+46 \\ -32}$ & 9.68 & 0.51 & 0.06 & 5.72 & g \\ WR5 & WC6 & HD 17638 & 02 52 11.66 & +56 56 07.1 & 0.334$\pm$0.042 & 2.97$\substack{+0.43 \\ -0.33}$ & 90$\substack{+16 \\ -12}$ & 10.06 & 0.94 & 0.00 & 5.53 & g \\ WR6 & WN4b & EZ CMa & 06 54 13.04 & $-$23 55 42.0 & 0.441$\pm$0.065 & 2.27$\substack{+0.42 \\ -0.31}$ & 376$\substack{+73 \\ -53}$ & 6.57 & 0.04 & 0.18 & 5.78 & g \\ WR7 & WN4b & HD 56925 & 07 18 29.13 & $-$13 13 01.5 & 0.221$\pm$0.051 & 4.23$\substack{+1.08 \\ -0.74}$ & 11$\substack{+2 \\ -1}$ & 11.17 & 0.73 & 0.00 & 5.33 & g \\ WR8 & WN7o/CE & HD 62910 & 07 44 58.22 & $-$31 54 29.5 & 0.263$\pm$0.038 & 3.74$\substack{+0.63 \\ -0.48}$ & 226$\substack{+41 \\ -31}$ & 9.92 & 0.84 & 0.00 & & g \\ WR9 & WC5+O7 & HD 63099 & 07 45 50.40 & $-$34 19 48.5 & 0.212$\pm$0.035 & 4.57$\substack{+0.84 \\ -0.63}$ & 256$\substack{+70 \\ -52}$ & 10.14 & 1.30 & 0.00 & & g \\ WR10 & WN5h & HD 65865 & 07 59 46.24 & $-$28 44 03.0 & 0.162$\pm$0.040 & 5.46$\substack{+1.25 \\ -0.91}$ & 75$\substack{+12 \\ -9}$ & 10.94 & 0.60 & 0.09 &5.78 & g \\ WR11 & WC8+O7.5III-V & $\gamma$ Vel & 08 09 31.96 & $-$47 20 11.8 & 2.920$\pm$0.300 & 0.34$\substack{+0.04 \\ -0.03}$ & 24$\substack{+5 \\ -4}$ & 1.70 & & & & \\ WR12 & WN8h & Ve5-5 & 08 44 47.29 & $-$45 58 55.4 & 0.154$\pm$0.037 & 5.71$\substack{+1.24 \\ -0.92}$ & 175$\substack{+42 \\ -31}$ & 10.36 & 1.15 & 0.00 & 5.93 & g \\ \hline \end{tabular} \label{table:final} Columns are: (1) WR Number, (2) Spectral type, (3) Alternative name, (4) \textit{Gaia} Right Ascension, (5) \textit{Gaia} Declination, (6) Zero point corrected parallax $\omega$ and inflated error $\sigma_{\omega}$, (7) Distance from the Sun, (8) Distance from the midplane, (9) \textit{Gaia} G band apparent magnitude, (10) \textit{Gaia} colour index, (11) Astrometric excess noise, (12) Stellar luminosity, (13) Error flags, a = astrometric excess noise $>$ 1 mas; e = large parallax uncertainty $|\sigma_\omega/\omega|$>1; n = negative parallax $\omega$<0, g = good astrometry. 
\end{table*} \begin{table*} \renewcommand{\arraystretch}{1.5} \caption{Intrinsic colours of WR stars from PoWR models (\citealt{2004A&A...427..697H} and \citealt{2015A&A...579A..75T} for WN, \citealt{2012A&A...540A.144S} for WC) for $\mathrm{(b-v)_0^{WR}}$ and monochromatic (J$-$K)$^{\rm mono}_{0}$ and (H$-$K)$^{\rm mono}_{0}$, and \citet{2015MNRAS.447.2322R} for (J$-\mathrm{K_s}$)$_0$ and (H$-\mathrm{K_s}$)$_0$.} \begin{tabular}{ l@{\hspace{2mm}} l@{\hspace{3mm}} c@{\hspace{3mm}} c@{\hspace{3mm}} c@{\hspace{3mm}} c@{\hspace{3mm}} c@{\hspace{3mm}} c@{\hspace{3mm}} c} \hline WR subtype & PoWR model & $\log(T/\mathrm{K})$ & $\log(R_t)$ & $(\mathrm{b-v})_0^{WR}$ & (J$-\mathrm{K_s}$)$_0$ & (H$-\mathrm{K_s}$)$_0$ & (J$-$K)$^{\rm mono}_{0}$ & (H$-$K)$^{\rm mono}_{0}$\\ \hline WN3-4 & WNE 12-11 & 4.95 & 1.0 & $-$0.32$\pm$0.1 & $-$0.11$\pm$0.1 & $-$0.03$\pm$0.1 & \phantom{$-$}0.24 & \phantom{$-$}0.16 \\ WN4b-7b & WNE 12-18 & 4.95 & 0.3 & $-$0.18$\pm$0.1 & \phantom{$-$}0.37$\pm$0.1 & \phantom{$-$}0.27$\pm$0.1 & \phantom{$-$}0.63 & \phantom{$-$}0.40 \\ WN5-6 & WNE 08-11 & 4.75 & 1.0 & $-$0.28$\pm$0.1 & \phantom{$-$}0.18$\pm$0.1 & \phantom{$-$}0.16$\pm$0.1 & \phantom{$-$}0.30 & \phantom{$-$}0.20 \\ WN7-9 & WNL 06-13 & 4.65 & 0.8 & $-$0.15$\pm$0.1 & \phantom{$-$}0.13$\pm$0.1 & \phantom{$-$}0.11$\pm$0.1 & \phantom{$-$}0.30 & \phantom{$-$}0.18 \\ WN6ha & WNL 07-07 & 4.70 & 1.4 & $-$0.33$\pm$0.1 & $-$0.015$\pm$0.1 & \phantom{$-$}0.03$\pm$0.1 & \phantom{$-$}0.00 & \phantom{$-$}0.00 \\ WN7ha & WNL 07-07 & 4.70 & 1.4 & $-$0.33$\pm$0.1 & $-$0.04$\pm$0.1 & \phantom{$-$}0.01$\pm$0.1 & \phantom{$-$}0.00 & \phantom{$-$}0.00 \\ WN8-9ha & WNL 05-07 & 4.60 & 1.4 & $-$0.32$\pm$0.1 & $-$0.04$\pm$0.1 & \phantom{$-$}0.01$\pm$0.1 & \phantom{$-$}0.01 & \phantom{$-$}0.00 \\ Of/WN & WNL 07-06 & 4.65 & 1.5 & $-$0.34$\pm$0.1 & $-$0.11$\pm$0.1 & $-$0.07$\pm$0.1 & $-$0.04 & $-$0.03 \\ WO2-3 & WC 17-12 & 5.20 & 0.9 & $-$0.37$\pm$0.1 & \phantom{$-$}0.11$\pm$0.1 & \phantom{$-$}0.00$\pm$0.1 & \phantom{$-$}0.20 & \phantom{$-$}0.11 \\ WC4-7 & WC 11-16 & 4.90 & 0.5 & $-$0.20$\pm$0.2 & \phantom{$-$}0.62$\pm$0.1 & \phantom{$-$}0.58$\pm$0.2 & \phantom{$-$}0.54 & \phantom{$-$}0.33 \\ WC8 & WC 09-14 & 4.80 & 0.7 & $-$0.37$\pm$0.1 & \phantom{$-$}0.43$\pm$0.1 & \phantom{$-$}0.38$\pm$0.1 & \phantom{$-$}0.38 & \phantom{$-$}0.21 \\ WC9 & WC 06-12 & 4.65 & 0.9 & $-$0.32$\pm$0.1 & \phantom{$-$}0.23$\pm$0.1 & \phantom{$-$}0.26$\pm$0.1 & \phantom{$-$}0.12 & \phantom{$-$}0.09 \\ WN/WC & & & & $-$0.23$\pm$0.1 & \phantom{$-$}0.37$\pm$0.1 & \phantom{$-$}0.27$\pm$0.1 & & \\ \hline \end{tabular} \label{table:powr_types} \end{table*} \subsection{Flags from \textit{Gaia}} \label{ssec:gflag} The validity of the distances is determined by the quality of the parallax data. A significantly negative parallax (less than the zero point) will result in a smaller likelihood than a positive parallax and will increase the proportional size of the prior. Negative parallaxes can also indicate unreliable \textit{Gaia} data. Similarly, a large error (on the scale of the data itself) will also result in a much smaller likelihood and a greater influence from the prior. These issues mainly arise from badly fitted parallax solutions, which can be identified using parameters in the \textit{Gaia} catalogue. We chose astrometric excess noise (the observational noise which needs to be added to the data to match the solution residuals) as this identifier. Large values can indicate that a solution does not fit the data well.
We chose to use this parameter, as it was the quality indicator with the clearest cut-off and acted as a good benchmark for removing bad values when calculating absolute magnitudes. The excess noise can also account for modelling errors, which are not included in the observational noise. Significant astrometric excess noise is mainly applied to fainter objects, in particular those with brighter neighbours. The \textit{Gaia} documentation \citep{2018gdr2.reptE..14H} states that high excess noise will be present in early releases and suggests that users apply their own cut-offs to determine erroneous values. The ideal excess noise for results with distances is zero, which indicates a good fit. However, excluding an outlier with an excess noise of 18 mas, the average value for our sample is 0.71 mas and the standard deviation is 0.98 mas. Therefore, we flag all results with noise above 1 mas. Combined, our three criteria for flagging \textit{Gaia} data quality are: {\fontfamily{qcr}\selectfont a} = {\fontfamily{qcr}\selectfont astrometric\_excess\_noise} $>1$ mas; {\fontfamily{qcr}\selectfont e} = $|\sigma_\omega/\omega|>1$; {\fontfamily{qcr}\selectfont n} = $\omega<0$. Results without any of these issues are given the 'g' flag. These flags are applied to the distances in Table ~\ref{table:final}. We apply the flags to the zero point corrected parallaxes and the increased errors, as these are the values used to calculate distance (a sketch of this flagging logic is given below). A star can be flagged if it satisfies one or more of the criteria. If all three are applied, then $\sim$37\% of the WR stars with parallaxes have an a, e or n flag. 59\% of the flagged results had more than one of these flags. This reflects the way such errors are intertwined, where a poor solution fit due to noisy observations can lead to a large astrometric excess noise, sizeable errors and negative parallaxes all at once. The relations between the flags are shown in Figure ~\ref{fig:err_plt}. In general, WR stars with large astrometric excess noise are nominally located closer than 4 kpc, and in many cases closer than 2 kpc. This latter group further breaks down into brighter objects at around $G$=11 mag (WR146 and WR115) and $G$=15 mag (including WR77p) and fainter objects with $G$>17 mag. The fainter objects may have high excess noise because of astrometric modelling difficulties, caused by issues like binarity or a badly determined spacecraft attitude during a given time interval (\citealt{2018gdr2.reptE..14H}, \citealt{2018A&A...616A...2L}). These problems would make it difficult for the \textit{Gaia} AGIS algorithm to reliably extract astrometric parameters. The brighter objects may have high excess noise for a variety of reasons, such as issues with instrument calibration \citep{2018A&A...616A...2L}. High astrometric excess noise can also occur if the stars are in binaries (WR146) or potential binaries (WR115). The other two flags show a less clear breakdown. Negative parallaxes can occur at all magnitudes and distances, but have non-zero excess noise. Only a small fraction of results with large error ratios have zero astrometric excess noise, and none at all occur below $G$=12 mag. Both flags become increasingly common beyond $G$=15 mag and only a few points beyond $G$=18 mag are not flagged. This is expected, given that highly reddened objects at any distance are more difficult for \textit{Gaia} to observe. The flags applied to the data are listed in Table ~\ref{table:final}. Users should note that distances to these flagged stars may be suspect and should account for this in their analysis.
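For reference, the flagging logic reduces to a few lines; this is a sketch assuming a pandas DataFrame holding the zero point corrected parallaxes (in mas), inflated errors and astrometric excess noise, with illustrative column names:
\begin{verbatim}
import pandas as pd

def quality_flag(row):
    flags = ''
    if row['astrometric_excess_noise'] > 1.0:               # a: noisy solution
        flags += 'a'
    if abs(row['parallax_error'] / row['parallax']) > 1.0:  # e: large error
        flags += 'e'
    if row['parallax'] < 0.0:                               # n: negative parallax
        flags += 'n'
    return flags if flags else 'g'                          # g: good astrometry

# wr['flags'] = wr.apply(quality_flag, axis=1)
\end{verbatim}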
\section{Absolute magnitudes} \label{sec:absmag} In addition to the \textit{Gaia} data quality flags, we checked the validity of the distance results by calculating absolute magnitudes in the $\mathrm{v^{WR}}$ band \citep{1968MNRAS.140..409S}\footnote{A 'WR' superscript is added to distinguish the Smith $v$ filter from the standard Johnson V-band filter}, designed to avoid WR emission lines, and in the $\mathrm{K_s}$ band. As part of this, we calculated the extinction using intrinsic colours and an adopted extinction law. The result was then combined with the distances and apparent magnitudes to obtain absolute magnitudes. \subsection{Intrinsic colours for single stars} \label{ssec:intcol} Intrinsic optical colours were taken from PoWR grids (\citealt{2004A&A...427..697H} and \citealt{2015A&A...579A..75T} for WN, \citealt{2012A&A...540A.144S} for WC), for single stars in the $\mathrm{v^{WR}}$ band (see Table ~\ref{table:powr_types}). The exception is for WN/WC stars, as the value $\mathrm{(b-v)_{0}^{WR}} = -$0.23 is averaged from the $E(\mathrm{b-v)^{WR}}$ values of \citet{2012A&A...540A.144S} and the $\mathrm{b^{WR}}$ and $\mathrm{v^{WR}}$ apparent magnitudes of each star. Intrinsic colours for the J, H and $\mathrm{K_s}$ bands are taken from \citet{2015MNRAS.447.2322R}, with monochromatic near-IR PoWR synthetic colours also included. \subsection{Intrinsic colours for binary systems} \label{ssec:bin} \begin{figure} \centering \includegraphics[width=\linewidth]{wn_line_str} \caption{WN stars with He{\scriptsize{II}} 4686\ang\ equivalent widths from \citet{1989ApJ...337..251C} and \citet{1996MNRAS.281..163S}. The lines show the equivalent width for a typical single WN star at each subtype. The shaded regions should contain only single stars.} \label{fig:wn_lines} \end{figure} \begin{figure} \includegraphics[width=1.1\linewidth]{wc_line_str} \caption{Equivalent widths of (a) C{\scriptsize{IV}} 5808\ang\ and (b) C{\scriptsize{III}} 5696\ang\ from \citet{1986ApJ...300..379T}, \citet{1989ApJ...337..251C}, \citet{1990ApJ...358..229S}, \citet{1991ApJ...378..302C}, \citet{2009PASP..121..591M} and \citet{2014MNRAS.445.1663Z} showing the relation between line strengths and spectral types for both single and binary stars. The dotted line shows the equivalent width for a typical single WC star at each subtype. The shaded region is the one sigma standard deviation and should contain only single stars.} \label{fig:wc_lines} \end{figure} 16\% (61 stars) of our WR sample were classified as binaries. For these systems, we calculated absolute magnitudes in the same manner as for single stars, but included the companion in the intrinsic colour by measuring the dilution of the strongest optical emission lines. These are He{\scriptsize{II}} 4686\ang\ for WN stars, and C{\scriptsize{IV}} 5808\ang\ and C{\scriptsize{III}} 5696\ang\ for WC stars. We fit the relation of the equivalent width to subtype for single stars (see Figs~\ref{fig:wn_lines}--\ref{fig:wc_lines}), to obtain the equivalent width of a 'typical' single star with a particular subtype. For WC stars, we used C{\scriptsize{IV}} 5808\ang\ to obtain the typical equivalent width of a single WR star with subtype 4, 5 or 6. In subtypes 8 and 9, the dominant line is instead C{\scriptsize{III}} 5696\ang. The fractions for WC7, which can contain either line, were the average dilution of the two.
The fractional contribution of the WR star's visible light ($Fc_{WRv}$) to the binary was then found using:
\begin{equation}
\label{eq:wr_contrib}
Fc_{WRv}=\frac{EW_{b}}{EW_{s}}
\end{equation}
where $EW_{b}$ is the WR equivalent width measured in the binary and $EW_{s}$ is the equivalent width of a single star of the same subtype. We summed the intrinsic colour of each component, weighted by its contribution fraction, to obtain the colour for the system. WR stars contribute a higher fraction of the continuum flux to the binary at near-IR wavelengths than in the visual (see Table~\ref{table:sed_ir}). To illustrate this, we compare template spectra of WR stars of different subtypes to an O star from a Kurucz ATLAS model ($T_{\rm eff}$ = 37500 K and $\log g$ = 5). Each template spectrum is set to the same V-band continuum flux. The fraction of light contributed by the template O star at IR wavelengths can then be calculated. We use this to obtain the intrinsic colours of the binary in the same way as the optical wavelength colours.
\begin{table}
\caption{The relative continuum flux contribution of WR stars to O-type companions at near-IR wavelengths for various subtypes, adopting a Kurucz ATLAS O star model with $T_{\rm eff}$ = 37500 K and $\log g$ = 5 for the companion, and assuming each contributes 50\% of the V-band continuum flux.}
\begin{tabular}{|p{.13\textwidth}|p{.05\textwidth}|p{.05\textwidth}|p{.05\textwidth}|p{.05\textwidth}|}
\hline
WR subtypes & \multicolumn{4}{c|}{$F_{WR}/F_{O}$} \\
 & V & J & H & K \\
\hline
WNE-w & 1 & 1.33 & 1.56 & 1.94 \\
WNE-s & 1 & 2.45 & 3.35 & 4.56 \\
WN6ha & 1 & 1.22 & 1.38 & 1.63 \\
WN8 & 1 & 2.03 & 2.70 & 3.55 \\
WN9 & 1 & 1.33 & 1.50 & 1.78 \\
Of/WN & 1 & 1.17 & 1.22 & 1.33 \\
WC4-5 & 1 & 2.03 & 2.57 & 3.55 \\
WC6-7 & 1 & 1.94 & 2.45 & 3.35 \\
WC8 & 1 & 1.86 & 2.23 & 3.00 \\
WC9 & 1 & 1.70 & 2.13 & 2.57 \\
\hline
\end{tabular}
\label{table:sed_ir}
\end{table}
For WR11, we used the light ratio derived in \citet{2000A&A...358..187D} and for WR104, we used the ratio from \citet{2000MNRAS.314...23W}. For WR30a, we estimated that the WR star contributes 10\% of the light, based on the emission line strength of the similar WO4 star BAT99-123 (Br93, Sand 2). For WR64-4, we used the He{\scriptsize{II}} 1.16$\mathrm{\mu m}$, 1.69$\mathrm{\mu m}$ and 2.19$\mathrm{\mu m}$ IR lines to find contribution ratios, as no optical data were available. For WR35a, a reverse approach was followed: the absolute magnitude of the WR component was calculated from the absolute magnitude of the system, assuming an absolute V magnitude for the O8.5V companion (from \citealt{2006A&A...457..637M}).
\subsection{Optical and IR extinctions}
\label{ssec:extinctions}
We calculate dust extinctions using the intrinsic colours (Table~\ref{table:powr_types}) and apparent magnitudes in the $\mathrm{v^{WR}}$ band taken from the Galactic Wolf-Rayet catalogue, which was primarily compiled from \citet{2001NewAR..45..135V} and \citet{1988AJ.....96.1076T}. J, H and $\mathrm{K_s}$ band magnitudes were primarily sourced from the 2MASS catalogue. The $\mathrm{K_s}$ band extinction, $A_{Ks}$, was calculated using the standard extinction law $A_{Ks} = 0.107 A\substack{WR \\ v}$ (obtained from $A_{Ks} = 0.118 A_V$ from \citealt{1989ApJ...345..245C} and $A\substack{WR \\ v} = 1.1 A_V$ from \citealt{1982IAUS...99...57T}), if values of $A\substack{WR \\ v}$ were available.
Otherwise, $A_{Ks}$ was calculated using the relations of $A_H$ and $A_J$ to $A_{Ks}$ (with parameters from \citealt{2011ApJ...737...73F} towards the Galactic Centre and \citealt{2009MNRAS.400..731S} elsewhere, as in \citealt{2015MNRAS.447.2322R}). For WR25, known to have an anomalous extinction curve, we calculated $A\substack{WR \\ v}$ using $R\substack{WR \\ v}=6.2$ from \citet{1995A&A...293..427C}. Since dust extinction preferentially attenuates blue wavelengths, the \textit{Gaia} $G_{BP}-G_{RP}$ colour can be used as a proxy for extinction. Some stars had unusually high $\mathrm{K_s}$ band extinctions (possibly due to incorrect photometry), which led to erroneous absolute magnitudes. Figure~\ref{fig:A_bp_rp}(a) shows the relationship between $(G_{BP}-G_{RP})$ and $A_{Ks}$, while Figure~\ref{fig:A_bp_rp}(b) compares $(G_{BP}-G_{RP})$ and $A\substack{WR \\ v}$. A 5$\sigma$ cut-off (grey dashed lines) from the line of best fit (black solid line) was used to exclude incorrect extinctions. Some values of $A\substack{WR \\ v}$ were also excluded for being outliers, indicating an issue with either the photometry or the $G_{BP}-G_{RP}$ magnitudes.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{A_vs_bp_rp.pdf}
\caption{\textit{Gaia} $G_{BP}-G_{RP}$ colours for Galactic WR stars compared to (a) $\mathrm{K_s}$-band and (b) $\mathrm{v^{WR}}$ band extinctions. In (a), the solid black line presents the best fit to data with $G_{BP}-G_{RP}$<3, while in (b), the solid line is a best fit to all data. The grey dashed lines are the 5$\sigma$ bounds, based on the uncertainties of the fit parameters. The solid blue line is also the best fit to the data, but weighted so that it passes through $A^{\rm WR}_{v}=A_{Ks}$=0 at $(G_{BP}-G_{RP})_0=-0.43$, as expected for a generic B0\,V star.}
\label{fig:A_bp_rp}
\end{figure}
To obtain meaningful results at low $G_{BP}-G_{RP}$ (where we have no observations), we ensure that the extinction is zero at the intrinsic colour, $(G_{BP}-G_{RP})_0$. We obtain $(G_{BP}-G_{RP})_0$ for a generic blue energy distribution, namely a B0\,V spectral type, with $V-I$=$-$0.44 in the Johnson filter \citep{2001ApJ...558..309D}. We transform this relation to the Cousins system \citep{1979PASP...91..589B} and finally to $(G_{BP}-G_{RP})_0=-0.43$, using the $V-I$ to $G_{BP}-G_{RP}$ calibration in \citet{2018A&A...616A...4E}. Carrasco \& Jordi (priv. comm.), using the methodology from \citet{2010A&A...523A..48J}, provide the transformation from $A_V$ to $A_G$ by artificially reddening template PoWR WR spectra with different extinctions (from $A_V\sim0.5$ to 36 mag). Synthetic photometry for the \textit{Gaia} passbands \citep{2018A&A...619A.180M} was then obtained at each $A_V$. This allowed for the calculation of $E(G_{BP}-G_{RP})$ and $A_G$. The results from Carrasco \& Jordi also allow us to find the intrinsic colour $(G_{BP}-G_{RP})_0$ for each WR subtype. The generic B0\,V model we have used to calculate $(G_{BP}-G_{RP})_0$ is within the uncertainty of the average WR value $(G_{BP}-G_{RP})_0=-0.35\pm0.14$ of the subtypes in Table~\ref{table:intrinsic_col}.
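To make the optical-to-infrared extinction chain concrete, a minimal sketch is given below. It combines the colour-excess relation $A\substack{WR \\ v} = 4.12\,E(\mathrm{b-v})^{WR}$ \citep{1982IAUS...99...57T} with the $A_{Ks} = 0.107 A\substack{WR \\ v}$ scaling quoted above; the example star and its colours are hypothetical.
\begin{verbatim}
def av_wr(bv_obs, bv_intrinsic):
    # Narrowband v(WR) extinction from the colour excess,
    # A_v(WR) = 4.12 * E(b-v)(WR) (Turner 1982).
    return 4.12 * (bv_obs - bv_intrinsic)

def aks_from_av(av):
    # Ks-band extinction via the scaling above,
    # A_Ks = 0.107 * A_v(WR).
    return 0.107 * av

# Hypothetical WN star: observed (b-v)^WR = 0.51, intrinsic -0.30.
a_v = av_wr(0.51, -0.30)
print(a_v, aks_from_av(a_v))   # ~3.34 mag and ~0.36 mag
\end{verbatim}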
\begin{table}
\caption{Intrinsic $(G_{BP}-G_{RP})_0$ colours and conversion relations from the narrowband $A\substack{WR \\ v}$ to the \textit{Gaia} $A_G$ extinction for different spectral types, using results from Carrasco \& Jordi (valid for $A_v < 12$).}
\begin{tabular}{ccc}
\hline
WR class & $(G_{BP}-G_{RP})_0$ & $A\substack{WR \\ v}$ to $A_G$ \\
\hline
WNE-w & $-$0.421 & $-0.0169A_v^{2}+0.894A_v$ \\
WNE-s & $-$0.136 & $-0.0159A_v^{2}+0.871A_v$ \\
WN6ha & $-$0.406 & $-0.0166A_v^{2}+0.891A_v$ \\
WN8 & $-$0.163 & $-0.0157A_v^{2}+0.868A_v$ \\
WN9 & $-$0.359 & $-0.0163A_v^{2}+0.886A_v$ \\
WC5 & $-$0.619 & $-0.0178A_v^{2}+0.933A_v$ \\
WC7 & $-$0.479 & $-0.0182A_v^{2}+0.921A_v$ \\
WC8 & $-$0.360 & $-0.0178A_v^{2}+0.901A_v$ \\
WC9 & $-$0.159 & $-0.0156A_v^{2}+0.870A_v$ \\
B0V SED & $-$0.430 & \\
\hline
\end{tabular}
\label{table:intrinsic_col}
\end{table}
\renewcommand{\arraystretch}{1.5}
\begin{table}
\caption{Average absolute magnitudes for Galactic Wolf-Rayet subtypes in the $\mathrm{v^{WR}}$ and $\mathrm{K_s}$ band filters. In the $\mathrm{v^{WR}}$ band, the WC9d sample has been combined with the non-dusty WC9 stars.}
\begin{tabular}{|p{.1\textwidth}|p{.08\textwidth}|p{.05\textwidth}|p{.08\textwidth}|p{.05\textwidth}|}
\hline
WR subtypes & $M_{v^{WR}}$ (mag) & N($\mathrm{v^{WR}}$) & $M_{Ks}$ (mag) & N($\mathrm{K_s}$) \\
\hline
WN3-4 & $-3.6\pm0.5$ & 6 & $-3.1\pm0.6$ & 7 \\
WN5-6 & $-4.3\pm0.6$ & 22 & $-4.0\pm0.5$ & 33 \\
WN6-7ha & $-6.5\pm0.3$ & 3 & $-6.2\pm0.3$ & 5 \\
WN4-6b & $-4.5\pm0.6$ & 13 & $-4.6\pm0.7$ & 15 \\
WN7 & $-4.6\pm0.6$ & 10 & $-4.8\pm0.3$ & 15 \\
WN8 & $-5.7\pm0.6$ & 8 & $-6.0\pm0.8$ & 13 \\
WN8-9ha & $-7.0\pm0.4$ & 2 & $-6.8\pm0.4$ & 2 \\
WN9 & $-6.0\pm0.8$ & 2 & $-5.7\pm0.7$ & 6 \\
Of/WN & $-5.8\pm0.1$ & 2 & $-6.1\pm0.1$ & 3 \\
WO2-4 & $-3.1\pm1.4$ & 3 & $-2.6\pm1.0$ & 4 \\
WC4-5 & $-4.1\pm0.6$ & 11 & $-4.3\pm0.4$ & 11 \\
WC6-7 & $-3.9\pm0.4$ & 19 & $-4.9\pm0.4$ & 22 \\
WC8 & $-4.5\pm0.9$ & 6 & $-5.3\pm0.5$ & 7 \\
WC9 & $-4.6\pm0.4$ & 12 & $-4.8\pm0.5$ & 9 \\
WC9d & & & $-6.6\pm0.8$ & 13 \\
\hline
\end{tabular}
\label{table:avg_absmag}
\end{table}
For the $\mathrm{K_s}$ band, we obtain the $G_{BP}-G_{RP}$ to $A_{Ks}$ relationship using data with $G_{BP}-G_{RP}<3$. This is the regime in which $A_{Ks}$ follows the extinction law, as these stars are also observed in the $\mathrm{v^{WR}}$ band. At higher $G_{BP}-G_{RP}$, the calculated extinction begins to deviate from this relationship. The empirical fit is shown in blue in Figure~\ref{fig:A_bp_rp}(a) and has the form:
\begin{equation}
\label{eq:K_vs_bp_rp}
A = X(G_{BP}-G_{RP})+Y
\end{equation}
where $G_{BP}-G_{RP}$ is the value from the \textit{Gaia} catalogue, $X$=0.2250 and $Y$=0.0961. The $\mathrm{v^{WR}}$ band, shown in Figure~\ref{fig:A_bp_rp}(b), was much more closely grouped around the line of best fit, with $X$=2.217 and $Y$=0.9436. The gradient is 9.85 times the gradient for the $\mathrm{K_s}$ band. This is slightly larger than the $A_{Ks} = A\substack{WR \\ v}/9.35$ extinction law used to calculate values of $A_{Ks}$ from $A\substack{WR \\ v}$. The deviation reflects the fact that some values of $A_{Ks}$ were not calculated using that extinction law.
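The empirical proxy relation and the $A\substack{WR \\ v}$-to-$A_G$ conversions can be applied directly; the sketch below evaluates Equation~\ref{eq:K_vs_bp_rp} with the fitted coefficients quoted above, together with the WNE-w polynomial from Table~\ref{table:intrinsic_col}. The example colour and extinction values are arbitrary.
\begin{verbatim}
def extinction_from_colour(bp_rp, band="Ks"):
    # Eq. (K_vs_bp_rp): A = X * (G_BP - G_RP) + Y, using the
    # fitted coefficients for each band quoted in the text.
    X, Y = {"Ks": (0.2250, 0.0961), "vWR": (2.217, 0.9436)}[band]
    return X * bp_rp + Y

def ag_from_av_wne_w(av):
    # A_G from A_v(WR) for a WNE-w star (see the table of
    # conversion relations; valid for A_v < 12).
    return -0.0169 * av**2 + 0.894 * av

print(extinction_from_colour(2.0, "Ks"))   # ~0.55 mag
print(ag_from_av_wne_w(3.0))               # ~2.53 mag
\end{verbatim}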
\begin{figure*}
\begin{adjustwidth}{-2.7cm}{2.7cm}
\includegraphics[width=1.3\linewidth]{BP_RP_vs_Gabs.pdf}
\end{adjustwidth}
\caption{(a) \textit{Gaia} DR2 colour magnitude diagram for Galactic WR stars plus O stars from GOSC (v4.1, \citealt{2013msao.confE.198M}). Absolute magnitudes are calculated using our inferred distance moduli $\mu$ and $A_G$ (converted from $A^{\rm WR}_{v}$ using the relation from Carrasco \& Jordi). The red star is the WR component of $\gamma$ Velorum, the only WR star with a trigonometric parallax from \textit{Hipparcos}; (b) \textit{Gaia} DR2 colour magnitude diagram for Galactic WR stars plus 70,000 stars from DR2, satisfying the selection criteria from Section 2.1 of \citet{2018A&A...616A..10G}.}
\label{fig:cmd}
\end{figure*}
We can also use the synthetic photometry from Carrasco \& Jordi to calculate the conversion relationship from $A\substack{WR \\ v}$ to $A_G$ (also shown in Table~\ref{table:intrinsic_col}), by converting $A_V$ in their relationship to $A\substack{WR \\ v}$. This enables us to calculate the absolute \textit{Gaia} G magnitude and present the \textit{Gaia} colour magnitude diagram (CMD) in Figure~\ref{fig:cmd}, for the most reliable WR results. Fig.~\ref{fig:cmd}(a) presents a CMD for Galactic WR stars plus visually bright O stars from v4.1 of the Galactic O Star Catalogue (GOSC, \citealt{2013msao.confE.198M}), while Fig.~\ref{fig:cmd}(b) compares the CMD of WR stars to 70,000 DR2 stars from \citet{2018A&A...616A..10G}. Two exceptionally bright stars are the extreme hypergiants He\,3-519 (WR31a) and AG Car (WR31b), which exhibit very late WN characteristics at extreme visual minima \citep{1994A&A...281..833S}.
\subsection{Absolute magnitudes}
\label{ssec:absmag}
\begin{figure*}
\begin{minipage}[c]{0.8\linewidth}
\includegraphics[width=\linewidth]{MK_plotted.pdf}
\end{minipage}\hfill
\begin{minipage}[c]{0.2\linewidth}
\caption{Absolute magnitudes in the $\mathrm{K_s}$ band. Red crosses are individual WR star results and the red circle is the average for each spectral subtype (with the sample standard deviation of the data as the uncertainties). Green squares are the comparative data from \citet{2015MNRAS.447.2322R}.}
\label{fig:MK}
\end{minipage}
\end{figure*}
\begin{figure*}
\begin{minipage}[c]{0.8\linewidth}
\includegraphics[width=\linewidth]{Mv_plotted.pdf}
\end{minipage}\hfill
\begin{minipage}[c]{0.2\linewidth}
\caption{Absolute magnitudes in the $\mathrm{v^{WR}}$ band. Red crosses are individual WR star results and the red circle is the average for each spectral subtype (with the sample standard deviation of the data as the uncertainties). Green squares are the comparative data from \citet{2001NewAR..45..135V}. Results from the LMC (\citealt{2014A&A...565A..27H}, \citealt{2019A&A...627A.151S} and \citealt{2002A&A...392..653C}) are shown in blue, with crosses for individual stars and diamonds for the average of each subtype. LMC WN5-6 stars include very luminous H-rich main sequence WN5--6h stars. Results for WO were calculated using \citet{2015A&A...581A.110T} and \citet{1988AJ.....96.1076T}.}
\label{fig:Mv}
\end{minipage}
\end{figure*}
We used the extinctions, distances and apparent magnitudes to calculate the absolute magnitudes for stars that have reliable extinctions (within the 5$\sigma$ bounds of Figure~\ref{fig:A_bp_rp}). Repeating the calculation using a Monte Carlo selection (bootstrapping with replacement) from the distributions of the three parameters produced a binned histogram of absolute magnitude against frequency.
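This propagation can be sketched in a few lines; the example below assumes, purely for illustration, Gaussian uncertainties on all three inputs, and uses the WR1 values from Table~\ref{table:final_absmagv} with an invented distance modulus error of 0.3 mag.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(42)

def absmag_samples(m, m_err, mu, mu_err, A, A_err, n=100000):
    # Monte Carlo draws of M = m - mu - A, assuming Gaussian
    # scatter on apparent magnitude, distance modulus and
    # extinction (an illustrative simplification).
    m_s  = rng.normal(m, m_err, n)
    mu_s = rng.normal(mu, mu_err, n)
    A_s  = rng.normal(A, A_err, n)
    return m_s - mu_s - A_s

# WR1: v = 10.51 +/- 0.1, mu = 12.49, A_v(WR) = 2.84 +/- 0.71;
# the 0.3 mag error on mu is a placeholder.
M = absmag_samples(10.51, 0.1, 12.49, 0.3, 2.84, 0.71)
hist, edges = np.histogram(M, bins=100)
mode = 0.5 * (edges[np.argmax(hist)] + edges[np.argmax(hist) + 1])
lo, hi = np.percentile(M, [16, 84])
print(f"M = {mode:.2f} +{hi - mode:.2f} / -{mode - lo:.2f}")
\end{verbatim}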
This histogram acted as a proxy for the probability distribution of each absolute magnitude. A Gaussian or Weibull distribution was fit to the binned data to find the most likely absolute magnitude and uncertainties (more details are available in Appendix F of the online material). For binaries, the absolute magnitudes of Wolf-Rayet components were separated from the total system magnitude. A multi-step process of sigma clipping allowed us to find reliable absolute magnitudes for all WR stars. First, stars with high astrometric excess noise, or unrealistically faint absolute magnitudes ($\geq-$1 mag), were removed from the sample. We then calculated the averages of the remaining stars in each subtype class. Stars with unusually high or low absolute magnitudes (defined as those more than one sample standard deviation from the mean) were then cut from the sample. This cut-off provided a good balance between excluding clearly incorrect values and including valid ones across all subtypes. The remaining sample contained only the most reliable absolute magnitude results in each subclass and was used to calculate the averages presented in Table~\ref{table:avg_absmag}. LBVs, aside from He~3-519 (WR31a) and AG Car (WR31b), were removed due to variability. WR20-2, WR42-1, WR43-2 and WR43-3 were also excluded from the averages, owing to uncertain subtypes. We obtain $\mathrm{K_s}$ band results for dusty subtypes (WC8d and WC9d) by converting $A\substack{WR \\ v}$ to $A_{Ks}$, using the standard extinction law. This method prevents the IR dust emission from contaminating the extinction calculation. The absolute magnitudes could then be calculated for each subtype and in each filter, with the standard deviation providing upper and lower bounds on the typical absolute magnitudes. The WC9d stars were combined with the non-dusty WC9 stars in the $\mathrm{v^{WR}}$ band, but not in the $\mathrm{K_s}$ band, as their IR excess renders them brighter than dust-free WR stars. As there were only three WC8d stars (WR48a, WR53 and WR113) in the final sample, these were grouped with the non-dusty WC8 stars and only WR113 was used to calculate the final absolute $\mathrm{K_s}$ magnitude in Table~\ref{table:avg_absmag}. Excluding WR113 from the average, we obtain $M_{Ks}=-5.3$~mag for WC8 stars, the same result as Table~\ref{table:avg_absmag}.
\begin{table*}
\renewcommand{\arraystretch}{1.5}
\caption{Absolute $\mathrm{K_s}$-band magnitudes for Galactic WR stars. The full table is available in the online supplementary material.}
\begin{tabular}{|p{.07\textwidth}|p{.12\textwidth}|p{.05\textwidth}|p{.05\textwidth}|p{.05\textwidth}|p{.05\textwidth}|p{.08\textwidth}|p{.05\textwidth}|p{.08\textwidth}|p{.08\textwidth}|p{.05\textwidth}|}
\hline
WR Number & Spectral type & $\mathrm{K_s}$ (mag) & $\mu$ (mag) & J$-\mathrm{K_s}$ (mag) & H$-\mathrm{K_s}$ (mag) & $A_{Ks}$ (mag) & $M\substack{\mathrm{Sys} \\ \mathrm{K_s}}$ (mag) & F$\substack{\mathrm{WR} \\ \mathrm{K_s}}$/ F$\substack{\mathrm{Sys} \\ \mathrm{K_s}}$ & $M\substack{\mathrm{WR} \\ \mathrm{K_s}}$ (mag) & Flags \\
\hline
WR1 & WN4b & 7.48 & 12.49 & 0.73 & $0.38$ & 0.30$\pm$0.08 & & & $-5.4\substack{+0.3 \\ -0.3}$ & b: \\
WR3 & WN3ha & 10.01 & 12.31 & 0.23 & $0.12$ & 0.11$\pm$0.08 & & & $-2.5\substack{+0.3 \\ -0.4}$ & b: \\
WR4 & WC5+?
& 7.88 & 12.87 & 0.87 & $0.69$ & 0.18$\pm$0.11 & & & $-5.2\substack{+0.4 \\ -0.5}$ & b \\ WR5 & WC6 & 7.65 & 12.36 & 0.98 & $0.69$ & 0.30$\pm$0.11 & & & $-5.1\substack{+0.3 \\ -0.3}$ & g \\ WR6 & WN4b & 5.89 & 11.78 & 0.46 & $0.34$ & 0.05$\pm$0.08 & & & $-6.0\substack{+0.3 \\ -0.4}$ & b \\ WR7 & WN4b & 9.27 & 13.13 & 0.70 & $0.40$ & 0.24$\pm$0.08 & & & $-4.2\substack{+0.4 \\ -0.5}$ & g \\ WR8 & WN7o/CE & 7.93 & 12.87 & 0.64 & $0.39$ & 0.31$\pm$0.08 & & & $-5.3\substack{+0.3 \\ -0.3}$ & b: \\ WR9 & WC5+O7 & 7.54 & 13.3 & 0.91 & $0.57$ & 0.45$\pm$0.08 & $-6.3\substack{+0.3 \\ -0.4}$ & $0.60\pm0.24$ & $-5.7\substack{+0.9 \\ -0.7}$ & b \\ WR10 & WN5h & 9.61 & 13.69 & 0.44 & $0.28$ & 0.24$\pm$0.17 & & & $-4.4\substack{+0.5 \\ -0.5}$ & g \\ \hline \label{table:final_absmagk} \end{tabular} Columns: (1) WR Number, (2) Spectral type, (3) $\mathrm{K_s}$ apparent magnitude, (4) Distance modulus $\mu$, (5) J$-\mathrm{K_s}$ colour, (6) H$-\mathrm{K_s}$ colour, (7) $\mathrm{K_s}$ band extinction $A_{Ks}$, (8) Absolute magnitude of binary system (including companion), (9) Fraction of light contributed to the binary system by the WR component, (10) Absolute magnitude of WR star, (11) Error flags, where $M$ > upper$_{\rm initial}$ or $M$ < lower$_{\rm initial}$ = b, $M$ > upper$_{\rm final}$ or $M$ < lower$_{\rm final}$ = b: ($_{\rm initial}$ denotes the averages calculated before sigma clipping, $_{\rm final}$ are the final absolute magnitude boundaries) and g are results with no issues. \end{table*} \begin{table*} \renewcommand{\arraystretch}{1.5} \caption{Absolute $\mathrm{v^{WR}}$-band magnitudes for Galactic WR stars. The full table is available in the online supplementary material.} \begin{tabular}{|p{.07\linewidth}|p{.12\linewidth}|p{.06\linewidth}|p{.05\linewidth}|p{.07\linewidth}|p{.08\linewidth}|p{.08\linewidth}|p{.12\linewidth}|p{.05\linewidth}|p{.05\linewidth}|} \hline WR Number & Spectral type & $\mathrm{v^{WR}}\pm$0.1 (mag) & $\mu$ (mag) & $\mathrm{(b-v)}^{WR}$ (mag) & $A\substack{WR \\ v}$ (mag) & $M\substack{\mathrm{Sys} \\ \mathrm{v}}$ (mag) & F$\substack{\mathrm{WR} \\ \mathrm{v}}$/ F$\substack{\mathrm{Sys} \\ \mathrm{v}}$ & M$\substack{\mathrm{WR} \\ \mathrm{v}}$ (mag) & Flags \\ \hline WR1 & WN4b & 10.51 & 12.49 & 0.51 & 2.84$\pm$0.71 & & & $-4.9\substack{+0.8 \\ -0.8}$ & g \\ WR3 & WN3ha & 10.70 & 12.31 & -0.06 & 1.07$\pm$0.71 & & & $-2.8\substack{+0.8 \\ -0.8}$ & b: \\ WR4 & WC5+? 
& 10.53 & 12.87 & 0.20 & 1.65$\pm$1.01 & & & $-4.2\substack{+1.1 \\ -1.1}$ & g \\
WR5 & WC6 & 11.02 & 12.36 & 0.47 & 2.76$\pm$1.01 & & & $-4.2\substack{+1.1 \\ -1.1}$ & g \\
WR6 & WN4b & 6.94 & 11.78 & -0.07 & 0.45$\pm$0.71 & & & $-5.4\substack{+0.8 \\ -0.8}$ & b: \\
WR7 & WN4b & 11.75 & 13.13 & 0.36 & 2.22$\pm$0.71 & & & $-3.8\substack{+0.9 \\ -0.9}$ & b: \\
WR8 & WN7o/CE & 10.48 & 12.87 & 0.47 & 2.88$\pm$0.71 & & & $-5.4\substack{+0.8 \\ -0.8}$ & b: \\
WR9 & WC5+O7 & 10.93 & 13.3 & 0.74 & 4.16$\pm$0.72 & $-6.6\substack{+0.8 \\ -0.8}$ & $0.29\pm0.12$ & $-5.3\substack{+1.4 \\ -1.2}$ & b \\
WR10 & WN5h & 11.08 & 13.69 & 0.22 & 2.26$\pm$1.61 & & & $-5.0\substack{+1.7 \\ -1.7}$ & b: \\
\hline
\end{tabular}
\label{table:final_absmagv}
Columns: (1) WR Number, (2) Spectral type, (3) $\mathrm{v^{WR}}$ apparent magnitude and error, (4) Distance modulus $\mu$, (5) $\mathrm{(b-v)}^{WR}$ colour, (6) $\mathrm{v^{WR}}$ band extinction $A\substack{WR \\ v}$, (7) Absolute magnitude of binary system (including companion), (8) Fraction of light contributed to the binary system by the WR component, (9) Absolute magnitude of WR star, (10) Error flags, where $M$ > upper$_{\rm initial}$ or $M$ < lower$_{\rm initial}$ = b, $M$ > upper$_{\rm final}$ or $M$ < lower$_{\rm final}$ = b: ($_{\rm initial}$ denotes the averages calculated before sigma clipping, $_{\rm final}$ are the final absolute magnitude boundaries) and g are results with no issues.
\end{table*}
In Figure~\ref{fig:MK}, we present the final absolute $\mathrm{K_s}$ band magnitudes and uncertainties for each subtype. These are compared with corresponding values from \citet{2015MNRAS.447.2322R}. Figure~\ref{fig:Mv} shows the same distribution for the $\mathrm{v^{WR}}$ band, compared with \citet{2001NewAR..45..135V}. Tables~\ref{table:final_absmagk} and \ref{table:final_absmagv} show results for individual stars (the full lists are in the supplementary online material). We additionally plot the absolute magnitudes for 116 LMC stars in Figure~\ref{fig:Mv}, using results from \citet{2014A&A...565A..27H} for single WN and Of supergiant stars (excluding WN2b), \citet{2019A&A...627A.151S} for stars in binaries and \citet{2002A&A...392..653C} for single WC stars; for BAT99-123 (WO4), we used reddenings from \citet{2015A&A...581A.110T} and $\mathrm{v^{WR}}$ band magnitudes from \citet{1988AJ.....96.1076T}. We adopt spectral types of LMC late WN stars from \citet{1997A&A...320..500C} instead of \citet{2008MNRAS.389..806S}. From Fig.~\ref{fig:Mv}, absolute $\mathrm{v^{WR}}$ magnitudes of LMC stars are brighter than their Galactic analogues, so it is inappropriate to apply LMC WR absolute magnitudes to Galactic stars. LMC WN5--6 stars are particularly bright, since this sample includes the luminous H-rich main sequence WN5--6h stars, whose closest Galactic analogues, the WN6--7ha stars, are amongst the visually brightest WR stars in the Milky Way. In total, reasonable absolute magnitudes, reliable extinctions and no \textit{Gaia} excess noise flags were obtained in 187 cases. Absolute magnitudes for almost all WR subtypes revealed standard deviations that overlapped with the uncertainty ranges of the previous results in both the $\mathrm{v^{WR}}$ and the $\mathrm{K_s}$ bands. The differences between values can be attributed to the improved distance estimates and the increased number of stars with distances. Some stars, such as WR2 (the only WN2 star, \citealt{2019MNRAS.484.5834C}), were not present in the \textit{Gaia} catalogue.
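As a concrete illustration of the subtype averaging described in Section~\ref{ssec:absmag}, the clipping step can be written as follows. The sketch assumes the \textit{Gaia} excess-noise cut has already been applied, and the sample magnitudes are invented.
\begin{verbatim}
import numpy as np

def clipped_subtype_average(mags, faint_limit=-1.0):
    # Drop unrealistically faint values (M >= -1 mag), then clip
    # anything more than one sample standard deviation from the
    # class mean, as described in the text.
    m = np.asarray(mags, dtype=float)
    m = m[m < faint_limit]
    mean, std = m.mean(), m.std(ddof=1)
    kept = m[np.abs(m - mean) <= std]
    return kept.mean(), kept.std(ddof=1), kept.size

# Hypothetical absolute magnitudes for one subtype class (mag):
sample = [-4.4, -4.7, -4.2, -5.6, -4.5, -3.1, -0.5]
print(clipped_subtype_average(sample))
\end{verbatim}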
There is a clear trend across both filters of brighter absolute magnitudes towards later subtypes. In both filters, WN4-6b stars are brighter than their weak-lined counterparts, despite their higher effective temperatures \citep{2006A&A...457.1015H}. WNLha stars are known to be highly luminous and conform to this expectation.
\begin{table*}
\caption{WR stars within 2 kpc of the Sun, including the colour excess $E(B-V)$, the $\mathrm{K_s}$-band extinction $\mathrm{A_{Ks}}$ and the extinction per kpc, $\mathrm{A_{Ks}}$/kpc.}
\begin{tabular}{|p{.08\linewidth}|p{.15\linewidth}|p{.15\linewidth}|p{.08\linewidth}|p{.05\linewidth}|p{.08\linewidth}|p{.08\linewidth}|p{.08\linewidth}|}
\hline
WR Number & Alias & Spectral type & Distance (kpc) & Flags & $E(B-V)$ & $\mathrm{A_{Ks}}$ & $\mathrm{A_{Ks}}$/kpc \\
\hline
WR11 & $\gamma$ Vel & WC8+O7.5III-V & 0.34$\substack{+0.04 \\ -0.03}$ & ... & 0.00$\pm$0.30 & 0.00$\pm$0.11 & $0.00\substack{+0.32 \\ -0.32}$ \\
WR94 & HD 158860 & WN5o & 0.95$\substack{+0.06 \\ -0.06}$ & g & 1.24$\pm$0.21 & 0.45$\pm$0.08 & $0.47\substack{+0.09 \\ -0.08}$ \\
WR90 & HD 156385 & WC7 & 1.15$\substack{+0.11 \\ -0.09}$ & g & 0.10$\pm$0.30 & 0.04$\pm$0.11 & $0.03\substack{+0.09 \\ -0.03}$ \\
WR78 & HD 151932 & WN7h & 1.25$\substack{+0.15 \\ -0.12}$ & g & 0.44$\pm$0.21 & 0.16$\pm$0.08 & $0.13\substack{+0.06 \\ -0.06}$ \\
WR139 & HD 193576 & WN5o+O6III-V & 1.31$\substack{+0.07 \\ -0.06}$ & g & 0.81$\pm$0.24 & 0.30$\pm$0.09 & $0.23\substack{+0.07 \\ -0.07}$ \\
WR79 & HR 6265 & WC7+O5-8 & 1.37$\substack{+0.12 \\ -0.10}$ & g & 0.31$\pm$0.26 & 0.11$\pm$0.09 & $0.08\substack{+0.07 \\ -0.07}$ \\
WR145 & AS 422 & WN7o/CE+? & 1.46$\substack{+0.12 \\ -0.10}$ & g & 2.28$\pm$0.39 & 0.83$\pm$0.14 & $0.57\substack{+0.11 \\ -0.10}$ \\
WR110 & HD 165688 & WN5-6b & 1.58$\substack{+0.15 \\ -0.12}$ & g & 1.13$\pm$0.21 & 0.41$\pm$0.08 & $0.26\substack{+0.05 \\ -0.05}$ \\
WR111 & HD 165763 & WC5 & 1.63$\substack{+0.32 \\ -0.23}$ & g & 0.22$\pm$0.30 & 0.08$\pm$0.11 & $0.05\substack{+0.07 \\ -0.05}$ \\
WR142 & Sand 5 & WO2 & 1.65$\substack{+0.11 \\ -0.09}$ & g & 2.13$\pm$0.21 & 0.78$\pm$0.08 & $0.47\substack{+0.06 \\ -0.05}$ \\
WR105 & NS 4 & WN9h & 1.73$\substack{+0.32 \\ -0.23}$ & g & 2.41$\pm$0.21 & 0.88$\pm$0.08 & $0.51\substack{+0.10 \\ -0.08}$ \\
WR134 & HD 191765 & WN6b & 1.75$\substack{+0.13 \\ -0.11}$ & g & 0.46$\pm$0.21 & 0.17$\pm$0.08 & $0.10\substack{+0.04 \\ -0.04}$ \\
WR52 & HD 115473 & WC4 & 1.75$\substack{+0.16 \\ -0.13}$ & g & 0.59$\pm$0.30 & 0.22$\pm$0.11 & $0.12\substack{+0.06 \\ -0.06}$ \\
WR144 & HM19-1 & WC4 & 1.75$\substack{+0.24 \\ -0.19}$ & g & & 0.47$\pm$0.19 & $0.27\substack{+0.11 \\ -0.11}$ \\
WR93 & Th10-19 & WC7+O7-9 & 1.76$\substack{+0.19 \\ -0.15}$ & g & 1.67$\pm$0.23 & 0.61$\pm$0.08 & $0.34\substack{+0.06 \\ -0.06}$ \\
WR142-1 & HBHalpha 4203-27 & WN6o & 1.77$\substack{+0.23 \\ -0.18}$ & g & & 0.69$\pm$0.16 & $0.39\substack{+0.10 \\ -0.10}$ \\
WR113 & HD 168206 & WC8d+O8-9IV & 1.80$\substack{+0.24 \\ -0.19}$ & g & 0.94$\pm$0.21 & 0.34$\pm$0.08 & $0.19\substack{+0.05 \\ -0.05}$ \\
WR142a & PCG02 1 & WC8 & 1.81$\substack{+0.61 \\ -0.37}$ & g & & 0.83$\pm$0.19 & $0.46\substack{+0.19 \\ -0.14}$ \\
WR133 & HD 190918 & WN5o+O9I & 1.85$\substack{+0.16 \\ -0.14}$ & g & 0.36$\pm$0.21 & 0.13$\pm$0.07 & $0.07\substack{+0.04 \\ -0.04}$ \\
WR113-2 & SMG09 1425$\_$47 & WC5-6 & 1.86$\substack{+0.90 \\ -0.56}$ & g & & 0.65$\pm$0.21 & $0.35\substack{+0.21 \\ -0.16}$ \\
WR70-5 & WM10 11b & WC9 & 1.95$\substack{+0.75 \\ -0.47}$ & g & & 1.26$\pm$0.26 & $0.65\substack{+0.28 \\ -0.21}$ \\
WR98 & HDE 318016 & WN8o/C7 & 1.96$\substack{+0.31 \\
-0.24}$ & g & 1.59$\pm$0.21 & 0.58$\pm$0.08 & $0.29\substack{+0.06 \\ -0.05}$ \\
WR25 & HD 93162 & O2.5If*/WN6+O & 1.97$\substack{+0.18 \\ -0.15}$ & g & 0.93$\pm$0.32 & 0.34$\pm$0.11 & $0.17\substack{+0.06 \\ -0.06}$ \\
WR135 & HD 192103 & WC8 & 1.98$\substack{+0.18 \\ -0.15}$ & g & 0.41$\pm$0.21 & 0.15$\pm$0.08 & $0.08\substack{+0.04 \\ -0.04}$ \\
WR85 & HD 155603B & WN6h & 1.99$\substack{+0.30 \\ -0.24}$ & g & 1.03$\pm$0.21 & 0.37$\pm$0.08 & $0.19\substack{+0.05 \\ -0.04}$ \\
\hline
\end{tabular}
\label{table:in_2kpc}
\end{table*}
The spread in absolute magnitudes is similar to that previously obtained in the near-IR, but slightly larger in the $\mathrm{v^{WR}}$ band. \citet{2015MNRAS.447.2322R} quote a range of 0.3--0.6 mag, whilst the standard deviations in our $\mathrm{K_s}$ band results span 0.1--1.0 mag, but are also more typically 0.3--0.6 mag. For the $\mathrm{v^{WR}}$ band, the standard deviations range from 0.3--1.4 mag, though most lie between 0.4 and 0.6 mag. We therefore corroborate the findings of \citet{2019A&A...621A..92S} that WC stars of the same subtype have a broader range of absolute magnitudes than expected. We also posit this is true for WN stars (\citealt{2019A&A...625A..57H} also note that the relations between absolute magnitude and subtype are not strict). The uncertainties show no systematic differences between WC and WN classes or regular variation across subtypes. However, particularly in the $\mathrm{v^{WR}}$ band, some classes suffered from very small numbers of WR stars (only 2 WN9 stars had $\mathrm{v^{WR}}$ band magnitudes, for instance). This increases the size of the uncertainties on the mean result. Due to this intrinsic variation, we advise caution when using averages as absolute magnitude calibrations and recommend accounting for the large uncertainties by exploring other methods, such as a Bayesian approach with a probability distribution centred on the average magnitude. We also recommend continued use of the intrinsic colours in Table~\ref{table:powr_types}, rather than calculating new values using our methods and results. The large uncertainties of our absolute magnitudes mean that the propagated uncertainties of any resulting intrinsic colours would be correspondingly large, far exceeding the intrinsic colour values in Table~\ref{table:powr_types}.
\subsection{Sensitivity of results to adopted intrinsic colours}
We test the sensitivity of the results to the intrinsic colours. For the $v^{WR}$ band, this is straightforward, in that any difference in ${\mathrm{(b-v)_0^{WR}}}$ is propagated through to the extinction (where it is multiplied by 4.12; \citealt{1982IAUS...99...57T}). However, within the $\mathrm{K_s}$ band, the combination of (J$-\mathrm{K_s}$)$_0$ and (H$-\mathrm{K_s}$)$_0$ complicates this somewhat, and we test the effects by calculating $M_{Ks}$ with alternative J$-\mathrm{K_s}$ and H$-\mathrm{K_s}$ synthetic colours. These are taken from the PoWR grids (\citealt{2004A&A...427..697H} and \citealt{2015A&A...579A..75T} for WN, \citealt{2012A&A...540A.144S} for WC), using the same models as Table~\ref{table:powr_types}. Unlike the $\mathrm{b-v}^{WR}$ colours, these are only valid at the monochromatic wavelengths and not across the whole filter bands, which are affected by emission lines, especially for early-type WC stars.
The differences in absolute magnitude range from 0.05 mag for WN5-6 to 0.2 mag for WC6-7 and WN2-4 (as emission lines fall within the filter band but are not included in the monochromatic result), with most subtypes falling between 0.1 and 0.2 mag. In all instances, this is well within the uncertainties on individual magnitudes.
\subsection{Photometric flags}
\label{ssec:pflag}
In addition to the \textit{Gaia} flags, we identify results with potentially spurious absolute magnitudes. As stars with incorrect extinctions were already removed, spurious results can indicate either incorrect apparent magnitudes or an incorrect \textit{Gaia} parallax, whose distance generates the wrong absolute magnitude. We therefore adopt two different flags, one where the absolute magnitude is implausible and another where the absolute magnitude only just falls outside the uncertainty of the subtype average. The latter does not necessarily indicate a bad result, but these data should be treated with caution:
\begin{itemize}
\item[] $M$ > upper$_{\rm initial}$ or $M$ < lower$_{\rm initial}$ = b
\item[] $M$ > upper$_{\rm final}$ or $M$ < lower$_{\rm final}$ = b:
\end{itemize}
where upper and lower are the upper and lower magnitude bounds of the absolute magnitude average, $_{\rm initial}$ denotes the averages calculated before sigma clipping (Section~\ref{sec:absmag}), $_{\rm final}$ denotes the final absolute magnitude boundaries (as in Table~\ref{table:avg_absmag}) and $M$ is the absolute magnitude of an individual WR star. Results with a 'b' flag are highly implausible and lie well outside the range of acceptable absolute magnitudes, whilst those with a 'b:' flag are still acceptable, but fall outside the 1$\sigma$ uncertainties of the results in Table~\ref{table:avg_absmag}. Again, results without any of these issues are given the 'g' flag. Results without any absolute magnitudes are flagged with 'u'. These stars were retained to provide the reader with their distance moduli and any other helpful information (e.g. apparent magnitudes) when their absolute magnitudes could not be calculated. For all subsequent analysis we use only the most photometrically reliable results, which have a 'b:' or 'g' flag in either the $\mathrm{v^{WR}}$ band or the $\mathrm{K_s}$ band. These data do not have high astrometric excess noise ('a') \textit{Gaia} data quality flags. Results with, for example, two 'b' flags were excluded. These flags are applied to the absolute magnitudes in Tables~\ref{table:final_absmagv} and \ref{table:final_absmagk}. We note that 13 objects retained in this selection process had either negative parallax ('n') or high error to parallax ratio ('e') \textit{Gaia} flags. However, their reliable absolute magnitudes mean the distances may still be valid.
\section{New distances to WR stars and comparison to other \textit{Gaia} derived distances}
\label{sec:distdisc}
We can compare the WR star sample from \textit{Gaia} to the total population. There is no substantial difference between the latitude and longitude distribution of WR stars detected in \textit{Gaia} and the total known WR distribution. The exception is for some regions, such as around Westerlund 1 and towards the Galactic Centre, which went undetected by \textit{Gaia} due to their high extinctions (with $A_V>30$ mag in the latter case). Crowding presented an additional challenge. WR43A and WR43B are not included in the final distance catalogue, as the same \textit{Gaia} source was detected for both stars. The detection for WR43C is also spurious, as the position overlaps with other objects.
These stars are located in the compact cluster NGC~3603 (\citealt{2008AJ....135..878M}, \citealt{1998MNRAS.296..622C}) and therefore blending is to be expected. It is possible that further stars are missing parallaxes due to crowding, as this issue would reduce the quality of the \textit{Gaia} five parameter solution below acceptable limits and cause it to be excluded from the \textit{Gaia} catalogue. Finally, some stars may not have been detected due to their close binary companions. \citet{2018A&A...616A..17A} show that completeness falls for separations below 2'', to a limit at 0.12''. This may account for three missing stars with narrowband $\mathrm{v^{WR}}$ < 15 mag (WR2, WR63 and WR86), two of which (WR63 and WR86) have known companions. Table~\ref{table:final} includes distances for each WR star with measured parallaxes. Also included are the 68\% credible intervals. Table~\ref{table:in_2kpc} lists the closest WR stars (with reliable results) within 2 kpc of the Sun. We find 25 WR stars within this distance, similar to the 30 WR stars within 2 kpc from \citet{1983ApJ...274..302C}. We also calculate distances to O stars using our Bayesian prior and GOSC v4.1 \citep{2013msao.confE.198M}. For the O star population within 2 kpc, we obtain a WR/O ratio of 0.09. This ratio is within the 0.07--0.10 range of \citet{1983ApJ...274..302C}, found by comparing the lifetimes of H and He core burning phases from massive star models, as analogues of the O star and WR star phases. However, our ratio includes all O stars, and not just the most massive population from which WR stars are descended. \citet{1983ApJ...274..302C} also calculate a WR/O ratio with only O stars $>40\,\mathrm{M}_{\sun}$, and find a much higher ratio of 0.36$\pm$0.15. Table~\ref{table:in_2kpc} also includes $K_s$-band extinctions, and extinctions per kpc for these nearby WR stars, with $A_{Ks}$/kpc $\sim$ 0.26 mag, albeit with significant star-to-star variation. A comparison of our dust extinctions with those from the 3D dust map based on Pan-STARRS1 and 2MASS \citep{2015ApJ...810...25G}, for stars in common, shows reasonable overall agreement.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{cf_original.pdf}
\caption{(a) A comparison between distances to Galactic WR stars in common between this work and \citet{2015MNRAS.447.2322R}. The black dashed line indicates one-to-one agreement. Error bars from \citet{2015MNRAS.447.2322R} have been omitted for clarity; (b) A comparison between WR distances obtained in this work and \citet{2018AJ....156...58B}. We illustrate the effect of extinction by presenting the full prior including both dust and \hii regions (red stars) and a prior with only \hii regions (black crosses).}
\label{fig:cf_ori}
\end{figure}
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{height_original2.pdf}
\caption{A comparison between the WR distances from the midplane from \citet{2015MNRAS.447.2322R} and this work. Blue circles are the points from this work with distances greater than 3$\sigma$, where $\sigma$ is the \hii region scale height. The dotted line indicates parity between the two measures. Stars with significant disagreement are labelled with their WR numbers.}
\label{fig:hi_ori}
\end{figure*}
\subsection{Comparison with previous WR distances}
\label{ssec:compold}
\citet{2015MNRAS.447.2322R} provide distance estimates for 228 Galactic WR stars based on previous absolute magnitude calibrations. Of those, 87 have reliable distances from this work.
Fig.~\ref{fig:cf_ori}(a) compares distances to Galactic WR stars in common with \citet{2015MNRAS.447.2322R}. Agreement is reasonable up to $\sim$2 kpc. This is the subset of \textit{Gaia} sources with the lowest uncertainties and extinction, enabling accurate application of our prior and of absolute magnitude calibrations. Beyond 2 kpc, there is significant scatter, with many stars closer than previously thought. These are principally more highly reddened WR stars that have been discovered recently. Conversely, many stars that were thought to be nearby based on calibrations have significantly larger distances (e.g. WR57 is revised from 2.98$\pm$0.52 kpc to 5.50$\substack{+1.49 \\ -1.06}$ kpc). All of our 187 stars with reliable absolute magnitudes have distance estimates from \citet{2018AJ....156...58B}. Comparisons are presented in Figure~\ref{fig:cf_ori}(b). Again, good agreement is obtained up to $\sim$2 kpc, beyond which the \citet{2018AJ....156...58B} distances are generally larger than our results. The average $\omega/\sigma_{\omega}$ for stars at distances beyond 2.5 kpc is $-$0.71. The error is therefore a substantial proportion of the total parallax, which suggests the disparities stem primarily from limitations in the \textit{Gaia} data and the differences between the two priors. At large distances, and so proportionally large parallax errors, the prior dominates the data and the peak of the posterior shifts closer to the peak of the prior. For this work, the peak of the prior probability defaults to <3 kpc, depending on longitude. If the peak in the Bailer-Jones prior is substantially closer or further, this results in a large divergence between the two measures. Our prior differs significantly from that of \citet{2018AJ....156...58B}, as it more directly accounts for extinction and the specific distribution of massive stars. The red stars/black crosses in Figure~\ref{fig:cf_ori}(b) show the contrast between results calculated with/without the dust extinction model. In most instances, the stars had results more in line with \citet{2018AJ....156...58B} when dust was excluded. Therefore, in the vast majority of cases, dust extinction in the prior is the primary factor leading to different results. Since distances from \citet{2018AJ....156...58B} formed the basis of the recent spectroscopic studies of Galactic WR stars by \citet{2019A&A...621A..92S} and \citet{2019A&A...625A..57H}, use of distances from this study with no warning flags would lead to generally modest 0.05 dex reductions in stellar luminosity. These are included in Table~\ref{table:final}, with higher reductions for relatively distant stars including WR74 (WN7o, 0.24 dex), WR91 (WN7b, 0.23 dex), WR56 (WC7, 0.20 dex) and WR64 (WC7, 0.20 dex). We also compare the distances to a Galactic LBV (WR31b = AG Car) and an LBV candidate (WR31a = He~3-519), which are in common with \citet{2019MNRAS.488.1760S}. They obtain a distance of 7.12$\substack{+2.53 \\ -1.67}$ kpc to WR31a, versus 7.35$\substack{+1.45 \\ -1.18}$ kpc from this work, and 4.65$\substack{+1.43 \\ -0.92}$ kpc to WR31b, versus 4.85$\substack{+0.93 \\ -0.70}$ kpc from this work. These are well within the uncertainties for both stars, particularly given WR31a has a high error to parallax ratio of 0.72 (as measured directly from the catalogue values). \citet{2019MNRAS.488.1760S} adopt a different zero point to our study, namely $-$0.05\,mas as an initial value, and model some uncertainty in this as part of their calculation.
This decision is based on the variety of different zero points found in the literature (e.g. \citealt{2018ApJ...861..126R}, \citealt{2019ApJ...878..136Z}, \citealt{2018ApJ...862...61S} and \citealt{2019ApJ...872...85G}). Therefore, these distances are systematically closer than those from \citet{2018AJ....156...58B}. This result agrees with both our findings and those of \citet{2019MNRAS.487.3568S}, who also find that \citet{2018AJ....156...58B} appear to systematically overestimate distances. As \citet{2019MNRAS.488.1760S} adopt a similar prior to that of \citet{2018AJ....156...58B}, the overlapping results indicate that the larger zero point is performing much the same function as our dust model, acting to moderate the distances of \citet{2018AJ....156...58B}.
\begin{figure}
\includegraphics[width=\linewidth]{height_fits.pdf}
\caption{A histogram of WR distances from the Galactic disk. The dotted line shows the Cauchy fit from Equation~\ref{heightsfit}.}
\label{fig:szplt}
\end{figure}
\section{Distances from the Galactic disk}
\label{sec:hab}
To identify potential runaway stars, we calculated distances from the Galactic plane using the most likely distance from the Sun and the Galactic latitude of each star, with the addition of 20.8 pc \citep{2019MNRAS.482.1417B} for the Sun's height above the midplane (i.e. $z = d\sin b + 20.8$ pc). The 68\% distance uncertainty intervals were scaled to give height uncertainties. The new midplane distances in Table~\ref{table:final} are compared with results from \citet{2015MNRAS.447.2322R} in Figure~\ref{fig:hi_ori}. In general, the deviation from previous results increases with height, reflecting the uncertainty of distances to very remote WR stars. The scale height, $\sigma$, of \hii regions loosely traces massive star formation sites and can therefore be used to highlight potential runaways. Based on the median northern scale height between 3.9 kpc and 5.6 kpc in \citet{2004MNRAS.347..237P}, $\sigma$ is 52 pc. The southern scale heights contained too few points to be reliable. We additionally calculated the scale height of the WR population. The histogram of WR distances from the midplane is presented in Figure~\ref{fig:szplt} and can be fit with a Cauchy distribution
\begin{equation}
\label{heightsfit}
g = \frac{A}{\pi\gamma}\frac{\gamma^2}{(z-c)^2+\gamma^2}
\end{equation}
where $A$ is the scale constant, $c$ is the distribution centre and $\gamma$ is the scale parameter, specifying the half width at half maximum (HWHM). Fitting these parameters gives a centre of 1.5 pc and a HWHM of 53.4 pc. The central value of our distribution is similar to that of \citet{2015MNRAS.447.2322R} (1.9 pc), though their HWHM is somewhat smaller, at 39.2 pc. The central value would suggest many WR stars sit slightly above the plane, but this may be due to planar dust extinction rendering WR stars that sit below the disk inaccessible to \textit{Gaia}. Our results are similar to \citet{1990AJ....100..431C}, who find a WR scale height of 45$\pm$5 pc using an isothermal disk model, and \citet{2016AstL...42....1B}, who obtained a scale height of 51.3$\pm$3.7 pc using the same method. However, this latter value relies on a sample at <4 kpc (excluding distant stars to avoid the effects of the Galactic disk warp) and thus only covers about half the WR stars in our sample. To identify only the most extreme runaways and ensure they did not form in situ, we apply a 3$\sigma$ cut-off using the \hii region scale height.
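For reference, the fit in Equation~\ref{heightsfit} is straightforward to reproduce with standard tools. The sketch below bins a set of midplane distances and fits for $A$, $c$ and $\gamma$; the heights are synthetic placeholders standing in for the WR sample (real values would be $z = d\sin b + 20.8$ pc for each star with a distance).
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def cauchy(z, A, c, gamma):
    # Eq. (heightsfit): scaled Cauchy profile, HWHM = gamma.
    return (A / (np.pi * gamma)) * gamma**2 / ((z - c)**2 + gamma**2)

rng = np.random.default_rng(0)
# Synthetic stand-in for the WR midplane distances (pc).
z = rng.standard_cauchy(383) * 50.0 + 2.0
z = z[np.abs(z) < 500.0]          # trim the extreme tails

counts, edges = np.histogram(z, bins=40)
centres = 0.5 * (edges[:-1] + edges[1:])
width = edges[1] - edges[0]

popt, _ = curve_fit(cauchy, centres, counts / width,
                    p0=(len(z), 0.0, 50.0))
print(f"centre = {popt[1]:.1f} pc, HWHM = {popt[2]:.1f} pc")
\end{verbatim}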
Since a velocity of 1 km\,s$^{-1}$ equates to approximately 1 pc\,Myr$^{-1}$, runaways ($\geq$30 km\,s$^{-1}$) will travel in excess of 150 pc over a typical WR lifetime of 5 Myr. 91\% of the 383 WR stars in \textit{Gaia} reside within three scale heights of the Galactic plane, so 9\% of WR stars are located far from the Galactic plane. Table~\ref{table:zheights} presents the $|z|$ distances for each of these stars. However, the resulting runaway list does not account for the known warp in the Galactic disk. \citet{2019A&A...627A.150R} estimate the warp begins at a radius of 12--13 kpc from the Galactic centre for their sample of young, bright stars (which they refer to as the OB sample). All but two of our WR stars are within 12 kpc of the Galactic centre and, by this measure, would be unaffected. However, their results show some complex structures that in fact suggest some of our sample may be affected by the warp. An alternative measure from \citet{2019ApJ...871..208L} estimates that the Galactic disk instead begins to warp at a radius of 9.2 kpc. Twenty stars are further from the centre than this distance, and so their heights would need to account for the warp. To obtain a robust candidate list of runaways with $\geq$30 km\,s$^{-1}$, we used the Galactic warp model and onset radius from \citet{2019ApJ...871..208L} to calculate the height of the Galactic plane at the position of each of the 383 WR stars with distances. Subtracting the height of the warped plane at each star's position then yielded a warp-corrected distance from the midplane for each star. These distances were used to exclude any stars that were not 3$\sigma$ from the plane once the warp was accounted for. Using this method, we excluded WR8 and WR12 from our runaway list in Table~\ref{table:zheights}. Therefore, 31 stars (8\% of WR stars in \textit{Gaia}) are robust runaway candidates. We do not apply the warp to our full list of distances from the plane in Table~\ref{table:final}, as the warp onset and model are still uncertain. The runaways identified in \citet{2015MNRAS.447.2322R} generally remain far from the plane. However, many of the more extreme distances from the plane are now moderated, due to reduced distances from the Sun. This suggests that extreme runaways are less common than previously thought. WR93a and WR64 are not included, as they were identified as having abnormal $\mathrm{v^{WR}}$ band extinction (Section~\ref{sec:absmag}); it was therefore not possible to calculate their absolute magnitudes, so their distances could not be validated.
\renewcommand{\arraystretch}{1.5}
\begin{table}
\caption{Distances of WR stars from the midplane, $|z|$, for stars exceeding 3$\sigma$, where $\sigma$=52 pc is the \hii region scale height. Previously identified runaways with $|z|\geq$300 pc according to \citet{2015MNRAS.447.2322R} are also indicated.}
\begin{tabular}{|p{.1\linewidth}|p{.19\linewidth}|p{.12\linewidth}|p{.1\linewidth}|p{.08\linewidth}|p{.15\linewidth}} \hline WR Number & Spectral type & Dist (kpc) & |z| (pc) & \hii $\sigma$ & Known runaway \\ \hline WR148 & WN8h+ & 9.47$\substack{+1.77 \\ -1.49}$ & 1087$\substack{+199 \\ -168}$ & 20.9$\substack{+3.8 \\ -3.2}$ & Yes \\ WR57 & WC8 & 5.50$\substack{+1.49 \\ -1.06}$ & 462$\substack{+131 \\ -93}$ & 8.9$\substack{+2.5 \\ -1.8}$ & No \\ WR123 & WN8o & 5.35$\substack{+1.56 \\ -1.09}$ & 423$\substack{+129 \\ -91}$ & 8.1$\substack{+2.5 \\ -1.7}$ & Yes \\ WR73 & WC9d & 6.81$\substack{+1.85 \\ -1.47}$ & 423$\substack{+109 \\ -87}$ & 8.1$\substack{+2.1 \\ -1.7}$ & No \\ WR17 & WC5 & 6.75$\substack{+1.74 \\ -1.33}$ & 413$\substack{+112 \\ -86}$ & 7.9$\substack{+2.1 \\ -1.6}$ & Yes \\ WR71 & WN6o & 3.19$\substack{+0.67 \\ -0.48}$ & 402$\substack{+89 \\ -63}$ & 7.7$\substack{+1.7 \\ -1.2}$ & Yes \\ WR6 & WN4b & 2.27$\substack{+0.42 \\ -0.31}$ & 376$\substack{+73 \\ -54}$ & 7.2$\substack{+1.4 \\ -1.0}$ & No \\ WR75c & WC9 & 7.15$\substack{+1.78 \\ -1.45}$ & 366$\substack{+86 \\ -70}$ & 7.0$\substack{+1.7 \\ -1.3}$ & Yes \\ WR124 & WN8h & 5.87$\substack{+1.48 \\ -1.09}$ & 360$\substack{+85 \\ -63}$ & 6.9$\substack{+1.6 \\ -1.2}$ & Yes \\ WR150 & WC5 & 8.73$\substack{+1.70 \\ -1.38}$ & 357$\substack{+73 \\ -60}$ & 6.9$\substack{+1.4 \\ -1.1}$ & No \\ WR61 & WN5o & 5.49$\substack{+1.25 \\ -0.91}$ & 353$\substack{+85 \\ -62}$ & 6.8$\substack{+1.6 \\ -1.2}$ & Yes \\ WR49 & WN5(h) & 8.35$\substack{+1.44 \\ -1.17}$ & 348$\substack{+64 \\ -52}$ & 6.7$\substack{+1.2 \\ -1.0}$ & Yes \\ WR58 & WN4b/CE & 5.88$\substack{+1.42 \\ -1.04}$ & 337$\substack{+86 \\ -63}$ & 6.5$\substack{+1.7 \\ -1.2}$ & No \\ WR40 & WN8h & 3.83$\substack{+0.67 \\ -0.50}$ & 302$\substack{+56 \\ -42}$ & 5.8$\substack{+1.1 \\ -0.8}$ & No \\ WR126 & WC5/WN & 7.57$\substack{+1.49 \\ -1.19}$ & 300$\substack{+55 \\ -44}$ & 5.8$\substack{+1.1 \\ -0.8}$ & No \\ WR103 & WC9d+? & 3.46$\substack{+1.28 \\ -0.77}$ & 275$\substack{+109 \\ -65}$ & 5.3$\substack{+2.1 \\ -1.3}$ & No \\ WR33 & WC5; WC6 & 7.59$\substack{+1.62 \\ -1.30}$ & 273$\substack{+54 \\ -43}$ & 5.2$\substack{+1.0 \\ -0.8}$ & No \\ WR69 & WC9d+OB & 3.48$\substack{+0.64 \\ -0.47}$ & 272$\substack{+54 \\ -40}$ & 5.2$\substack{+1.0 \\ -0.8}$ & No \\ WR92 & WC9 & 3.78$\substack{+1.25 \\ -0.79}$ & 271$\substack{+96 \\ -61}$ & 5.2$\substack{+1.8 \\ -1.2}$ & No \\ WR54 & WN5o & 6.52$\substack{+1.37 \\ -1.05}$ & 264$\substack{+60 \\ -46}$ & 5.1$\substack{+1.1 \\ -0.9}$ & Yes \\ WR129 & WN4o & 5.47$\substack{+1.22 \\ -0.90}$ & 254$\substack{+52 \\ -38}$ & 4.9$\substack{+1.0 \\ -0.7}$ & No \\ WR83 & WN5o & 3.80$\substack{+1.10 \\ -0.72}$ & 251$\substack{+79 \\ -52}$ & 4.8$\substack{+1.5 \\ -1.0}$ & No \\ WR131 & WN7h+abs & 6.92$\substack{+1.40 \\ -1.09}$ & 227$\substack{+42 \\ -32}$ & 4.4$\substack{+0.8 \\ -0.6}$ & No \\ WR56 & WC7 & 8.67$\substack{+1.46 \\ -1.20}$ & 226$\substack{+41 \\ -34}$ & 4.3$\substack{+0.8 \\ -0.7}$ & Yes \\ WR30 & WC6+O6-8 & 5.09$\substack{+0.99 \\ -0.74}$ & 211$\substack{+45 \\ -33}$ & 4.1$\substack{+0.9 \\ -0.6}$ & No \\ WR20 & WN5o & 6.98$\substack{+1.18 \\ -0.93}$ & 204$\substack{+38 \\ -30}$ & 3.9$\substack{+0.7 \\ -0.6}$ & No \\ WR3 & WN3ha & 2.90$\substack{+0.52 \\ -0.39}$ & 188$\substack{+38 \\ -28}$ & 3.6$\substack{+0.7 \\ -0.5}$ & Yes \\ WR4 & WC5+? 
& 3.75$\substack{+0.89 \\ -0.62}$ & 174$\substack{+47 \\ -32}$ & 3.4$\substack{+0.9 \\ -0.6}$ & No \\
WR128 & WN4(h) & 2.90$\substack{+0.54 \\ -0.39}$ & 170$\substack{+35 \\ -26}$ & 3.3$\substack{+0.7 \\ -0.5}$ & No \\
WR52 & WC4 & 1.75$\substack{+0.16 \\ -0.13}$ & 159$\substack{+13 \\ -11}$ & 3.1$\substack{+0.2 \\ -0.2}$ & No \\
WR34 & WN5o & 7.41$\substack{+1.37 \\ -1.09}$ & 159$\substack{+33 \\ -26}$ & 3.1$\substack{+0.6 \\ -0.5}$ & No \\
\hline
\end{tabular}
\label{table:zheights}
\end{table}
Two main evolutionary paths may have created these runaways. The first is the disruption of a binary system when the primary star explodes as a supernova and ejects the remaining companion \citep{1961BAN....15..265B}. The second scenario is dynamical ejection from a dense cluster, which can eject both binary and single stars \citep{1967BOTT....4...86P}. The majority of outliers with >3$\sigma$ distances are apparently single stars, as only WR30 and WR69 have confirmed OB companions. As both single stars and binaries can be ejected from clusters, it is not possible for us to definitively state which mechanism is dominant. We defer a discussion of the origin of runaways to Paper II, which considers the association of WR stars with star clusters or OB associations. However, we note that recent simulations suggest fast runaways from either mechanism are anticipated to be very rare \citep{2019A&A...624A..66R, 2016A&A...590A.107O}, in stark contrast with the high fraction of WR stars at extreme distances from the Galactic plane. Two stars merit individual consideration. The high velocity runaway WR124 is now located at $|z|$=360 pc, compared to previous estimates of 217 pc \citep{2015MNRAS.447.2322R}, 193 pc \citep{2010ApJ...724L..90M} and 250 pc \citep{1982A&A...114..135M}. This confirms its runaway status, although our work places it significantly further from the Sun (5.9 kpc instead of 3.3 kpc from \citealt{2010ApJ...724L..90M}). WR148 is located furthest from the Galactic plane. \citet{1986ApJ...304..188D} suggested it as a possible WR+compact object binary disrupted by a SN; however, \citet{2017MNRAS.467.3105M} claim it is instead a WN+O binary. If the latter is true, our data suggest that WR148 is a binary system that has been ejected from a cluster, concurring with \citet{2017MNRAS.467.3105M}. Assuming a lifetime of 5 Myr and a straight vertical trajectory from the Galactic disk, the minimum possible velocity for WR148 is 212 \kms, making it a very rapid cluster ejection. \citet{1989ApJ...347..373M} suggested WN8-9 stars were over-represented amongst runaways, a finding which was corroborated by \citet{2015MNRAS.447.2322R}. However, amongst our sample, only 4 out of 31 stars are of the WN8-9 subtype. The previous over-representation disappears with the drop in extreme runaways. If our sample is representative of the wider WR star population, this suggests that the observed distribution was due to overestimated distance measurements, which would have made the stars appear further from the plane than they truly are.
\section{Conclusions}
\label{sec:con}
We have calculated distances and absolute magnitudes of the Galactic WR population using data from \textit{Gaia} DR2:
\begin{itemize}[leftmargin=*]
\item 383 WR stars (58\% of the known Galactic population) have full five parameter astrometric solutions (proper motions and parallaxes) in the \textit{Gaia} catalogue. WR stars with J$-$K>3 colours, indicating high dust extinction, were generally not detected by \textit{Gaia}.
\item We used the \textit{Gaia} parallaxes to calculate distances to the 383 WR stars detected by \textit{Gaia}, applying Bayesian methods to properly transform the parallax uncertainties to distance uncertainties and to obtain distances from negative parallaxes. Our Bayesian prior accounts for extinction using a Galactic dust model and for the specific distribution of massive stars using \hii regions. Potential underestimates of the parallax uncertainties and the zero point error are accounted for in our calculation.
\item The resulting distances agree well with both the previous calibration \citep{2015MNRAS.447.2322R} and the DR2 distances from \citet{2018AJ....156...58B} up to 2 kpc. Deviations above 2 kpc are due primarily to the large uncertainties of the \textit{Gaia} parallaxes. Distances from \citet{2018AJ....156...58B} formed the basis of recent spectroscopic studies of Galactic WR stars by \citet{2019A&A...621A..92S} and \citet{2019A&A...625A..57H}. Use of distances from this study would generally lead to modest 0.05 dex reductions in stellar luminosities, albeit with reductions of up to $\sim$0.2 dex for relatively distant stars.
\item 25 WR stars are found within 2 kpc, compared to 30 WR stars from \citet{1983ApJ...274..302C}. Based on the population in GOSC v4.1 \citep{2013msao.confE.198M}, the WR/O star ratio in this region is 0.09.
\item We calculated absolute magnitudes for WR stars in both the $\mathrm{v^{WR}}$ and $\mathrm{K_s}$ bands. Of these, 187 stars have an absolute magnitude in at least one band and were used to generate subtype averages for calibrations. Both WN and WC stars are found to be more diverse in their absolute magnitude ranges than anticipated, and we therefore recommend avoiding the use of calibrations without accounting for this large intrinsic spread.
\item We have applied our new distances to identify 31 potential runaways from the Galactic disk, accounting for the Galactic warp. \hii region scale heights define the cut-offs for runaway status. Twenty of these WR stars with $|z|$>156 pc are new detections. The vast majority of the runaway stars are single. However, as both companion supernovae and dynamical ejection from clusters can produce single star runaways, it was not possible for us to determine the dominant runaway production mechanism, which is deferred to Paper II.
\end{itemize}
The main current limitation of our prior is the simplified dust extinction map. With an increased number of observations, the quality of future \textit{Gaia} release data should improve. Therefore, the number of WR stars with negative parallaxes should fall and we thus expect a corresponding decrease in the number of flagged results. Better parallax to error ratios in the early DR3 release (estimated to improve by a factor of 1.2, \citealt{gaia_pres2}) will also reduce uncertainties and the effect of our prior when used with small parallaxes. Further improvements to the astrometric modelling and fitting algorithms should also reduce the number of questionable results via a reduction in astrometric excess noise. Finally, there is a possibility that the number of WR stars with distances will increase. 32 objects only had two parameter solutions (fitting positions) from \textit{Gaia} DR2. Future \textit{Gaia} data releases may find satisfactory full five parameter solutions, which would also include parallaxes.
\section*{Acknowledgements}
GR wishes to thank the Science and Technology Facilities Council (STFC) for their financial support through the Doctoral Training Partnership.
We wish to thank the referee, Dr Anthony Brown, for his helpful comments and suggestions on the submitted manuscript. We also thank Dr Josep Manel Carrasco and Dr Carme Jordi for providing the synthetic photometry in the V broadband, \textit{Gaia} $G_{BP}-G_{RP}$ and G filters at different extinctions and for different WR star subtypes, used in Section~\ref{ssec:extinctions}. This work has made use of data from the European Space Agency (ESA) mission \textit{Gaia} (\url{https://www.cosmos.esa.int/gaia}), processed by the \textit{Gaia} Data Processing and Analysis Consortium (DPAC, \url{https://www.cosmos.esa.int/web/gaia/dpac/consortium}). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the \textit{Gaia} Multilateral Agreement. This publication also makes use of data products from the Two Micron All Sky Survey, which is a joint project of the University of Massachusetts and the Infrared Processing and Analysis Center/California Institute of Technology, funded by the National Aeronautics and Space Administration and the National Science Foundation. The work in Section~\ref{ssec:gcat} is based on data products from observations made with ESO Telescopes at the La Silla Paranal Observatory under programme ID 177.D-3023, as part of the VST Photometric H$\alpha$ Survey of the Southern Galactic Plane and Bulge (VPHAS+, www.vphas.eu). Additionally, this paper makes use of data obtained as part of the INT Photometric H$\alpha$ Survey of the Northern Galactic Plane (IPHAS, www.iphas.org) carried out at the Isaac Newton Telescope (INT). The INT is operated on the island of La Palma by the Isaac Newton Group in the Spanish Observatorio del Roque de los Muchachos of the Instituto de Astrofisica de Canarias. All IPHAS data are processed by the Cambridge Astronomical Survey Unit, at the Institute of Astronomy in Cambridge. The bandmerged DR2 catalogue was assembled at the Centre for Astrophysics Research, University of Hertfordshire, supported by STFC grant ST/J001333/1. In addition to Astropy \citep{2013A&A...558A..33A}, this work would not be possible without the python packages Numpy (\citealt{numpy_book}, \citealt{doi:10.1109/MCSE.2011.37}), Pandas \citep{mckinney-proc-scipy-2010} and Matplotlib \citep{2007CSE.....9...90H}.
\bibliographystyle{mnras}
\section{ADQL query}
\label{sec:adquert}
{\fontfamily{qcr}\selectfont{SELECT TOP 10 DISTANCE(POINT('ICRS', ra, dec), POINT('ICRS', WRra, WRdec)) AS dist, * \\ FROM gaiadr2.gaia\_source \\ WHERE CONTAINS(POINT('ICRS', ra, dec), CIRCLE('ICRS', WRra, WRdec, search\_radius))=1 \\ ORDER BY dist ASC \\}}
where WRra and WRdec are the WR RA and Dec search coordinates in decimal format and search\_radius is one arcsecond. The query selects the ten closest sources (arranged in distance order) that are within a 1'' circle of the WR search coordinates. All \textit{Gaia} catalogue columns are selected for convenience.
\section{Increased uncertainties}
\label{sec:bcert}
Figure 3 in Section 2.2.1 shows the underestimation of the DR2 parallax uncertainties, as compared to the uncertainties of external data (from table 1 in \citealt{2018A&A...616A..17A}).
\section{Increased uncertainties}
\label{sec:bcert}
Figure 3 in Section 2.2.1 shows the underestimation of DR2 parallax uncertainties, as compared to the uncertainties of external data (from table 1 in \citealt{2018A&A...616A..17A}).
The combined Gaussian and straight-line fit to the uncertainties is given by:
\begin{equation}
\label{eq:uwu}
X = -0.01319\,G + 1.376 + \frac{1.1}{1.35\sqrt{2\pi}}\exp\left[-\frac{(G-14.59)^2}{2(1.35)^2}\right]
\end{equation}
where $G$ is the WR \textit{Gaia} G band magnitude and $X$ is the factor by which the error is estimated to increase. The updated parallax $\omega$ and parallax error $\sigma_{\omega}$ (both in mas), used as inputs to the likelihood, are therefore given by
\begin{equation}
\label{eq:newpar}
\omega = \Psi+0.029
\end{equation}
\begin{equation}
\label{eq:newer}
\sigma_{\omega} = \sigma_{\Psi}X
\end{equation}
where $\Psi$ is the original parallax from the \textit{Gaia} catalogue. This leads to a final likelihood of the form
\begin{equation}
\label{eq:likelihood}
P(\omega|r,\sigma_{\omega})=\frac{1}{\sqrt{2\pi}\sigma_{\omega}}\exp\Bigg[-\frac{1}{2\sigma_{\omega}^2}\bigg(\omega-\frac{1}{r}\bigg)^2\Bigg]
\end{equation}
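As a concrete illustration, equations \ref{eq:uwu}--\ref{eq:likelihood} can be evaluated numerically as follows (a minimal sketch; the sample parallax, error and magnitude values are illustrative only):
\begin{verbatim}
# Minimal sketch of the error inflation and likelihood above.
# The sample parallax, error and G magnitude are illustrative.
import numpy as np

def inflation_factor(G):
    """Factor X by which the DR2 parallax error is increased."""
    gauss = (1.1 / (1.35 * np.sqrt(2.0 * np.pi))
             * np.exp(-0.5 * ((G - 14.59) / 1.35) ** 2))
    return -0.01319 * G + 1.376 + gauss

def likelihood(r, psi, sigma_psi, G):
    """P(omega|r): Gaussian in parallax; r in kpc, parallax in mas."""
    omega = psi + 0.029                      # zero point correction
    sigma = sigma_psi * inflation_factor(G)  # inflated uncertainty
    return (np.exp(-0.5 * ((omega - 1.0 / r) / sigma) ** 2)
            / (np.sqrt(2.0 * np.pi) * sigma))

r = np.linspace(0.1, 15.0, 2000)             # distance grid (kpc)
L = likelihood(r, psi=0.45, sigma_psi=0.07, G=13.2)
\end{verbatim}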
\section{Prior details}
\label{sec:bprior}
\begin{figure*}
\centering
\vspace{-3cm}
\begin{adjustwidth}{-1.2cm}{1.9cm}
\setlength{\subfigcapskip}{10pt}
\subfigure[]{{\includegraphics[scale=0.45]{HII_region_density}}}
\hspace{0cm}
\subfigure[]{{\includegraphics[scale=0.45]{hii_latitude}}}
\end{adjustwidth}
\caption{(a) Density of Galactic \hii regions over distance and longitude, at zero latitude, before extinction is applied (based on \citealt{2004MNRAS.347..237P} and \citealt{2003A&A...397..213P}). The coordinate system is centred on the Sun, with the Galactic Centre at 8.122 kpc. (b) Density of Galactic \hii regions across different latitudes, viewed from the Sun and based on \citet{2003A&A...397..213P}.}
\label{fig:hii_dist}
\end{figure*}
\begin{figure*}
\centering
\begin{adjustwidth}{-1.5cm}{1.7cm}
\setlength{\subfigcapskip}{10pt}
\subfigure[]{{\includegraphics[scale=0.45]{dust_dist}}}
\hspace{0cm}
\subfigure[]{{\includegraphics[scale=0.5]{vertical_extinction}}}
\end{adjustwidth}
\caption{(a) Dust distribution over longitude and distance, at zero latitude, in the simple disk model and (b) the variation of dust integrated along the line of sight with latitude, viewed from the Sun. The coordinate system is centred on the Sun, with the Galactic Centre at 8.122 kpc.}
\label{fig:dusts}
\end{figure*}
\begin{figure*}
\centering
\vspace{-3cm}
\includegraphics[width=0.7\linewidth]{overhead_extinction}
\caption{Extinction variation with distance and Galactic longitude, at zero latitude, as calculated using the dust model. The plot is centred on the Sun, with the Galactic Centre at 8.122 kpc.}
\label{fig:ai_long}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=0.7\linewidth]{full_prior}
\caption{Combined prior, consisting of the \hii region prior and dust extinction.}
\label{fig:full_prior}
\end{figure*}
The prior is a combination of \hii region distributions at radio wavelengths (to avoid extinction and obtain a good spatial sample) and an extinction map. For the \hii region distribution, we chose a Gaussian centred on 3000 pc to best approximate the distribution with distance from the Sun (based on Figure 12 from \citealt{2004MNRAS.347..237P}). Over varying Galactic longitudes and latitudes, the number of \hii regions and their spread over distance changes. To alter the distribution for different lines of sight, the standard deviation was modified based on the \hii region number density at a given latitude and longitude. The standard deviations range from 1--3 kpc, depending on the longitude and latitude of the line of sight.
Figure~\ref{fig:hii_dist} (a) shows the resulting distribution over different longitudes at different distances and Figure~\ref{fig:hii_dist} (b) shows the distribution over latitudes. There is a particularly large excess probability around $l=73$--$86^{\circ}$ and $-4^{\circ}\leq b\leq -3^{\circ}$ due to the Cygnus X region (as stated in \citealt{2003A&A...397..213P}). Over these coordinates, the mean of the Gaussian is instead centred on 1400 pc and the standard deviation is correspondingly lower.
The overhead extinction map in Figure~\ref{fig:ai_long} was generated using a simple dust disk model (see Figure~\ref{fig:dusts} (a) for the variation in longitude and Figure~\ref{fig:dusts} (b) for the latitude). Our primary goal was to determine how extinction affected the observable distances along each line of sight. In regions of high extinction, the peak of the prior would have to be shifted towards the Sun, as the probability of seeing a WR star at a greater distance would decrease. The I band (which peaks at $\sim$8000\ang) is best suited for this, as it operates towards the extreme red end of the \textit{Gaia} G band (at 10500\ang). Any star too faint to observe in this wavelength range would therefore be very faint in G, so the corresponding distance would have a small or nil probability of hosting a WR star that is visible to \textit{Gaia}.
At each distance, the dust was integrated along the line of sight and normalised to the extinction at the Galactic centre. This was chosen to be 15.36 magnitudes in the I band. Unfortunately, it was not possible to reliably convert $A_I$ to $A_G$, as the conversion relationship given in \citet{2018A&A...616A...4E} does not extend to the large values of $V-I_c$ at the Galactic centre. Galactic centre extinction in the I band was calculated by assuming the V band extinction at the same point is 32 mag (based on averaging optical extinction at 0.55\,$\mu$m from \citealt{2011ApJ...737...73F}) and multiplying by 0.48 \citep{1989ApJ...345..245C} to account for the difference in reddening. Figure~\ref{fig:ai_long} shows the resulting extinction variation with Galactic longitude.
We then converted the extinction to a factor which could be applied to the probability at each distance, to simulate the reduction of flux from extinction
\begin{equation}
\label{eq:dust_ext}
\delta = 2.512^{-A_I}
\end{equation}
where $A_I$ is the I band extinction at that distance, calculated from $A_I=0.48\,A_V$ (where $A_V$ is the V band extinction). This conversion factor was then multiplied by the \hii region distribution, to give a final distribution. This combines both the radio \hii region observations and dust extinction, and so approximates what might be seen by \textit{Gaia}. This final distribution is shown in Figure~\ref{fig:full_prior}. As compared to Figure~\ref{fig:hii_dist}, the peak of the prior has moved significantly closer to the Sun (within 1--3 kpc, depending on longitude).
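Schematically, the prior along one line of sight can be assembled as below (a minimal sketch; the mean, width and toy dust profile are illustrative placeholders, whereas the real values follow from the \hii region and dust models above):
\begin{verbatim}
# Minimal sketch of the combined prior: HII-region Gaussian times
# the extinction factor delta = 2.512**(-A_I).  mu_p, sigma_p and
# the linear dust profile below are illustrative placeholders.
import numpy as np

def combined_prior(r, mu_p=3.0, sigma_p=2.0):
    gauss = np.exp(-0.5 * ((r - mu_p) / sigma_p) ** 2)
    A_I = 0.5 * r                    # toy I-band extinction profile
    delta = 2.512 ** (-A_I)          # flux reduction factor
    prior = gauss * delta
    return prior / np.trapz(prior, r)

r = np.linspace(0.1, 15.0, 2000)     # distance grid (kpc)
P_r = combined_prior(r)
\end{verbatim}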
\section{Posterior}
\label{sec:bmath}
The Bayesian inferred distribution of distances (the posterior $P(r|\Psi,\sigma_{\Psi})$) is calculated using
\begin{equation}
\label{eq:bayes}
P(r|\Psi,\sigma_{\Psi})=\frac{1}{Z}P(\Psi|r,\sigma_{\Psi})P(r)
\end{equation}
\citep{2015PASP..127..994B}, where $P(\Psi|r,\sigma_{\Psi})$ is the likelihood (the probability distribution of measured parallaxes), $P(r)$ is the prior (the expected distribution of the distances) and $Z$ is a normalisation constant.
For our likelihood and prior, the resulting posterior distribution is
\begin{equation}
\label{eq:posterior}
P(r|\omega,\sigma_{\omega})=\frac{1}{\sqrt{2\pi}\sigma_{\omega}\sigma_p}\exp\Bigg[-\frac{1}{2}\bigg(\frac{\big(\omega-\frac{1}{r}\big)^2}{\sigma_{\omega}^2}+\frac{(r-\mu_p)^2}{\sigma_p^2}\bigg)\Bigg]\delta
\end{equation}
where $\sigma_p$ is the standard deviation of the Gaussian from the \hii region prior in the direction of the WR star and $\mu_p$ is its mean. We do not account for errors in the WR position, as these are insignificant compared to the simplifications in the prior (such as the simplification of the dust distribution).
We calculated the credible intervals (uncertainties) by cycling through each of the calculated probabilities, beginning with the maximum. At each probability, the corresponding distances either side of the distribution peak were selected. The area under the curve for this distance range could then be compared to the target area (e.g.~68\% for one sigma uncertainties). The process was repeated until the integrated area reached or exceeded the required credible interval. This method could also be applied to the two sigma uncertainties (95\%). Due to the finite nature of the grid, slight deviations from the specified 68\% area occurred, the largest of which was for WR11 (which reached 68.5\% of the area). However, these deviations led to typical interval changes of a few pc or less, below the reasonable precision of our distance calculation.
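The posterior evaluation and the credible interval search described above can be sketched as follows (reusing the \texttt{likelihood} and \texttt{combined\_prior} sketches from the previous appendices; grid and input values remain illustrative):
\begin{verbatim}
# Minimal sketch of the posterior and of the descending-probability
# credible interval search.  Reuses likelihood() and combined_prior()
# from the previous sketches.
import numpy as np

def posterior(r, psi, sigma_psi, G):
    post = likelihood(r, psi, sigma_psi, G) * combined_prior(r)
    return post / np.trapz(post, r)

def credible_interval(r, post, target=0.68):
    """Lower the probability level from the peak until the enclosed
    area reaches the target (e.g. 0.68 or 0.95)."""
    for level in np.sort(post)[::-1]:
        inside = post >= level
        area = np.trapz(np.where(inside, post, 0.0), r)
        if area >= target:
            return r[inside].min(), r[inside].max()
    return r.min(), r.max()

post = posterior(r, psi=0.45, sigma_psi=0.07, G=13.2)
r_best = r[np.argmax(post)]                 # most probable distance
lo, hi = credible_interval(r, post, target=0.68)
\end{verbatim}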
\section{Impact of uncertainties}
\label{sec:undisc}
\begin{figure}
\centering
\includegraphics[width=\linewidth]{err_deviations}
\caption{A comparison between distances with and without the modelled error increase. The dashed line denotes where the two distance calculations are the same and the solid line is the fit from equation~\ref{excessfit}.}
\label{fig:err_dev}
\end{figure}
Figure~\ref{fig:err_dev} shows that underestimated parallax errors from \textit{Gaia} have a significant effect on the most probable distance. Beyond $\sim$1.4 kpc, the adjusted errors result in systematically closer distances, compared to data with no uncertainty increases. This occurs because the larger parallax to error ratio means the prior has a greater influence on the resulting distance. As the prior accounts for dust extinction, these distances from parallaxes with inflated errors are smaller than those from the original parallaxes. We can fit a line to determine the typical contribution of modified errors
\begin{equation}
\label{excessfit}
d_e = 0.7724d + 349.25
\end{equation}
where $d_e$ is the distance with extended errors and $d$ contains no error modification. The deviations between this fit and the line $x=y$ indicate a typical contribution of 24\% at 10 kpc, decreasing towards zero at 1.5 kpc. Below this distance, the difference begins to increase again because the increased errors have little effect and the fit is no longer accurate. For isolated cases, the maximum deviation was higher, up to $\sim$50\%. In most instances, the differences between the distances from the original \textit{Gaia} catalogue parallax error and the distances from the increased parallax error fall within uncertainties. A major limitation is that the error rescaling used here may not account for individual errors which are still underestimated. Overall, the data show that underestimated parallax errors have a significant effect on many distances and that these underestimates need to be accounted for in distance calculations.
\section{Bootstrapping and fits to absolute magnitudes}
\label{sec:fitdist}
For the bootstrapping procedure, we sample 1000 distributions of 20,000 points each (with replacement) from the true distributions of apparent magnitudes (assumed to be a Gaussian with the peak at the measured value and the standard deviation as the uncertainty), distances and extinction. This generated a distribution of absolute magnitudes, which could be fitted with a Gaussian if the $\chi^2$ value was below 0.005 (setting the limit below this value made it difficult to fit stars). Alternatively, if the $\chi^2$ value was above 0.005, a Weibull distribution (non-symmetric, with left or right skew) was fitted instead
\begin{equation}
\label{eq:weibull}
y = \frac{k}{\lambda}\bigg(\frac{M_{range}}{\lambda}\bigg)^{k-1}e^{-(M_{range}/\lambda)^k}
\end{equation}
where $k$ is the shape parameter, $\lambda$ is the scale parameter and $M_{range}$ is the range of absolute magnitude values over which the fit is made. As the Weibull distribution is only valid over a positive interval, we add a constant to transform the negative absolute magnitudes to positive values
\begin{equation}
\label{eq:mod}
M_{mod} = M_{range}+M_{max}+0.1
\end{equation}
where $M_{mod}$ is the transformed range and $M_{max}$ is the maximum value in the fit range. Both distribution types were fitted using a least squares curve fit in the python {\scriptsize{SCIPY}} package. The most likely absolute magnitude was the mean of the Gaussian, or the mode $M_{mode}$ of the Weibull distribution, transformed back to negative values
\begin{equation}
\label{eq:mode}
M_{mode} = \lambda\bigg(\frac{k-1}{k}\bigg)^{(1/k)}-(M_{max}+0.1)
\end{equation}
Credible intervals were again used for 68\% uncertainties on individual magnitudes. The typical variation between Monte Carlo runs (due to different data selections) was less than $\pm$0.05. In a small number of cases, the distribution fitting failed. In these instances, we calculated the point value of absolute magnitude, using the peaks of the distance, apparent magnitude and extinction probability curves. Due to the non-Gaussian nature of the distance distributions, however, there was some offset (usually on the scale of 0.1 mag) between the peaks fitted to full distributions and these point values.
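For reference, the Weibull fitting step can be sketched as follows (a minimal sketch with synthetic bootstrap values; here $M_{max}$ is interpreted as the largest absolute magnitude in the range, so that the shifted values are positive):
\begin{verbatim}
# Minimal sketch of the Weibull fit to a bootstrapped absolute
# magnitude distribution.  The synthetic sample is illustrative;
# M_max is taken as the largest |M| so the support is positive.
import numpy as np
from scipy.optimize import curve_fit

def weibull_pdf(M_mod, k, lam):
    return ((k / lam) * (M_mod / lam) ** (k - 1)
            * np.exp(-(M_mod / lam) ** k))

M = np.random.normal(-4.5, 0.6, 20000)   # synthetic bootstrap sample
M_max = np.abs(M).max()
M_mod = M + M_max + 0.1                  # shift to positive support

hist, edges = np.histogram(M_mod, bins=60, density=True)
centres = 0.5 * (edges[:-1] + edges[1:])
(k, lam), _ = curve_fit(weibull_pdf, centres, hist, p0=(2.0, 1.0))

# Mode of the fit, shifted back to (negative) absolute magnitude.
M_mode = lam * ((k - 1) / k) ** (1 / k) - (M_max + 0.1)
\end{verbatim}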
\section{Introduction}
\label{s:intro}
In recent years, we have witnessed a dramatic shift of traffic at the network edge, from the wired/fixed component to the wireless/mobile segment. This trend, mainly due to the huge success of mobile devices (smartphones, tablets) and their pervasive applications (Whatsapp, Instagram, Netflix, Spotify, Youtube, etc.), is expected to further strengthen in the next few years, as testified by several traffic forecasts. For example, according to CISCO~\cite{CISCO}, in the five-year interval from 2017 to 2022 traffic demand on the cellular network will increase approximately by a factor of 9. As a consequence, the access (wireless and wired) infrastructure must be completely redesigned by densifying the cellular structure and moving content closer to users. To this end, the massive deployment of caches within base stations of the cellular network is essential to effectively reduce the load on the back-haul links, as well as to limit the latency perceived by the user.
This work considers a dense cellular network scenario, where caches are placed at every Base Station (BS) and a significant fraction of users is ``covered'' by several BSs (whose cells are said to ``overlap''). The BSs in the transmission range of a given user can coordinate to offer a seamless optimized caching service to the user and possibly exploit coordinated multipoint (CoMP) techniques~\cite{lee12} on the radio access. We remark that, as soon as there are overlapping BSs, finding the optimal {offline} static content allocation becomes an NP-hard problem, even when the request process is known, the metric to optimize is the simple cache hit ratio, and coordinated transmissions are not supported~\cite{shanmugam13}. But realistic scenarios are more complex: popularities are dynamic and unknown a priori, and more sophisticated metrics (e.g., PHY-based ones) that further couple nearby BSs-caches are of interest. Moreover, centralized coordination of hundreds or thousands of caches per km$^2$ (e.g., in ultra-dense networks) is often infeasible or leads to excessive coordination overhead.
In such a context, our paper provides an answer to the open question about the existence of general (computationally efficient) distributed strategies for edge-cache coordination, which are able to provide some guarantees on global performance metrics (like hit ratio, retrieval time, load on the servers, etc.). In particular, we propose a new policy---\mbox{\textproc{$q$LRU-$\Delta$}}---which provably achieves a locally optimal configuration for general performance metrics. \mbox{\textproc{$q$LRU-$\Delta$}}{} requires a simple modification to the basic behaviour of \textproc{$q$LRU}~\cite{garetto16}. Upon a hit at a cache, \mbox{\textproc{$q$LRU-$\Delta$}}{} moves the corresponding content to the front of the queue with a probability that is proportional to the marginal utility of storing this copy. Upon a miss, it introduces the new content with some probability $q$. \mbox{\textproc{$q$LRU-$\Delta$}}{} inherits from \textproc{$q$LRU}{} $\mathcal O(1)$ computation time per request and memory requirements proportional to the cache size. Its request-driven operation does not need a priori knowledge of content popularities, removing a limitation of most previous work. Some information about the local neighborhood (e.g., how many additional copies of the content are stored at close-by caches also serving that user) may be needed to compute the marginal gain.
Such information, however, is limited, and can be piggybacked on existing messages the user sends to query such caches, or even on the channel estimate messages mobile devices regularly send to nearby BSs~\cite{LTE-book}. As an example, we show that \mbox{\textproc{$q$LRU-$\Delta$}}{} is a practical solution to optimize hit ratio, retrieval time, load on the servers, etc., both when a single BS satisfies the user's request and when multiple BSs coordinate their transmissions through CoMP techniques.
\subsection{Related work}
We limit ourselves to describing work that specifically addresses the caching problem in dense cellular networks. The idea of coordinating the placement of contents at caches, which are closely located at BSs, was first proposed in~\cite{Caire12}, and in its extension \cite{shanmugam13}, under the name of FemtoCaching. This work assumes that requests follow the Independent Reference Model (IRM) and geographical popularity profiles are available, i.e., content requests are independent and request rates are known for all cell areas and their intersections. Finding the optimal content placement that maximizes the hit ratio is proved to be an NP-hard problem, but a greedy heuristic algorithm is shown to guarantee a $\frac{1}{2}$-approximation of the maximum hit ratio. In \cite{Poularakis14}, the authors generalized the approach of \cite{Caire12,shanmugam13}, providing a formulation for the joint content-placement and user-association problem that maximizes the hit ratio. They also proposed efficient heuristic solutions. {This line of work has been further extended in~\cite{saputra19}, which also considers the request routing problem.} The authors of \cite{Naveen15} included the bandwidth costs in the formulation, and proposed an on-line algorithm for the solution of the resulting problem. In \cite{Chattopadhyay18}, instead, the authors designed a distributed algorithm based on Gibbs sampling, which was shown to asymptotically converge to the optimal allocation. Reference~\cite{Anastasios2}~revisits the optimal content placement problem within a stochastic geometry framework and derives an elegant analytical characterization of the optimal policy and its performance. In \cite{avrachenkov17} the authors developed a few asynchronous distributed content placement algorithms with polynomial complexity and limited communication overhead (communication takes place only between overlapping cells), whose performance was shown to be very good in most of the tested scenarios. Still, they assumed that content popularities are perfectly known by the system. Moreover, they focused on cache hit rates, and did not consider CoMP.
One of the first papers that jointly considers caching and CoMP techniques was~\cite{ao15}: two BSs storing the same file can coordinate its transmission to the mobile user in order to reduce the delay or to increase the throughput. The authors considered two caching heuristics: a randomized caching policy combined with maximum ratio transmission precoding and a threshold policy combined with zero forcing beamforming. These policies are in general suboptimal with no theoretical performance guarantee. Reference~\cite{tuholukova17} addresses this issue for joint transmission techniques. The authors proved that delay minimization leads to a submodular maximization problem as long as the backhaul delay is larger than the transmission delay over the wireless channel. Under this condition, the greedy algorithm provides again a guaranteed approximation ratio.
Reference~\cite{chen17} considers two different CoMP techniques, i.e.,~joint transmission and parallel transmission, and derives formulas for the hit rate using tools from stochastic geometry. Nevertheless, all the aforementioned works share the limiting assumption in~\cite{Caire12} that geographical content popularity profiles are known by the system. Reliable popularity estimates over small geographical areas may be very hard to obtain~\cite{leconte16}. On the contrary, policies like \textproc{LRU}{} and its variants (\textproc{$q$LRU}, \textsc{2LRU}, \dots) do not rely on popularity estimation and are known to behave well under time-varying popularities. For this reason they are a de-facto standard in most of the deployed caching systems. Reference~\cite{giovanidis16} proposes a generalization of \textproc{LRU}{} to a dense cellular scenario. As above, a user at the intersection of multiple cells can check the availability of the content at every covering cell and then download it from one of them. The difference with respect to standard \textproc{LRU}{} is how cache states are updated. In particular, the authors of~\cite{giovanidis16} considered two schemes: \textproc{LRU-One}{} and \textproc{LRU-All}. In \textproc{LRU-One}, each user is assigned to a reference cell/cache and only the state of her reference cache is updated upon a hit or a miss, independently of which cache the content has been retrieved from. In \textproc{LRU-All}, the state of all caches covering the user is updated.
Recently, \cite{paschos19} proposed a novel approach to design coordinated caching policies in the framework of online linear optimization. A projected gradient method is used to tune the fraction of each content to be stored in a cache, and regret guarantees are proved. Unfortunately, this solution requires storing pseudo-random linear combinations of original file chunks and, even ignoring the additional cost of coding/decoding, it has $\mathcal O(F)$ computation time per request as well as $\mathcal O(F)$ memory requirements, where $F$ is the catalogue size. Also, coding excludes the possibility to exploit CoMP techniques, because all chunks are different. {A caching algorithm based on a deep reinforcement learning approach was instead recently proposed in \cite{wu2019dynamic}}.
Lastly, reference \cite{leonardi18jsac} proposes a novel approximate analytical approach to study systems of interacting caches under different caching policies, whose predictions are surprisingly accurate. The framework builds upon the well known characteristic time approximation~\cite{che02} for individual caches as well as an exponentialization approximation. We also rely on the same approximations, which are described in Sect.~\ref{s:optimality}. \cite{leonardi18jsac}~also proposes the policy \textproc{$q$LRU-Lazy}{}, whose adoption in a dense cellular scenario is shown to achieve hit ratios very close to those offered by the greedy scheme proposed in~\cite{Caire12}, even without information about popularity profiles. \mbox{\textproc{$q$LRU-$\Delta$}}{} generalizes \textproc{$q$LRU-Lazy}{} to different metrics as well as to CoMP transmissions. {Furthermore, the analytical results about optimality obtained in this paper, where we adopt a different technique, are significantly stronger, as well as more elegant and concise.
In this paper, indeed, we prove global optimality for \mbox{\textproc{$q$LRU-$\Delta$}}{}, while in \cite{leonardi18jsac} only local optimality was shown for \textproc{$q$LRU-Lazy}{}.}
\subsection{Paper Contribution}
\label{s:contri}
The main contribution of this paper is the proposal of \mbox{\textproc{$q$LRU-$\Delta$}}, a general-purpose caching policy that can be tailored to optimize different performance metrics. The policy implicitly coordinates caching decisions across different caches, also taking into account joint transmission opportunities. \mbox{\textproc{$q$LRU-$\Delta$}}{} is presented in detail in Sect.~\ref{s:operation}, after the introduction of our network model in~Sect.~\ref{s:network_model}. Sect.~\ref{s:optimality} is devoted to proving that, under a stationary request process, \mbox{\textproc{$q$LRU-$\Delta$}}{} achieves an optimal configuration as the parameter $q$ converges to $0$. The proof is technically sophisticated: it relies on the characterization of stochastically stable states using techniques originally proposed by P.~R.~Kumar and his coauthors~\cite{connors88,connors89,desai94} to study simulated annealing. In a previous version of this report~\cite{arxiv1} we used a different approach, inspired by~\cite{young93}, to prove the following weaker result: the performance metric of interest cannot be improved by replacing a single content at one of the caches. In order to illustrate the flexibility of~\mbox{\textproc{$q$LRU-$\Delta$}}, we show in Sect.~\ref{s:case_studies} how to particularize the policy for two specific performance metrics, i.e.,~the hit rate and the retrieval delay under CoMP. While our theoretical guarantees hold only asymptotically, numerical results show that \mbox{\textproc{$q$LRU-$\Delta$}}{} with $q\in [0.01,0.1]$ already approaches the performance of the {offline} allocation obtained through the greedy algorithm, which, while not provably optimal, is the best baseline we can compare to. Note that the greedy algorithm requires complete knowledge of network topology, transmission characteristics, and request process, while \mbox{\textproc{$q$LRU-$\Delta$}}{} is a reactive policy that relies only on a noisy estimation of the marginal benefit deriving from a local copy.
We remark that the goal of \mbox{\textproc{$q$LRU-$\Delta$}}{} and this paper is not to propose ``the best'' policy for \emph{any} scenario with ``coupled'' caches, but rather a simple and easily customizable policy framework with provable theoretical properties. Currently, new caching policies designed for a particular scenario/metric are often compared with classic policies like \textproc{LRU}{} or \textproc{LFU}{} or the more recent \textproc{LRU-One}{} and \textproc{LRU-All}{}. This comparison appears to be quite unfair, given that these policies 1) ignore or only partially take into account the potential advantage of coordinated content allocations and 2) all target the hit rate as performance metric. \mbox{\textproc{$q$LRU-$\Delta$}}{} may be a valid reference point, while being simple to implement. A Swiss-army knife is a very helpful object to carry around, even if each of its tools may not be the best one to accomplish its specific task.
\section{Network model}
\label{s:network_model}
We consider a set of $B$ base stations (BSs) arbitrarily located in a given region $R \subseteq \mathbb R^2$, each equipped with a local cache of size $C$. Users request contents from a finite catalogue of size $F$.
{Given a content $f$, a specific allocation of its copies across the caches is specified by the vector $\mathbf x_f = (x^{(1)}_f, x^{(2)}_f,\dots, x^{(B)}_f)$,} where $x^{(b)}_f = 1$ (resp.~$x^{(b)}_f=0$) indicates that a copy of $f$ is present (resp.~absent) at BS $b$. Let $\mathbf e^{(b)}$ be the vector with a $1$ in position $b$ and all other components equal to $0$. We write $\mathbf x_f \oplus \mathbf e^{(b)}$ to indicate a new cache configuration where a copy of content $f$ is added at base station $b$, if not already present {(i.e., $\mathbf x_f\oplus \mathbf e^{(b)}=\mathbf x_f$ whenever $x_f^{(b)}=1$)}. Similarly, $\mathbf x_f \ominus \mathbf e^{(b)}$ indicates a new allocation where there is no copy of content $f$ at $b$ {($\mathbf x_f \ominus \mathbf e^{(b)}=\mathbf x_f $ whenever $x_f^{(b)}=0$)}. {Finally, we denote by $\mathbf X_f(t)=\left(X^{(1)}_f(t), \dots, X^{(B)}_f(t)\right)$ the configuration of content $f$ at time $t$.}
When user $u$ requests and receives content $f$, some network stakeholder achieves a gain that we assume to depend on user $u$, content $f$ and the current allocation of content $f$ copies ($\mathbf X_f(t)$). We denote the gain as $g_f(\mathbf X_f(t),u)$. For example, if the key actor is the content server, $g_f(\mathbf X_f(t),u)$ could be the indicator function denoting whether $u$ can retrieve the content from one of the local caches (reducing the load on the server). If it is the network service provider, $g_f(\mathbf X_f(t),u)$ could be the number of bytes caching prevents from traversing bottleneck links. Finally, if it is the user, $g_f(\mathbf X_f(t),u)$ could be the delay reduction achieved through the local copies. We assume that $g_f(\mathbf 0,u)=0$, i.e.,~if there is no copy of content $f$, the gain is zero.
The gain $g_f(\mathbf X_f(t),u)$ may be a random variable. For example, it may depend on the instantaneous characteristics of the wireless channels, or on some random choice of the user, like the BS from which the file will be downloaded. We assume that, conditionally on the network status $\mathbf X_f(t)$ and the user $u$, these random variables are independent from one request to the other and are identically distributed with expected value ${\mathbb E}[g_f(\mathbf X_f(t),u)]$.
Our theoretical results hold under a stationary request process. In particular, we consider two settings. In the first one, there is a finite set of $U$ users located at specific positions. Each user $u$ requests the different contents according to independent Poisson processes with rates $\lambda_{f,u}$ for $f \in \{1, 2, \dots, F\}$. The total expected gain per time unit from a given placement $\mathbf x_f$ is
\begin{equation}
G_f(\mathbf x_f) = \sum_{u=1}^U \lambda_{f,u} {\mathbb E}\left[g_f(\mathbf x_f,u)\right].
\end{equation}
In the second setting, a potentially unbounded number of users are spread over the region $R$ according to a Poisson point process with density $\mu()$. Users are indistinguishable but for their position $\mathbf r$. In particular, a user $u$ in $\mathbf r$ generates a Poisson request process with rate $\lambda_f(\mathbf r)$ and experiences a gain $g_f(\mathbf x_f,\mathbf r)$. The total expected gain from a given placement of content~$f$ copies is in this case
\begin{equation}
G_f(\mathbf x_f) = \int_R \lambda_{f}(\mathbf r) {\mathbb E}\left[g_f(\mathbf x_f,\mathbf r)\right] \mu(\mathbf r) \,\textrm d \mathbf r.
\end{equation}
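As a toy illustration of these definitions (a sketch with made-up coverage and rates, not the paper's implementation), the expected gain for the hit-rate metric can be computed as follows, together with the marginal gains $\Delta G_f^{(b)}$ that are formally introduced right after this snippet:
\begin{verbatim}
# Toy illustration (not from the paper): expected gain G_f(x) for
# the hit-rate metric, where g_f(x,u)=1 if some BS covering user u
# stores f, plus the marginal gains Delta G_f^{(b)} defined next.
import numpy as np

B, U = 4, 3
cover = np.array([[1, 1, 0, 0],    # cover[u][b]=1 iff BS b covers u
                  [0, 1, 1, 0],
                  [0, 0, 1, 1]])
lam = np.array([2.0, 1.0, 0.5])    # request rates lambda_{f,u}

def g(x, u):                       # g_f(x,u), hit-rate metric
    return float(np.any(cover[u] & x))

def G(x):                          # expected gain G_f(x)
    return sum(lam[u] * g(x, u) for u in range(U))

def marginal_G(x, b):              # Delta G_f^{(b)}(x)
    x_minus = x.copy()
    x_minus[b] = 0
    return G(x) - G(x_minus)

x = np.array([1, 0, 1, 0])
print(G(x), [marginal_G(x, b) for b in range(B)])
\end{verbatim}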
{We observe that $G_f(\cdot)$ is non-negative and non-decreasing, in the sense that $G_f(\mathbf x_f \oplus \mathbf e^{(b)})\ge G_f(\mathbf x_f)$, for each $\mathbf x_f$ and each $b$.}
In what follows, we will refer to the marginal gain from a copy at base station $b$. When the set of users is finite, we define the following quantities, respectively for a given user and for the whole network:
\begin{align}
&\Delta g_f^{(b)}(\mathbf x_f, u) \triangleq g_f(\mathbf x_f,u) - g_f(\mathbf x_f \ominus \mathbf e^{(b)},u),\\
& \Delta G_f^{(b)}(\mathbf x_f) \triangleq G_f(\mathbf x_f) - G_f(\mathbf x_f \ominus \mathbf e^{(b)}). \label{e:deltaGb}
\end{align}
{$\Delta g_f^{(b)}(\mathbf x_f, u)$ represents the gain increase observed by user $u$ when the system moves from state $\mathbf x_f \ominus \mathbf e^{(b)}$ to state $ \mathbf x_f$. $\Delta G_f^{(b)}(\mathbf x_f) $ represents the expected gain increase when the system moves from state $ \mathbf x_f \ominus \mathbf e^{(b)}$ to state~$\mathbf x_f$.} It is possible to define similarly $\Delta g_f^{(b)}(\mathbf x_f, \mathbf r)$ when users' requests are characterized by a density over the region $R$. In what follows, we will usually refer to the case of a finite set of users, but all results hold in both scenarios.
We would like our dynamic policy to converge to a content placement that maximizes the total expected gain, i.e.,
\begin{align}
\label{e:static_opt_gen}
& \maxim_{\mathbf x_1, \mathbf x_2, \dots, \mathbf x_F} & & G(\mathbf x) \triangleq \sum_{f=1}^F G_f(\mathbf x_f) \\
& \text{subject to} & & \sum_{f=1}^F x_f^{(b)} =C \;\;\; \forall b = 1, \ldots, B,\nonumber\\
& & & x_f^{(b)} \in \{0,1\} \;\;\; \forall f =1, \ldots, F, \; \forall b = 1, \ldots, B, \nonumber
\end{align}
even in the absence of a priori knowledge about the request process. In the three specific examples we have mentioned above, solving problem~\eqref{e:static_opt_gen} respectively corresponds to 1) maximizing the hit ratio, 2) minimizing the network traffic, and 3) minimizing the retrieval time. This problem is in general NP-hard, even in the case of the simple hit ratio metric~\cite{shanmugam13}. { Note also that it is possible to suitably define the gain function to take into account a notion of fairness across contents, for example to determine a weighted $\alpha$-fair cache allocation~\cite{kelly14stochastic_networks}. }
\section{\mbox{\textproc{$q$LRU-$\Delta$}}}
\label{s:operation}
We describe here how our system operates and the specific caching policy we propose to approach the solution of Problem~\eqref{e:static_opt_gen}. When user $u$ has a request for content $f$, it broadcasts an inquiry message to the set of BSs ($I_u$) it can communicate with. The subset ($J_{u,f}$) of those BSs that have content $f$ stored locally declare their availability to user~$u$. If no local copy is available, the user sends the request to one of the BSs in $I_u$, which will need to retrieve it from the content provider.\footnote{ This two-step procedure introduces some additional delay, but this is inevitable in any femtocaching scheme where the BSs need to coordinate to serve the content. } If a local copy is available ($J_{u,f}\neq \emptyset$) and only point-to-point transmissions are possible, the user sends an explicit request to download it to one of the BSs in $J_{u,f}$. Different user criteria can be defined to select the BS {in $J_{u,f}$} to download from (e.g., SNR, or a pre-assigned priority list~\cite{LTE-book}).
If CoMP techniques are supported, then all the BSs in $J_{u,f}$ coordinate to jointly transmit the content to the user.
Our policy \mbox{\textproc{$q$LRU-$\Delta$}}{} works as follows. Each BS $b$ with a local copy ($b \in J_{u,f}$) moves the content to the front of the cache with probability proportional to the marginal gain due to the local copy, i.e.,
\begin{equation}
\label{e:update}
p_f^{(b)}(u) = \beta \Delta g_f^{(b)}(\mathbf X_f(t),u),
\end{equation}
where the constant {$\beta\le \left(\max_{u, b, \mathbf x_f} \Delta g_f^{(b)}(\mathbf x_f,u)\right)^{-1}$ guarantees that the RHS of the above equation is always in $[0,1]$}. At least one of the BSs without the content (i.e., those in $I_u \setminus J_{u,f}$) {will store} an additional copy of $f$ with probability
\begin{equation}
\label{e:miss}
q_f^{(b)}(u) = q^{(b)} \delta \Delta g_f^{(b)}(\mathbf X_f(t) \oplus \mathbf e^{(b)},u),
\end{equation}
where $\delta$ plays the same role as $\beta$ above and $q^{(b)}$ is a dimensionless parameter in $(0,1]$. { Some information about the local neighborhood (e.g., how many additional copies of the content are stored at close-by caches also serving that user) may be needed to compute the marginal gains in \eqref{e:update} and \eqref{e:miss}. Such information, however, is limited, and can be piggybacked on existing messages the user sends to query such caches, or even on the channel estimate messages mobile devices regularly send to nearby BSs. In Sect.~\ref{s:case_studies} we detail what information needs to be exchanged when the system aims to maximize the hit rate or minimize the delay. }
We are going to prove that \mbox{\textproc{$q$LRU-$\Delta$}}{} is asymptotically optimal when the values $q^{(b)}$ converge to $0$. This result holds under different variants of~\eqref{e:update} and~\eqref{e:miss}. First, as it will be clear from the discussion in the following section, our optimality result depends on ${\mathbb E}[p_f^{(b)}(u)]$ being proportional to ${\mathbb E}[\Delta g_f^{(b)}(\mathbf X_f(t),u)]$. It is then possible to replace $\Delta g_f^{(b)}(\mathbf X_f(t),u)$ in~\eqref{e:update} with any other unbiased estimator of ${\mathbb E}[\Delta g_f^{(b)}(\mathbf X_f(t),u)]$. We are going to show an example when this is useful in~Sect.~\ref{s:case_studies}. Second, upon a miss, an additional copy of content $f$ can be stored simultaneously by any number ($>0$) of BSs in $I_u \setminus J_{u,f}$, and the probability {$q_f^{(b)}(u)$ could simply be chosen equal to $q^{(b)}$, i.e., made independent of the caching allocation.} We propose~\eqref{e:miss} because this rule is more likely to add copies that bring a large benefit $\Delta g_f^{(b)}(\mathbf X_f(t) \oplus \mathbf e^{(b)},u)$. This choice likely improves convergence speed, and hence the performance in non-stationary popularity environments.
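For concreteness, the per-request behaviour just described can be summarised by the following sketch (an illustration of the hit and miss rules in~\eqref{e:update} and~\eqref{e:miss}, not a reference implementation; the marginal-gain estimator \texttt{delta\_g} is an assumed external input):
\begin{verbatim}
# Per-request sketch of qLRU-Delta at one BS.  delta_g(f, u, x_f)
# is an assumed external estimator of the marginal gain; x_f is
# the current allocation of content f (here, a set of BSs).
import random
from collections import OrderedDict

class QLRUDelta:
    def __init__(self, capacity, q, beta, delta, delta_g):
        self.cache = OrderedDict()   # front = most recently refreshed
        self.C, self.q = capacity, q
        self.beta, self.delta, self.delta_g = beta, delta, delta_g

    def on_request(self, f, u, x_f):
        if f in self.cache:          # hit: refresh w.p. beta * dg
            if random.random() < self.beta * self.delta_g(f, u, x_f):
                self.cache.move_to_end(f, last=False)
        else:                        # miss: insert w.p. q * delta * dg
            x_plus = x_f | {self}    # allocation with the new local copy
            p_in = self.q * self.delta * self.delta_g(f, u, x_plus)
            if random.random() < p_in:
                if len(self.cache) >= self.C:
                    self.cache.popitem(last=True)   # evict queue tail
                self.cache[f] = True
                self.cache.move_to_end(f, last=False)
\end{verbatim}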
\section{Optimality of \mbox{\textproc{$q$LRU-$\Delta$}}}
\label{s:optimality}
We are going to prove that \mbox{\textproc{$q$LRU-$\Delta$}}{} achieves an optimal configuration when the values $q^{(b)}$ vanish. The result relies on two approximations: the usual characteristic time approximation (CTA) for caching policies (also known as Che's approximation)~\cite{fagin77,che02} and the new exponentialization approximation (EA) for networks of interacting caches originally proposed in~\cite{leonardi18jsac}.
The main result of this paper is the following:
\begin{prop}
\label{p:qlrud_convergence_general}
\textbf{[loose statement]}
Under characteristic time and exponentialization approximations, a spatial network of \mbox{\textproc{$q$LRU-$\Delta$}}{} caches asymptotically achieves an optimal caching configuration when the parameters $q^{(b)}$ vanish.
\end{prop}
Before moving to the detailed proof, we provide some intuition about why this result holds. We observe that, as $q^{(b)}$ converges to $0$, cache $b$ exhibits two different dynamics with very different timescales: the \emph{insertion of new contents} tends to happen more and more rarely ($q_f^{(b)}(u)$ converges to $0$), while the frequency of \emph{position updates} for files already in the cache is unchanged ($p_f^{(b)}(u)$ does not depend on $q^{(b)}$). A file $f$ at cache $b$ is moved to the front with a probability proportional to $\Delta g_f^{(b)}(\mathbf X_f,u)$, i.e., proportional to how much the file contributes to improve the performance metric of interest. This is a very noisy signal: upon a given request, the file is moved to the front or not. At the same time, as $q$ converges to $0$, more and more moves-to-the-front occur between any two file evictions. The expected number of moves-to-the-front file $f$ experiences is proportional to 1) how often it is requested ($\lambda_{f,u}$) and 2) how likely it is to be moved to the front upon a request ($p_f^{(b)}(u)$). Overall, the expected number of moves is proportional to $\sum_u \lambda_{f,u} {\mathbb E}\!\left[\Delta g_f^{(b)}(\mathbf X_f,u)\right]$, i.e.,~its contribution to the expected gain. By the law of large numbers, the random number of moves-to-the-front will be close to its expected value and it becomes likely that the least valuable file in the cache occupies the last position. We can then think that, when a new file is inserted in the cache, it will replace the file that contributes the least to the expected gain. \mbox{\textproc{$q$LRU-$\Delta$}}{} then behaves as a greedy algorithm that, driven by the request process, {replaces the least useful file in the cache at each insertion}, until it reaches a maximum.
\subsection{Characteristic Time Approximation}
{In this section we focus on a single cache (i.e., one base station in isolation), or equivalently on a cache $b$ in a network of $B$ non-overlapping cells.} {CTA is a standard approximation for a cache in isolation}, and one of the most effective approximate approaches for the analysis of caching systems. CTA was first introduced (and analytically justified) in~\cite{fagin77} and later rediscovered in~\cite{che02}. It was originally proposed for \textproc{LRU}{} under the IRM request process, and it has been later extended to different caching policies and different request processes~\cite{garetto16,garetto15}. The characteristic time $T_c^{(b)}$ is the time a given content spends in the cache since its insertion until its eviction in absence of any request for it. In general, {this quantity depends in a complex way on the dynamics of other contents' requests. Instead, the CTA assumes that $T_c^{(b)}$} is a random variable independent from other contents' dynamics and with an assigned distribution (the same for every content). This assumption makes it possible to decouple the dynamics of the different contents: upon a miss for content $f$, the content is retrieved and a timer with random value $T_c^{(b)}$ is generated. When the timer expires, the content is evicted from the cache.
Cache policies differ in {\it i)} the distribution of $T_c^{(b)}$ and {\it ii)} what happens to the timer upon a hit. For example, $T_c^{(b)}$ is a constant under \textproc{LRU}, \textproc{$q$LRU}, \textproc{2LRU}, and \textproc{FIFO}{} and exponentially distributed under \textproc{RANDOM}. Upon a hit, the timer is renewed under \textproc{LRU}, \textproc{$q$LRU}, and \textproc{2LRU}, but not under \textproc{FIFO}{} and \textproc{RANDOM}. {In what follows we will only consider policies for which $T_c^{(b)}$ is a constant.}
Under CTA, the instantaneous cache occupancy can violate the hard buffer constraint. {The value of $T_c^{(b)}$ is obtained by imposing that the expected occupancy equals the buffer size:
\begin{equation}
\label{e:che_single_cache}
\sum_{f=1}^F \pi_f^{(b)} = C
\end{equation}
where $\pi_f^{(b)}$ denotes the probability that content $f$ is in cache $b$. Its expression as a function of $T_c^{(b)}$ depends on the specific caching policy~\cite{garetto16}.} Despite its simplicity, CTA was shown to provide asymptotically exact predictions for a single \textproc{LRU}{} cache under IRM as the cache size grows large~\cite{fagin77,Jele99,fricker2012}.
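Numerically, the fixed point~\eqref{e:che_single_cache} is straightforward to solve, since the expected occupancy is monotone in $T_c^{(b)}$. As an illustration, the following sketch solves it for plain \textproc{LRU}{} under IRM, where the standard expression $\pi_f = 1-e^{-\lambda_f T_c}$ applies (the occupancy expression for \mbox{\textproc{$q$LRU-$\Delta$}}{} differs):
\begin{verbatim}
# Sketch: solve sum_f pi_f(T_c) = C by bisection for plain LRU
# under IRM, where pi_f = 1 - exp(-lambda_f * T_c).
import numpy as np

def solve_Tc(lam, C, lo=0.0, hi=1e6, tol=1e-9):
    occupancy = lambda Tc: np.sum(1.0 - np.exp(-lam * Tc))
    for _ in range(200):          # occupancy is monotone in T_c
        mid = 0.5 * (lo + hi)
        if occupancy(mid) < C:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

lam = 1.0 / np.arange(1, 1001) ** 0.8   # Zipf-like request rates
Tc = solve_Tc(lam, C=100)
pi = 1.0 - np.exp(-lam * Tc)            # sums (approximately) to C
\end{verbatim}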
Once inserted in the cache, a given content $f$ will sojourn in the cache for a random amount of time $T_{S,f}^{(b)}$, independently of the dynamics of other contents. $T_{S,f}^{(b)}$ can be characterized for the different policies. In particular, if the timer is renewed upon a hit, we have:
{
\begin{equation}\label{sojourn-time-struct}
T_{S,f}^{(b)}= \sum_{k=1}^{\infty} Y_k \mathrm{1}_{\{Y_1< T_c^{(b)}, \ldots, Y_k<T_c^{(b)} \}}+ T_c^{(b)}= \sum_{k=1}^M Y_k+ T_c^{(b)},
\end{equation}
where $M \in \{0,1, \dots\}$ is the number of consecutive hits following a miss, and \changes{$Y_k$ is the time interval between the $k$-th request following a miss and the previous content request}. }
We want to compute the expected value of $T_{S,f}^{(b)}$, which we denote by $1/\nu_f^{(b)}$. When the number of users is finite, requests for content $f$ from user~$u$ arrive according to a Poisson process with rate $\lambda_{f,u}$. The time instants at which content $f$ is moved to the front are generated by thinning this Poisson process with probability $\beta {\mathbb E}[\Delta g_f^{(b)}(u)]$.\footnote{ Here we simply write $\Delta g_f^{(b)}(u)$ instead of $\Delta g_f^{(b)}(\mathbf X_f^{(b)},u)$, because we are considering a single cache. Similarly, we write $\Delta G_f^{(b)}$, instead of $\Delta G_f^{(b)}(\mathbf X_f(t))$. } The resulting sequence is then also a Poisson process with rate $\lambda_{f,u} \beta {\mathbb E}[\Delta g_f^{(b)}(u)]$. Finally, as request processes from different users are independent, the aggregate cache update process due to all users is a Poisson process with rate
$$\beta \sum_{u=1}^U \lambda_{f,u} {\mathbb E}[\Delta g_f^{(b)}(u)] = \beta \Delta G_f^{(b)}.$$
The same result holds when we consider a density of requests over the region $R$. As the aggregate cache updates follow a Poisson process with rate $\beta \Delta G_f^{(b)}$, the $\{Y_k\}$ are i.i.d. truncated exponential random variables with rate $\beta \Delta G_f^{(b)}$ over the interval $[0,T_c^{(b)}]$ and their expected value is
\[ {\mathbb E}[Y_k]= \frac{1}{\beta \Delta G_f^{(b)}} - \frac{T_c^{(b)} } {e^{\beta \Delta G_f^{(b)} T_c^{(b)}} -1} .\]
Moreover, the probability that no update occurs during a time interval of length $T^{(b)}_c$ is $e^{-\beta \Delta G_f^{(b)} T^{(b)}_c}$.
Then $M$ is distributed as a geometric random variable with values in $\{0, 1, \dots\}$ and expected value
\[{\mathbb E}[M]=\frac{1-e^{-\beta \Delta G_f^{(b)} T_c^{(b)}}}{e^{-\beta \Delta G_f^{(b)} T_c^{(b)}}}= e^{\beta \Delta G_f^{(b)} T_c^{(b)}} -1.\]
{Since $M$ is clearly a stopping time for the sequence $\{ Y_k\}_k$,} we can then apply Wald's Lemma to \eqref{sojourn-time-struct}, obtaining:
\begin{align}
\label{e:rate}
\nu_f^{(b)}& \triangleq \frac{1}{\mathbb{E}[T_{S,f}^{(b)}]} = \frac{1}{\mathbb{E}[Y_1] \;\mathbb{E}[M] +T_c^{(b)}}\nonumber\\
& = \frac{\beta \Delta G_f^{(b)}}{e^{\beta \Delta G_f^{(b)} T_c^{(b)}}-1}.
\end{align}
\subsection{Exponentialization Approximation}
We consider now the case when the $B$ cells may overlap. The sojourn time of content $f$ inserted at time $t$ in cache~$b$ will now depend on the whole state vector $\mathbf X_f(\tau)$ for $\tau \ge t$ (until the content is evicted), because the content is updated with probability~\eqref{e:update}, which depends on the marginal gain of the copy (and then on $\mathbf X_f(\tau)$). EA consists in assuming that the stochastic process $\mathbf X_f(t)$ is a continuous-time Markov chain. For each $f$ and $b$ the transition rate $\nu_f^{(b)}$ from state $\mathbf X_f(t)=(x_f^{(b)}=1, \mathbf x_f^{(-b)})$ to $(x_f^{(b)}=0, \mathbf x_f^{(-b)})$ is given by \eqref{e:rate} with $\Delta G_f^{(b)}$ replaced by $\Delta G^{(b)}_f(\mathbf X_f(t))$. EA then replaces the original stochastic process, whose analysis is extremely difficult, {with a set of MCs $\mathbf X_f(t)$, for $f=1, \dots, F$, which are only coupled through the characteristic times $T_c^{(b)}$ at the BSs.} Reference~\cite{leonardi18jsac} shows that this has no impact on any system metric that depends only on the stationary distribution in the following cases:
\begin{enumerate}
\item isolated caches,
\item caches using the \textproc{RANDOM}{} policy,
\item caches using the \textproc{FIFO}{} policy, as long as the resulting Markov chain $\mathbf X_f(t)$ is reversible.
\end{enumerate}
Numerical results in \cite{leonardi18jsac} show that the approximation is practically very accurate also in more general cases. {Similarly to what was done for a single cache, we can determine the values $T_c^{(b)}$ at each cache by imposing that:
\begin{equation}
\label{e:che_multi_cache}
\sum_{f=1}^F\sum_{\mathbf x_f \in \{0,1\}^B} x_f^{(b)} \pi_f(\mathbf x_f) = C,
\end{equation}
where $\pi_f(\mathbf x_f)$ denotes the stationary probability that the MC $\mathbf X_f(t)$ is in state $\mathbf x_f$. }
\subsection{Transition rates of the continuous time Markov Chain as $q$ vanishes}
For a given content $f$, let $\mathbf x_f$ and $\mathbf y_f$ be two possible states of the MC $\mathbf X_f(t)$. We write $\mathbf x_f < \mathbf y_f$ whenever $x_f^{(b)} \le y_f^{(b)}$ for each $b$ and there is at least one $b_0$ such that $x_f^{(b_0)} < y_f^{(b_0)}$; we then say that $\mathbf y_f$ is an \emph{ancestor} of $\mathbf x_f$, and $\mathbf x_f$ is a \emph{descendant} of $\mathbf y_f$. Furthermore, we denote by $|\mathbf x_f|=\sum_b x_f^{(b)}$ the number of copies of content $f$ stored in state $\mathbf x_f$, and we call it the weight of the state $\mathbf x_f$. If $\mathbf x_f < \mathbf y_f$ and $|\mathbf x_f|=|\mathbf y_f|-1$, we say that $\mathbf y_f$ is a \emph{parent} of $\mathbf x_f$ and $\mathbf x_f$ is a \emph{child} of $\mathbf y_f$.
Now observe that, by construction, transition rates in the MC are different from 0 only between pairs of states $\mathbf x_f$ and $\mathbf y_f$ such that $\mathbf x_f < \mathbf y_f$ or $\mathbf y_f < \mathbf x_f$. The transition $\mathbf x_f \to \mathbf y_f$ is called an \emph{upward} transition, while $\mathbf y_f \to \mathbf x_f$ is called a \emph{downward} transition.
A downward transition can only occur from a parent to a child ($|\mathbf x_f |= |\mathbf y_f |-1$). Let $b_0$ be the index such that $x_f^{(b_0)}<y_f^{(b_0)}$. We have that the downward rate is
\begin{equation}
\label{e:downward_rate}
\rho_{[\mathbf y_f \to \mathbf x_f]} = \nu_f^{(b_0)}(\mathbf y_f) = \frac{\beta \Delta G_f^{(b_0)}(\mathbf y_f)}{e^{\beta \Delta G_f^{(b_0)}(\mathbf y_f) T_c^{(b_0)}}-1}.
\end{equation}
Upward transitions can occur to states that are ancestors. The exact transition rate between state $\mathbf x_f$ and state $\mathbf y_f$ with $\mathbf x_f < \mathbf y_f$ can have a quite complex expression, because it depends on the joint decisions of the BSs in $I_u\setminus J_{u,f}$. Luckily, for our analysis, we are only interested in how this rate depends on $q$ when $q$ converges to $0$. We use the symbol $\propto$ to indicate that two quantities are asymptotically proportional for small $q$, i.e., $f(q) \propto g(q)$ if and only if there exists a strictly positive constant $a$ such that $\lim_{q \to 0} f(q)/g(q)=a$. If $a=1$, then we write $f(q)\sim g(q)$, following Bachmann-Landau notation. Upon a request for $f$, a transition $ \mathbf x_f\to \mathbf y_f$ occurs if $|\mathbf y_f| - |\mathbf x_f|$ BSs independently store, each with probability proportional to its parameter $q^{(b)}$, an additional copy of the content $f$ in their local cache. It follows that:
\begin{equation}
\label{e:upward_rate}
\rho_{[\mathbf x_f \to \mathbf y_f]} \propto \prod_{b \,|\, y_f^{(b)} - x_f^{(b)} = 1} q^{(b)}.
\end{equation}
Now, as the $q^{(b)}$ converge to $0$, for every $f$ every upward rate $\rho_{[\mathbf x_f \to \mathbf y_f]}$ tends to 0. Therefore, the characteristic time $T_c^{(b)}$ of every cell must diverge. In fact, if this were not the case for a cache $b$, none of the contents would be found in this cache asymptotically, because upward rates would tend to zero while downward rates would not. This would contradict the set of constraints~\eqref{e:che_multi_cache} imposed by the CTA. Therefore, necessarily $T_c^{(b)}$ diverges for every cell $b$. More precisely, we must have $T_c^{(b)}=\Theta(\log \frac{1}{q})$ at every cache, otherwise we fail to meet~\eqref{e:che_multi_cache}. In other words, there exist positive constants $a_l^{(b)}$ and $a_u^{(b)}$ such that $T_c^{(b)}(q)/ \log (1/q)$ asymptotically belongs to $[a_l^{(b)},a_u^{(b)}]$. { Given that the behaviour of $T_c^{(b)}(q)/ \log (1/q)$ is expected to be smooth, we assume that there exist (potentially different) positive constants $\gamma_b$ for all $b \in \{1, \dots, B\}$ such that $T_c^{(b)}(q)\sim \frac{1}{\beta \gamma_{b}}\log \frac{1}{q}$ and $\frac{1}{\beta \gamma_{b} }\in [a_l^{(b)},a_u^{(b)}]$.} Now, we consider that BS $b$ employs $q^{(b)}= q^{\gamma_b}$. This choice makes the characteristic time scale in the same way at each cache: $T_c^{(b)}(q^{(b)}) \sim \frac{1}{\beta}\log \frac{1}{q}$.
From this result and~\eqref{e:downward_rate}, it follows that a downward transition from a parent $\mathbf y_f$ to a child $\mathbf x_f=\mathbf y_f \ominus \mathbf e^{(b_0)}$ occurs with rate
\[ \rho_{[\mathbf y_f \to \mathbf x_f]} \propto q^{\Delta G_f^{(b_0)}(\mathbf y_f) }. \]
The following lemma summarises the results of this section.
{
\begin{lem}
\label{l:asymptotic_rates}
Consider two neighbouring states $\mathbf x_f$ and $\mathbf y_f$ with $\mathbf x_f < \mathbf y_f$ and the set of positive constants $\{\gamma_b, b=1, \dots, B\}$ such that $T_c^{(b)}(q) \sim \frac{1}{\beta \gamma_b} \log \frac{1}{q}$. If $q^{(b)} = q^{\gamma_b}$, then
\[ \rho_{[\mathbf x_f \to \mathbf y_f]} \propto q^{\boldsymbol \gamma^\intercal \left(\mathbf y_f - \mathbf x_f\right)};
\]
if $ \mathbf x_f = \mathbf y_f \ominus \mathbf e^{(b_0)} $, then
\[ \rho_{[\mathbf y_f \to \mathbf x_f]} \propto q^{\Delta G_f^{(b_0)}(\mathbf y_f) }. \]
\end{lem}
}
From now on we will assume that $q^{(b)} = q^{\gamma_b}$. For each possible transition, we define its \emph{direct resistance} to be the exponent of the parameter $q$; then $r_f(\mathbf x_f,\mathbf y_f)=\boldsymbol \gamma^\intercal \left(\mathbf y_f - \mathbf x_f\right)$, $r_f(\mathbf y_f,\mathbf x_f)= \Delta G_f^{(b_0)}(\mathbf y_f)$ and $r_f(\mathbf x_f,\mathbf x_f)=0$. Observe that the higher the resistance, the less likely the corresponding transition.
\subsection{Stochastically stable states}
In this section, we first introduce the key concept of stochastically stable states, in which, as $q$ converges to $0$, the system gets trapped. Then, we provide a characterization of stochastically stable states (Corollary~\ref{c:stochastically_stable}), which will be useful in Sect.~\ref{s:opt_proof} to prove that they correspond to optimal configurations.
{ We consider the discrete time MC $\hat{\mathbf{X}}_f(k)$, obtained by sampling the continuous time MC $\mathbf{X}_f(t)$ with a period $\tau>0$, i.e., $\hat{\mathbf{X}}_f(k)=\mathbf{X}_f(k \tau)$. } Let $P_{f,q}$ denote the transition probability matrix of $\hat{\mathbf{X}}_f(k)$. For $q=0$, the set of contents in the cache does not change, each state is absorbing, and any probability distribution is a stationary probability distribution for $P_{f,0}$. We are rather interested in the asymptotic behaviour of the MC when $q$ converges to $0$. For $q>0$ the MC is finite, irreducible,\footnote{ This is guaranteed if the insertion probabilities in \eqref{e:miss} are positive. In some specific settings, it may be that $\Delta g_f^{(b)}(\mathbf X_f(t) \oplus \mathbf e^{(b)},u)=0$ for each $u$. We can then consider $q_f^{(b)}(u) = q^{(b)} \delta \max(\Delta g_f^{(b)}(\mathbf X_f(t) \oplus \mathbf e^{(b)},u),\epsilon)$ with $\epsilon>0$, or simply $q_f^{(b)}(u) =q^{(b)}$. } and aperiodic, and thus admits a unique stationary probability $\boldsymbol \pi_{f,q}$.
\begin{defn}\label{d:stocstable}
A state $\mathbf x_f$ is called stochastically stable if $\lim_{q \to 0 } \boldsymbol \pi_{f,q}(\mathbf x_f) >0$.
\end{defn}
We are going to characterize such states.
{ The set of possible transitions of $\hat{\mathbf{X}}_f(k)$ is in general larger than the set of possible transitions of $\mathbf{X}_f(t)$, as multiple transitions of $\mathbf{X}_f(t)$ can occur during the period $\tau$.
For example, $\mathbf{X}_f(t)$ cannot move directly from $\mathbf x_f$ to $\mathbf x''_f = \mathbf x_f \ominus \mathbf e^{(b_1)}\ominus \mathbf e^{(b_2)}$ with $|\mathbf x_f''| = |\mathbf x_f|-2$, but during the interval $\tau$ it could move from $\mathbf x_f$ to $\mathbf x'_f = \mathbf x_f \ominus \mathbf e^{(b_1)}$ and then from $\mathbf x'_f$ to $\mathbf x''_f$. The transition $\mathbf x_f \to \mathbf x''_f$ is then possible for $\hat{\mathbf{X}}_f(k)$. At the same time, for small values of $\tau$ and of $q$, the probability of a direct transition $\mathbf x_f \to \mathbf x'_f$ is proportional to $q^{r(\mathbf x_f, \mathbf x_f')} \tau + o\left(q^{r(\mathbf x_f, \mathbf x_f')} \right) + o(\tau)$, but the probability of a combined transition $\mathbf x_f \to \mathbf x'_f \to \mathbf x''_f$ is smaller than $q^{r(\mathbf x_f, \mathbf x_f')+r(\mathbf x'_f, \mathbf x''_f)} \tau^2 + o\left(q^{r(\mathbf x_f, \mathbf x_f')} \right) + o\left(q^{r(\mathbf x'_f, \mathbf x''_f)} \right) + o(\tau)$. These transitions may be neglected, as their transition probabilities are $o(\tau)$ and their equivalent resistance is equal to the sum of the resistances of the direct transitions they are composed of. We can then restrict ourselves to the transitions of $\mathbf{X}_f(t)$.}
{ Each MC $\hat{\mathbf X}_f(k)$ has then transition rates proportional to a power of $0<q<1$, i.e.~$P_{f,q}(\mathbf x_f,\mathbf x'_f) \propto q^{r_f(\mathbf x_f,\mathbf x'_f)}$.\footnote{We omit, from now on, the proportionality to $\tau$.} These MCs were studied in a series of papers~\cite{connors88,connors89,desai94} by P.~R.~Kumar and his coauthors, because of their relation with the MCs that appear in simulated annealing problems, where $r_f(\mathbf x_f, \mathbf x'_f) = \max(C(\mathbf x'_f)-C(\mathbf x_f),0)$ and $C(\mathbf x_f)$ is a cost function to be minimized (not to be confused with the cache size). We list as lemmas three results from those papers that we are going to use.
Consider a weighted graph $\mathcal G_f$, whose nodes are the possible states $\mathbf x_f \in \{0,1\}^B$ and whose edges indicate possible direct transitions and have a weight equal to the corresponding resistance. Given an in-tree $\mathcal T(\mathbf x_f)$ in $\mathcal G_f$ rooted at $\mathbf x_f$, we denote by $r_f(\mathcal T(\mathbf x_f))$ the resistance of the in-tree, i.e., the sum of the resistances of all the edges of $\mathcal T(\mathbf x_f)$. We also denote by $\mathfrak T(\mathbf x_f)$ the set of all in-trees rooted at state $\mathbf x_f$. Finally, we denote by $r_f(\mathbf x_f)$ the resistance of the minimum weight in-tree (or anti-arborescence) in $\mathcal G_f$ rooted at $\mathbf x_f$, i.e.,
\[r_f(\mathbf x_f) \triangleq \min_{\mathcal T \in \mathfrak T(\mathbf x_f) } r_f(\mathcal T).\]
Intuitively, the resistance of a state is a measure of how difficult it is to reach state $\mathbf x_f$ from all other states. A consequence of the Markov chain tree theorem (see for example~\cite{anantharam89}) is that
\begin{lem}
\label{l:stationary_distribution}
\cite[Lemma~1]{desai94}
The stationary probabilities of the MC $\hat{\mathbf X}_{f}(k)$ have the following expression
\[\pi_{f,q}(\mathbf x_f) \propto q^{r_f(\mathbf x_f) - \underset{\mathbf x'_f}{\min } \; r_f(\mathbf x'_f)}.\]
\end{lem}
A consequence of Lemma~\ref{l:stationary_distribution} is that the stochastically stable states are those with minimal resistance.
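As a toy illustration of Lemma~\ref{l:asymptotic_rates} and Lemma~\ref{l:stationary_distribution} (a sketch with made-up gains, $B=2$ and thus only $4$ states, not part of the formal development), the minimum-resistance in-trees, and hence the exponents $r_f(\mathbf x_f)-\min_{\mathbf x'_f} r_f(\mathbf x'_f)$, can be brute-forced:
\begin{verbatim}
# Toy sketch (B=2, 4 states): brute-force minimum-resistance
# in-trees.  Upward resistance gamma^T(y-x); downward resistance
# Delta G^{(b0)}(parent) = G[parent] - G[child].  Gains made up.
from itertools import product

states = [(0, 0), (1, 0), (0, 1), (1, 1)]
gamma = [1.0, 1.0]
G = {(0, 0): 0.0, (1, 0): 1.0, (0, 1): 0.8, (1, 1): 1.3}

def r(x, y):                             # direct resistance, or None
    if x != y and all(a <= b for a, b in zip(x, y)):       # upward
        return sum(g * (b - a) for g, a, b in zip(gamma, x, y))
    if sum(x) == sum(y) + 1 and all(a >= b for a, b in zip(x, y)):
        return G[x] - G[y]                                 # downward
    return None

def reaches(s, nxt, root):               # does s lead to the root?
    seen = set()
    while s != root:
        if s in seen:
            return False
        seen.add(s)
        s = nxt[s]
    return True

def min_tree_resistance(root):
    others = [s for s in states if s != root]
    outs = [[t for t in states if r(s, t) is not None] for s in others]
    best = float('inf')
    for pick in product(*outs):          # one out-edge per state
        nxt = dict(zip(others, pick))
        if all(reaches(s, nxt, root) for s in others):
            best = min(best, sum(r(s, t) for s, t in nxt.items()))
    return best

R = {s: min_tree_resistance(s) for s in states}
m = min(R.values())
print({s: round(R[s] - m, 3) for s in states})  # exponents of pi
\end{verbatim}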
Consider the following system of \emph{modified balance equations} in the variables $\nu_f(\mathbf x_f)$: \begin{equation} \label{e:balance_eqs} \left\{ \begin{aligned} & \underset{\mathbf x_f \in A, \mathbf z_f \in A^c}{\max} \nu_f(\mathbf x_f) - r_f(\mathbf x_f,\mathbf z_f) \\ &\phantom{===} = \underset{\mathbf x_f \in A, \mathbf z_f \in A^c}{\max} \nu_f(\mathbf z_f) - r_f(\mathbf z_f,\mathbf x_f),\; \\ &\phantom{=====} \forall A \subset \{0,1\}^B\\ & \underset{\mathbf x_f \in \{0,1\}^B}{\max} \nu_f(\mathbf x_f) = \sigma. \end{aligned} \right. \end{equation} \begin{lem}\cite[Theorem~3]{connors89} \label{l:balance_eqs} For each $\sigma$, the system~\eqref{e:balance_eqs} admits a unique solution. Solutions for different values of $\sigma$ are translates of each other. \end{lem} System~\eqref{e:balance_eqs} implicitly determines the set of stochastically stable states: \begin{lem} \label{l:balance_stoch_stable} \cite[Theorem~4]{desai94} Let $\{\nu_f(\mathbf x_f)\}$ be the solution of system~\eqref{e:balance_eqs}. Then it holds that \[r_f(\mathbf x_f) - \min_{\mathbf x'_f} r_f(\mathbf x'_f) = \sigma - \nu_f(\mathbf x_f).\] \end{lem} In particular, for our system we can prove that \begin{lem} \label{l:our_balance} The function \[\phi_f(\mathbf x_{f})\triangleq G_f(\mathbf x_f) - \boldsymbol \gamma^\intercal \mathbf x_f\] is a solution of system~\eqref{e:balance_eqs} (for a particular value of $\sigma$). \end{lem} The proof is in Appendix~\ref{a:our_balance}. A consequence of Lemma~\ref{l:stationary_distribution}, Lemma~\ref{l:balance_eqs}, Lemma~\ref{l:balance_stoch_stable}, and Lemma~\ref{l:our_balance} is that \begin{cor} \label{c:stochastically_stable} The set of stochastically stable states is the set of global maximizers of $\phi_f(\mathbf x_f)$. \end{cor} } For each content $f$ we are then able to characterize which configurations are stochastically stable as $q$ converges to $0$. \subsection{Optimality proof} \label{s:opt_proof} { We now consider the continuous relaxation of the optimization problem~\eqref{e:static_opt_gen}: \begin{align} \label{e:relaxed_opt} & \maxim_{\{\alpha_f(\mathbf x_f)\}} & & \sum_{f=1}^F \sum_{\mathbf x_f \in \{0,1\}^B} \alpha_f(\mathbf x_f) G_f(\mathbf x_f) \\ & \text{subject to} & & \sum_{f=1}^F \sum_{\mathbf x_f \in \{0,1\}^B} \alpha_f(\mathbf x_f) x_f^{(b)} =C, \;\;\; \forall b \in [B] \nonumber\\ & & & \sum_{\mathbf x_f \in \{0,1\}^B} \alpha_f(\mathbf x_f) = 1, \;\;\;\;\;\;\;\;\;\;\;\;\;\;\; \forall f \in [F] \nonumber\\ & & & \alpha_f(\mathbf x_f) \ge 0, \;\;\;\;\;\;\;\;\;\;\;\;\;\;\; \forall f \in [F], \forall \mathbf x_f \in \{0,1\}^B. \nonumber \end{align} The optimization problem~\eqref{e:static_opt_gen} corresponds to the particular case where we require that, for each $f \in [F]$, there exists a single state $\mathbf x_f$ with $\alpha_f(\mathbf x_f) = 1$ and $\alpha_f(\mathbf x'_f) = 0$ for each $\mathbf x'_f \neq \mathbf x_f$. As the feasible set of the relaxed problem~\eqref{e:relaxed_opt} includes the feasible set of problem~\eqref{e:static_opt_gen}, the optimal value of problem~\eqref{e:relaxed_opt} is at least as large as that of problem~\eqref{e:static_opt_gen}. Note how the capacity constraint in problem~\eqref{e:relaxed_opt} is similar to the relaxed constraint considered by the CTA (see~\eqref{e:che_multi_cache}). This suggests that the stationary probabilities $\pi_f(\mathbf x_f)$ will play the role of the coefficients $\alpha_f(\mathbf x_f)$. } Now we can state our result formally.
\begin{repprop}{p:qlrud_convergence_general} Under the characteristic time and exponentialization approximations, let $\{\gamma_b, b=1, \dots, B\}$ be the constants in~Lemma~\ref{l:asymptotic_rates}. Consider the spatial network of \mbox{\textproc{$q$LRU-$\Delta$}}{} caches, where cache $b$ selects the parameter $q^{(b)} = q^{\gamma_b}$. {As $q$ converges to $0$, the stationary probabilities $\{\pi_{f,q}(\mathbf x_f), f \in [F], \mathbf x_f \in \{0,1\}^B\}$ converge to an optimal solution of Problem~\eqref{e:relaxed_opt}.} \end{repprop} { The proof is in Appendix~\ref{a:proof_qlrud}. It relies on the characterization of stochastically stable states in Corollary~\ref{c:stochastically_stable} and on studying problem~\eqref{e:relaxed_opt} using the method of Lagrange multipliers. }
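As an independent sanity check of this result (not part of the proof), one can solve a small instance of the relaxed problem~\eqref{e:relaxed_opt} with an off-the-shelf LP solver and inspect the optimal coefficients $\alpha_f(\mathbf x_f)$. The sketch below uses a hypothetical instance ($F=2$ files, $B=2$ caches, $C=1$, made-up gains) and assumes scipy is available.
\begin{verbatim}
# Sanity-check sketch (hypothetical instance): solve the relaxed LP with
# F=2 files, B=2 caches, C=1, and illustrative gain values G_f(x_f).
import itertools
import numpy as np
from scipy.optimize import linprog

B, F, C = 2, 2, 1
states = list(itertools.product([0, 1], repeat=B))          # {0,1}^B
G = [{s: 1.0 * max(s) + 0.3 * min(s) for s in states},      # assumed gains
     {s: 0.8 * max(s) + 0.2 * min(s) for s in states}]

c = -np.array([G[f][s] for f in range(F) for s in states])  # maximize = minimize -G
A_eq, b_eq = [], []
for b in range(B):                                          # capacity constraints
    A_eq.append([s[b] for f in range(F) for s in states])
    b_eq.append(C)
for f in range(F):                                          # one distribution per file
    A_eq.append([1.0 * (g == f) for g in range(F) for s in states])
    b_eq.append(1.0)

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * (F * len(states)))
print(res.x.reshape(F, len(states)).round(3))   # optimal alpha_f(x_f)
\end{verbatim}
For these gains the optimizer places each file in a different cache, which is consistent with the behaviour predicted by the proposition.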
\section{Case studies} \label{s:case_studies} As we discussed, \mbox{\textproc{$q$LRU-$\Delta$}}{} can be made to optimize different utility functions $G_f(\cdot)$. In this section we illustrate two specific case studies: hit rate maximization, and delay minimization with CoMP techniques. We first describe the form the general \mbox{\textproc{$q$LRU-$\Delta$}}{} takes in these cases and then illustrate with some experiments the convergence result in Proposition~\ref{p:qlrud_convergence_general}. \subsection{Hit rate maximization} The gain is simply $1$ for a hit and $0$ for a miss, i.e., \[g_f(\mathbf X_f,u) = \mathbbm 1(J_{u,f} \neq \emptyset),\] where $\mathbbm 1(\cdot)$ denotes the indicator function.
According to~\eqref{e:update} with $\beta =1$, each BS $b$ with a local copy ($b \in J_{u,f}$) moves the content to the front of the cache with probability \begin{align*} p_f^{(b)}(u) & = \Delta g_f^{(b)}(\mathbf X_f(t),u) \\ &= \mathbbm 1(J_{u,f} \neq \emptyset) - \mathbbm 1(J_{u,f} \setminus \{b\} \neq \emptyset)\\ & = 1 - \mathbbm 1(J_{u,f} \setminus \{b\} \neq \emptyset)\\ & = \mathbbm 1(J_{u,f} \setminus \{b\} = \emptyset) = \mathbbm 1(J_{u,f}= \{b\}), \end{align*} where the third equality is due to the fact that $b \in J_{u,f}$. Similarly, from~\eqref{e:miss}, each of the BSs without the content (i.e., those in $I_{u,f} \setminus J_{u,f}$) decides to store an additional copy of $f$ with probability \begin{align*} q_f^{(b)}(u) & = q \mathbbm 1(J_{u,f} = \emptyset). \end{align*} The policy then works as follows. Upon a miss ($J_{u,f}=\emptyset$), each cache decides to retrieve the content with probability $q$. Upon a hit ($J_{u,f}\neq \emptyset$), the cache serving the content brings it to the front if and only if no other cache could have served it (i.e.,~$|J_{u,f}| = 1$). {Note that in order to compute $p_f^{(b)}$ and $q_f^{(b)}$, cache~$b$ simply needs to know the size of $J_{u,f}$. The system can then operate as follows: the user broadcasts a query for content $f$, discovers $J_{u,f}$ (which BSs have a copy of the content), and piggybacks this information when querying the specific BS from which it wants to retrieve the content}. This policy is a slight extension of \textproc{$q$LRU-Lazy}{} proposed in~\cite{leonardi18jsac}. The only minor difference is that under \textproc{$q$LRU-Lazy}{} only one cache retrieves the content upon a miss. \mbox{\textproc{$q$LRU-$\Delta$}}{} allows for some additional flexibility. In what follows, we consider that each cache decides independently whether to retrieve the copy (so that multiple copies of the same content can be retrieved). \subsection{Delay minimization with CoMP} Let $h_{b,u}$ denote the signal-to-noise ratio (SNR) of the wireless channel between BS $b$ and user $u$. We assume for simplicity that $\{h_{b,u}, b \in I_u\}$ are i.i.d.\ random variables with expected value $h$, and we set $h_{b,u}=0$ when $u$ is not reachable by BS $b$ ($b\notin I_u$). We assume that the BSs can employ a coordinated transmission technique. In particular, the BSs in $J_{u,f}$ can cooperate to transmit the file to $u$, and we assume they are able to achieve the aggregate channel capacity $C\!\left(\sum_{b \in J_{u,f}} h_{b,u}\right)\triangleq W \log_2(1+\sum_{b \in J_{u,f}} h_{b,u})$, where $W$ is the channel bandwidth \cite{tse2005fundamentals, ao15}. Upon a miss, the content needs to be retrieved from a base station $b^* \in I_u$, selected uniformly at random, and then transmitted from $b^*$ to $u$.\footnote{ It is possible to consider more complicated schemes, e.g.,~where $u$ retrieves from the BS with the highest SNR. } The user then experiences a delay equal to the backhaul delay (denoted by $d_B$) plus the transmission delay $M/ C( h_{b^*,u})$, where $M$ is the size of the content.
Upon a hit, the delay is instead equal to \begin{align*} \frac{M}{C\!\left(\sum_{b \in J_{u,f}} h_{b,u}\right)} & = \frac{M}{C\!\left(\sum_{b \in I_{u}} h_{b,u} X_f^{(b)}\!(t)\right)} = \frac{M}{C\!\left(\sum_{b} h_{b,u} X_f^{(b)}\!(t)\right)}, \end{align*} where the last equality holds because $h_{b,u}=0$ for $b \notin I_u$. Summing up, the delay experienced by user $u$ requesting file $f$ is \begin{align*} d_f(\mathbf X_f(t), u)& = \begin{cases} d_B + \frac{M}{C( h_{b^*,u})}, & \textrm{if }J_{u,f}= \emptyset,\\ \frac{M}{C\left( \sum_{b} h_{b,u} X_f^{(b)}\!(t)\right)}, & \textrm{otherwise.} \end{cases} \end{align*} The total expected delay per request is then \begin{equation} D_f(\mathbf X_f(t)) = \sum_{u=1}^U \lambda_{f,u} {\mathbb E}\left[d_f(\mathbf X_f(t),u)\right], \end{equation} when the set of users is finite, and \begin{equation} D_f(\mathbf X_f(t)) = \int_R \lambda_{f}(\mathbf r) {\mathbb E}\left[d_f(\mathbf X_f(t),\mathbf r)\right] \mu(\mathbf r) \textrm d \mathbf r, \end{equation} when a potentially unbounded set of users is distributed over the region (see Sect.~\ref{s:network_model}). We want to minimize the delay $D_f(\mathbf x_f)$. In order to frame this goal within our reference maximization problem~\eqref{e:static_opt_gen}, we can simply consider $G_f(\mathbf x_f) \triangleq d_{\max} - D_f(\mathbf x_f)$, where $d_{\max}$ is a bound on the retrieval time, e.g.,~equal to the sum of the backhaul delay and the maximum delay on the transmission channel. Similarly, we consider $g_f(\mathbf x_f,u) \triangleq d_{\max} - d_f(\mathbf x_f,u)$. Note that \begin{align*} & \Delta g_f^{(b)}(\mathbf x_f,u) \\ & =(d_{\max} - d_f(\mathbf x_f,u))\\ & \;\;\;\;\; - (d_{\max} - d_f(\mathbf x_f\ominus \mathbf e^{(b)},u)) \nonumber\\ & = d_f(\mathbf x_f\ominus \mathbf e^{(b)},u) - d_f(\mathbf x_f,u)\\ & = \begin{cases} d_B + \frac{ M}{C\left( h_{b^*,u}\right)} - \frac{ M}{C\left( h_{b,u}\right)}, \hspace{0.5cm}\textrm{ if }J_{u,f}= \{b\},\\ \frac{ M}{C\left( \sum_{b'\neq b} h_{b',u} X_f^{(b')}\right)} - \frac{M}{C\left( \sum_{b'} h_{b',u} X_f^{(b')}\right)}, \hspace{0.2cm}\textrm{o/w.} \end{cases} \end{align*} Note that $d_{\max}$ cancels out, so the choice of its value is irrelevant for the algorithm. Remember from our discussion at the end of Sect.~\ref{s:operation} that it is possible to replace $\Delta g_f^{(b)}$ in~\eqref{e:update} with any other function with the same expected value. Given that $h_{b,u}$ and $h_{b^*,u}$ are identically distributed, we can have each BS $b$ with a local copy ($b \in J_{u,f}$) move the content to the front of the cache with probability \begin{align*} & p_f^{(b)}(u) = \begin{cases} \beta d_B, \hspace{4.9cm}\textrm{ if }J_{u,f}= \{b\},\\ \frac{\beta M}{C\left( \sum_{b'\neq b} h_{b',u} X_f^{(b')}\right)} - \frac{\beta M}{C\left( \sum_{b'} h_{b',u} X_f^{(b')}\right)}, \hspace{0.2cm}\textrm{o/w.} \end{cases} \end{align*} Similarly, from~\eqref{e:miss}, each of the BSs without the content (i.e., those in $I_{u,f} \setminus J_{u,f}$) decides whether to store an additional copy of $f$ with probability \begin{align*} & q_f^{(b)}(u) = \begin{cases} q \delta d_B, \hspace{4.9cm}\textrm{ if }J_{u,f}= \emptyset,\\ \frac{q \delta M}{C\left( \sum_{b'} h_{b',u} X_f^{(b')}\right)} - \frac{q \delta M}{C\left( h_{b,u}+\sum_{b'} h_{b',u} X_f^{(b')}\right)}, \hspace{0.2cm}\textrm{o/w.} \end{cases} \end{align*} As above, we consider that each cache decides independently whether to retrieve an additional copy.
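For concreteness, the following minimal Python sketch evaluates these two update probabilities for a single request. The parameter values are purely illustrative; in particular, $\beta$ and $\delta$ are placeholders, not the tuned values used in our experiments below.
\begin{verbatim}
# Minimal sketch (illustrative parameter values) of the qLRU-Delta-d update
# probabilities derived above, for one request from user u for file f.
import math

W, M, d_B = 5.0e6, 1.0e6, 0.1       # bandwidth [Hz], size [bits], backhaul [s]
beta, delta, q = 1e-2, 1e-2, 1e-3   # assumed constants (not the tuned values)

def cap(snr_sum):                   # aggregate capacity C(.) in bits/s
    return W * math.log2(1.0 + snr_sum)

def p_hit(b, h, J):
    """Move-to-front probability for a cache b in J_{u,f} (hit case)."""
    if J == {b}:
        return beta * d_B
    tot = sum(h[bp] for bp in J)
    return beta * M * (1.0 / cap(tot - h[b]) - 1.0 / cap(tot))

def q_miss(b, h, J):
    """Insertion probability for a cache b that does not store the content."""
    if not J:
        return q * delta * d_B
    tot = sum(h[bp] for bp in J)
    return q * delta * M * (1.0 / cap(tot) - 1.0 / cap(tot + h[b]))

h = {0: 10.0, 1: 10.0, 2: 10.0}     # linear SNRs of the BSs covering u
print(p_hit(0, h, {0, 1}), q_miss(2, h, {0, 1}))
\end{verbatim}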
{Similarly to what was discussed above, the user can piggyback onto its request the measured SNR values from all the BSs in its transmission range ($I_u$). This information allows BS $b$ to compute $p_f^{(b)}$ and $q_f^{(b)}$.} \begin{figure}[t] \centering \includegraphics[width=\myFigureScale\linewidth]{berlin.pdf} \caption{T-Mobile BS configuration in Berlin.} \label{f:berlin} \end{figure} \subsection{Numerical Results} {In our simulations} we consider a topology where $B=10$ base stations are located according to the positions of T-Mobile base stations in Berlin, extracted from~\cite{bs_dataset}. The BS locations are indicated in Fig.~\ref{f:berlin}. We assume their transmission range is $150$~m and the spatial user density to be homogeneous, so that each user is covered on average by $5.9$ BSs. SNRs have the constant value $h_{b,u}=10$~dB, the channel bandwidth is $W=5$~MHz, and the backhaul access delay is $d_B=100$~ms. The catalog contains $F=10^6$ files of size $M=10^6$~bits, whose popularity distribution follows a Zipf law with exponent $\alpha=1.2$. Each BS has a local cache with capacity $C=100$~files{, unless otherwise stated.} We show the performance of \mbox{\textproc{$q$LRU-$\Delta$}}{} when it is configured to maximize the hit rate and when it is configured to minimize the delay. In the figures we refer to the two cases as \textproc{qLRU-$\Delta h$} and \textproc{qLRU-$\Delta d$}. For \textproc{qLRU-$\Delta d$}, { we set $\beta$ and $\delta$ equal to the minimum values that guarantee respectively $p_f^{(b)}(u)\le 1$ and $q_f^{(b)}(u)\le q$ for every possible state of the cache $\mathbf x_f$}. We would like to compare their performance with the corresponding optimal {offline} allocations. Unfortunately, both corresponding optimization problems are NP-hard, but the greedy algorithm has a guaranteed $1/2$-approximation ratio both for hit ratio maximization~\cite{shanmugam13} and for delay minimization~\cite{tuholukova17}.\footnote{ Precisely, the greedy static allocation achieves at least $1/2$ of the delay savings achievable by the best possible static allocation. } We therefore consider the corresponding {offline} allocations as baselines and denote them respectively as \textproc{Greedy-$h$}{} and \textproc{Greedy-$d$}. Note that the greedy algorithm requires complete knowledge of the network and of content popularities, while \mbox{\textproc{$q$LRU-$\Delta$}}{} has no such information. {Additionally, we provide the results of simulating two other online policies: \textproc{$q$LRU}{} and \textproc{FIFO}{}. Both policies maintain the contents in the cache as an ordered list, with insertions occurring at the front of the list and evictions at the rear. In \textproc{$q$LRU}{}, the requested content is inserted with probability $q$ upon a miss and moved to the front upon a hit. Note that \textproc{$q$LRU}{} with $q=1$ coincides with \textproc{LRU}. In \textproc{FIFO}, the requested content is always inserted upon a miss and maintains its position upon a hit.} In all our experiments, the policy simulations have a warm-up phase and a measurement phase, each consisting of $10^8$ requests.
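For reproducibility, the request trace of this setup can be generated with a few lines of Python (a sketch assuming numpy; the seed and trace length are arbitrary):
\begin{verbatim}
# Sketch (assumed trace generator, numpy): Zipf(alpha=1.2) requests over a
# catalog of F files, as in the setup above; seed and length are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
F, alpha = 10**6, 1.2
weights = np.arange(1, F + 1, dtype=float) ** (-alpha)
requests = rng.choice(F, size=10**5, p=weights / weights.sum())
# requests[k] is the (0-indexed) file id of the k-th request
\end{verbatim}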
\begin{figure}[h] \centering \includegraphics[width=0.42\textwidth]{convergence_ratio_berlin.png} \caption{{Comparison of {online} policies and \textproc{Greedy-$h$}: hit rate (left) and distance of their allocations (right) versus $q$.}} \label{fig:hit_rate} \end{figure} \begin{figure}[h] \centering \includegraphics[width=0.42\textwidth]{convergence_delay_berlin.png} \caption{{Comparison of {online} policies and \textproc{Greedy-$d$}: average delay (left) and distance of their allocations (right) versus $q$.}} \label{fig:delay} \end{figure} Figure~\ref{fig:hit_rate} (left) shows the hit rate achieved by \mbox{\textproc{Greedy-$h$}} and by \textproc{$q$LRU-$\Delta h$}{} for different values of~$q$. As $q$ decreases, \textproc{$q$LRU-$\Delta h$}'s hit rate converges to that of \textproc{Greedy-$h$}. {The hit rate of \textproc{$q$LRU}{} also improves for smaller $q$. For a single cache, \textproc{$q$LRU}{} coincides with \textproc{$q$LRU-$\Delta h$}{} and is then implicitly maximizing the hit rate as $q$ converges to $0$. But in a networked setting, deploying \textproc{$q$LRU}{} at each cache does not perform as well, because each cache myopically maximizes its own hit rate without taking into account the presence of the others. Instead, \textproc{$q$LRU-$\Delta h$}{} correctly takes into account the marginal contribution the cache can bring to the whole system. Finally, \textproc{FIFO}{} achieves the lowest hit rate, as the sojourn time of each content inserted in the cache is roughly the same, independently of its popularity.} We also compare how different the content allocations of~\textproc{$q$LRU-$\Delta h$}, \textproc{$q$LRU}, and \textproc{FIFO}{} are from the allocation of~\mbox{\textproc{Greedy-$h$}}. To this purpose, we define the \emph{occupancy vector}, whose component $i$ contains the number of copies of content $i$ present in the network, averaged over the measurement phase. We then compute the cosine distance\footnote{ The cosine distance between vectors $u$ and $v$ is given by $\text{dist}(u,v) = 1 - \frac{\langle u, v \rangle}{\lVert u \rVert_2 \lVert v \rVert_2}$, where $\langle\cdot,\cdot\rangle$ denotes the inner product. } between the occupancy vectors of the specific {online} policy and of \textproc{Greedy-$h$}. {Figure~\ref{fig:hit_rate} (right) shows how this distance decreases as $q$ decreases, indicating that the files \textproc{Greedy-$h$}{} stores tend to be cached longer and longer under \textproc{$q$LRU-$\Delta h$}, and partially so under \textproc{$q$LRU}. The allocations of \textproc{FIFO}{} and \textproc{Greedy-$h$}{} instead remain quite far apart. } Figure~\ref{fig:delay} shows the corresponding results for \textproc{Greedy-$d$}{} and \textproc{$q$LRU-$\Delta d$}.
The conclusion is the same: as $q$ decreases, \mbox{\textproc{$q$LRU-$\Delta$}}{} improves the metric of interest (the delay in this case), achieving performance comparable to that of the optimal {offline} greedy allocation {and outperforming existing policies like~\textproc{$q$LRU}{} and \textproc{FIFO}.} \begin{figure}[h] \centering \includegraphics[width=0.32\textwidth] {capacity_ratio.png} \caption{{Comparison of {online} policies and \textproc{Greedy-$h$}: hit ratio versus cache capacity.}} \label{fig:capacity_ratio} \end{figure} \begin{figure}[h] \centering \includegraphics[width=0.32\textwidth] {capacity_delay.png} \caption{{Comparison of {online} policies and \textproc{Greedy-$d$}: average delay versus cache capacity.}} \label{fig:capacity_delay} \end{figure} {Figures~\ref{fig:capacity_ratio} and~\ref{fig:capacity_delay} show the hit ratio and the average delay, respectively, of the {online} policies and the greedy algorithms as we increase the cache capacity per BS. We fix $q=0.001$ for \mbox{\textproc{$q$LRU-$\Delta$}}{} and \textproc{$q$LRU}{}. In both scenarios, \mbox{\textproc{$q$LRU-$\Delta$}}{} outperforms all other {online} policies and closely follows the result of the corresponding greedy policy. Note that the strange shape of the \textproc{FIFO}{} curves is an artefact of the semi-log graph, as shown by the insets.} {We have also carried out additional experiments with different catalog sizes and popularity distributions; the results are qualitatively very similar to those already reported.} If some knowledge about content popularity is available, it can be exploited to determine the initial contents to allocate in the caches using the offline greedy algorithms, i.e., \textproc{Greedy-$h$}{} and \textproc{Greedy-$d$}{} when the metric of interest is the hit ratio or the delay, respectively. We show through an experiment in Fig.~\ref{fig:frame_pop} that \mbox{\textproc{$q$LRU-$\Delta$}}{} can modify the initial cache configuration and improve performance. The left figure considers the hit ratio as objective, the right one the delay. The ground truth popularity follows a Zipf distribution with $\alpha = 1.2$ (as in the previous experiments) and noisy popularity estimates are available: they are obtained by multiplying the true popularities by random values drawn from a log-normal distribution with expected value $1.0$ and variance $e^{\sigma^2} - 1$ ($\sigma^2$ is the variance of its logarithm). If $\sigma^2=0$, the estimated popularity values coincide with the true ones; the larger the variance $\sigma^2$, the less accurate the estimates. The horizontal dashed lines indicate the performance of the corresponding initial cache configuration under the true request process. The solid curves show the performance over time of \textproc{$q$LRU-$\Delta h$}{} (left) and \textproc{$q$LRU-$\Delta d$}{} (right) with $q=10^{-3}$. We observe that the curves converge to the same value, which is slightly worse than the initial one when the popularity estimates are exact ($\sigma^2=0$), but better in all other cases. This result shows that \mbox{\textproc{$q$LRU-$\Delta$}}{} can effectively improve performance whenever the available popularity estimates are inaccurate. Interestingly, one may expect the time needed for \mbox{\textproc{$q$LRU-$\Delta$}}{} to reach its steady-state performance to depend on the accuracy of the initial popularity estimates (the more accurate, the fewer changes would be needed to reach the final cache allocation), but the dependence, if present at all, is very small.
We remark that available popularity information could also be used to tune \mbox{\textproc{$q$LRU-$\Delta$}}'s parameters to speed up the transient. For example, we can modify~\eqref{e:miss} to favor the contents the greedy algorithm would have put in the cache. This change is in the same spirit as introducing the factor $\Delta g_f^{(b)}(\mathbf X_f(t) \oplus \mathbf e^{(b)},u)$ in~\eqref{e:miss}. As we discuss at the end of Section~\ref{s:operation}, these changes likely improve the convergence speed, but do not affect the steady state and hence do not affect \mbox{\textproc{$q$LRU-$\Delta$}}'s optimality guarantees. \begin{figure}[t] \centering \includegraphics[width=0.5\textwidth] {frame_pop.png} \caption{Convergence of \textproc{$q$LRU-$\Delta h$}{} (left) and \textproc{$q$LRU-$\Delta d$}{} (right), starting the simulation from the respective greedy allocation, for different accuracies of the popularity estimates, quantified by the variance~$\sigma^2$. The solid curves are the average of $100$ different simulation rounds.} \label{fig:frame_pop} \end{figure} \section{Discussion and conclusions} \label{s:conclusions} In this paper, we have introduced \mbox{\textproc{$q$LRU-$\Delta$}}, a general-purpose caching policy that can be tuned to optimize different performance metrics in a dense cellular network. Recently~\cite{garetto20}, we discovered that the same approach can be applied to a different application scenario, i.e., similarity caching systems, in which a user request for an object $o$ that is not in the cache can be (partially) satisfied by a similar stored object $o'$, at the cost of a loss of user utility. This cost can be expressed as a function of the set of objects currently stored in the cache, similarly to how in this paper the gain is a function of the set of BSs storing the content. Under stationary request processes, the smaller $q$ is, the better \mbox{\textproc{$q$LRU-$\Delta$}}{} performs. When content popularities and/or user densities vary over time, the caching policy may react too slowly to changes if $q$ is small. A detailed experimental evaluation in~\cite{leonardi18jsac} using real traces from Akamai suggests that the sweet spot is $q$ values between $0.01$ and $0.1$, which achieve a good tradeoff between convergence speed and performance. A practical alternative to make the policy more reactive is to use a virtual cache. The virtual cache only stores content ids and is managed independently from the physical cache, e.g., through an \textproc{LRU}{} policy. Upon a miss at the physical cache, the content is stored there if and only if its id is present in the virtual cache. Upon hits, the policy updates the state of the cache exactly as \mbox{\textproc{$q$LRU-$\Delta$}}. Under stationary request traffic, a miss for content $i$ leads to an insertion with probability $1-e^{-\lambda_i T_{c,v}}$, where $T_{c,v}$ is the characteristic time of the virtual cache. The virtual cache can thus be seen as an alternative way to implement a probabilistic insertion (at the cost of introducing a popularity bias), achieving small insertion probabilities when the virtual cache (and then $T_{c,v}$) is small. At the same time, two close requests for a content cause it to be placed immediately in the physical cache, while \mbox{\textproc{$q$LRU-$\Delta$}}{} would store it on average after $1/q$ requests. This variant reacts faster and is thus better suited for non-stationary settings.
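A minimal sketch of this admission filter follows (illustrative Python, single physical cache; the class and method names are hypothetical):
\begin{verbatim}
# Illustrative sketch (hypothetical names): an LRU virtual cache of content
# ids gating insertions into the physical cache upon misses.
from collections import OrderedDict

class VirtualCacheFilter:
    def __init__(self, capacity):
        self.capacity = capacity
        self.ids = OrderedDict()               # least recently used id first

    def admit(self, content_id):
        """True iff the physical cache should insert the content on a miss."""
        present = content_id in self.ids
        if present:
            self.ids.move_to_end(content_id)   # refresh the id's position
        else:
            self.ids[content_id] = None
            if len(self.ids) > self.capacity:
                self.ids.popitem(last=False)   # evict the oldest id
        return present

vc = VirtualCacheFilter(capacity=1000)
# upon a miss for content f at the physical cache:
#     if vc.admit(f): insert f into the physical cache
\end{verbatim}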
\mbox{\textproc{$q$LRU-$\Delta$}}{} responds to hits in a binary way: the content is either moved to the front or kept in the same position. The dynamic performance of the policy could likely be improved by introducing a list-based variant~\cite{gast15}, where the cache is organized as a number of ranked lists and a content is promoted to a higher-priority list upon a hit. The marginal gain of the copy can affect the probability of the content being randomly promoted to the next list, or the number of lists the content advances by. Another interesting research direction is to extend \mbox{\textproc{$q$LRU-$\Delta$}}{} to operate with heterogeneous content sizes. This can probably be achieved by making the update probability inversely proportional to the content size, similarly to what is done in~\cite{neglia18ton}. This work was partly funded by the French Government (National Research Agency, ANR) through the ``Investments for the Future'' Program reference \#ANR-11-LABX-0031-01. \IEEEpeerreviewmaketitle \bibliographystyle{IEEEtran}
\section{Introduction} \label{sec:Intro} In the past decades, various numerical methods have been proposed for simulating two-phase flows. Interface-capturing methods have been more popular due to their automatic treatment of topological changes. A review of the different classes of interface-capturing methods can be found in \cite{Mirjalili_ARB}. Phase field methods, which are also known as diffuse interface methods, have emerged as promising approaches for simulating two-phase flows. In these methods, the transport equation governing the phase indicator is modified by incorporating physical effects that govern capillary interfaces. Although in realistic immiscible two-phase flows the physical thickness of the interface is virtually impossible to numerically resolve, these methods offer some desirable properties that have attracted the interest of two-phase flow modelers in recent years \citep{Anderson1998,Badalassi2003,Ding2007}. This can be primarily attributed to the following advantages: \begin{itemize} \item Simplicity and ease of programming: Since phase advection is performed via discretization of partial differential equations (PDEs), such algorithms are much easier to implement compared to alternatives such as geometric VOF methods, which require sophisticated computational geometry. \item Cost and scalability: The cost of time-integrating a PDE governing the evolution of the phase field is much lower than that of alternative options. Moreover, as opposed to level-set and VOF schemes, phase field methods are blind to the interface location, rendering them automatically load-balanced. \item Smooth field: Similar to methods based on advection of a level-set function, phase field methods have the luxury of computing normal vectors and curvature values directly from a readily available smooth field. \item Mass conservation: Phase field methods based on discretizing PDEs that are in conservative form conserve the mass of the system. Contrary to traditional level-set methods, there is no reinitialization step in phase field methods, so conservation of the mass of the system is guaranteed. \end{itemize} Traditionally, phase field methods have been based on the Cahn-Hilliard or Allen-Cahn equations, which are two important gradient flows of the Ginzburg-Landau-Wilson free energy functional. In a bounded physical domain given by $\Om$, this energy functional is defined on the $H^1(\Omega)$ space in the form \begin{equation} F:{ H }^{ 1 }(\Omega )\rightarrow [0,\infty ], \quad F(\phi )=\frac { 1 }{ 2 } \int _{ \Omega }{ { \epsilon }^{ 2 }{ \left| \nabla \phi \right| }^{ 2 }dV }+ \int _{ \Omega }{ W(\phi )dV }, \label{CH_energy_defn} \end{equation} where $\phi=-1$ and $\phi=+1$ represent the pure phases and $W(s)= (1-{ s }^{ 2 }) ^{ 2 }/4$ is the mathematically approximated double-well potential. In the absence of fluid flow, the steady state solutions to the Cahn-Hilliard and Allen-Cahn equations minimize the Ginzburg-Landau-Wilson free energy functional in two different norms. This enables these models to capture effects such as coarsening in binary-fluid mixtures and Ostwald ripening in systems in which phase change or phase separation can occur. However, in the two-phase flow community, phase field models are used in the same spirit as sharp interface-capturing schemes (VOF, level-set, etc.), with no reference to interface thermodynamics and continuum phase transitions, but instead solely employed for capturing the advection of phase interfaces due to fluid flow.
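To fix ideas, the short 1D sketch below (assuming numpy; not part of any solver discussed in this work) evaluates a discretization of the functional in Equation \ref{CH_energy_defn} for the classical equilibrium profile $\phi(x)=\tanh\!\left(x/(\sqrt{2}\epsilon)\right)$, which satisfies the Euler-Lagrange equation $\epsilon^2\phi''=W'(\phi)$; the computed energy should approach the analytical interface energy $2\sqrt{2}\epsilon/3$.
\begin{verbatim}
# Minimal 1D sketch (assumes numpy): discretized Ginzburg-Landau-Wilson
# energy of the equilibrium profile phi = tanh(x / (sqrt(2) eps)).
import numpy as np

eps, L, N = 0.05, 2.0, 4000
x = np.linspace(-L / 2, L / 2, N)
dx = x[1] - x[0]
phi = np.tanh(x / (np.sqrt(2.0) * eps))

W = (1.0 - phi**2) ** 2 / 4.0                 # double-well potential
grad = np.gradient(phi, dx)
F = np.sum(0.5 * eps**2 * grad**2 + W) * dx
print(F, 2.0 * np.sqrt(2.0) * eps / 3.0)      # should agree: 2*sqrt(2)*eps/3
\end{verbatim}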
Owing to its conservative form, the Cahn-Hilliard equation is a popular option within the two-phase flow community. This equation, given by \begin{equation} \frac { \partial \phi }{ \partial t } +\nabla \cdot (\vec { u } \phi )=-{ \nabla }^{ 2 }\left [{ \epsilon }^{ 2 }{ \nabla }^{ 2 }\phi -W'(\phi )\right ], \label{Cahn_Hilliard} \end{equation} is the $H^{ -1 }(\Omega )$ gradient flow of the energy functional defined in Equation \ref{CH_energy_defn}. For this specific phase field method, \cite{Jacqmin1999} showed how the surface tension force can be defined such that the total energy (kinetic energy plus surface energy) is only dissipated, causing spurious currents to vanish. Most articles on Cahn-Hilliard, including \cite{Jacqmin1999}, have focused on equal or low density ratios. \cite{Ding2007} laid the foundation for applying these equations to flows with high density ratios. Later, \cite{Shen_Yang} extended the work of \cite{Jacqmin1999} to non-unity density ratios by elaborating on how the momentum equation for Allen-Cahn and Cahn-Hilliard systems should be modified such that these phase field systems admit discrete energy laws in which the total energy is non-increasing. \cite{Dong2012} later used this framework and presented a spectral element based solver which was suitable for handling the fourth order spatial derivatives in Equation \ref{Cahn_Hilliard}. Despite the Cahn-Hilliard model's advantage of having upper bounds on total energy, which leads to robustness and stability, there are multiple issues with phase field methods based on the Cahn-Hilliard equation: \begin{itemize} \item Artificial dissipation of total energy is undesirable, especially for realistic applications such as turbulent two-phase flows. \item Handling a fourth order spatial derivative is cumbersome. \item Equilibrium solutions can yield phase values outside of the $[-1,1]$ bounds away from the interfaces. For example, for a stationary spherical drop with radius $r$, the analytical equilibrium solution involves a far-field overshoot on the order of $\epsilon/r$~\citep{Yue2007}. This shift in the equilibrium solution is problematic for high density ratios, as it can cause significant deviation in the density values, even resulting in regions of negative density. More importantly, since the Cahn-Hilliard equation is in conservative form, this results in artificial shrinkage (mass leakage) of drops and bubbles~\citep{Yue2007}. \item Coarsening effects are observed in two-phase flow simulations using the Cahn-Hilliard equation. This is due to the fact that, in order to minimize the energy functional in the domain, the right-hand side terms in the equation may cause spontaneous coalescence of drops/bubbles in a non-physical manner. \end{itemize} The third and fourth issues above are the most critical shortcomings of the original Cahn-Hilliard model. In order to keep the phase field function in the correct bounds, \cite{Dong2012} resorted to clipping out-of-bound values. This does not, however, counter the spontaneous shrinking (mass leakage) issue. Some methods, such as those suggested by \cite{Wang2015,Li2016,Zhang2017}, have attempted to reduce mass leakage by adding corrective terms to the right-hand side of Equation \ref{Cahn_Hilliard}. These ``profile-corrected'' and ``flux-corrected'' phase field methods result in a reduction of the spontaneous shrinkage (mass leakage) and coarsening effects~\citep{Soligo2019}, but such fixes come with a penalty.
Namely, the modified phase field equations in such methods are no longer gradient flows of an energy functional. As a result, in contrast to the original Cahn-Hilliard model, they do not admit discrete energy laws, undermining the rationale for solving a cumbersome fourth order PDE to capture two-phase interfaces. The Allen-Cahn equation is the ${ L }^{ 2 }(\Omega )$ gradient flow of the energy functional defined in Equation \ref{CH_energy_defn}, given by \begin{equation} \frac { \partial \phi }{ \partial t } +\nabla \cdot (\vec { u } \phi )={ \epsilon }^{ 2 }{ \nabla }^{ 2 }\phi -W'(\phi ), \label{Allen_Cahn} \end{equation} where $W'(\phi)=dW(\phi)/d\phi$. It is clear that the Allen-Cahn model is essentially a convection-diffusion equation with a source term. There is a wealth of numerical methods for solving such equations, and consequently, this easy-to-implement second order PDE is widely used in material science applications where phase change occurs. The downside of the Allen-Cahn model is that it is in nonconservative form. As such, it is not readily suitable for the simulation of immiscible, incompressible two-phase flows with no phase change. Many authors have sought to rectify this issue by augmenting the original PDE with non-local corrections that allow for total mass conservation. Corrections using space and time dependent Lagrange multipliers have recently become popular for two-phase flow simulations using the Allen-Cahn equations \citep{Brassel2011,Kim2014,Zhai2015,Jeong2017,Joshi2018}. As an alternative approach, some researchers have modified the Allen-Cahn equation in a local sense. This has resulted in some new phase field models that do not suffer from the aforementioned intrinsic difficulties associated with Equations \ref{Cahn_Hilliard} and \ref{Allen_Cahn}, but instead combine their advantages. \subsection{Conservative and bounded phase field method} \label{sec:phase_field} In \cite{Sun2007}, the curvature-driven flow in Allen-Cahn is subtracted out to obtain a second-order PDE suitable for two-phase simulations. Later on, \cite{Chiu_and_Lin} were inspired by the conservative level-set literature to reformulate the phase field equation of \cite{Sun2007} in conservative form. We adopt the same PDE, given by \begin{equation} \frac { \partial \phi }{ \partial t } +\nabla \cdot (\vec { u } \phi )=\gamma \nabla \cdot \left [\epsilon \nabla \phi -\phi (1-\phi ) \left ( \frac { { \nabla }\phi }{ \left| { \nabla }\phi \right| } \right ) \right ]. \label{phitrans} \end{equation} The right-hand side of Equation \ref{phitrans} is exactly the same as that used in the reinitialization step of the original conservative level-set method~\citep{Olsson2005}. The authors in \cite{Chiu_and_Lin} time-integrated Equation \ref{phitrans} on a semi-staggered grid using a dispersion-relation-preserving upwind scheme specially developed for the convection-diffusion equation. Despite this, boundedness of $\phi$ was not guaranteed, and they had to resort to mass redistribution and clipping to handle overshoots and undershoots. Such ad-hoc procedures can harm the accuracy and robustness of simulations, particularly at high $Re$ numbers and high density ratios. In \cite{Mirjalili_boundedness}, a simple non-dissipative space-time discretization for Equation \ref{phitrans} was introduced for staggered Cartesian grids.
It was analytically and numerically shown that with appropriate selection of the free parameters ($\epsilon$ and $\gamma$), \begin{equation} { \epsilon }/{ \Delta x }\ge\frac { { \gamma }/{ { \left| \vec { u } \right| }_{ max } }+1 }{ 2{ \gamma }/{ \left| \vec { u } \right| }_{ max } }, \label{general_crossover} \end{equation} explicit time-integration always results in bounded values for $\phi$. We use this phase field approach in this work, which allows us to avoid diffusive upwinding, unphysical mass redistribution and interface reinitialization. In \cite{Mirjalili_comparison}, for an incompressible flow ($\nabla\cdot\vec{u}=0$), this interface-capturing scheme was coupled to the non-conservative form of the Navier-Stokes equation, \begin{equation} \frac{\partial \vec{u}}{\partial t}+\nabla\cdot(\vec{u}\otimes\vec{u})=\frac{1}{\rho}\left\{-\nabla P+\nabla\cdot[\mu(\nabla \vec{u}+\nabla^{T}\vec{u})]+\rho\vec{g}+\vec{F}_{ST}\right\}, \label{NS} \end{equation} where density and viscosity were linear with respect to $\phi$, and surface tension forces were computed via the standard continuum surface force (CSF) method, \begin{equation} \vec{F}_{ST}=\sigma\kappa\nabla\phi. \end{equation} In this formulation, the curvature field $\kappa$ was computed from the normal vector as $\kappa=\nabla\cdot\vec{n}$, with the normal vector given by $\vec{n}={\nabla\phi}/{\left|\nabla\phi\right|}$. All spatial derivatives in these equations were computed using central differences on a staggered grid. Namely, $P$, $\phi$, $\rho$ and $\mu$ were stored at cell centers, while velocity values were stored at the centers of their corresponding cell faces. This is a standard approach that results in desirable conservation properties for single phase flows when central difference schemes are used \citep{Morinishi1998}. For low $Re$ or low density ratio two-phase flows, \cite{Mirjalili_comparison} established the accuracy and efficiency of this fully coupled solver using several canonical two-phase flow tests in addition to a problem involving drop-pool impact. \subsection{Consistent momentum transport} \label{sec:consistent_conservative_momentum_transport} Historically, in the framework of the one-fluid formulation, coupling Equation \ref{NS} to the interface-capturing step has been the most prevalent way of simulating incompressible two-phase flows \citep{bell1992,Tryggvason2011,Sussman2000,Desjardins2008,Kim2005,Ding2007,Dong2012,Chiu_and_Lin}. However, in various classes of two-phase flow methods, it has been observed that special numerical treatments must be performed to ensure numerical stability of this form of the Navier-Stokes equation at high density ratios and/or high $Re$ numbers. In \cite{Sussman2007}, the authors were able to handle high density ratios while solving Equation \ref{NS} coupled to the CLSVOF method~\citep{Sussman2000}. This was done by extrapolating the velocity of the denser phase into the lighter phase and using it to perform phase advection and the calculation of the convective fluxes in the momentum equation. Other authors have used Equation \ref{NS} in conjunction with sharp interface methods by employing TVD time integration and upwinding schemes for spatial derivatives~\citep{Fuster2009}. For diffuse interface methods, \cite{Ding2007} extended the work of \cite{Boyer2002} to solve Equation \ref{NS} coupled to the Cahn-Hilliard equation. The test cases in that work had limited $Re$, similar to the cases presented in \cite{Mirjalili_comparison}.
In addition, this methodology did not satisfy any energy laws, unlike the seminal method of \cite{Jacqmin1999}, which guarantees that the total energy of the system is non-increasing. \cite{Shen_Yang} resolved this issue by modifying the form of the momentum transport equation through the introduction of an auxiliary variable, $\sigma=\sqrt{\rho}$. This resulted in a method that is robust in simulating high density ratio two-phase flows with the Cahn-Hilliard phase field method. Similarly, \cite{Wang2019} proposed a stabilized phase field method using the entropy-viscosity method (EVM) that performs robustly at high $Re$ and high density ratios. Despite their stability, these proposed methodologies conserve neither momentum nor kinetic energy, even at the PDE level and in the absence of capillary and viscous forces. In \cite{Rudman1997}, the authors were able to successfully simulate flows with high density ratios by coupling their geometric VOF scheme to the conservative form of the Navier-Stokes equation. This form is given by: \begin{equation} \frac { \partial (\rho \vec{ u } ) }{ \partial t } +\nabla \cdot (\rho \vec{ u } \otimes \vec{ u } )=-\nabla P+\nabla \cdot \left[\mu (\nabla \vec { u } +\nabla ^{ T }\vec{ u } ) \right ]+{\vec{ F }}_{ ST }. \label{momentum_generic_one_fluid_conservative} \end{equation} The two main features of their implementation are as follows: \begin{itemize} \item Solving Equation \ref{momentum_generic_one_fluid_conservative}, where the convective term must be in conservative form. \item Using the mass flux that was computed in the phase advection step (geometrically in \cite{Rudman1997}) to convect velocity. \end{itemize} Both of these features are critical for robustly simulating high density ratios. In the continuous limit of the equations, solving the conservative form of the momentum transport equation guarantees conservation of momentum and energy, albeit in the absence of capillary and viscous forces. Even in the presence of viscous and nonconservative capillary forces, this is expected to lead to improved robustness. Nevertheless, just solving the conservative form of the momentum equation does not yield an accurate and robust method for high density ratios. Rather, the second feature outlined above is a necessary condition for accurate and robust simulations of high density ratio two-phase flows, especially under turbulent conditions. It is necessary to compute the convective flux of momentum in a manner consistent with the mass flux computed during phase advection. We will clarify this point further on for our diffuse interface method using theoretical analysis and numerical tests. While \cite{Rudman1997} employed a staggered grid configuration in addition to a finer mesh for mass advection, \cite{Bussmann2002} extended their methodology to unstructured grids. Later on, \cite{Raessi2012} applied this approach to level set methods. Using the same principles, state-of-the-art two-phase flow methods, especially in the VOF class of schemes (where the mass flux is readily available from the interface transport step), have commonly adopted this consistent momentum transport approach~\citep{Popinet2009,LeChenadec2013,Ivey_thesis,fuster2018}. Up until now, such mass-flux corrections in the momentum equation have been absent from phase field implementations. As mentioned above, in diffuse interface methods for incompressible and immiscible two-phase flows, the nonconservative form of the Navier-Stokes equation (Equation \ref{NS}) is typically solved.
This has limited the application of these methods to either moderate density ratios or low $Re$ flows. Accordingly, for the diffuse interface model described in Section \ref{sec:consistent_conservative_momentum_transport}, through the numerical studies presented in \cite{Mirjalili_comparison} and \cite{Mirjalili_SNH}, we have observed robust and accurate simulations of low to moderate $Re$ two-phase flows with density ratios up to $1000$. However, in this work, through analysis and multiple numerical tests, including a simulation of a high density ratio and high $Re$ jet in cross-flow, we demonstrate that in order to solve high density ratio and/or high $Re$ two-phase flows, Equation \ref{NS} is not the appropriate equation to be coupled with the phase field evolution equation (Equation \ref{phitrans}). Instead, in the same spirit as \cite{Rudman1997}, it is necessary to solve a conservative form of the momentum equation while correcting the convective term to achieve consistent advection of mass and momentum. The main contribution of this work is the introduction of a consistent and kinetic-energy-conserving momentum transport methodology for conservative phase field equations. In the framework of the phase field equation given by Equation \ref{phitrans}, we show that our consistent and conservative momentum transport methodology can handle high density ratio turbulent two-phase flows, and we show the inaccuracies or lack of robustness that arise if one instead uses Equation \ref{NS} or Equation \ref{momentum_generic_one_fluid_conservative} for momentum transport. It is important to point out that, regardless of the density ratio, by using central difference operators on a staggered grid, our methodology discretely conserves mass, momentum and kinetic energy in the absence of capillary and viscous forces. To the best of our knowledge, among all the different methods belonging to the various classes of incompressible two-phase flow methods, our method is the first to accomplish this feat for non-unity density ratios. In the following, in Section \ref{sec:ccmt_intro} we present our proposed form of the momentum transport PDE with appropriate physical justifications. Next, in Section \ref{sec:proof_KE_conservation} we provide analytical proof that, at both the continuous and discrete levels, the resulting methodology conserves kinetic energy in the absence of viscous and capillary forces. To show the improvement brought by the proposed modification, in addition to canonical tests, we present simulations of a practical jet in cross-flow involving turbulent conditions and high density ratios in Section \ref{sec:jet_crossflow}. These simulations are of note because, although many phase field methods have been proposed recently, realistic practical two-phase calculations, especially under turbulent conditions and at realistic density ratios, are rarely carried out using phase field methods. Inspired by the work of \cite{Jacqmin1999}, we then develop a spurious-current-reducing, curvature-free surface tension force implementation for our novel phase field method in Section \ref{sec:EBCSF}. Finally, we conclude this article with a summary of our contributions in Section \ref{sec:Conclusions}. \section{Proposed consistent, kinetic-energy-conserving momentum transport equation} \label{sec:ccmt_intro} Let us consider a two-phase system in which the densities of the two phases are not equal. This is relevant to almost all practical two-phase flow systems.
By considering the phase field model given by Equation \ref{phitrans} along with \begin{equation} \rho=(\rho_{l}-\rho_{g})\phi+\rho_{g}, \label{rho} \end{equation} where $\rho_{l}$ and $\rho_{g}$ are the densities of the two fluids, the mass conservation equation is found to be \begin{equation} \frac { \partial \rho }{ \partial t } +\nabla \cdot(\vec{u} \rho )=\nabla \cdot\vec{S}. \label{mass_conservation_di} \end{equation} Here, $\vec{S}$ is a conservative but artificial mass flux term given by \begin{equation} \vec{ S } =\gamma ({\rho}_{l}-{\rho}_{g}) \left[ \epsilon \nabla \phi -\phi(1 - \phi)\vec{n} \right ] =\gamma \left[ \epsilon \nabla \rho -\frac { ({ \rho }_{ l }-\rho)(\rho -{ \rho }_{ g }) }{ { \rho }_{ l }-{ \rho }_{ g } }\vec{n} \right ], \label{S_defn} \end{equation} where $\vec{n}={\nabla\phi}/{|\nabla\phi|}$, as defined before. From Equation \ref{S_defn} it is clear that this mass flux is active only around the interface. This artificial flux, whose purpose is to maintain the hyperbolic tangent shape of the interface profile, displaces matter across the interface. The mere transport of matter via $\vec{S}$ from one side of the interface to the other results in a local increase/decrease in momentum and kinetic energy that is unaccounted for if one solves Equation \ref{NS} coupled to Equation \ref{phitrans}. This unaccounted momentum transfer can become catastrophic at high density ratios, or even at moderate density ratios when the flow $Re$ is high. Our proposed solution is based on the notion that the velocity field can be interpreted in two ways. First, velocity can be interpreted as the flux of volume (matter) per unit area per unit time. Secondly, velocity can also be considered as momentum per unit mass. For the transport of any quantity of interest, it is the first notion of velocity that must be used when computing the fluxes of that quantity. For instance, consider the problem of scalar field transport with a given velocity field using the finite volume method. The flux of the scalar into a cell is computed via the velocity field, which represents the volume flux into the cell. The concept of momentum is in fact irrelevant in such a problem. More precisely, it is the first notion of velocity, or more appropriately the volumetric flux, that transports any tensor of interest in Reynolds' transport theorem. Now, in the momentum equation for single phase constant density flows, the two notions of velocity coincide, as $\rho$ is a constant. For two-phase flows, though, the transported quantity is $\vec{u}$, defined in the second notion as momentum per unit mass, while the mass flux that transports it must be computed in a manner which is discretely consistent with the discrete mass/phase advection equation. As mentioned above, this principle has been utilized in modern sharp-interface two-phase solvers~\citep{Rudman1997,Bussmann2002,Raessi2012,LeChenadec2013,Ivey_thesis,fuster2018}. In our proposed diffuse interface framework, we are algebraically altering the mass fluxes via the right-hand side terms in Equation \ref{phitrans}. As such, the mass fluxes transporting any transportee, including momentum per unit mass, $\vec{u}$, are given by $\rho\vec{u}-\vec{S}$. With this, we propose the following modified consistent momentum equation, given by \begin{equation} \frac { \partial (\rho \vec{ u } ) }{ \partial t } +\nabla \cdot \left[(\rho \vec{ u }-\vec{S} )\otimes \vec{ u } \right ]=-\nabla P+\nabla \cdot \left[\mu (\nabla \vec{ u } +\nabla ^{ T }\vec{ u } ) \right ]+\vec{{ F }_{ ST }} .
\label{mom_con} \end{equation} This algebraic modification to the momentum transport equation is necessitated by the artificial terms present in the mass conservation equation. Discretely, the transporting mass flux in the convective term should be computed using exactly the same central difference and interpolation operators used in discretizing Equation \ref{phitrans} (see \cite{Mirjalili_boundedness}). An important advantage of using Equation \ref{mom_con} for momentum transport is that, in addition to momentum, in the absence of viscous and capillary effects it conserves the kinetic energy of the system at the continuous and also the discrete level, by virtue of using central finite differences. Neither of the other forms of the momentum equation, given by Equations \ref{NS} and \ref{momentum_generic_one_fluid_conservative}, satisfies kinetic energy conservation even at the PDE level. \subsection{Proof of kinetic energy conservation} \label{sec:proof_KE_conservation} Writing Equation \ref{mass_conservation_di} in index form, \begin{equation} \frac { \partial \rho }{ \partial t } +\frac { \partial { U }_{ i } }{ \partial { x }_{ i } } =0, \label{mass_conservation_index_form} \end{equation} where ${U}_{i}=\rho{u}_{i}-{S}_{i}$ is the mass flux including the artificial contributions from the right-hand side of Equation \ref{phitrans}. Now consider the index form of Equation \ref{mom_con} in the absence of viscous and capillary forces, given by \begin{equation} \frac { \partial \rho { u }_{ i } }{ \partial t } +\frac { \partial { U }_{ j }{ u }_{ i } }{ \partial { x }_{ j } } =-\frac { \partial P }{ \partial { x }_{ i } }. \label{mom_con_index_form} \end{equation} Since every term other than the time derivative is in conservative form, the total momentum of the system is conserved. For a periodic domain $\Omega$ we have \begin{equation} \frac { \partial }{ \partial t } \int _{ \Omega }{{ \rho }{ u }_{ i }dV } =0. \label{total_mom_equation} \end{equation} For an incompressible flow, we have ${\partial {u}_{i}}/{\partial {x}_{i}}=0$. If we multiply Equation \ref{mass_conservation_index_form} by $-\frac{1}{2}{u}_{i}{u}_{i}$ and Equation \ref{mom_con_index_form} by ${u}_{i}$ (component by component), summation yields \begin{equation} \frac { \partial }{ \partial t } \left(\frac { 1 }{ 2 } { \rho }{ u }_{ i }{ u }_{ i }\right)=\frac { 1 }{ 2 } { u }_{ i }{ u }_{ i }\frac { \partial { U }_{ j } }{ \partial { x }_{ j } } -{ u }_{ i }\frac { \partial { (U }_{ j }{ u }_{ i }) }{ \partial { x }_{ j } } -{ u }_{ i }\frac { \partial P }{ \partial { x }_{ i } } =-\frac { \partial }{ \partial { x }_{ j } }\left({ U }_{ j }\frac { 1 }{ 2 } { u }_{ i }{ u }_{ i }\right) -\frac { \partial (P{ u }_{ j }) }{ \partial { x }_{ j } }, \label{KE_transport_continuous} \end{equation} where the incompressibility constraint has been used to cast the pressure work term in conservative form. In Equation \ref{KE_transport_continuous}, the two terms on the right-hand side of the last equality are in conservative form and represent kinetic energy transport via the flux of mass and pressure work, respectively. These transport terms cannot contribute to the overall kinetic energy of the system. As such, for a periodic domain $\Omega$, we have \begin{equation} \frac { \partial }{ \partial t } \int _{ \Omega }{ (\frac { 1 }{ 2 } { \rho }{ u }_{ i }{ u }_{ i })dV } =0. \label{total_KE_equation} \end{equation} Including the diffusive fluxes on the right-hand side of Equation \ref{mom_con_index_form} would lead to a non-positive term on the right-hand side of Equation \ref{total_KE_equation}, representing dissipation of kinetic energy.
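Before proceeding to the discrete analysis, we note that the localization of $\vec{S}$ can be checked directly. The short 1D sketch below (assuming numpy, and using the well-known hyperbolic-tangent equilibrium profile $\phi=\frac{1}{2}\left[1+\tanh\left(x/2\epsilon\right)\right]$ of Equation \ref{phitrans}) verifies that the artificial mass flux of Equation \ref{S_defn} vanishes at equilibrium and, for an out-of-equilibrium (overly sharp) profile, is nonzero only in the interface region.
\begin{verbatim}
# 1D sketch (assumes numpy): the artificial mass flux S vanishes for the
# equilibrium profile phi = (1 + tanh(x/(2 eps)))/2 and is localized near
# the interface for a perturbed (too-sharp) profile.
import numpy as np

eps, gamma, rho_l, rho_g = 0.02, 1.0, 1000.0, 1.0
x = np.linspace(-0.5, 0.5, 2001)
dx = x[1] - x[0]

def S_of(phi):
    dphi = np.gradient(phi, dx)
    n = np.sign(dphi)                       # 1D unit normal grad(phi)/|grad(phi)|
    return gamma * (rho_l - rho_g) * (eps * dphi - phi * (1.0 - phi) * n)

phi_eq = 0.5 * (1.0 + np.tanh(x / (2.0 * eps)))
phi_sharp = 0.5 * (1.0 + np.tanh(x / eps))  # interface thinner than equilibrium

print(np.abs(S_of(phi_eq)).max())           # ~0 (up to discretization error)
print(np.abs(S_of(phi_sharp)).max())        # nonzero near x = 0 only
\end{verbatim}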
We have so far shown that our proposed form of the momentum transport equation (Equation \ref{mom_con}) satisfies conservation of momentum and kinetic energy (Equations \ref{total_mom_equation} and \ref{total_KE_equation}). In comparison, Equation \ref{NS} conserves neither momentum nor kinetic energy, while Equation \ref{momentum_generic_one_fluid_conservative} is only momentum conserving. Now we focus on proving momentum and kinetic energy conservation in the discrete sense. As mentioned before, we use staggered structured grids throughout this work. Velocity vectors and all fluxes, including the mass fluxes, are stored on their respective faces, while pressure, density, viscosity and the phase field variable are stored at cell centers. We carry out the proof in 2D, but the extension to 3D is straightforward. For this section, we concern ourselves with uniform Cartesian meshes. It is important to note that discrete conservation of kinetic energy is satisfied in the semi-discrete sense, where the temporal derivatives are not discretized. To simplify notation we denote the velocities in the $x$ and $y$ directions by $u$ and $v$. The $x$-component of velocity on the left face of the $(i,j)$-th cell is thus ${u}_{i-1/2,j}$. A similar format is used for the mass fluxes, by which $U$ and $V$ represent the mass fluxes in the $x$ and $y$ directions, respectively. Using central differences, the semi-discrete version of Equation \ref{mass_conservation_index_form} in 2D reads
\begin{equation}
\frac { \partial { \rho }_{ i,j } }{ \partial t } +\frac { { U }_{ i+1/2,j }-{ U }_{ i-1/2,j } }{ \Delta x } +\frac { { V }_{ i,j+1/2 }-{ V }_{ i,j-1/2 } }{ \Delta y } =0,
\label{mass_conservation_semidiscrete}
\end{equation}
where $\Delta x$ and $\Delta y$ are the mesh sizes in the $x$ and $y$ directions. It is important to note that the mass fluxes must be discretely obtained from the corresponding fluxes in the phase advection routine, which are necessarily defined at the same locations. In an incompressible flow, by solving a Poisson system we enforce
\begin{equation}
\frac { { u }_{ i+1/2,j }-{ u }_{ i-1/2,j } }{ \Delta x } +\frac { { v }_{ i,j+1/2 }-{ v }_{ i,j-1/2 } }{ \Delta y } =0.
\label{incompressible_semidiscrete}
\end{equation}
The semi-discretized version of Equation \ref{mom_con_index_form} in the $x$ direction reads
\begin{multline}
\frac { \partial (\frac { { \rho }_{ i,j }+{ \rho }_{ i-1,j } }{ 2 } { u }_{ i-1/2,j } )}{ \partial t } +\frac { \frac { { U }_{ i+1/2,j }+{ U }_{ i-1/2,j } }{ 2 } \frac { { u }_{ i+1/2,j }+{ u }_{ i-1/2,j } }{ 2 } -\frac { { U }_{ i-1/2,j }+{ U }_{ i-3/2,j } }{ 2 } \frac { { u }_{ i-1/2,j }+{ u }_{ i-3/2,j } }{ 2 } }{ \Delta x } +\\ \frac { \frac { { V }_{ i,j+1/2 }+{ V }_{ i-1,j+1/2 } }{ 2 } \frac { { u }_{ i-1/2,j+1 }+{ u }_{ i-1/2,j } }{ 2 } -\frac { { V }_{ i,j-1/2 }+{ V }_{ i-1,j-1/2 } }{ 2 } \frac { { u }_{ i-1/2,j }+{ u }_{ i-1/2,j-1 } }{ 2 } }{ \Delta y } +\frac { { P }_{ i,j }-{ P }_{ i-1,j } }{ \Delta x } =0.
\label{mom_con_semidiscrete_x}
\end{multline}
A similar equation can be written for the $y$ component,
\begin{multline}
\frac { \partial (\frac { { \rho }_{ i,j }+{ \rho }_{ i,j-1 } }{ 2 } { v }_{ i,j-1/2 }) }{ \partial t } +\frac { \frac { { U }_{ i+1/2,j }+{ U }_{ i+1/2,j-1 } }{ 2 } \frac { { v }_{ i+1,j-1/2 }+{ v }_{ i,j-1/2 } }{ 2 } -\frac { { U }_{ i-1/2,j }+{ U }_{ i-1/2,j-1 } }{ 2 } \frac { { v }_{ i,j-1/2 }+{ v }_{ i-1,j-1/2 } }{ 2 } }{ \Delta x } +\\ \frac { \frac { { V }_{ i,j+1/2 }+{ V }_{ i,j-1/2 } }{ 2 } \frac { { v }_{ i,j+1/2 }+{ v }_{ i,j-1/2 } }{ 2 } -\frac { { V }_{ i,j-1/2 }+{ V }_{ i,j-3/2 } }{ 2 } \frac { { v }_{ i,j-1/2 }+{ v }_{ i,j-3/2 } }{ 2 } }{ \Delta y } +\frac { { P }_{ i,j }-{ P }_{ i,j-1 } }{ \Delta y } =0.
\label{mom_con_semidiscrete_y}
\end{multline}
It is once again clear from the conservative form of the fluxes that momentum is discretely conserved for both the $x$ and $y$ components. In other words, in a periodic domain
\begin{equation}
\frac { \partial }{ \partial t } \sum _{ i,j }{ (\frac { { \rho }_{ i,j }+{ \rho }_{ i-1,j } }{ 2 } { u }_{ i-1/2,j }) } =\frac { \partial }{ \partial t } \sum _{ i,j }{ (\frac { { \rho }_{ i,j }+{ \rho }_{ i,j-1 } }{ 2 } { v }_{ i,j-1/2 }) } =0,
\label{total_mom_conservation_semidiscrete}
\end{equation}
where the momentum components are discretely defined on the cell faces. In a similar manner we define the discrete kinetic energy of the system as
\begin{equation}
KE=\sum _{ i,j }{\frac { 1 }{ 2 } (\frac { { \rho }_{ i,j }+{ \rho }_{ i-1,j } }{ 2 } { u }_{ i-1/2,j }{ u }_{ i-1/2,j }+\frac { { \rho }_{ i,j }+{ \rho }_{ i,j-1 } }{ 2 } { v }_{ i,j-1/2 }{ v }_{ i,j-1/2 }) }.
\label{KE_discrete_defn}
\end{equation}
To find the evolution equation for ${\partial KE}/{\partial t}$, we interpolate Equation \ref{mass_conservation_semidiscrete} onto the $x$ and $y$ faces and multiply by $({-u}_{i-1/2,j}{u}_{i-1/2,j})/2$ and $({-v}_{i,j-1/2}{v}_{i,j-1/2})/2$, respectively. The result is then added to the product of Equation \ref{mom_con_semidiscrete_x} with ${u}_{i-1/2,j}$ and the product of Equation \ref{mom_con_semidiscrete_y} with ${v}_{i,j-1/2}$. Finally, the result for all cells is summed to find the evolution of the total kinetic energy. We assume that the domain has periodic boundary conditions on all boundaries. The temporal term is ${\partial KE}/{\partial t}$. The pressure terms become
\begin{multline}
\sum _{ i,j }{ \frac { { P }_{ i,j }{ u }_{ i-1/2,j }-{ P }_{ i-1,j }{ u }_{ i-1/2,j } }{ \Delta x } +\frac { { P }_{ i,j }{ v }_{ i,j-1/2 }-{ P }_{ i,j-1 }{ v }_{ i,j-1/2 } }{ \Delta y } } =\\\sum _{ i,j }{ { P }_{ i,j }(\frac { { u }_{ i-1/2,j }-{ u }_{ i+1/2,j } }{ \Delta x } +\frac { { v }_{ i,j-1/2 }-{ v }_{ i,j+1/2 } }{ \Delta y } ) } =0,
\label{pressure_terms_semidiscrete}
\end{multline}
where Equation \ref{incompressible_semidiscrete} is used to obtain the last equality.
The products of the form ${U}_{a,b}{u}_{c,d}{u}_{e,f}$ give
\begin{multline}
\sum _{ i,j }{ (\frac { -1 }{ 2 } }{ u }_{ i-1/2,j }{ u }_{ i-1/2,j }\frac { { U }_{ i+1/2,j }-{ U }_{ i-3/2,j } }{ 2\Delta x } +\\{ u }_{ i-1/2,j }\frac { \frac { { U }_{ i+1/2,j }+{ U }_{ i-1/2,j } }{ 2 } \frac { { u }_{ i+1/2,j }+{ u }_{ i-1/2,j } }{ 2 } -\frac { { U }_{ i-1/2,j }+{ U }_{ i-3/2,j } }{ 2 } \frac { { u }_{ i-1/2,j }+{ u }_{ i-3/2,j } }{ 2 } }{ \Delta x } )=\\ \sum _{ i,j }\frac { 1 }{ 4\Delta x } ({ U }_{ i+1/2,j }{ u }_{ i+1/2,j }{ u }_{ i-1/2,j }+{ U }_{ i-1/2,j }{ u }_{ i-1/2,j }{ u }_{ i+1/2,j }-\\{ U }_{ i-1/2,j }{ u }_{ i-1/2,j }{ u }_{ i-3/2,j }-{ U }_{ i-3/2,j }{ u }_{ i-3/2,j }{ u }_{ i-1/2,j })=0,
\label{Uu}
\end{multline}
and the terms of the form ${V}_{a,b}{u}_{c,d}{u}_{e,f}$ give
\begin{multline}
\sum _{ i,j }{ (\frac { -1 }{ 2 } } { u }_{ i-1/2,j }{ u }_{ i-1/2,j }\frac { { V }_{ i,j+1/2 }+{ V }_{ i-1,j+1/2 }-{ V }_{ i,j-1/2 }-{ V }_{ i-1,j-1/2 } }{ 2\Delta y }\\ +{ u }_{ i-1/2,j }\frac { \frac { { V }_{ i,j+1/2 }+{ V }_{ i-1,j+1/2 } }{ 2 } \frac { { u }_{ i-1/2,j+1 }+{ u }_{ i-1/2,j } }{ 2 } -\frac { { V }_{ i,j-1/2 }+{ V }_{ i-1,j-1/2 } }{ 2 } \frac { { u }_{ i-1/2,j }+{ u }_{ i-1/2,j-1 } }{ 2 } }{ \Delta y } )=\\ \sum _{ i,j } \frac { 1 }{ 4\Delta y } ({ V }_{ i,j+1/2 }{ u }_{ i-1/2,j+1 }{ u }_{ i-1/2,j }+V_{ i-1,j+1/2 }{ u }_{ i-1/2,j+1 }{ u }_{ i-1/2,j }-\\V_{ i,j-1/2 }{ u }_{ i-1/2,j }{ u }_{ i-1/2,j-1 }-{ V }_{ i-1,j-1/2 }{ u }_{ i-1/2,j }{ u }_{ i-1/2,j-1 })=0.
\end{multline}
Similarly, if we consider the summation of the terms of the form ${U}_{a,b}{v}_{c,d}{v}_{e,f}$ and ${V}_{a,b}{v}_{c,d}{v}_{e,f}$ across all cells in the domain, they are both zero. With that, we have shown that
\begin{equation}
\frac{\partial KE}{\partial t}=\frac{\partial }{\partial t}\sum _{ i,j }{\frac { 1 }{ 2 } (\frac { { \rho }_{ i,j }+{ \rho }_{ i-1,j } }{ 2 } { u }_{ i-1/2,j }{ u }_{ i-1/2,j }+\frac { { \rho }_{ i,j }+{ \rho }_{ i,j-1 } }{ 2 } { v }_{ i,j-1/2 }{ v }_{ i,j-1/2 }) }=0,
\label{KE_discrete_conservation}
\end{equation}
or, in other words, Equation \ref{mom_con} coupled to Equation \ref{phitrans} (which is equivalent to Equation \ref{mass_conservation_di}) conserves the discrete kinetic energy of the system. Once again, including the discretized diffusive terms on the right hand side of Equations \ref{mom_con_semidiscrete_x} and \ref{mom_con_semidiscrete_y} results in ${\partial KE}/{\partial t}\le 0$.
\section{Numerical tests}
\label{sec:mom_con_numerical_tests}
In this section, we use two simple test cases, in addition to simulations of a realistic jet in crossflow, to demonstrate the necessity of using our proposed consistent and conservative version of the momentum transport equation (Equation \ref{mom_con}).
\subsection{Case I}
In this test case, we demonstrate the failure of merely using the conservative form of the momentum equation (Equation \ref{momentum_generic_one_fluid_conservative}) with no accounting for the artificial mass fluxes. Namely, if instead of Equation \ref{mom_con} we spatially and temporally discretize
\begin{equation}
\frac { \partial (\rho \mathbf { u } ) }{ \partial t } +\nabla \cdot (\rho \mathbf { u } \otimes \mathbf { u } )=-\nabla P+\nabla \cdot \left[\mu (\nabla \mathbf { u } +\nabla ^{ T }\mathbf { u } ) \right ]+{\vec{ F }}_{ ST },
\label{mom_conservative_inconsistent}
\end{equation}
which we refer to as the inconsistent method, the inconsistency between the mass and momentum fluxes results in large errors.
To demonstrate this, we consider simple simulations of drop advection performed by discretely solving Equations \ref{mom_conservative_inconsistent} and \ref{mom_con} for momentum transport using central differences and RK4 time-stepping. A $D=0.3$ drop of fluid 1 is advected in a periodic $1\times1\times1$ domain otherwise filled with fluid 2. The densities of the fluids are chosen to be ${\rho}_{1}=10$ and ${\rho}_{2}=1$, while the viscosities of both phases and the surface tension are zero. The initial velocity is $(u,v,w)=(0,0,1)$ everywhere in the domain, and the mesh is $64\times64\times64$. Analytically, the drop should remain spherical during advection, and the velocity field should not change. In Figure \ref{fig:drop_advection_mom_cons_vs_incons}, the initial profile of the drop is shown along with the result of advecting the drop for one period (to $t=1$) using Equations \ref{mom_conservative_inconsistent} and \ref{mom_con}. Clearly, while the drop remains almost spherical with Equation \ref{mom_con}, the profile of the drop advected with the momentum-conserving but inconsistent momentum transport model given by Equation \ref{mom_conservative_inconsistent} is distorted and incorrect.
\begin{figure}
\centering
\begin{subfigure}{0.32 \linewidth}
\includegraphics[width=0.95\textwidth]{files/init_phi_xyz.png}
\caption{}
\label{fig:init_drop_advection_mom_cons}
\end{subfigure}
\begin{subfigure}{0.32 \linewidth}
\includegraphics[width=0.95\textwidth] {files/s0_xyz.png}
\caption{}
\label{fig:final_inconsist_drop_advection_mom_cons}
\end{subfigure}
\begin{subfigure}{0.32 \linewidth}
\includegraphics[width=0.95\textwidth] {files/son_xyz.png}
\caption{}
\label{fig:final_consist_drop_advection_mom_cons}
\end{subfigure}
\caption{Drop interface profiles for drop advection with uniform velocity of $(0,0,1)$ in the domain (Case I) at (a) $t=0$, (b) $t=1$ using Equation \ref{mom_conservative_inconsistent}, and (c) $t=1$ using Equation \ref{mom_con}.}
\label{fig:drop_advection_mom_cons_vs_incons}
\end{figure}
To further analyze this problem, we can look at the velocity field to understand why the drop profile is distorted when Equation \ref{mom_conservative_inconsistent} is solved. In Figure \ref{fig:drop_advection_mom_cons_vs_incons_vel}, the velocity components in the $x$ and $z$ directions are displayed at $t=1$ on top of the drop profile. Recall that the initial velocity was given by $(u,v,w)=(0,0,1)$; the final velocity field obtained with Equation \ref{mom_con} has preserved that state. On the other hand, the inconsistent momentum transport equation has altered the velocity field significantly. This is precisely what is meant by an inconsistent momentum advection scheme. It must be noted that had we chosen ${\rho}_{2}$ to be equal to ${\rho}_{1}$, both methods would have delivered correct solutions for this test.
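For reference, a minimal sketch of how such a case can be initialized with the equilibrium hyperbolic tangent profile is given below; the naming and the choice $\epsilon=\Delta x$ are our own illustrative assumptions:
\begin{verbatim}
import numpy as np

# Case I initialization (sketch): spherical drop of fluid 1 in fluid 2,
# phase field set to the equilibrium profile phi = (1 + tanh(d/(2*eps)))/2.
N, L = 64, 1.0
dx = L / N
eps = dx                           # interface thickness (assumed one cell)
x = (np.arange(N) + 0.5) * dx      # cell-center coordinates
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")

R = 0.3 / 2                        # drop radius, D = 0.3
d = R - np.sqrt((X - 0.5)**2 + (Y - 0.5)**2 + (Z - 0.5)**2)  # signed distance
phi = 0.5 * (1.0 + np.tanh(d / (2.0 * eps)))

rho = (10.0 - 1.0) * phi + 1.0     # Equation (rho) with rho_1 = 10, rho_2 = 1
u = np.zeros_like(phi); v = np.zeros_like(phi); w = np.ones_like(phi)
\end{verbatim}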
\begin{figure}
\centering
\begin{subfigure}{0.48 \linewidth}
\includegraphics[width=0.95\textwidth]{files/s0_yz.png}
\caption{}
\label{fig:drop_adv_vel_w_incons}
\end{subfigure}
\begin{subfigure}{0.48 \linewidth}
\includegraphics[width=0.95\textwidth] {files/son_yz.png}
\caption{}
\label{fig:drop_adv_vel_w_cons}
\end{subfigure}
\begin{subfigure}{0.48 \linewidth}
\includegraphics[width=0.95\textwidth] {files/s0_xy.png}
\caption{}
\label{fig:drop_adv_vel_u_incons}
\end{subfigure}
\begin{subfigure}{0.48 \linewidth}
\includegraphics[width=0.95\textwidth] {files/son_xy.png}
\caption{}
\label{fig:drop_adv_vel_u_cons}
\end{subfigure}
\caption{Velocity components at $t=1$ shown on top of the drop profile for drop advection with uniform velocity of $(0,0,1)$ in the domain (Case I). Shown are (a) $w$ from Equation \ref{mom_conservative_inconsistent}, (b) $w$ from Equation \ref{mom_con}, (c) $u$ from Equation \ref{mom_conservative_inconsistent} and (d) $u$ from Equation \ref{mom_con}.}
\label{fig:drop_advection_mom_cons_vs_incons_vel}
\end{figure}
Applying the non-conservative form of the momentum transport equation (Equation \ref{NS}) to the above test case also results in $(u,v,w)(t)=(0,0,1)$ and a drop profile similar to the results obtained from Equation \ref{mom_con}. This would be the case even for extremely high density ratios. However, in what follows, we add one level of complexity to the test case to show that Equation \ref{NS} also fails to produce acceptable results at high density ratios.
\subsection{Case II}
We consider a test case that is inspired by its 2D version presented in \cite{Bussmann2002} and \cite{Raessi2012}. In a periodic $1\times1\times1$ box of initially stationary fluid 2, a dense drop of fluid 1 with diameter $D=0.2$ is advected in the $x$-direction with initial velocity $(u,v,w)=(1,0,0)$. The spatial resolution is chosen to be $64\times64\times64$, and no surface tension or viscous forces are present. We consider very high density ratios, up to $\rho_{1}/\rho_{2}={10}^{7}$, to test the robustness of the solvers. There is no exact boundary for the drop in DI simulations, and the density is very high within the transition zone. In this case, the velocity in the domain is initialized using a hyperbolic tangent profile such that the $u=0.5$ contour lies on points where the density is not very high, say ${10}^{3}{\rho}_{2}$. A CFL number of 0.25 is chosen for these simulations. Theoretically, when the density ratio is very high, the drop should not ``feel'' the presence of the surrounding phase and should not deform. The nonconservative formulation of Equation \ref{NS} and our modified momentum transport equation given in Equation \ref{mom_con} are compared for this test case. By means of these simulations, we observe that the simulations performed using Equation \ref{NS} are not robust at density ratios of ${10}^{4}$ and above, whereas simulations with Equation \ref{mom_con} are stable even at a density ratio of ${10}^{7}$ and successfully capture the theoretical solution. The drop profiles at various high density ratios are shown in Figure \ref{fig:dense_drop}. The snapshots of the simulations with our conservative and consistent method are all at $t=10$, while for the non-conservative momentum solver, the time of each snapshot is shortly before the solver becomes numerically unstable (at $t<10$).
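One way to realize the velocity initialization described above is sketched below; the shift $d^{*}$ that places the $u=0.5$ contour at the $\rho={10}^{3}{\rho}_{2}$ density level follows from inverting the hyperbolic tangent profile. The function name and the reuse of the interface thickness $\epsilon$ for the velocity profile are our own illustrative assumptions:
\begin{verbatim}
import numpy as np

def init_case2_velocity(d, eps, rho1, rho2):
    """Smooth x-velocity for Case II (sketch). d is the signed distance to
    the nominal drop surface (positive inside). The u = 0.5 contour is
    shifted to where rho = 1e3 * rho2, found by inverting Equation (rho)
    and the tanh equilibrium profile."""
    phi_star = (1e3 * rho2 - rho2) / (rho1 - rho2)  # phi where rho = 1e3*rho2
    d_star = 2.0 * eps * np.arctanh(2.0 * phi_star - 1.0)  # its distance level
    return 0.5 * (1.0 + np.tanh((d - d_star) / (2.0 * eps)))
\end{verbatim}
For a density ratio of ${10}^{7}$, for instance, this places the $u=0.5$ contour roughly $10\epsilon$ outside the nominal interface, where the local density is already low.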
\begin{figure}
\centering
\begin{subfigure}{0.38 \linewidth}
\includegraphics[width=0.95\textwidth]{files/non_4_r336.png}
\caption{}
\label{fig:dense_drop_a}
\end{subfigure}
\begin{subfigure}{0.38 \linewidth}
\includegraphics[width=0.95\textwidth] {files/con_4_r208.png}
\caption{}
\label{fig:dense_drop_b}
\end{subfigure}
\begin{subfigure}{0.38 \linewidth}
\includegraphics[width=0.95\textwidth] {files/non_6_r137.png}
\caption{}
\label{fig:dense_drop_c}
\end{subfigure}
\begin{subfigure}{0.38 \linewidth}
\includegraphics[width=0.95\textwidth] {files/con_6_r70.png}
\caption{}
\label{fig:dense_drop_d}
\end{subfigure}
\begin{subfigure}{0.38 \linewidth}
\includegraphics[width=0.95\textwidth] {files/non_7_r105.png}
\caption{}
\label{fig:dense_drop_e}
\end{subfigure}
\begin{subfigure}{0.38 \linewidth}
\includegraphics[width=0.95\textwidth] {files/con_7_r51.png}
\caption{}
\label{fig:dense_drop_f}
\end{subfigure}
\caption{Drop profile shown for the case of a dense drop advected in an initially stationary background (Case II) for (a) density ratio of ${10}^{4}$ at $t=8.41$ with Equation \ref{NS}, (b) density ratio of ${10}^{4}$ at $t=10$ with Equation \ref{mom_con}, (c) density ratio of ${10}^{6}$ at $t=2.72$ with Equation \ref{NS}, (d) density ratio of ${10}^{6}$ at $t=10$ with Equation \ref{mom_con}, (e) density ratio of ${10}^{7}$ at $t=2.01$ with Equation \ref{NS}, (f) density ratio of ${10}^{7}$ at $t=10$ with Equation \ref{mom_con}.}
\label{fig:dense_drop}
\end{figure}
We can also examine the evolution of the discrete kinetic energy of the system, defined in Equation \ref{KE_discrete_defn}, in these simulations. In Figure \ref{fig:KE_drops}, we demonstrate how radically the kinetic energy of the system changes when simulating this test case with Equation \ref{NS}, even at low density ratios. Crucially, we can see how Equation \ref{NS} is numerically unstable at high density ratios. On the other hand, it is clear that the total kinetic energy of the system is constant for simulations that solve Equation \ref{mom_con} for momentum transport. This is expected since, in the absence of surface tension and viscous forces, we proved in Section \ref{sec:proof_KE_conservation} that Equation \ref{mom_con} conserves the total kinetic energy. The remaining errors in kinetic energy conservation are small and due merely to the fact that the continuity equation (Equation \ref{incompressible_semidiscrete} in 2D) is enforced with a non-zero tolerance when numerically solving the pressure Poisson system.
\begin{figure}
\centering
\includegraphics[width=0.75\textwidth]{files/KE_axis_large.png}
\caption{KE conservation error of the system in the dense drop advection case plotted for different density ratios. Different solid lines represent simulations using Equation \ref{NS} for various density ratios, while the dashed line represents the very small (on the order of the Poisson tolerance, ${10}^{-12}$) KE conservation error for all density ratios using Equation \ref{mom_con}.}
\label{fig:KE_drops}
\end{figure}
It must be noted that increasing the mesh resolution while solving Equation \ref{NS} coupled to Equation \ref{phitrans} does not improve the conservation errors or the robustness of the solver. We have verified this for the dense drop advection problem, in addition to the jet in cross-flow simulations presented in the next section.
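The kinetic energy diagnostic used above is straightforward to monitor in a solver; a minimal 2D sketch of Equation \ref{KE_discrete_defn} on a periodic staggered grid follows (the uniform cell-volume factor $\Delta x \Delta y$ is included as a constant multiplier):
\begin{verbatim}
import numpy as np

def discrete_KE(rho, u, v, dx, dy):
    """Discrete kinetic energy of Equation (KE_discrete_defn): rho at cell
    centers (i,j), u at x-faces (i-1/2,j), v at y-faces (i,j-1/2), periodic."""
    rho_xf = 0.5 * (rho + np.roll(rho, 1, axis=0))   # rho at x-faces
    rho_yf = 0.5 * (rho + np.roll(rho, 1, axis=1))   # rho at y-faces
    return 0.5 * np.sum(rho_xf * u**2 + rho_yf * v**2) * dx * dy
\end{verbatim}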
\subsection{Jet in cross-flow simulation}
\label{sec:jet_crossflow}
While there is no shortage of new studies on phase field methods, we rarely see these methods applied to realistic engineering problems. In fact, in most articles the authors study canonical problems or flows with low $Re$ numbers or moderate density ratios. In this section, we employ our diffuse interface method to simulate the realistic problem of a jet in cross-flow. This is a suitable problem for examining the practical importance of using Equation \ref{mom_con} for momentum transport. To this aim, we have performed simulations at various density ratios to compare the solutions obtained from solving Equation \ref{mom_con} or Equation \ref{NS} for momentum transport. Both of these equations are coupled to Equation \ref{phitrans} and are solved in the same framework. The inlet conditions are laminar for the jet, but the flow becomes turbulent downstream. Experimental measurements for this problem using water jets in atmospheric conditions have been performed by \cite{Sallam2004}, among others. This problem has also been studied numerically using a CLSVOF scheme in \cite{Li2012}. It should be noted that primary breakup in this problem is due to the gas cross-flow and not turbulence or vorticity in the jet. Furthermore, the specific case selected for simulation belongs to the multi-mode breakup regime, which has the most interesting dynamics, has relevance to aerospace applications, and requires reasonable mesh resolution. In the simulations presented in \cite{Li2012}, the authors used a uniform grid or adaptive mesh refinement to construct the mesh. In our study, we use a non-uniform staggered grid which is concentrated and has uniform spacing around the jet. In Section \ref{sec:non_uniform_mesh}, we explain how Equations \ref{phitrans}, \ref{mom_con} and \ref{NS} must be discretized on a non-uniform Cartesian mesh. In particular, we explain how we are able to achieve the following important properties on non-uniform meshes:
\begin{itemize}
\item Equation \ref{phitrans} is discretized in a manner that allows the interface thickness to vary in space according to the local mesh, while staying within the $0$--$1$ bounds. This is a superior option compared to choosing $\epsilon$ based on the coarsest mesh in the domain, especially in scenarios where the momentum transport equation must be solved in a domain much larger than where the interfaces reside (e.g. the jet in cross-flow case).
\item Similar to \cite{Ham2002}, who showed how one can achieve a fully conservative second order finite difference scheme for incompressible single phase flows on non-uniform grids by using volume-weighted interpolation on the convective terms, we can extend Equation \ref{KE_discrete_conservation} to non-uniform grids.
\end{itemize}
The geometric specification of the jet and domain, in addition to the boundary conditions, are chosen following \cite{Li2012}. The domain size is $2\,\mathrm{cm}\times1.5\,\mathrm{cm}\times3.5\,\mathrm{cm}$, where the cross-flow blows in the positive $x$ direction and the jet is injected in the $y$ direction. The jet orifice has a diameter of ${D}_{ori}=0.8\,\mathrm{mm}$ and is situated at $(0.2\,\mathrm{cm}, 0.75\,\mathrm{cm}, 0\,\mathrm{cm})$. Inflow boundary conditions are used for the gas inflow at $x=0\,\mathrm{cm}$, in addition to the jet inflow at the jet orifice. For all other points at $y=0\,\mathrm{cm}$, no-slip boundary conditions are imposed.
For the downstream boundary at $x=2\,\mathrm{cm}$, convective outflow boundary conditions are implemented, while no-penetration, free-slip boundary conditions are used on all the other boundaries of the domain. The domain is sufficiently large in the $y$ and $z$ directions that the boundary conditions in the $z$ direction and at the top wall do not influence the results of our simulations. The initial condition for velocity is $(u,v,w)(\vec{x})=({U}_{g},0,0)$ for all $\vec{x}$ inside the domain (not on the boundaries). The liquid jet is thus injected into this uniform-velocity cross-flow at $t=0$. The dimensional and non-dimensional parameters governing this problem are given in Table \ref{tab:jet_params}. In this table, in addition to the familiar parameters, the momentum flux ratio $q={\rho}_{l}{U}_{l}^{2}/({\rho}_{g}{U}_{g}^{2})$ is used to characterize the problem. We have explored two cases with different density ratios but matched ${Re}_{l}$, ${We}_{g}$ and $q$, and identical geometric configurations. For this to be possible, the viscosity ratio between the two cases must differ as well. Table \ref{tab:jet_params} fully specifies these two cases. All dimensional parameters are reported in SI units. It must be noted that both of these density ratios are physically relevant, as even the reduced density ratio of $100$ is common for a liquid jet that is injected into a pressurized gas chamber.
\begin{table}[H]
\begin{center}
\scalebox{0.8}{
\begin{tabular}{c|c|c|c|c|c|c|c|c|c|c|c|c|c}
Case & ${\rho}_{l}$ & ${\rho}_{g}$ & ${\mu}_{l}$ & ${\mu}_{g}$ & $\sigma$ & ${D}_{ori}$ & ${U}_{l}$ & ${U}_{g}$ & ${\rho}_{l}/{\rho}_{g}$ & ${\mu}_{l}/{\mu}_{g}$ & ${We}_{g}$ & ${Re}_{l}$ & $q$\\
\hline
1 & 118 & 1.18 & 0.000307 & 0.0000186 & 0.0708 & 0.0008 & 51.45 & 54.8 & 100 & 16.51 & 40 & 15800 & 88.2 \\
2 & 997 & 1.18 & 0.000894 & 0.0000186 & 0.0708 & 0.0008 & 17.7 & 54.8 & 845 & 48 & 40 & 15800 & 88.2 \\
\end{tabular}}
\end{center}
\vspace{1mm}
\caption{Physical parameters and material properties for the jet in cross-flow simulations.}
\label{tab:jet_params}
\end{table}
A summary of all the simulations that we examine herein is reported in Table \ref{tab:jet_sims}. In what follows, we go through these simulations and focus on comparing the two options for the momentum transport equation when applied to the same physical cases (Table \ref{tab:jet_params}).
\begin{table}[H]
\begin{center}
\begin{tabular}{c|c|c|c|c}
Simulation & Case & Mom. Eqn. & Resolution & Robustness\\
\hline
1 & 1 & nonconservative, Eq. \ref{NS} & ${D}_{ori}/24$ & stable\\
2 & 1 & conservative, Eq. \ref{mom_con} & ${D}_{ori}/24$ & stable\\
3 & 1 & nonconservative, Eq. \ref{NS} & ${D}_{ori}/36$ & unstable\\
4 & 1 & conservative, Eq. \ref{mom_con} & ${D}_{ori}/36$ & stable\\
5 & 2 & nonconservative, Eq. \ref{NS} & ${D}_{ori}/24$ & unstable\\
6 & 2 & conservative, Eq. \ref{mom_con} & ${D}_{ori}/24$ & stable\\
\end{tabular}
\end{center}
\vspace{1mm}
\caption{Summary of the set of simulations studied.}
\label{tab:jet_sims}
\end{table}
Simulations 1 and 2 in Table \ref{tab:jet_sims} are coarse simulations of Case 1, in which the density ratio is $100$. In Figure \ref{fig:jet_case_1_coarse}, a side-by-side comparison of the simulation outputs at $t=2.74\times{10}^{-4}\,\mathrm{s}$ is shown from two different views.
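Before discussing the results, we note that the non-dimensional groups in Table \ref{tab:jet_params} can be recomputed from the dimensional entries; the short sketch below does so, assuming (as the values confirm) that ${We}_{g}={\rho}_{g}{U}_{g}^{2}{D}_{ori}/\sigma$ and ${Re}_{l}={\rho}_{l}{U}_{l}{D}_{ori}/{\mu}_{l}$:
\begin{verbatim}
# Dimensional inputs from Table (tab:jet_params), SI units.
cases = {
    1: dict(rho_l=118.0, rho_g=1.18, mu_l=3.07e-4, sigma=0.0708,
            D=8.0e-4, U_l=51.45, U_g=54.8),
    2: dict(rho_l=997.0, rho_g=1.18, mu_l=8.94e-4, sigma=0.0708,
            D=8.0e-4, U_l=17.7, U_g=54.8),
}
for name, c in cases.items():
    We_g = c["rho_g"] * c["U_g"]**2 * c["D"] / c["sigma"]      # gas Weber number
    Re_l = c["rho_l"] * c["U_l"] * c["D"] / c["mu_l"]          # liquid Reynolds
    q = c["rho_l"] * c["U_l"]**2 / (c["rho_g"] * c["U_g"]**2)  # momentum flux ratio
    print(f"Case {name}: We_g={We_g:.0f}, Re_l={Re_l:.0f}, q={q:.1f}")
    # Case 1: We_g=40, Re_l=15820, q=88.1
    # Case 2: We_g=40, Re_l=15791, q=88.1  (matches the rounded table values)
\end{verbatim}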
Based on experimental images and simulation results at such $Re$, $q$ and $We$ in the literature \citep{Sallam2004,Li2012}, it is clear that solving the conservative and consistent version of the Navier-Stokes equations (Equation \ref{mom_con}) gives a much more realistic break-up of the jet and jet column trajectory. The non-conservative form of the Navier-Stokes equations (Equation \ref{NS}) leads to premature break-up of the jet and nonphysical perturbations early on in its trajectory. It should be noted that both of these simulations are coarse and employ highly stretched meshes in order to have a uniform grid around the jet with 24 cells across its diameter. As such, artificially stretched drops can be observed, especially in Simulation 2, where the jet has reached higher into the stretched region of the mesh. These artifacts can be reduced via mesh refinement, using a uniform mesh, or a different surface tension force implementation tailored for non-uniform grids.
\begin{figure}
\centering
\begin{subfigure}{0.49 \linewidth}
\includegraphics[width=\textwidth]{files/mdr_noncnsrv_22_xy.png}
\caption{}
\label{fig:jet_non_xy}
\end{subfigure}
\begin{subfigure}{0.49 \linewidth}
\includegraphics[width=\textwidth]{files/mdr_cnsrv_19_xy.png}
\caption{}
\label{fig:jet_con_xy}
\end{subfigure}\\
\begin{subfigure}{0.49 \linewidth}
\includegraphics[width=\textwidth]{files/mdr_noncnsrv_22_yz.png}
\caption{}
\label{fig:jet_non_yz}
\end{subfigure}
\begin{subfigure}{0.49 \linewidth}
\includegraphics[width=\textwidth]{files/mdr_cnsrv_19_yz.png}
\caption{}
\label{fig:jet_con_yz}
\end{subfigure}
\caption{Results of the jet in cross-flow simulation of Case 1 using Equations \ref{NS} and \ref{mom_con} for momentum transport. Zoom-ins on the jet profile with (a) $x$-$y$ view of Simulation 1 using Equation \ref{NS}, (b) $x$-$y$ view of Simulation 2 using Equation \ref{mom_con}, (c) $y$-$z$ view of Simulation 1 using Equation \ref{NS}, and (d) $y$-$z$ view of Simulation 2 using Equation \ref{mom_con}.}
\label{fig:jet_case_1_coarse}
\end{figure}
In Figure \ref{fig:jet_case_1_coarse_vels}, we show velocity contours in the $x$-$y$ mid-plane at $t=2.74\times{10}^{-4}\,\mathrm{s}$. A qualitative comparison with \cite{Li2012} (this density ratio was not explored therein) reveals that Simulation 2 produces much more realistic results.
\begin{figure}
\centering
\begin{subfigure}{0.32 \linewidth}
\includegraphics[width=\textwidth]{files/mdr_noncnsrv_22_u.png}
\caption{}
\label{fig:jet_non_u}
\end{subfigure}
\begin{subfigure}{0.32 \linewidth}
\includegraphics[width=\textwidth]{files/mdr_cnsrv_19_u.png}
\caption{}
\label{fig:jet_con_u}
\end{subfigure}\\
\begin{subfigure}{0.32 \linewidth}
\includegraphics[width=\textwidth]{files/mdr_noncnsrv_22_v.png}
\caption{}
\label{fig:jet_non_v}
\end{subfigure}
\begin{subfigure}{0.32 \linewidth}
\includegraphics[width=\textwidth]{files/mdr_cnsrv_19_v.png}
\caption{}
\label{fig:jet_con_v}
\end{subfigure}\\
\begin{subfigure}{0.32 \linewidth}
\includegraphics[width=\textwidth]{files/mdr_noncnsrv_22_w.png}
\caption{}
\label{fig:jet_non_w}
\end{subfigure}
\begin{subfigure}{0.32 \linewidth}
\includegraphics[width=\textwidth]{files/mdr_cnsrv_19_w.png}
\caption{}
\label{fig:jet_con_w}
\end{subfigure}
\caption{Results of the jet in cross-flow simulation of Case 1 using Equations \ref{NS} and \ref{mom_con} for momentum transport.
An $x$-$y$ view of (a) $u$ from Simulation 1 using Equation \ref{NS}, (b) $u$ from Simulation 2 using Equation \ref{mom_con}, (c) $v$ from Simulation 1 using Equation \ref{NS}, (d) $v$ from Simulation 2 using Equation \ref{mom_con}, (e) $w$ from Simulation 1 using Equation \ref{NS}, and (f) $w$ from Simulation 2 using Equation \ref{mom_con}.}
\label{fig:jet_case_1_coarse_vels}
\end{figure}
As we increase the resolution for both the nonconservative and conservative methods (Simulations 3 and 4, respectively), solving the nonconservative form of the Navier-Stokes equations (Equation \ref{NS}) in Simulation 3 is no longer robust, and the velocity values grow without bound around time $t=1\times{10}^{-4}\,\mathrm{s}$. This is due to the inconsistency between this form of the momentum transport equation and the phase field advection equation. In contrast, Simulation 4 runs stably, avoiding any robustness issues. It is thereby clear that the discrete mass-momentum consistency and the kinetic energy conservation of our proposed momentum transport equation increase the robustness of our diffuse interface method. The advantage of using the consistent and conservative formulation for momentum transport is further substantiated as we increase the density ratio and simulate Case 2. Simulations 5 and 6 reveal that while the nonconservative momentum transport equation (Equation \ref{NS}) results in blow-up of the velocity magnitudes, our modified momentum transport equation (Equation \ref{mom_con}) captures the jet in a physical manner (with reference to \cite{Sallam2004,Li2012}), even with a mesh that is relatively coarse by diffuse interface standards (24 cells across the jet diameter). Figure \ref{fig:jet_hdr} shows the jet at $t=6\times{10}^{-4}\,\mathrm{s}$, where it has bent significantly and is starting to reach a fully developed state. Using the correlation proposed by \cite{Wu1997} and the adjusted drag coefficient of \cite{Sallam2004}, the experimental jet trajectory is included in Figure \ref{fig:jet_hdr_xy}, showing good agreement at this resolution.
\begin{figure}
\centering
\begin{subfigure}{0.49 \linewidth}
\includegraphics[width=\textwidth]{files/jet_hdr_yz.png}
\caption{}
\label{fig:jet_hdr_yz}
\end{subfigure}
\begin{subfigure}{0.49 \linewidth}
\includegraphics[width=\textwidth]{files/jet_hdr_xy.png}
\caption{}
\label{fig:jet_hdr_xy}
\end{subfigure}
\caption{Results of the jet in cross-flow simulation of Case 2 using Equation \ref{mom_con} for momentum transport (Simulation 6). Zoom-ins on the jet profile with (a) $y$-$z$ view and (b) $x$-$y$ view along with the experimental jet trajectory.}
\label{fig:jet_hdr}
\end{figure}
In Figure \ref{fig:jet_hdr_vels}, the velocity contours on the $x$-$y$ mid-plane are depicted for Simulation 6. Comparing against the uniform grid CLSVOF results of \cite{Li2012}, which had 20 cells across the jet diameter, the accuracy of our results seems promising, especially considering the difference in cost between our phase field method and CLSVOF solvers. One can envision that with higher resolution, the phase field method could yield more accurate results than those CLSVOF simulations at a lower cost. More refined calculations can shed light on this matter.
\begin{figure}
\centering
\begin{subfigure}{0.32 \linewidth}
\includegraphics[width=\textwidth]{files/jet_hdr_U.png}
\caption{}
\label{fig:jet_hdr_U}
\end{subfigure}
\begin{subfigure}{0.32 \linewidth}
\includegraphics[width=\textwidth]{files/jet_hdr_V.png}
\caption{}
\label{fig:jet_hdr_V}
\end{subfigure}
\begin{subfigure}{0.32 \linewidth}
\includegraphics[width=\textwidth]{files/jet_hdr_W.png}
\caption{}
\label{fig:jet_hdr_W}
\end{subfigure}
\caption{Results of the jet in cross-flow simulation of Case 2 using Equation \ref{mom_con} for momentum transport (Simulation 6). Displayed are the contour plots on the $x$-$y$ mid-plane for (a) $u$, (b) $v$ and (c) $w$.}
\label{fig:jet_hdr_vels}
\end{figure}
\section{Proposed free energy based surface tension force}
\label{sec:EBCSF}
Capillary effects play a significant role in many industrial and atmospheric applications such as atomization of liquid jets, ink-jet printers, coating processes and breaking waves. In fact, surface tension is a primary factor in the dynamics of ubiquitous events such as coalescence, break-up, film retraction and instabilities. Such events can lead to topological features such as fingers, crowns, droplets or bubbles. Hence, accurate implementation of surface tension forces is an essential requirement for two-phase flow solvers. Miscalculation of these forces can not only lead to unphysical predictions and failure to correctly capture topological features, but also to artificial effects such as spurious currents, which can in practice have detrimental effects on the kinetic energy or other integral quantities of interest. The need for an accurate representation of surface tension has been recognized by researchers, as reflected by the numerous approaches to surface tension modeling in the literature. Broadly, surface tension models can be split into integral and volumetric formulations. In integral formulations, surface tension is numerically implemented as a tangential stress at the interface, conserving momentum by default. Continuous surface stress models belong to this category of surface tension formulations. On the other hand, surface tension is implemented as a body force in volumetric formulations (directly in the momentum equation or indirectly as a pressure jump). If accurate, these formulations conveniently allow for the discrete balance of pressure and surface tension. Well-balanced integral formulations are much harder to obtain. Very recently, \cite{moataz2018} proposed a two-dimensional algorithm with this property in the context of level-set methods. In general, however, volumetric formulations readily allow for the discrete balance between pressure gradients and capillary forces, preventing the generation of spurious currents and rendering them much more popular than integral formulations, despite their failure to conserve the total momentum of the system. In \cite{Popinet2018}, it was shown that all popular surface tension models are discretely equivalent to the continuum surface force (CSF) method, where surface tension is applied as a body force given by
\begin{equation}
\vec{{ F }_{ ST }} =\sigma \kappa { { \delta }_{ \epsilon }\vec{n} }.
\label{CSF_form}
\end{equation}
In Equation \ref{CSF_form}, ${\delta}_{\epsilon}$ is a numerical Dirac delta function specific to each method. The accuracy of all these methods depends on the accuracy of computing the curvature, and their primary goal is to correctly predict the pressure jump across the interface.
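For concreteness, a minimal 2D sketch of Equation \ref{CSF_form} as it is commonly realized in phase field solvers is given below, with the common choice ${\delta}_{\epsilon}=|\nabla\phi|$ so that ${\delta}_{\epsilon}\vec{n}=\nabla\phi$; this illustrates the generic CSF paradigm on a collocated periodic grid, not our solver's exact staggered implementation:
\begin{verbatim}
import numpy as np

def csf_force(phi, dx, sigma, small=1e-12):
    """CSF body force of Equation (CSF_form), with delta_eps = |grad(phi)|
    so that kappa*delta_eps*n = kappa*grad(phi). Central differences,
    periodic, square mesh (sketch)."""
    dphix = (np.roll(phi, -1, 0) - np.roll(phi, 1, 0)) / (2 * dx)
    dphiy = (np.roll(phi, -1, 1) - np.roll(phi, 1, 1)) / (2 * dx)
    mag = np.sqrt(dphix**2 + dphiy**2) + small
    nx, ny = dphix / mag, dphiy / mag
    # curvature kappa = -div(n)
    kappa = -((np.roll(nx, -1, 0) - np.roll(nx, 1, 0)) / (2 * dx)
              + (np.roll(ny, -1, 1) - np.roll(ny, 1, 1)) / (2 * dx))
    return sigma * kappa * dphix, sigma * kappa * dphiy
\end{verbatim}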
While the specific manner in which the right hand side terms in Equation \ref{CSF_form} are computed varies, the CSF form can be applied to different one-fluid methods such as volume of fluid, level set and phase field. In this section, we put forward a different paradigm for computing surface tension forces and apply it to our selected phase field model. This method for computing surface tension forces utilizes a numerically defined surface energy in a manner consistent with discrete energy conservation principles. Physically speaking, there is work associated with displacing a fluid interface. If surface tension opposes the displacement, then kinetic energy is stored as surface energy. Conversely, if surface tension pulls the interface in the same direction as its movement, then some of the surface energy is converted to kinetic energy. We use this principle to compute surface tension forces by first defining a numerical surface energy and then computing the surface tension force from the rate of change of surface energy with respect to interface displacement. This novel approach is applied here in the framework of the phase field method given by Equation \ref{phitrans}. The result is a free energy based surface tension force that does not require curvature calculation and reduces spurious currents while preserving the accuracy of the coupled two-phase flow solver. We note that while we present this approach in this framework, the idea of utilizing a defined discrete surface energy to compute surface tension forces can be applied to other interface-capturing schemes like VOF and level set as well. Similarly, we expect a reduction of spurious currents and better total energy conservation when this paradigm is utilized in those frameworks. In the following, we first discuss the properties a discrete surface energy must have and explain how this idea has historically been used in the context of the Cahn-Hilliard phase field method. Next, we introduce how this idea can be generalized to any interface-capturing method and apply it to our phase field method. We examine the accuracy and advantages of our proposed surface tension force calculation technique via two numerical tests, demonstrating a reduction in spurious currents and an improvement in accuracy.
\subsection{Discrete surface energy}
Surface energy is the excess energy at the surface of a material compared to the bulk. Namely, in order to create a surface, energetically favorable molecular bonds within the bulk have to be disrupted by some work. That work is stored as surface free energy. Surface tension, $\sigma$, is defined as the amount of work per unit area required to stretch the interface, which is equivalent to the amount of surface energy stored per unit area. For any line segment on the surface, $\sigma$ is also the tensile force per unit length exerted tangential to the surface and in the normal direction of the line. Consider a bounded domain $\Omega$ containing a surface $S$. The surface energy density ($\sigma$) can be converted to a volumetric surface energy density via the surface Dirac delta function (${\delta}_{s}$), which is non-zero only on the interface. The total surface energy is then given by
\begin{equation}
{E}_{s}=\oint _{ S }{ \sigma dS }=\int _{ \Omega }{\sigma{\delta}_{s}(\vec{x}-\vec{{x}_{s}})dV}.
\label{E_s_def}
\end{equation}
For one-fluid models, numerical techniques for including surface tension forces all effectively smooth the force, as implied by Equation \ref{CSF_form}.
This is expected for diffuse interface approaches, but it also turns out to be the case for sharp interface approaches, as no mesh can resolve the infinitesimally thin interface between phases. Following the same logic, the numerical volumetric surface energy density, ${\rho}_{s}$, which we will refer to as the free energy density for convenience throughout this paper, can be represented through a numerical Dirac delta function,
\begin{equation}
{\rho}_{s}=\sigma{\delta}_{e}(\vec{x}-\vec{{x}_{s}}).
\label{rho_s_def}
\end{equation}
A numerical model for ${\rho}_{s}$ must satisfy the following properties:
\begin{itemize}
\item In the absence of interfaces, ${\rho}_{s}=0$ everywhere; since by definition the free energy stored on the interface of two phases is positive, ${\rho}_{s}\ge0$.
\item One of the practical implications of mesh refinement ($\Delta\rightarrow0$) is the reduction of the width of the numerical Dirac delta function, denoted by $w({\delta}_{e})$. As the mesh is refined, the volumetric surface energy should converge towards the analytical value of the free surface energy,
\begin{equation}
\lim_{\Delta\rightarrow0}{\int_{\Omega}{{\rho}_{s}dV}}={E}_{s}=\oint _{ S }{ \sigma dS }.
\label{energy_limit}
\end{equation}
\item Equation \ref{energy_limit} is valid for any control volume. Imagine an arbitrary surface element contained in a control volume obtained from sufficiently extruding the surface element along the normal vector in both directions (based on the numerical width of the interface, $w({\delta}_{e})$). Then, Equation \ref{energy_limit} would give
\begin{equation}
\lim_{\Delta\rightarrow0}{\int_{-w({\delta}_{e})}^{w({\delta}_{e})}{\rho}_{s}d{x}_{n}}=\sigma,
\label{energy_limit_cont}
\end{equation}
where ${x}_{n}$ is the coordinate along the direction normal to the interface. In all one-fluid interface capturing schemes, a phase indicator function is advected, either geometrically or algebraically, from which the interface is implicitly obtained. In all phase field methods, the phase field denoted by $\phi$ is a smoothly varying scalar which assumes constant preset values in the pure phases. In such methods, the interface is thickened artificially in the direction normal to the surface and Equation \ref{energy_limit_cont} becomes
\begin{equation}
\lim_{\Delta\rightarrow0}{\int_{-\infty}^{\infty}{\rho}_{s}(\phi)d{x}_{n}}=\sigma.
\label{energy_limit_pf}
\end{equation}
This equation is used in phase field methods when calculating the values of the constants involved in computing the free energy density (${\rho}_{s}$).
\end{itemize}
\subsection{Energy-based surface tension force for Cahn-Hilliard}
\label{sec:ebcsf_ch}
The Cahn-Hilliard PDE drives the system towards a state of minimum free surface energy, where the free surface energy is defined in Equation \ref{CH_energy_defn}. Based on Equation \ref{CH_energy_defn}, it is clear that for Cahn-Hilliard, ${\rho}_{s}(\phi)={ \epsilon }^{ 2 }{ \left| \nabla \phi \right| }^{ 2 }+W(\phi)$. The surface energy is thus defined as
\begin{equation}
SE=\int_{\Omega}{{\rho}_{s}dV}.
\label{SE_def_phase_field}
\end{equation}
Equation \ref{Cahn_Hilliard} can be written in the form
\begin{equation}
\frac { \partial \phi }{ \partial t } + \nabla\cdot(\vec { u }\phi)=\kappa{ \nabla }^{ 2 }{\mu}_{s},
\label{Cahn_Hilliard_ST}
\end{equation}
where $\kappa$ is the mobility and ${\mu}_{s}$ is the functional derivative of the free surface energy with respect to $\phi$, also known as the chemical potential,
\begin{equation}
{\mu}_{s}(\phi)=\frac { \delta { \rho }_{ s } }{ \delta \phi } =W'(\phi )-{ \epsilon }^{ 2 }{ \nabla }^{ 2 }\phi.
\end{equation}
As explained in the introduction, the surface tension force can be defined as the rate of change of free surface energy with respect to interface displacement. Such a surface tension forcing automatically enforces that the increase in surface energy due to phase advection balances the work done against surface tension (kinetic energy loss) during interface displacement. Phase field methods are suitable candidates for implementing this idea, as the free surface energy is a well-defined function of $\phi$ and its derivatives. Indeed, \cite{Jacqmin1999} employed this paradigm for the Cahn-Hilliard equation by arguing that for arbitrary interface configurations and flow fields, there must be a diffuse force exerted by the fluid such that the change in kinetic energy is always opposite to the change in surface free energy. In the continuous limit, for an incompressible flow, the rate of change of free energy due to convection at any point in the domain is
\begin{equation}
{(\frac{\partial {\rho}_{s}}{\partial t})}_{conv}=\frac{\delta {\rho}_{s}}{\delta \phi}{(\frac{\partial\phi}{\partial t})}_{conv}={\mu}_{s}{(\frac{\partial\phi}{\partial t})}_{conv}=-{\mu}_{s}\nabla\cdot(\vec { u }\phi)=-{\mu}_{s}\vec{ u }\cdot(\nabla\phi).
\label{Cahn-Hilliard_energy_rate_transfer}
\end{equation}
On the other hand, the rate of change of kinetic energy in the form of work done to overcome the capillary force, represented by $\vec{{F}_{ST}}$, is $\vec{u}\cdot\vec{{F}_{ST}}$. Therefore, based on Equation \ref{Cahn-Hilliard_energy_rate_transfer}, they concluded that for the change in kinetic energy to balance the change in surface energy,
\begin{equation}
\vec{{F}_{ST}}={\mu}_{s}\nabla\phi.
\label{eqn:CH_ST_force}
\end{equation}
This surface tension forcing appears on the right hand side of the momentum equation, as can be seen in Equations \ref{NS}, \ref{momentum_generic_one_fluid_conservative} and \ref{mom_con}. For now, let us assume that the two phases have the same density or, similar to \cite{Jacqmin1999}, that the density variations are small enough that the Boussinesq approximation can be used. Equations \ref{NS} and \ref{mom_con} are equivalent under this assumption. If we multiply Equation \ref{Cahn_Hilliard_ST} by ${\mu}_{s}$ and Equation \ref{NS} by $\vec{u}$, sum, and integrate over $\Omega$, we find the rate of change of total energy, given by
\begin{equation}
\frac { \partial (KE+SE) }{ \partial t } =\int _{ \Omega }{[ -\mu (\nabla {\vec{u} }\cdot\nabla {\vec{ u }})-\kappa (\nabla { { \mu }_{ s } }\cdot\nabla { \mu }_{ s })] dV },
\label{total_energy_CH}
\end{equation}
which is clearly non-positive. At the discrete level, \cite{Jacqmin1999} first defined a discrete surface energy and a kinetic energy norm for a staggered uniform mesh configuration, and then showed that the above properties still hold when central differences are used for the spatial discretization.
Specifically, when displacing an interface, the increase in discrete surface energy due to phase advection balances the discrete kinetic energy spent on overcoming the surface tension force. Moreover, a discrete version of Equation \ref{total_energy_CH} was derived, in which the rate of change of the discrete total energy in a bounded domain is always non-positive, resulting in the rapid elimination of spurious currents. From this, it can be seen that while robust, this surface tension forcing combined with the Cahn-Hilliard equation results in a dissipative method, in both the discrete and the continuous sense. Later, \cite{Shen_Yang} extended this work to non-unity density ratios by modifying the momentum equation. As explained in Section \ref{sec:Intro}, phase field models based on the Cahn-Hilliard equation suffer from inherent issues that can hinder their performance in realistic applications such as turbulent two-phase flows.
\subsection{Energy-based surface tension calculation for current phase field}
\label{sec:EBCSF_current}
In Section \ref{sec:phase_field} we introduced a phase field model (Equation \ref{phitrans}) that does not suffer from the aforementioned inherent problems of the Cahn-Hilliard equation. We also provided a modified momentum transport equation (Equation \ref{mom_con}) that, in the absence of capillary and viscous effects, discretely conserves momentum and kinetic energy. Phase field methods are amenable to defining interfacial quantities such as surface energy, as they readily transform all surface quantities into volumetric functions in space. In this section, after defining a discrete surface energy functional for our phase field method, we apply the physical principle explained above to compute the surface tension force as the discrete rate of change of surface energy with respect to interface displacement. The right-hand side of our phase field equation, introduced in Equation \ref{phitrans} and reproduced here for convenience,
\begin{equation}
\frac { \partial \phi }{ \partial t } +\nabla \cdot \left(\vec{ u } \phi \right)=\nabla \cdot \left[\gamma \left(\epsilon \nabla \phi -\phi \left(1-\phi \right)\frac { { \nabla \phi } }{ \left| { \nabla \phi } \right| } \right) \right],
\label{phitrans2}
\end{equation}
is not a gradient flow of any known energy (the right-hand side does not strictly decrease a surface energy functional). However, just like for the Cahn-Hilliard equation, the equilibrium profile of Equation \ref{phitrans2} is a hyperbolic tangent, and as a result the Ginzburg-Landau-Wilson free energy functional can be used to obtain an energy that we know is minimized at equilibrium. As a matter of fact, we have numerically found that when $\vec{u}=0$, the surface energy functional given by
\begin{equation}
SE=\int _{ \Omega }{{\rho}_{se}dV}=\int _{ \Omega }{ \frac{3\sigma}{\epsilon}\left[{ \epsilon }^{ 2 }{ \left| \nabla \phi \right| }^{ 2 }+{({\phi}^{2}-\phi)}^{2}\right]dV },
\label{SE_def_our_pf}
\end{equation}
is strictly decreased by Equation \ref{phitrans2} when applied to any hyperbolic tangent profile (of the form ${\phi}_{0}=(1+\tanh({x}_{n}/(2{\epsilon}_{0})))/2$, where ${x}_{n}$ is the coordinate normal to the interface and ${\epsilon}_{0}$ is arbitrary), as the right-hand side sharpens or diffuses the interface to its correct thickness, enforced by $\epsilon$. The potential is then
\begin{equation}
{\mu}_{se}=\frac{\delta SE}{\delta \phi}= \frac{6\sigma}{\epsilon}\left[-{ \epsilon }^{ 2 }{\nabla}^{2}\phi+\phi(\phi-1)(2\phi-1)\right].
\label{pot_def}
\end{equation}
Recall that the surface tension force can be defined as the rate of change of free surface energy with respect to interface displacement. With this definition, any rise in surface energy is balanced by a corresponding dip in kinetic energy, and vice versa. There is no true interface in phase field methods; instead, the phase field variable $\phi$ is advected in the domain. The rate at which the discrete surface energy in the domain changes, as $\phi$ and its higher derivatives are altered in response to the advection of $\phi$, yields the discrete value of the surface tension force. From Equation \ref{phitrans2}, the rate at which $\phi$ varies due to advection is ${(\partial\phi/\partial t)}_{conv}=-\nabla\cdot(\vec{u}{\phi})=-\vec{u}\cdot\nabla\phi$. The rate of change of the discrete surface energy due to phase field advection is thus $\partial {\rho}_{se}/\partial t=(\delta {\rho}_{se}/\delta \phi){(\partial\phi/\partial t)}_{conv}$, while the rate at which $\phi$ is displaced due to advection is given by $\vec{u}$. As such, the surface tension force, defined as the rate of change of surface energy per unit displacement of $\phi$, is given by
\begin{equation}
\vec{{F}_{ST}}=(\delta {\rho}_{se}/\delta \phi)\nabla\phi={\mu}_{se}\nabla\phi.
\label{ebcsf_formula}
\end{equation}
Notice that unlike \cite{Jacqmin1999}, we did not explicitly attempt to balance the increase in kinetic energy with the decrease in surface energy and vice versa. Instead, we applied the general paradigm that locally, at any point on the interface, the surface tension force is equal to the rate of energy exchange to/from surface energy per unit displacement of the interface. Consider a two-phase system with ${\rho}_{1}={\rho}_{2}$, for which Equations \ref{NS} and \ref{mom_con} are equivalent. After substituting the surface tension force ($\vec{{ F }_{ ST }}={\mu}_{se}\nabla \phi$) into either of these equations, say Equation \ref{mom_con}, summing the product of Equation \ref{phitrans2} with the potential and the product of Equation \ref{mom_con} with $\vec{u}$, and then integrating, gives the rate of change of total energy as
\begin{equation}
\frac { \partial (KE+SE) }{ \partial t } =\int _{ \Omega }{ \left\{-\mu (\nabla \vec{u}\cdot\nabla \vec{u})+{\mu}_{se}\gamma \nabla \cdot \left [\epsilon \nabla \phi -\phi (1-\phi ) \left ( \frac { { \nabla }\phi }{ \left| { \nabla }\phi \right| } \right ) \right ]\right\}dV }.
\label{total_energy_balance_di}
\end{equation}
We emphasize that at the PDE level, Equation \ref{total_energy_balance_di} implies the balance between the increase of $SE$ due to advection, ${\mu}_{se}\nabla\cdot(\vec{u}\phi)$, and the work done to oppose the surface tension force, $\vec{u}\cdot\vec{{F}_{ST}}=\vec{u}\cdot({\mu}_{se}\nabla \phi)$. We maintain this property discretely by using central differences on our staggered grid configuration. The first term on the right-hand side of Equation \ref{total_energy_balance_di} represents viscous dissipation and is always non-positive. However, as explained before, the second term, which represents the rate of change of surface energy due to the compression and diffusion terms on the right-hand side of Equation \ref{phitrans2}, is not necessarily negative. While this does not guarantee the elimination of spurious currents, it indicates that our chosen phase field method is less dissipative than Cahn-Hilliard.
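A minimal 2D sketch of Equations \ref{pot_def} and \ref{ebcsf_formula} with central differences is given below; it is written on a collocated periodic grid for clarity, whereas in the actual solver these terms are evaluated consistently on the staggered faces:
\begin{verbatim}
import numpy as np

def ebcsf_force(phi, dx, sigma, eps):
    """Energy-based surface tension force of Equation (ebcsf_formula):
    F_ST = mu_se * grad(phi), with mu_se from Equation (pot_def).
    Curvature-free; central differences, periodic, square mesh (sketch)."""
    lap = ((np.roll(phi, -1, 0) - 2 * phi + np.roll(phi, 1, 0))
           + (np.roll(phi, -1, 1) - 2 * phi + np.roll(phi, 1, 1))) / dx**2
    mu_se = (6 * sigma / eps) * (-eps**2 * lap
                                 + phi * (phi - 1) * (2 * phi - 1))
    dphix = (np.roll(phi, -1, 0) - np.roll(phi, 1, 0)) / (2 * dx)
    dphiy = (np.roll(phi, -1, 1) - np.roll(phi, 1, 1)) / (2 * dx)
    return mu_se * dphix, mu_se * dphiy
\end{verbatim}
Note that, in contrast to the CSF sketch shown earlier, no normal vector or curvature is ever computed here.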
In the following, however, we numerically show that compared to the original CSF model, the new energy-based formulation significantly reduces spurious currents while also improving the accuracy of the surface tension force calculation.
\subsubsection{Spurious Currents}
\label{sec:spur_cur}
This test case is a standard benchmark for two-phase flow solvers \citep{Williams1998,Francois2006,Herrmann2008}. A 2D drop with diameter $D=0.4$ is placed in a $1\times1$ box with slip boundary conditions and zero initial flow. The drop and surrounding fluid have equal densities and viscosities. Theoretically, no flow should be generated, and the drop and its background should remain unchanged. The dimensionless parameter characterizing this problem is the Laplace number, $La=\sigma \rho D / { \mu }^{ 2 }$. We set $La=12000$ for our comparison between the original continuum surface force (CSF) model and our energy-based continuum surface force model (Equation \ref{ebcsf_formula}), which we denote by EBCSF. Both the CSF and EBCSF models are balanced-force methods \citep{Francois2006}. As such, if the surface tension forces were computed exactly, neither would generate spurious currents, as both allow for the discrete balance between pressure gradients and surface tension forces. Therefore, any sustained flow in the numerical solution is due to errors in computing the surface tension force. We examine the magnitude of the largest velocity in the domain at non-dimensional time $t^*=\sigma t/( \mu D ) =250$, which is large enough for the surface tension and viscous forces to have physically balanced each other and for the spurious currents to be fully developed. To obtain optimal convergence using DI, we set $\epsilon={\Delta x}^{2/3}/\sqrt[3]{32}$. In Figure \ref{fig:spurious_currents_ebcsf}, the magnitude of $Ca={u}_{max}\mu/\sigma$ is plotted at various levels of refinement. It is clear that EBCSF reduces spurious currents considerably compared to CSF. Moreover, the convergence rate of EBCSF appears to be higher than that of CSF as well.
\begin{figure}
\centering
\includegraphics[width=0.75\textwidth]{files/sp_curr_h23_midterm.png}
\caption{Spurious currents for the curvature-based CSF formulation versus the new curvature-free method, which we denote as energy-based continuum surface force (EBCSF).}
\label{fig:spurious_currents_ebcsf}
\end{figure}
\subsubsection{Standing Wave}
\label{sec:standing_wave}
In this 2D test case \citep{Popinet1999,Gueyffier1999,Gerlach2006,Herrmann2008}, by examining the surface oscillations of a standing wave, one can assess the accuracy of a two-phase flow solver in capturing the interaction between inertial, capillary and viscous forces. Here, we utilize this test to verify the accuracy of EBCSF in computing surface tension forces. A single small-amplitude wave with wavelength $\lambda=2\pi$ is placed between two immiscible fluids in a $[0,2\pi]\times[0,2\pi]$ domain. The initial perturbation amplitude is ${A}_{0}=0.01\lambda$ and the mean interface height is ${y}_{0}=\pi$. The initial condition for the wave height is
\begin{equation}
{ h }_{ wave }(x,y,t=0)=y-{ y }_{ 0 }+{ A }_{ 0 }\cos(x-\Delta x/2).
\end{equation}
Boundary conditions are periodic in the $x$ direction, and slip boundary conditions are used for the top and bottom walls.
If the two fluids have equal kinematic viscosity $\nu$, the amplitude of the wave is given analytically by \cite{Prosperetti1981}:
\begin{equation}
{ A }_{ ex }=\frac { 4(1-4\beta ){ \nu }^{ 2 } }{ 8(1-4\beta ){ \nu }^{ 2 }+{ \omega }_{ 0 }^{ 2 } } { A }_{ 0 }\operatorname{erfc}(\sqrt { \nu t })+\sum _{ i=1 }^{ 4 }{ \frac { { z }_{ i } }{ { Z }_{ i } } \left(\frac { { \omega }_{ 0 }^{ 2 }{ A }_{ 0 } }{ { z }_{ i }^{ 2 }-\nu } \right)\exp[({ z }_{ i }^{ 2 }-\nu )t]\operatorname{erfc}({ z }_{ i }\sqrt { t } ) },
\label{eqn:prosperetti}
\end{equation}
where $\beta={\rho}_{1}{\rho}_{2}/{({\rho}_{1}+{\rho}_{2})}^{2}$, ${\omega}_{0}$ is the inviscid oscillation frequency, and ${z}_{i}$ are the roots of
\begin{equation}
{ z }^{ 4 }-4\beta \sqrt { \nu } { z }^{ 3 }+2(1-6\beta ){ \nu z }^{ 2 }+4(1-3\beta ){ \nu }^{ 3/2 }z+(1-4\beta ){ \nu }^{ 2 }+{ \omega }_{ 0 }^{ 2 }=0.
\end{equation}
Here, we set $\sigma=2$, ${\rho}_{1}={\rho}_{2}=1$, $\nu=0.064720863$, and the time step is $\Delta t=0.003$. At different resolutions, we measure the amplitude of the standing wave at $x=\Delta x/2$. The amplitude of the standing wave for simulations performed with EBCSF is plotted against the theoretical solution in Figure \ref{fig:ebscf_sw}. It is clear that with higher resolution, the solution converges to the theoretical prediction given by Equation \ref{eqn:prosperetti}. After computing the error with respect to the theoretical solution, we compare the root-mean-square (r.m.s.) errors of the simulation predictions using CSF and EBCSF in Figure \ref{fig:surface_wave_di}. Both surface tension computation schemes give first order convergence rates, while EBCSF proves to be more accurate than CSF.
\begin{figure}
\centering
\begin{subfigure}{0.49 \linewidth}
\includegraphics[width=0.95\textwidth]{ebcsf_amplitudes_dec_2019.png}
\caption{}
\label{fig:ebscf_sw}
\end{subfigure}
\begin{subfigure}{0.49 \linewidth}
\includegraphics[width=0.95\textwidth] {files/sw_rms_errors_h23_midterm.png}
\caption{}
\label{fig:surface_wave_di}
\end{subfigure}
\caption{Amplitude of the standing wave at $x=\Delta x/2$ using the EBCSF surface tension force method against the theoretical solution (a); r.m.s. error values for the interface location against resolution for standing wave simulations using CSF and EBCSF (b).}
\end{figure}
Lastly, using Equation \ref{mom_con}, if we combine the corrections introduced in Section \ref{sec:consistent_conservative_momentum_transport} with the energy-based surface tension force, we recover Equation \ref{total_energy_balance_di} for non-unity density ratios.
\section{Conclusions}
\label{sec:Conclusions}
In the framework of a conservative and bounded second order phase field method, we have introduced a momentum and kinetic energy conserving momentum transport model. This proposed model is consistent with the phase/mass transport equation, resulting in significant improvements in the robustness, stability and accuracy of two-phase flow simulations at high density ratios and/or high $Re$ numbers. This was confirmed via the canonical and realistic problems studied herein. Additionally, a general paradigm for the numerical evaluation of surface tension forces was presented and implemented within the adopted phase field method. This new surface tension force model does not require the computation of curvature values, and instead utilizes a discrete definition of the free surface energy. We numerically demonstrated that this model is an upgrade to the commonly used CSF model. All in all, the two main contributions of this work can be adapted to other two-phase flow models.
For phase field models, the momentum treatment can be easily adapted, allowing for robust and accurate simulations at high density ratios and turbulent conditions. Moreover, for various interface-capturing methods, after defining a discrete surface energy density, one can take advantage of the surface tension paradigm introduced here to compute surface tension forces. This is expected to improve the conservation of total energy and capillary force predictions. \begin{appendices} \section{Non-uniform Cartesian grids} \label{sec:non_uniform_mesh} In this section, we first explain how Equation \ref{phitrans} should be discretized such that the boundedness properties of the proposed second order phase field method can be extended to non-uniform grids. Then, we present the discretization strategy for Equation \ref{mom_con} that will allow for consistent and conservative momentum transport on non-uniform grids. \subsection{Phase field advection} The thickness of the interface in phase field methods is usually picked to be a globally constant value. In order to be able to resolve the interface everywhere in the domain, the thickness would then be selected based on the coarsest mesh size. This results in either thick interfaces or over-resolved regions. Here we propose an alternative strategy for Equation \ref{phitrans}, in which the interface thickness, given by $\epsilon$, varies according to the local mesh size. The space-time discretization for forward Euler time-stepping in 2-D for Equation \ref{phitrans} becomes \begin{multline} { \phi }_{ i,j }^{ k+1 }={ \phi }_{ i,j }^{ k }+\Delta t(-\frac { { u }_{ i+1/2,j }^{ k }({ \phi }_{ i+1,j }^{ k }+{ \phi }_{ i,j }^{ k })/2-{ u }_{ i-1/2,j }^{ k }({ \phi }_{ i,j }^{ k }+{ \phi }_{ i-1,j }^{ k })/2 }{ { \Delta x }_{ i,j } } -\\\frac { { v }_{ i,j+1/2 }^{ k }({ \phi }_{ i,j+1 }^{ k }+{ \phi }_{ i,j }^{ k })/2-{ v }_{ i,j-1/2 }^{ k }({ \phi }_{ i,j }^{ k }+{ \phi }_{ i,j-1 }^{ k })/2 }{ { \Delta y }_{ i,j } } +\\ \gamma \frac { { \epsilon }_{ x,i+1/2,j }({ \phi }_{ i+1,j }^{ k }-{ \phi }_{ i,j }^{ k })/{ \Delta x }_{ i+1/2,j }-{ \epsilon }_{ x,i-1/2,j }({ \phi }_{ i,j }^{ k }-{ \phi }_{ i-1,j }^{ k })/{ \Delta x }_{ i-1/2,j } }{ { \Delta x }_{ i,j } } +\\\gamma \frac { { \epsilon }_{ y,i,j+1/2 }({ \phi }_{ i,j+1 }^{ k }-{ \phi }_{ i,j }^{ k })/{ \Delta y }_{ i,j+1/2 }-{ \epsilon }_{ y,i,j-1/2 }({ \phi }_{ i,j }^{ k }-{ \phi }_{ i,j-1 }^{ k })/{ \Delta y }_{ i,j-1/2 } }{ { \Delta y }_{ i,j } } +\\ \gamma \frac { { ({ (\phi }_{ i+1,j }^{ k }) }^{ 2 }-{ \phi }_{ i+1,j }^{ k })\hat { { n }^{ k }_{ i+1,j } } -{ { ((\phi }_{ i-1,j }^{ k }) }^{ 2 }-{ \phi }_{ i-1,j }^{ k })\hat { { n }^{ k }_{ i-1,j } } }{ { 2\Delta x }_{ i,j } } +\\\gamma \frac { { ({ (\phi }_{ i,j+1 }^{ k }) }^{ 2 }-{ \phi }_{ i,j+1 }^{ k })\hat { { n }^{ k }_{ i,j+1 } } -{ { ((\phi }_{ i,j-1 }^{ k }) }^{ 2 }-{ \phi }_{ i,j-1 }^{ k })\hat { { n }^{ k }_{ i,j-1 } } }{ { 2\Delta y }_{ i,j } } ), \label{eqn:no_n_assumption_discrete} \end{multline} where ${\Delta x}_{i-1/2,j}$ is the node to node distance between node $(i,j)$ and $(i-1,j)$, computed as ${\Delta x}_{i-1/2,j}=({\Delta x}_{i,j}+{\Delta x}_{i-1,j})/2$, and ${\epsilon}_{x,i-1/2,j}$ is the interface thickness in the $x$ direction at the left face of the cell $(i,j)$. Naturally, ${\Delta y}_{i,j-1/2}$ and ${\epsilon}_{y,i,j-1/2}$ are defined in a similar manner. The $\epsilon$ vector is thus stored like the velocity vector.
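For reference, the sketch below implements one interior forward-Euler step of Equation \ref{eqn:no_n_assumption_discrete} in NumPy. It assumes per-direction cell sizes (${\Delta x}$ depending only on $i$ and ${\Delta y}$ only on $j$) and, anticipating the next paragraph, a constant face ratio $\epsilon/\Delta x = {\epsilon}_{comp}/{\Delta x}_{comp}$, which collapses the diffusive face fluxes; the array layout, helper names, and trivial boundary treatment are our own simplifications, not the solver's.
\begin{verbatim}
import numpy as np

def normals(phi):
    # unit normal from central differences on the computational (index) grid;
    # boundary rows/columns are left at zero in this sketch
    nx = np.zeros_like(phi); ny = np.zeros_like(phi)
    nx[1:-1, :] = 0.5 * (phi[2:, :] - phi[:-2, :])
    ny[:, 1:-1] = 0.5 * (phi[:, 2:] - phi[:, :-2])
    mag = np.sqrt(nx * nx + ny * ny) + 1.0e-12    # avoid division by zero
    return nx / mag, ny / mag

def phase_step(phi, u, v, dxc, dyc, gamma, eps_ratio, dt):
    # phi: (Nx,Ny) cells; u: (Nx+1,Ny) x-face, v: (Nx,Ny+1) y-face velocities
    # dxc, dyc: per-cell sizes; eps_ratio = eps_comp/dx_comp (held constant)
    dx = dxc[:, None]; dy = dyc[None, :]
    nhx, nhy = normals(phi)
    p = phi[1:-1, 1:-1]

    # convective fluxes with arithmetic face averages
    fxe = u[2:-1, 1:-1] * 0.5 * (phi[2:, 1:-1] + p)
    fxw = u[1:-2, 1:-1] * 0.5 * (p + phi[:-2, 1:-1])
    fyn = v[1:-1, 2:-1] * 0.5 * (phi[1:-1, 2:] + p)
    fys = v[1:-1, 1:-2] * 0.5 * (p + phi[1:-1, :-2])
    conv = (fxe - fxw) / dx[1:-1, :] + (fyn - fys) / dy[:, 1:-1]

    # diffusive term: eps_face / dx_face = eps_ratio collapses the flux
    diff = gamma * eps_ratio * (
        (phi[2:, 1:-1] - 2.0 * p + phi[:-2, 1:-1]) / dx[1:-1, :]
      + (phi[1:-1, 2:] - 2.0 * p + phi[1:-1, :-2]) / dy[:, 1:-1])

    # sharpening term: central differences of (phi^2 - phi) n_hat
    s = phi * phi - phi
    sharp = gamma * (
        (s[2:, 1:-1] * nhx[2:, 1:-1] - s[:-2, 1:-1] * nhx[:-2, 1:-1])
            / (2.0 * dx[1:-1, :])
      + (s[1:-1, 2:] * nhy[1:-1, 2:] - s[1:-1, :-2] * nhy[1:-1, :-2])
            / (2.0 * dy[:, 1:-1]))

    out = phi.copy()
    out[1:-1, 1:-1] = p + dt * (-conv + diff + sharp)
    return out
\end{verbatim}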
Any arbitrary non-uniform Cartesian grid in physical space can be mapped to an isotropic, uniform Cartesian grid in computational space for which ${\Delta x}_{comp}={\Delta y}_{comp}$ are global constants. Moreover, in order to have constant interface resolution in physical space, we need to have constant interface thickness in computational space. Since the thickness and shape of the interface are determined by the balance of the sharpening and diffusive fluxes on the right-hand side of Equation \ref{phitrans}, we compute these fluxes in the computational space. In that computation, ${\epsilon}_{comp}$ is a global constant. Based on Equation \ref{eqn:no_n_assumption_discrete}, this is equivalent to having set ${\epsilon}_{x,i-1/2,j}/{\Delta x}_{i-1/2,j}={\epsilon}_{y,i,j-1/2}/{\Delta y}_{i,j-1/2}={\epsilon}_{comp}/{\Delta x}_{comp}$, a constant value, for all $i$ and $j$. Additionally, the normal vector is computed via finite differences on the computational grid. With these conditions, it can then be proven that boundedness of $\phi$ is maintained as long as \begin{equation} \frac{{\epsilon}_{comp}}{{\Delta x}_{comp}}\ge\frac { { \gamma }/{\left|\vec{u}\right|}_{max}+1 }{ 2{ \gamma }/{\left|\vec{u}\right|}_{max}}, \label{criteria_nonuniform} \end{equation} and the time-step is chosen appropriately based on numerical stability, \begin{equation} \Delta t \le \left[\frac{{\left|\vec{u}\right|}_{max}/\gamma}{{\epsilon}_{comp}/{\Delta x}_{comp}}\right]\frac{\min({\Delta x}_{min},{\Delta y}_{min})}{2\left|\vec{u}\right|_{max}}, \end{equation} where $\left[({\left|\vec{u}\right|}_{max}/\gamma)/({\epsilon}_{comp}/{\Delta x}_{comp})\right]$ is a constant chosen by the user according to Equation \ref{criteria_nonuniform}. \subsection{Momentum transport} Following the work of \cite{Ham2002}, when dealing with non-uniform Cartesian grids, we use volume-weighted averaging when computing convective fluxes in Equation \ref{mom_con}. A field $\psi$ is interpolated from cell centers to an $x$-face using volume-weighted averaging as \begin{equation} {\tilde{\psi}}_{i-1/2,j}=\frac{{\psi}_{i-1,j}{\Delta x}_{i-1,j}/2+{\psi}_{i,j}{\Delta x}_{i,j}/2}{{\Delta x}_{i-1/2,j}}. \end{equation} In order to have consistent and kinetic energy conserving momentum transport, we use this type of averaging in \begin{itemize} \item interpolating $\rho$ in the temporal terms from cell centers onto cell faces in Equation \ref{mom_con_index_form} (this interpolated value is also used when solving the pressure Poisson equation), \item interpolating ${U}_{j}$ in Equation \ref{mom_con_index_form} for $j\neq i$, \item interpolating $\phi$ onto face centers when computing $\nabla\cdot(\vec{u}\phi)$. \end{itemize} \end{appendices} \section{Acknowledgements} This work was financially supported by the Office of Naval Research (Grant No. 119675) and NASA (Grant No. 127881). Mr. Pedro Milani is acknowledged for his contribution to the development of the energy-based scheme for surface tension forces. \section{References}
\section{Introduction} A continuing problem in general relativity is proving or disproving the cosmic censorship hypothesis (CCH) developed by Penrose \cite{pen69,pen98,wald08}. This hypothesis states that a physically realistic gravitational collapse must end in the formation of a black hole (BH). Penrose postulates that a naked singularity cannot realistically form because the singularity would always be hidden within the event horizon of the BH \cite{pen98}. In recent years, many proposed counterexamples to the CCH have been developed \cite{pen98, prose, wald}. These generally fall into two major categories. The first category argues that naked singularities can be the end state of gravitational collapse, thereby violating the conditions sufficient for cosmic censorship. The second category assumes the existence of naked singularities and demonstrates that their properties violate the terms of the CCH \cite{C2012}. \\ \indent A major advance in strong field lensing was the formulation of the Virbhadra-Ellis lens equation \cite{DeA, V2009}. The Virbhadra-Ellis lens equation theoretically models small and large deflections of light near a singularity, and generates image positions based on these conditions. This is important because large bending angles of light are commonly found in strong field lensing, and most equations cannot accurately model these large angles \cite{V2002}. Competing lens equations, like the one proposed by Frittelli and Newman, provide exact solutions \cite{F1999,F2000}. However, the results obtained are very similar to the results of the Virbhadra-Ellis lens equation. Therefore, the Virbhadra-Ellis lens equation is the preferred equation because it is much simpler and more commonly cited in the literature \cite{V2009}.\\ \indent There has been much research in recent years regarding gravitational lensing for various types of massive dark objects (MDOs) \cite{P1,P2,P3,P4,P5,P6,P7,P8,P9,P10,P11,P12,P13,P14}. In order to probe the nature of the supermassive MDO in the center of the Milky Way, all research regarding lensing observables is very useful. The work of Virbhadra et al. characterized 4 major types of singularities: the Schwarzschild black hole (SBH), weakly naked singularity (WNS), marginally strongly naked singularity (MSNS), and strongly naked singularity (SNS) \cite{V2002, V2008}. An SNS can be qualitatively distinguished from an SBH, WNS, and MSNS using strong field gravitational lensing. Additionally, Virbhadra and Keeton found qualitative differences amongst SNS lensing behaviors, which allowed them to describe 3 major types of SNS: Type 1 ($\nu$=0.04), Type 2 ($\nu$=0.02), and Type 3 ($\nu$=0.001) \cite{V2008}. \\ \indent SNS can be qualitatively and quantitatively differentiated. SNS with $\nu$=0.04, $\nu$=0.02 and $\nu$=0.01 give rise to four images, and double Einstein rings when $\beta$=0 \cite{DeA, V2008}. SNS with $\nu$=0.001 give rise to four images, and no Einstein rings when $\beta$=0. Recently, DeAndrea studied an SNS between Types 2 and 3 ($\nu$=0.01) and, interestingly, found negative time delay results \cite{DeA}. Negative time delay signatures are only found for SNS; only positive time delay signatures are found for SBH, WNS, and MSNS \cite{DeA, V2008}. This paper expands upon the work of Virbhadra and Keeton, and considers an SNS with $\nu$=0.01.
A model is proposed in which the Milky Way Galactic center is an SNS. A static spherically symmetric Janis-Newman-Winicour (JNW) metric is used to determine light propagation \cite{jnw68}. We find this approach appropriate given the lack of scientific evidence in support of the CCH. \\ \indent We compute the magnification centroid, magnification centroid shift, and total absolute magnification for many values of $\beta$. Our results provide a deeper understanding of the lensing characteristics of the $\nu$=0.01 SNS, in addition to the time delay signatures DeAndrea previously explored \cite{DeA}. All computations were completed using MATHEMATICA software \cite{MATH}.\\ \section{\label{sec:LE} Virbhadra-Ellis Lens Equation} The Virbhadra-Ellis gravitational lens equation \cite{V2002} is presented here: \begin{equation} \tan\beta = \tan\theta - \alpha \end{equation} with \begin{equation} \alpha \equiv \frac{D_{ds}}{D_s} [\tan\theta + \tan(\hat{\alpha} - \theta)]. \end{equation} $D_{s}$ refers to the observer-source distance, $D_{ds}$ to the lens-source distance, and $D_d$ to the observer-lens distance. $\hat{\alpha}$ refers to the light bending angle. $\theta$ and $\beta$ refer, respectively, to the angular positions of an image and of the unlensed source, measured from the optical axis. Here, the impact parameter is $J = D_d \sin\theta$. Refer to \cite{DeA} for a schematic diagram of gravitational lensing, showing all angles and distances presented in the lens equation. \section{\label{sec:DA} Deflection Angle and Impact Parameter} For a generally static and spherically symmetric spacetime, {\it Virbhadra et al.} \cite{vn98} characterized the line element as: \begin{equation} ds^2 = B(r)dt^2 - A(r)dr^2 - D(r)r^2(d\vartheta^2 + \sin^2\vartheta \, d\varphi^2). \end{equation} For a light ray with closest distance of approach $r_0$, the deflection angle $\hat{\alpha}(r_0)$ is characterized as \cite{vn98}: \begin{equation} \hat{\alpha}(r_0)=2\int_{r_0}^{\infty}\left(\frac{A(r)}{D(r)}\right)^\frac{1}{2} \left[\left(\frac{r}{r_0}\right)^2 \frac{D(r)}{D(r_0)}\frac{B(r_0)}{B(r)} - 1\right]^{-\frac{1}{2}}\frac{dr}{r} - \pi \end{equation} and the impact parameter $J(r_0)$ is characterized as \cite{vn98}: \begin{equation} J(r_0) = r_0 \sqrt{\frac{D(r_0)}{B(r_0)}}. \end{equation} \\ \indent For the Schwarzschild spacetime this takes the form \cite{VE2000}: \begin{eqnarray} \hat{\alpha}(x_0)=2\int_{x_0}^{\infty}\frac{dx}{x\sqrt{1-\frac{1}{x}}\sqrt{\left(\frac{x}{x_0}\right)^2\left(1-\frac{1}{x_0}\right)\left(1-\frac{1}{x}\right)^{-1}-1}}-\pi \label{Schwarzschild strongly naked singularity} \end{eqnarray} \\ where \cite{VE2000}: \begin{eqnarray} x = \frac{r}{2M} \quad\mbox{and}\quad x_0 = \frac{r_0}{2M}. \label{Schwarzschild strongly naked singularity Second} \end{eqnarray} \section{\label{sec:JNW} The Janis-Newman-Winicour Metric} Janis, Newman and Winicour provided a general static and spherically symmetric solution to the Einstein massless scalar field equations. For constant and real parameters of the mass ($M$) and scalar charge ($q$), the line element (see \cite{jnw68,v97}) is characterized by: \begin{eqnarray} ds^2 &=& \left(1-\frac{b}{r}\right)^{\nu} dt^2 - \left(1-\frac{b}{r}\right)^{-\nu} dr^2 \nonumber \\ &-& \left(1-\frac{b}{r}\right)^{1-\nu} r^2 \left(d\vartheta^2 +\sin^2\vartheta \ d\varphi^2\right) \label{Janis-Newman-WinicourMetric} \end{eqnarray} The massless scalar field is characterized by: \begin{equation} \Phi = \frac{q}{b\sqrt{4\pi}}\ln\left(1-\frac{b}{r}\right), \end{equation} with: \begin{equation} \nu = \frac{2M}{b}\; \mbox {and}\; b = 2\sqrt{M^2 + q^2}.
\end{equation} There is only one photon sphere, situated at the radial distance \cite{V2002,v97,vec01}: \begin{equation} r_{ps}=\frac{b(1+2\nu)}{2}. \end{equation} There is a naked singularity at $r=b$. The photon sphere exists only for $\frac{1}{2}<\nu\leq1$. Defining \cite{vjj}: \begin{equation} \rho=\frac{r}{b} \; \mbox{and} \; \rho_0=\frac{r_0}{b}, \end{equation} the deflection angle $\hat{\alpha}$ for a light ray is \cite{V2002,vn98}: \begin{widetext} \begin{equation} \hat{\alpha}(\rho_0)=2\int_{\rho_0}^{\infty}\frac{d\rho}{\rho\sqrt{1-\frac{1}{\rho}}\sqrt{(\frac{\rho}{\rho_0})^2\left(1-\frac{1}{\rho}\right)^{1-2\nu}\left(1-\frac{1}{\rho_0}\right)^{2\nu-1}-1}}-\pi. \end{equation} \vspace{2ex} \end{widetext} \section{Magnification Centroid and Total Magnification Equations} The magnification centroid of the images is defined as: \begin{equation} \hat{\Theta}=\frac{\sum_i \theta_i |\mu_i|}{\sum_i |\mu_i|}, \end{equation} where angles measured clockwise from the optic axis are positive and angles measured counterclockwise are negative. \\ \\ The magnification centroid shift is defined as: \begin{equation} \Delta\hat{\Theta}=\hat{\Theta}-\beta. \end{equation} The total magnification is defined as: \begin{equation} \mu_{tot}=\sum_i |\mu_i|. \end{equation} \section{\label{sec:CR} Computations and Results} Virbhadra and Keeton found several important trends in magnification and magnification centroid for SNS lensing. These trends were derived from extensive data on the Type 1 ($\nu$=0.04), Type 2 ($\nu$=0.02), and Type 3 ($\nu$=0.001) SNS. In the current article, we examine a $\nu$=0.01 case in order to distinguish the lensing behavior of SNS between Types 2 and 3. The supermassive center of the Milky Way galaxy is modelled as an SNS ($\nu$ = 0.01, mass $M$ = 3.61 $\times$ $10^6 M_{\odot}$, $D_d$ = 7.62 kpc, $D_{ds}$/$D_s$=$\frac{1}{2}$). \\ \indent We calculate the magnification centroid, magnification centroid shift, and total magnification using MATHEMATICA for a large number of angular source positions ($\beta$). The Galactic MDO is modelled as a JNW SNS lens, as the JNW spacetime is proven stable under scalar field perturbations \cite{Sadu}. Because an SNS has no photon sphere, no relativistic images should be produced; thus we did not attempt to study relativistic images in this paper \cite{V2002}. \\ \indent Table I displays values of the magnification centroid, magnification centroid shift, and total magnification for a large number of angular source positions. Refer to Tables 1-2 of \cite{DeA} for the quantitative values of image positions over a large range of $\beta$. Figures 1-5 are schematic diagrams of SNS gravitational lensing, visualizing image positions as $\beta$ increases. $I_{oo}$ denotes the outer image on the opposite side of the source, $I_{io}$ the inner image on the opposite side of the source, $I_{is}$ the inner image on the same side as the source, and $I_{os}$ the outer image on the same side as the source. Figure 1 shows SNS gravitational lensing at $\beta$=0, where a double Einstein ring is present. The schematic diagrams in Figures 2 and 3 show image positions as $\beta$ increases. Note that as $\beta$ increases, $I_{oo}$ and $I_{io}$ come closer together, while $I_{is}$ and $I_{os}$ separate. Figure 3 represents the point where the light ray of $I_{is}$ inverts its deflection. Figure 4 shows a radial caustic, where $I_{io}$ and $I_{oo}$ are located at the same position and become highly magnified.
As $\beta$ further increases, $I_{io}$ and $I_{oo}$ eventually disappear, as shown in Figure 5. The curves in Figures 6-8 are all qualitatively similar to those for black holes, but quantitatively different. The quantitative differences can be used to classify the type of MDO lens.\\ \indent The magnification centroid for given values of $\beta$ follows a roughly linear trend, with slight deviations initially observed for small values of $\beta$. The magnification centroid shift is zero when $\beta$=0, where two Einstein rings can be observed. The magnification centroid shift increases to a maximum value as $\beta$ approaches 2 arcseconds, then decreases toward a limiting value of 0. The total magnification is very large initially, and as $\beta$ increases the total magnification decreases to a limit of 1.\\ \begin{center} \includegraphics[width=3in,height=3.37in]{FIGURE1.png} \DeclareGraphicsExtensions{.png} \captionof{figure}{SNS gravitational lensing at $\beta$=0, showing a double Einstein ring} \label{FIGURE1} \end{center} \begin{center} \includegraphics[width=3in,height=3.68in]{FIGURE2.png} \DeclareGraphicsExtensions{.png} \captionof{figure}{Schematic diagram of gravitational lensing by an SNS} \label{FIGURE2} \end{center} \begin{center} \includegraphics[width=3in,height=3.68in]{FIGURE3.png} \DeclareGraphicsExtensions{.png} \captionof{figure}{Schematic diagram of gravitational lensing by an SNS} \label{FIGURE3} \end{center} \begin{center} \includegraphics[width=3in,height=3.68in]{FIGURE4.png} \DeclareGraphicsExtensions{.png} \captionof{figure}{Schematic diagram of gravitational lensing by an SNS, where the secondary images coincide} \label{FIGURE4} \vspace{15ex} \end{center} \begin{center} \includegraphics[width=3in,height=3.68in]{FIGURE5.png} \DeclareGraphicsExtensions{.png} \captionof{figure}{Schematic diagram of gravitational lensing by an SNS, where both secondary images are annihilated} \label{FIGURE5} \end{center} \begingroup \squeezetable \begin{table*} \caption{\label{tab:Table1} We model the Galactic supermassive center as a strongly naked singularity ($\nu$ = 0.01, mass $M$ = 3.61 $\times$ $10^6 M_{\odot}$, $D_d$ = 7.62 kpc, $D_{ds}$/$D_s$=$\frac{1}{2}$) and, using MATHEMATICA, compute the magnification centroid, magnification centroid shift, and total magnification for a large number of angular source positions ($\beta$).} \begin{ruledtabular} \begin{tabular}{l|lllllll} \multicolumn{1}{c|}{ $\beta$ (arcsec)}& \multicolumn{6}{c}{Magnification centroid, centroid shift, and total magnification}\\ & $\hat{\Theta}$ (arcsec) & $\Delta\hat{\Theta}$ (arcsec) & $\mu_{tot}$ \\ \hline \hline\noalign{\smallskip} $10^{-6} $&$1.5 \times 10^{-6} $&$5.00 \times 10^{-7} $&$ 1.3882 \times 10^{6} $ \\ $10^{-5} $&$1.5 \times 10^{-5} $&$5.00 \times 10^{-6} $&$ 1.3882 \times 10^{5} $ \\ $10^{-4} $&$1.5 \times 10^{-4} $&$5.00 \times 10^{-5} $&$ 1.3882 \times 10^{4} $ \\ $10^{-3} $&$1.5 \times 10^{-3} $&$5.00 \times 10^{-4} $&$ 1.3882 \times 10^{3}$ \\ $10^{-2} $&$1.5 \times 10^{-2} $&$5.00 \times 10^{-3} $&$ 1.3882 \times 10^{2}$ \\ $10^{-1} $&$1.5 \times 10^{-1} $&$4.99 \times 10^{-2} $&$ 1.3909 \times 10^{1} $ \\ $1.0 $&$1.40 $&$0.397 $&$ 1.6449$ \\ $2.0 $&$2.49 $&$0.491 $&$ 1.1477 $ \\ $3.0 $&$3.45 $&$0.450 $&$ 1.0482$\\ $4.0 $&$4.39 $&$0.388 $&$ 1.0194$ \\ $5.0 $&$5.33 $&$0.334 $&$ 1.0090 $ \\ $6.0 $&$6.29 $&$0.290 $&$ 1.0047$ \\ $7.0 $&$7.26 $&$0.255 $&$ 1.0027$ \\ $8.0 $&$8.23 $&$0.227 $&$1.0016 $ \\ $9.0 $&$9.20 $&$0.204 $&$1.0010$ \\ $10.0 $&$10.2 $&$0.186 $&$1.0007 $ \\ \end{tabular} \end{ruledtabular} \vspace{10ex} \end{table*} \endgroup \begin{center}
\includegraphics[width=3in,height=2.05in]{FIGURE6.png} \DeclareGraphicsExtensions{.png} \captionof{figure}{The magnification centroid $\hat{\Theta}$ plotted against angular source position $\beta$ for $\nu$ = 0.01; $D_{ds}/D_s$=$\frac{1}{2}$} \label{Fig1} \end{center} \begin{center} \includegraphics[width=3in,height=2.05in]{FIGURE7.png} \DeclareGraphicsExtensions{.png} \captionof{figure}{The magnification centroid shift $\Delta\hat{\Theta}$ plotted against angular source position $\beta$ for $\nu$ = 0.01; $D_{ds}/D_s$=$\frac{1}{2}$} \label{Fig2} \end{center} \begin{center} \includegraphics[width=3in,height=2.05in]{FIGURE8.png} \DeclareGraphicsExtensions{.png} \captionof{figure}{The total magnification $\mu_{tot}$ plotted against angular source position $\beta$ for $\nu$ = 0.01; $D_{ds}/D_s$=$\frac{1}{2}$} \label{Fig3} \end{center} \section{Summary and Discussion} We have modelled the magnification centroid, magnification centroid shift, and total absolute magnification for a strongly naked singularity with $\nu$=0.01. When images are not resolved, the total magnification and magnification centroid contribute greatly to our study of lensing. These results demonstrate that an SNS with $\nu$=0.01 has quantitatively different values for all three parameters compared to SNS with other values of $\nu$. Therefore, the quantitative values of the magnification centroid, magnification centroid shift, and total absolute magnification depend on the value of $\nu$ even among SNS \cite{V2008}. Compared to the SBH, the magnification centroid is quantitatively lower and the total magnification quantitatively higher for a constant value of $\beta$ \cite{V2002, V2008}. This is due to the higher value of the scalar charge to mass ratio. \\ \indent However, our results for the $\nu$=0.01 SNS show qualitatively similar trends to the SNS, SBH, WNS, and MSNS trends previously described by Virbhadra and Keeton \cite{V2008}. Despite the differences in $\nu$ value, the four types of singularities show similar trends. It is found that as $\beta$ increases, the magnification centroid increases as long as $\nu$ is constant. In addition, as $\nu$ decreases, the magnification centroid decreases for a constant value of $\beta$. \\ \indent The magnification centroid shift, which is mathematically derived from the magnification centroid, also shows similar trends with $\beta$ for SNS, SBH, WNS, and MSNS. The magnification centroid shift rapidly increases with $\beta$, attains a peak value, and then slowly declines to 0. Finally, when $\nu$ is constant, the total magnification has a high value for very small values of $\beta$. This value of the total magnification rapidly decreases as $\beta$ increases. When $\beta$ is constant, as $\nu$ increases, the total magnification also increases. This increase in total magnification is likely due to the scalar charge. \\ \indent The values for the magnification centroid, magnification centroid shift, and total absolute magnification add to the current knowledge of SNS with $\nu$=0.01. Previously, DeAndrea explored time delays for SNS lensing with $\nu$=0.01. It was found that for the $\nu$=0.01 case, unlike the types of SNS studied by Virbhadra and Keeton, all four images have negative time delays even for $\beta$=0. The primary image has a negative time delay for any value of $\beta$, and the time delays of the other three images remain negative until very large values of $\beta$ \cite{DeA}. This was not found in the results of Virbhadra and Keeton \cite{V2008}.
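The deflection integral for the JNW metric given above can be evaluated numerically. The sketch below is a minimal SciPy-based illustration, not the MATHEMATICA code actually used in this work; it removes the inverse-square-root singularity of the integrand at the turning point with the substitution $\rho=\rho_0+t^2$, and it assumes $\rho_0>1$, i.e., a closest approach outside the naked singularity.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

nu = 0.01  # JNW parameter nu = 2M/b; no photon sphere for nu < 1/2

def integrand(rho, rho0):
    # integrand of the deflection integral, with rho = r/b
    fac = 1.0 - 1.0 / rho
    bracket = (rho / rho0)**2 * fac**(1.0 - 2.0 * nu) \
              * (1.0 - 1.0 / rho0)**(2.0 * nu - 1.0) - 1.0
    return 1.0 / (rho * np.sqrt(fac) * np.sqrt(bracket))

def alpha_hat(rho0):
    # near rho0 the integrand diverges like (rho - rho0)^(-1/2);
    # rho = rho0 + t**2 (so d rho = 2 t dt) makes the integrand regular
    near, _ = quad(lambda t: 2.0 * t * integrand(rho0 + t * t, rho0),
                   0.0, np.sqrt(rho0), limit=200)
    far, _ = quad(lambda rho: integrand(rho, rho0),
                  2.0 * rho0, np.inf, limit=200)
    return 2.0 * (near + far) - np.pi   # deflection angle in radians
\end{verbatim}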
\\ \indent Observations matching our results would disprove the Cosmic Censorship Hypothesis and would pave the way toward developing a quantum theory of gravity, as very strong gravitational fields would become accessible to observation. Next-generation imaging technology could attain the goal of experimentally imaging a black hole \cite{K2005}. The NASA MAXIM mission plans to create an X-ray interferometer with 100 nanoarcsecond resolution \cite{NASA}. Since imaging a black hole is attainable at 300 nanoarcsecond resolution, this project makes it more of a reality. To minimize demagnification, the source, lens, and observer must be highly aligned. When making observations of the universe at great distances, uncertainty will be present due to cosmic variance \cite{B2000}. Since we considered the Janis-Newman-Winicour spacetime with a scalar field, we must expect effects on lensing due to dark energy \cite{H2013}. Thus, these results could potentially be useful for the study of dark energy lensing. The absorption and scattering of electromagnetic radiation near the Galactic center causes additional difficulties. \section{Acknowledgements} We thank our mentor K. S. Virbhadra for the time he took to contribute to our learning. His enthusiasm for this field motivated us to reach new levels of understanding in astrophysics. All in all, we appreciate everything that he has done for us over the last several years.
\section{Introduction} FU Ori stars are young stellar objects (YSOs) that are currently in outburst, driven by enhanced accretion from a protoplanetary disk onto a central young star \citep{hartmann1996}. While photometric variability is a general empirical feature of YSOs, with much of that variability driven by accretion-related phenomena occurring near the star-disk magnetospheric region, the accretion bursts and large-amplitude outbursts of some YSOs put them in a distinct category. Currently there are several poorly defined sub-categories of burst and outburst behavior. Discrete brightening events can last from days to weeks (the bursts), to months and decades, with generally larger amplitudes (outbursts) for the longer duration events. The most extreme outbursts, known as FU Ori objects, are commonly interpreted as the result of disk instabilities driving enhanced accretion and leading to crushing of the magnetospheric region that otherwise channels the accretion. The innermost disk geometry likely transitions during an outburst into a more standard inner accretion disk, with a classical boundary layer between the disk and the star, as in CVs and other compact object accretors (though no such boundary layer has been observed in any FU Ori star). Several of the most famous FU Ori objects are optically visible (FU Ori, V1057 Cyg, V1515 Cyg) and the observational properties of these early prototypes have long served, over the past five decades, to guide the definition and interpretation of the FU Ori class. A salient characteristic is detection of a large-amplitude (4-6 mag) optical outburst, with a rise time of months to years. The FU Ori class later expanded to also include sources in which a substantial near-infrared (rather than optical) brightening was detected. Additional candidates are those in which no outburst was documented through observation, but the source presents a spectrum that is ``FU Ori-like''. At least half to two-thirds of the presently claimed FU Ori population is embedded, with $A_V > 5$ mag \citep{connelley2018}. \section{Identification of FU Ori Outbursts in Photometric Surveys} In recent years, optical time domain surveys (e.g. PTF/ZTF, ASAS-SN, Gaia, ATLAS) have continued to detect large-amplitude, long timescale photometric events that can be spectroscopically followed up and confirmed as analogs of the traditionally defined FU Ori sources. Examples over the past decade include V2493 Cyg (HBC 722), V960 Mon, Gaia 17bpi, and Gaia 18dvy. At the same time, long-term near-infrared (e.g. VVV) and mid-infrared (e.g. NEOWISE) time domain survey data have become available, leading to a plethora of outburst candidates. Only some of these candidates are amenable to the spectroscopic follow-up that would support an FU Ori classification, however. And true confirmation would require multi-wavelength spectroscopy in order to detect the temperature and velocity gradients expected from an accretion disk-dominated system. In many of these cases of large or moderate-amplitude infrared brightenings, only a $K$-band spectrum, or perhaps even just a lightcurve, is available. It thus becomes important to understand if there are photometric criteria that can be used to discriminate between FU Ori and other types of young star variable phenomena (as well as non-YSO contaminants). The mid-infrared expectations are particularly important to establish, as the threshold for declaring an outburst event between e.g.
Spitzer and WISE observations separated by up to 5-7 years, or in NEOWISE data that now spans 7 years, is only 1.5 to 2 mag \citep{fischer19,park21}. This is a mere factor of $\sim 5$ brightness increase, rather than the standard factor of $\sim$100 that is typically sought in optical searches. In a disk outburst scenario, luminosity increases are a straightforward consequence of increases in the disk accretion rate, perhaps combined with a change in the flow geometry from being channeled along magnetic fields to equatorial. For large enough outburst accretion rates, the entire ultraviolet, optical, near-infrared, and potentially mid-infrared (depending on the radial extent of the outbursting region) wavelength range should be disk-dominated. Far-infrared and millimeter wavelengths are sensitive to reprocessed emission from the high-accretion zone. The experiment we perform here is to contrast hypothetical low-state accretion disks with a stereotypical high-state disk, and quantify expected outburst amplitudes as a function of wavelength. \section{Predicted Outburst Amplitudes} Our toy model consists, in the low state, of: \begin{itemize} \item A progenitor low-mass pre-main sequence star, with assumed values of $T_{eff,*} =3800$ K and $R_* = 1.5 ~R_\odot$ for its temperature and radius. The SED is specified by a NextGen photosphere having luminosity $L_* = 4\pi R_*^2 \sigma T_{eff,*}^4$; \item A dust disk with inner radius corresponding to an assumed dust destruction temperature of $T_{max,\ dust} = 1400$ K. The SED is modelled under the optically thick assumption, as a summation of blackbodies from the different radial annuli. The passive disk luminosity is $L_{dust}= 0.25 L_*$; \item A gaseous accretion disk with inner radius fixed at $2 ~R_\odot$. Although the magnetospherically defined inner truncation radius would vary as a function of accretion rate, we keep this value constant for simplicity. The total accretion luminosity is given by $L_{gas,\ acc} = G M_* \dot M / R_*$, with a fraction up to 1/2 of this radiated by the disk, depending on the accretion flow geometry. \end{itemize} Our model in the high state consists of: \begin{itemize} \item A pure accretion disk as presented in \cite{rodriguez2021}. The fiducial case here is a disk with maximum temperature $T_{gas,\ max} = 7000$ K and accretion rate of $\dot M = 10^{-5} ~M_\odot$/yr. As above, 1/2 of the accretion luminosity is assumed to be radiated by the disk\footnote{The remainder is probably released in an extended ``boundary region'' which differs from a radially thin classical boundary layer \citep{popham1996}.}. \end{itemize} Contributions to the pre-outburst total source luminosity are $L_* = 0.42 ~L_\odot$, $L_{dust} = 0.10 ~L_\odot$, and $L_{gas,\ acc} = 0.046, 0.46, 4.6 ~L_\odot$ for corresponding accretion rates of $\dot M = 10^{-8}, 10^{-7}, \textrm{and\ } 10^{-6} ~M_\odot$/yr, respectively. The post-outburst disk has $L_{gas,\ acc} = 40 ~L_\odot$, which is 100 times $L_*$. In Figure~\ref{ampwave} we show the pre-outburst and post-outburst disk model spectral energy distributions, along with the predicted amplitudes of the outbursts. As expected given the inner disk heating, the source brightening has larger amplitude towards the blue optical and ultraviolet. The detailed trend with wavelength, from the red optical -- where most outbursts have been discovered -- to the mid-infrared -- where the most comprehensive and uniform data set exists in NEOWISE -- strongly depends on the low-state $\dot M$.
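The wavelength-dependent amplitudes can be reproduced schematically with a few lines of code. The sketch below, in NumPy with cgs units, compares two optically thick flat disks with $T(r)\propto r^{-3/4}$; the inner and outer radii and the low-state inner temperature are purely illustrative choices of ours, and the stellar photosphere and passive dust disk of the full model are omitted.
\begin{verbatim}
import numpy as np

# cgs constants
h, c, kB = 6.626e-27, 2.998e10, 1.381e-16
Rsun = 6.957e10

def planck_lam(lam, T):
    # Planck spectral radiance B_lambda
    return 2.0 * h * c**2 / lam**5 / np.expm1(h * c / (lam * kB * T))

def disk_flux(lam, r_in, r_out, T_in, n=2000):
    # optically thick disk: sum of annular blackbodies,
    # with T(r) = T_in (r/r_in)^(-3/4)
    r = np.geomspace(r_in, r_out, n)
    T = T_in * (r / r_in)**(-0.75)
    area = 2.0 * np.pi * r * np.gradient(r)      # annulus areas
    return (planck_lam(lam[:, None], T[None, :]) * area[None, :]).sum(axis=1)

lam = np.geomspace(3.0e-5, 5.0e-4, 200)                 # 0.3--5 micron, in cm
low = disk_flux(lam, 2.0 * Rsun, 50.0 * Rsun, 2000.0)   # illustrative low state
high = disk_flux(lam, 2.0 * Rsun, 50.0 * Rsun, 7000.0)  # outburst, T_max = 7000 K
amplitude = 2.5 * np.log10(high / low)                  # brightening in mag
\end{verbatim}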
For our fiducial star, the amplitudes are relatively flat with wavelength in the lowest $\dot M$ case, typical of low-mass T Tauri stars in the Class II phase. The amplitudes steepen towards the highest $\dot M$ case, which is more characteristic of the Class I phase. The outburst amplitude behavior also depends on the properties of the underlying central star, primarily its temperature, and on the detailed temperature structure in the disk. We have adopted the classical $T(r)\propto r^{-3/4}$ distribution of a flat disk, whereas a flatter profile, such as that for a flared disk, would reduce the contrast, most significantly at the longer wavelengths $>$3 $\mu$m. \section{Summary and Implications} We have provided a simple guide to expected outburst amplitudes as a function of wavelength for episodically occurring accretion-state transitions in YSOs, from T Tauri type to FU Ori-like disk accretion. Consistent with traditionally quoted values from lightcurves, the models show 4-6 mag blue-optical outburst amplitudes and 1.5-4 mag mid-infrared amplitudes. Observation of such wavelength-dependent trends can help confirm FU Ori events in the absence of more secure diagnostics like high dispersion spectroscopy. \begin{figure} \centering \includegraphics[scale=0.32]{comb_sed_8.png}\includegraphics[scale=0.32]{comb_sed_7.png}\includegraphics[scale=0.32]{comb_sed_6.png} \caption{{\bf Top panels:} Bright-state spectral energy distribution for an accretion disk-dominated FU Ori type system, accreting at $\dot M = 10^{-5} ~M_\odot$/yr (magenta), compared to composite spectral energy distributions (green) for passive $+$ accretion disk systems having low-state accretion rates of $\dot M = 10^{-8}, 10^{-7}, 10^{-6} ~M_\odot$/yr. In each panel, the same underlying stellar photosphere (navy blue), and passive dust reprocessing disk (cyan) are shown, along with the active gaseous accretion disk (orange); the assumed distance is $d=800$ pc. {\bf Bottom panels:} Expected wavelength-dependent amplitude of an accretion outburst, calculated simply as the flux ratio of the magenta and the green curves, converted to magnitudes. As the low-state accretion rate increases, the outburst amplitudes decrease, and the wavelength dependence of the brightening steepens. } \label{ampwave} \end{figure} \begin{acknowledgements} We thank Lee Hartmann and Will Fischer for giving this short contribution a once-over. \end{acknowledgements} \newpage
\section{Introduction} The present paper deals with the question of whether (supervised) learning by means of neural networks based on the usual activation functions logistic, tanh and sinusoid has limited applications or is a universal tool. For this purpose we introduce an idealization of the notion of learning algorithm for multilayer neural networks. This idealization is inspired by the backpropagation procedure and is called a perfect learning algorithm. It relies on a specification $\Pi$, also called perfect, which assigns to each training data set a parameter vector which constitutes a global minimizer of the quadratic error function involved, if the error reaches an exact minimum. A perfect learning algorithm has a perfect specification $\Pi$ and assigns to each training data set a numerical representation of a parameter vector which satisfies the specification $\Pi$. The existing versions of backpropagation are then interpreted as attempts to satisfy the requirement of perfectness of learning algorithms. Therefore it depends on our definition of algorithm whether we are able to affirm that perfect learning algorithms really exist. We may also simply ask whether there exist continuously differentiable perfect learning algorithms, i.e., ones whose specification $\Pi$ is continuously differentiable. The aim of this paper is to give a negative answer to this question in the case of differentiable perfect learning algorithms, provided the length of the training data set exceeds the number of involved parameters and the activation functions are logistic, tanh or sin. \vspace{0.5cm} Automatic learning is in fact not a modern concept; it has a long history. In particular, ancient Greek astronomy was marked by a deep epistemological discussion about the scope of automatic learning techniques. Since the beginnings of ancient Greek astronomy in the 4th century BC with Eudoxus of Cnidus and Callippus of Cyzicus, there was a tacit assumption that the explanation and prediction of the motions of heavenly bodies and the description of their nature must rely on geometric models. This leads to the question of the principles (and their foundations) which should be satisfied in advance by the geometric model used for the description of the celestial phenomena. Nevertheless, answering this question was considered a subject of physics, whereas genuine astronomy became restricted to the exact description of the orbits of the heavenly bodies. Only in a final stage of this reasoning were the parameters of the geometric model under consideration adjusted by the astronomer in order to make the model predictive. This had to be done primarily in order ``to save the appearances'' (\foreignlanguage{greek}{σῴζειν τὰ φαινόμενα}), and only secondarily with the aim of justifying the particular geometric model. This modus operandi already anticipated the modern concept of automatic learning. Nevertheless, let us observe that this exposition of the interaction between distinct epistemological concepts --well suited for our historical view of automatic learning-- did not remain undisputed among specialists of ancient Greek astronomy. Anyway, the restrictions introduced by physics on the geometric modelling of astronomy (mainly under the influence of Plato and Aristotle in the 5th and 4th centuries BC) led to a long--lasting effort with successively changing geometric models in order to improve their explanatory and predictive power.
The geometric models inspired by the physics of Plato and Aristotle were geocentric and divided the cosmos into two regions: the motionless spherical earth and a heavenly region surrounding it, containing multiple spheres rotating at different speeds around distinct axes. In this sense, Eudoxus of Cnidus, motivated by Plato's requirement that the planetary orbits should be decomposable into uniform circular motions, presupposed for his geometric models that the heavenly bodies each move with their own constant angular speed on concentric circles around the motionless center of the earth. Each of these bodies thus moved on the equator of the corresponding sphere. In the sequel, the development of ancient Greek astronomy was characterized by the introduction of a series of new concepts and views into the geometric modelling of the motions of heavenly bodies in order to improve the explanatory and predictive power for observable celestial phenomena. Simultaneously, the requirement was maintained that the physical principle of the uniform circular motions of the heavenly bodies should be preserved. This was achieved by admitting uniform epicyclic and uniform excentric circular motions (on a circle, called the deferent, with center distant from the earth). This eventful development of geocentric astronomy converged finally, in the 2nd century AD, to the mathematical and astronomical treatise of Claudius Ptolemy on the apparent motions of the stars and planetary paths, today remembered as ``Almagest'' (from its Arabic title), whereas the original title was ``\foreignlanguage{greek}{Μαθηματικὴ Σύνταξις}''. In order to achieve the goal of accounting for the observed motions of planets while conserving the principle of uniform circular movements, Ptolemy introduced a final mathematical tool, the so-called equant point, with respect to which the epicycle under consideration moves with constant angular speed along the eccentric deferent. Ptolemy's work strongly influenced, from the Abbasid period onward, the astronomy first of the Muslim and then of the medieval Christian world. Even Copernicus, who popularized in the 16th century AD the heliocentric point of view for the planetary system, still maintained one of the standard beliefs of his time, namely that the motions of celestial bodies must be decomposable into uniform circular movements. This point of view obliged him to retain for his orbit calculations a complex system of epicycles like in the Ptolemaic system. In this context let us remark that the geocentric point of view was dominant in ancient Greek astronomy but not exclusive. In the 3rd century BC Aristarchus of Samos proposed a heliocentric geometric model for astronomical tasks which allowed him to estimate, in terms of earth radii, the sizes of the sun and the moon as well as their distances from the earth. However, these estimations later turned out to be far below the real ones. \vspace{0.5cm} As in the case of automatic learning by neural networks, all these astronomical calculations produce only approximative results. Nevertheless, one may ask whether the intricate orbits of all heavenly bodies really may be arbitrarily well approximated in this simple geometric way, piling up, if necessary, sufficiently many epicycles and epicycles of epicycles, etc. An analogous question may be asked also for automatic learning by neural networks with given activation functions. Certain answers to this question are the subject of the so-called universal approximation theorems.
For example, an early one of these theorems says that standard multilayer feedforward networks with as few as one hidden layer and arbitrary activation functions are capable of approximating, up to any desired degree of accuracy, any Borel measurable function from one finite dimensional space to another, provided sufficiently many hidden units are available. In this sense, the multilayer feedforward networks form a class of universal approximators \cite{HSW89}. Turning back to the first question about the motions of heavenly bodies from the geocentric point of view, an answer was given in \cite{S25} by the Italian astronomer Giovanni Schiaparelli (1835--1910). In the case of a single heavenly body moving in the plane around the origin, the piling up of an arbitrary finite number of epicycles, each moving with constant angular velocity, gives rise to a finite sum of trigonometric functions, which represents a kind of generalized Fourier polynomial. The polynomials obtained in this way approximate, under a suitable seminorm, arbitrarily well any Besicovitch almost periodic function. \vspace{0.5cm} For details about the origins, development and subsequent influence of ancient Greek astronomy we refer to \cite{D08} and \cite{D13}. \newpage \section{An unfeasibility result} Let $m,n\in\mathbb N$, let $X_l$, $V_k$, $S$, $W_l^k$, $T_k$, $1 \leq k \leq m$, $1 \leq l \leq n$ be indeterminates, and write $X:=(X_1,\dots,X_n)$, $V:=(V_1,\dots,V_m)$, $W:=(W_l^k)_{ \substack{1 \leq k \leq m \\ 1 \leq l \leq n}}$, $T:=(T_1,\dots,T_m)$. Let $f$ and $g_1,\dots,g_m$ be suitably chosen activation functions defined on $\R$ and let $g:\R^m \to \R^m$ be the map defined by $g(u_1,\dots,u_m):=(g_1(u_1),\dots,g_m(u_m))$ for $(u_1,\dots,u_m)\in\R^m$. From now on we shall deal only with three-layer neural networks with inputs $X_1,\dots,X_n$, one single output, $m$ neurons on the hidden layer, $m(n+1)$ weights, $m+1$ thresholds and activation functions $f$ and $g_1,\dots,g_m$. Formally we can describe the architecture of these networks by \begin{align*} O_{V,S,W,T}(X) := & f(S+V \cdot g(T + W \cdot X)) \\ = & f(S+\sum_{1\leq k \leq m} V_k g_k (T_k + \sum_{1\leq l \leq n} W^k_l X_l)), \end{align*} where the dot refers to the inner vector and matrix--vector products. Let $p\in\mathbb N$ and \[ \mathcal{U}_p:=\{ ((\gamma_1, \zeta_1),\dots,(\gamma_p, \zeta_p)) \in\R^{p\times (n+1)} : \gamma_1,\dots,\gamma_p \text{\ all\ distinct } \}. \] The elements of $\mathcal{U}_p$ constitute the training data sets of length $p$ which we are going to consider. Let $(\gamma_1,\dots,\gamma_p)\in \R^{p\times n}$ be a sequence of $p$ distinct points of $\R^n$. For each parameter vector $(v,s,w,t)\in\R^m\times\R\times\R^{m\times n}\times \R^m = \R^{m(n+2)+1}$ the architecture $O_{V,S,W,T}(X)$ produces a neural network \[ o_{v,s,w,t}(X):= f(s+v\cdot g(t+w\cdot X)) \] which can be evaluated at $\gamma_1,\dots,\gamma_p$, returning the vector \[ ((\gamma_1, o_{v,s,w,t}(\gamma_1)),\dots,(\gamma_p, o_{v,s,w,t}(\gamma_p))) \in\mathcal{U}_p. \] The task of a perfect learning algorithm is to find, if possible, for any training data set $((\gamma_1, \zeta_1),\dots,(\gamma_p, \zeta_p))\in\mathcal{U}_p $ a parameter vector $(v,s,w,t)\in \R^{m(n+2)+1}$ such that the quadratic error function \[ E(v,s,w,t):=\sum_{1\leq i \leq p} (\zeta_i - o_{v,s,w,t} (\gamma_i))^2 \] reaches a global minimum exactly. If such a minimum does not exist, no condition is imposed on $(v,s,w,t)$.
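For concreteness, the sketch below expresses the architecture $O_{V,S,W,T}$ and the quadratic error $E$ just defined in NumPy; the logistic choice for $f$ and for every $g_k$ is one admissible instantiation, and all helper names are ours.
\begin{verbatim}
import numpy as np

def sigmoid(z):
    # the logistic activation; it satisfies g' = g(1 - g)
    return 1.0 / (1.0 + np.exp(-z))

def output(x, v, s, w, t, f=sigmoid, g=sigmoid):
    # o_{v,s,w,t}(x) = f(s + v . g(t + w x)); w: (m,n), v, t: (m,), s: scalar
    return f(s + v @ g(t + w @ x))

def quadratic_error(v, s, w, t, gammas, zetas):
    # E(v,s,w,t) over the training data ((gamma_1, zeta_1), ..., (gamma_p, zeta_p))
    preds = np.array([output(gamma, v, s, w, t) for gamma in gammas])
    return np.sum((zetas - preds) ** 2)
\end{verbatim}
A perfect learning algorithm would, whenever $E$ attains its infimum, return parameters realizing it exactly; gradient-based backpropagation merely descends towards a possibly local minimum of this function.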
Thus we may specify a perfect learning algorithm by a map $\Pi:\mathcal{U}_p\to\R^{m(n+2)+1}$ which assigns to each training data set an exact global minimizer of the error function, if such a minimizer exists. We shall call the perfect algorithm continuously differentiable if its specification $\Pi$ is. In the sequel let $A_1,\dots,A_p$ and $B_1,\dots,B_p$ be new indeterminates. With these notations we may formulate the following result. \begin{lemma} \label{lemma perfect algorithm} Let $f$ and $g_1,\dots,g_m$ be continuously differentiable with $f'(0)\neq 0$ and let $O_{V,S,W,T}(X)=f(S+V\cdot g(T+W\cdot X))$ be the neural network architecture considered before. Suppose that the generic determinant $\text{det}(g_1(A_i B_j))_{1\leq i,j \leq p}$ does not vanish identically. Then for $p> m(n+2)+1$ there does not exist a continuously differentiable perfect learning algorithm satisfying the specification $\Pi:\mathcal{U}_p \to \R^{m(n+2)+1}$ above. \end{lemma} \begin{proof} By assumption we have $\text{det}(g_1(A_i B_j))_{1\leq i,j \leq p} \neq 0$. One concludes easily that we may choose $\rho_1,\dots,\rho_p,\gamma_1,\dots,\gamma_p\in\R^n$ with $\gamma_1,\dots,\gamma_p$ all distinct such that $\text{det}(g_1(\rho_i \cdot \gamma_j))_{1\leq i,j \leq p}\neq 0$ holds. Let $\theta:\R^{m(n+2)+1}\to\R^p$ be the map which assigns to each parameter vector $(v,s,w,t)\in \R^{m(n+2)+1}$ the image $\theta(v,s,w,t):= (o_{v,s,w,t}(\gamma_1),\dots,o_{v,s,w,t}(\gamma_p))$, and observe that $\theta$ is continuously differentiable. Suppose that the statement of the lemma is wrong. Then there exist nonnegative integer parameters $p$, $m$, $n$ with $p>m(n+2)+1$ and a continuously differentiable perfect learning algorithm satisfying the specification above. Let $\pi:\R^p\to\R^{m(n+2)+1}$ be the map which assigns to $\zeta=(\zeta_1,\dots,\zeta_p)\in\R^p$ the image $\pi(\zeta):= \Pi((\gamma_1,\zeta_1),\dots,(\gamma_p,\zeta_p))$. The specification $\Pi$ is continuously differentiable, and thus the same holds for $\pi$. Since $\Pi$ specifies a perfect learning algorithm, we see that for each parameter vector $(v,s,w,t)\in \R^{m(n+2)+1}$ the parameter vector $(\pi\circ\theta)(v,s,w,t)$ minimizes the quadratic error function \[ E(v',s',w',t'):=\sum_{1\leq i \leq p} (o_{v,s,w,t}(\gamma_i) - o_{v',s',w',t'} (\gamma_i))^2 \] for $(v',s',w',t')\in \R^{m(n+2)+1}$. This minimum is zero. Thus we have $o_{v,s,w,t}(\gamma_i) = o_{(\pi\circ\theta)(v,s,w,t)}(\gamma_i)$ for $1\leq i \leq p$, and therefore the identity \begin{equation} \label{theta} \theta\circ\pi\circ\theta=\theta \end{equation} holds. For $1\leq i \leq p$ let $v_i:= (V_1,0,\dots,0)$, $s_i:=0$, let $w_i\in\R^{m\times n}$ be the matrix which contains $\rho_i$ as its first row and $0$ elsewhere, and let $t_i:=(0,\dots,0)$. Let $\beta_i(V_1):= (v_i,s_i,w_i,t_i)$. The corresponding function $\beta_i:\R\to\R^{m(n+2)+1}$ is continuously differentiable. We have \begin{align*} (\theta\circ\beta_i)(V_1) & = (o_{v_i,s_i,w_i,t_i}(\gamma_1),\dots,o_{v_i,s_i,w_i,t_i}(\gamma_p)) \\ & = (f(V_1 g_1(\rho_i \cdot \gamma_1)),\dots,f(V_1 g_1(\rho_i \cdot \gamma_p))) \end{align*} and therefore \[ \frac{d}{d V_1}(\theta \circ \beta_i)(0)= f'(0)(g_1(\rho_i\cdot \gamma_1),\dots,g_1(\rho_i\cdot \gamma_p)). \] Since $\pi$ is continuously differentiable, it is in particular differentiable at $(\theta\circ\beta_i)(0)=(f(0),\dots,f(0))$, and therefore $\pi\circ\theta\circ\beta_i$ is differentiable at $0$.
We infer from the chain rule applied to the identity \eqref{theta} \begin{align*} f'(0)(g_1(\rho_i\cdot\gamma_1),\dots,g_1(\rho_i\cdot\gamma_p)) & = \frac{d}{d V_1} (\theta \circ \beta_i)(0) \\ & = d \theta\big((\pi\circ\theta \circ \beta_i)(0)\big) \left(\frac{d}{d V_1} (\pi\circ\theta \circ \beta_i)(0)\right). \end{align*} Observe now that $(\theta\circ\beta_i)(0)=(f(0),\dots,f(0))$ is independent of $1\leq i \leq p$, and therefore so is the linear map $M:=d\theta\big((\pi\circ\theta\circ\beta_i)(0)\big)$. Recalling that $f'(0)\neq 0$ and $\text{det}(g_1(\rho_i\cdot\gamma_j))_{1\leq i,j \leq p} \neq 0$ hold, we see that the $p$ vectors $f'(0)(g_1(\rho_i\cdot\gamma_1),\dots,g_1(\rho_i\cdot\gamma_p))$, $1\leq i \leq p$, are linearly independent. On the other hand, the vectors $\frac{d}{d V_1} (\pi\circ\theta \circ \beta_i)(0)$, $1\leq i \leq p$, are all contained in $\R^{m(n+2)+1}$ and mapped by the linear map $M$ onto the previous $p$ vectors. This implies $p \leq m(n+2)+1$, which contradicts our assumption $p > m(n+2)+1$. \end{proof} It seems worth commenting that our proof of Lemma \ref{lemma perfect algorithm} requires the differentiability of $\Pi$ only at one single point, namely $(\gamma_1, f(0), \dots, \gamma_p, f(0))$. \vspace{0.5cm} Let $g:\R\to\R$ be a function of class $C^{\infty}$ satisfying an algebro--differential equation as follows: there exists a polynomial $G\in\R[T]$ of positive degree with $g'=G(g)$ (here $T$ is a new indeterminate). Suppose that $G(g(0))\neq 0$ holds. Let $p\in\mathbb N$ and let $A_1,\dots,A_p,B_1,\dots,B_p$ be indeterminates as before. With these notations and assumptions we have \begin{lemma} \label{lemma determinant g} $\text{det}(g(A_iB_j))_{1\leq i,j \leq p}\neq 0$. \end{lemma} \begin{proof} By induction on $p$. For $p=1$ there is nothing to prove, because $g'(0)=G(g(0))\neq 0$ implies $g\neq 0$. Developing the determinant $\text{det}(g(A_iB_j))_{1\leq i,j \leq p}$ along the first column, we obtain polynomials $M_1,\dots,M_p$ in the $g(A_iB_j)$, $1\leq i \leq p$, $1\leq j \leq p$, such that \[ \text{det}(g(A_iB_j))_{1\leq i,j \leq p} = \sum_{1\leq i \leq p} g(A_iB_1)M_i \] holds. Since $(-1)^{i+1}M_i$ is a cofactor of the matrix $(g(A_iB_j))_{1\leq i,j \leq p}$ we may apply our inductive hypothesis and conclude $M_1\neq 0,\dots,M_p\neq 0$. For $k\in\mathbb Z_{\geq 0}$ let $P_k\in\R[T]$ be recursively defined by $P_0:=T$ and $P_{k+1}:=P_k'G$. Then we have $\text{deg}\,P_{k+1} = \text{deg}\,P_k'+\text{deg}\,G \geq \text{deg}\,P_k$ and therefore $P_k$ is of positive degree for all $k\in\mathbb N$. In particular we conclude $P_k \neq 0$ for any $k\in\mathbb Z_{\geq 0}$. Observe that \begin{align*} \frac{\partial}{\partial B_1} \sum_{1\leq i \leq p} P_k(g(A_iB_1))A_i^k M_i & = \sum_{1\leq i \leq p} P_k'(g(A_iB_1))g'(A_iB_1)A_i^{k+1} M_i = \\ \sum_{1\leq i \leq p} (P_k'\cdot G)(g(A_iB_1)) A_i^{k+1}M_i & = \sum_{1\leq i \leq p} P_{k+1}(g(A_iB_1))A_i^{k+1}M_i \end{align*} and \[ \text{det}(g(A_iB_j))_{1\leq i,j \leq p} = \sum_{1\leq i \leq p} g(A_iB_1)M_i = \sum_{1\leq i \leq p} P_0(g(A_iB_1))\, A_i^0 M_i \] holds (the cofactors $M_i$ do not depend on $B_1$). This implies \begin{equation} \label{estrella} \frac{\partial^k}{\partial B_1^k} \text{det}(g(A_iB_j))_{1\leq i,j \leq p} = \sum_{1\leq i \leq p} P_k(g(A_iB_1)) A_i^k M_i. \end{equation} Since $P_k\neq 0$ holds we can write $P_k = Q_k (T - g(0))^{m_k}$ for some polynomial $Q_k\in\R[T]$ with $Q_k(g(0))\neq 0$ and $m_k\in\mathbb Z_{\geq 0}$. Observe that $P_{k+m_k}$ can be written as $P_{k+m_k}= S(T) (T-g(0))+ m_k!\, Q_k G^{m_k}$, where $S(T) \in\R[T]$ is a suitable polynomial.
Therefore we have $P_{k+m_k}(g(0))\neq 0$. This implies that for any $k\in\mathbb Z_{\geq 0}$ there exists $k'\geq k$ with $P_{k'}(g(0))\neq 0$. Now we may choose integers $0 \leq k_1 < \dots < k_p$ such that $P_{k_j}(g(0))\neq 0$ holds for $1\leq j \leq p$. Assume now that $\text{det}(g(A_iB_j))_{1\leq i,j \leq p}$ vanishes identically. Then \eqref{estrella} implies \[ \sum_{1\leq i \leq p } P_k (g(A_iB_1)) A_i^k M_i = 0 \] for any $k\in\mathbb Z_{\geq 0}$, and in particular \[ \sum_{1\leq i \leq p } P_{k_j} (g(A_iB_1)) A_i^{k_j} M_i = 0 \text{\ \ and, setting $B_1=0$,\ } \sum_{1\leq i \leq p } P_{k_j} (g(0)) A_i^{k_j} M_i = 0 \] for any $1\leq j \leq p$. Observing that $\text{det}(P_{k_j}(g(0))A_i^{k_j})_{1\leq i,j \leq p}\neq 0$ holds (it equals $\prod_{j}P_{k_j}(g(0))$ times a generalized Vandermonde determinant, which does not vanish for $0\leq k_1<\dots<k_p$), we conclude $M_1=0,\dots,M_p=0$, which is a contradiction. \end{proof} \begin{lemma} \label{lemma determinant} Let notations be as in the previous lemma. Then we have \[ \text{det}(\text{sin}A_iB_j)_{1\leq i,j \leq p} \neq 0. \] \end{lemma} \begin{proof} Again by induction on $p$. The case $p=1$ is obvious. In order to treat the case $p>1$ we develop the determinant $\text{det}(\text{sin}A_iB_j)_{1\leq i,j \leq p}$ along the first column as before, obtaining polynomials $M_1,\dots,M_p$ in $\text{sin}(A_iB_j)$, $1\leq i \leq p$, $1\leq j \leq p$, such that \[ \text{det}(\text{sin}A_iB_j)_{1\leq i,j \leq p} = \sum_{1\leq i \leq p} \text{sin}(A_iB_1)M_i \] holds. For $1\leq i \leq p$ the expression $(-1)^{i+1}M_i$ is a cofactor of $(\text{sin} A_iB_j)_{1\leq i,j \leq p}$ and we can conclude inductively $M_1\neq 0,\dots, M_p\neq 0$. We assume $\text{det}(\text{sin}A_iB_j)_{1\leq i,j \leq p} = 0$ and derive iteratively the identity \[ \frac{\partial^k}{\partial B_1^k} \sum_{1\leq i \leq p} \text{sin} (A_i B_1) M_i = 0 \] with respect to $B_1$. This yields for $k\in\mathbb Z_{\geq 0}$ the identities \[ \sum_{1\leq i \leq p} \text{cos} (A_i B_1) A_i^{2k+1}M_i = 0 \] and therefore, evaluating at $B_1=0$, \[ \sum_{1\leq i \leq p} A_i^{2k+1} M_i =0. \] Similarly as before we conclude $M_1=0,\dots,M_p=0$, a contradiction. \end{proof} Taking into account the well--known first order differential equations for the logistic and tanh functions (namely $g'=g(1-g)$ and $g'=1-g^2$, with $G(g(0))\neq 0$ in both cases), we may summarize the outcome of Lemmas \ref{lemma perfect algorithm}, \ref{lemma determinant g}, \ref{lemma determinant} by the following statement, which demonstrates the unfeasibility of continuously differentiable perfect learning algorithms. \begin{theorem} \label{theorem 1} For the usual activation functions logistic, tanh and sinusoid there do not exist differentiable perfect learning algorithms able to learn, for any neural network architecture with at least one hidden layer, any training data set of length exceeding the number of parameters. \end{theorem} Let us observe that we do not have at our disposal a general algorithmic model able to capture the notion of learning algorithm in its whole extent. Hence, in view of Theorem \ref{theorem 1}, we are only able to state a kind of metaconjecture, namely that general perfect learning algorithms do not exist. This conjecture also has a practical counterpart, supported by experience. In particular there is no option to improve backpropagation into a perfect learning algorithm. This argument may be reinforced experimentally as follows. Let $p\gg m(n+2)+1$ and choose $v,s,w,t$ and $\gamma_1,\dots,\gamma_p$ at random (so that $\gamma_1,\dots,\gamma_p$ are all distinct).
Compute numerical representations for $\zeta_1 := o_{v,s,w,t}(\gamma_1),\dots, \zeta_p := o_{v,s,w,t}(\gamma_p)$ and apply backpropagation to this representation of the training data set $((\gamma_1,\zeta_1),\dots,(\gamma_p,\zeta_p))$. The algorithm returns a numerical representation of a parameter vector $(v',s',w',t')$ and an error $E(v',s',w',t')$. We may expect that \[ E(v',s',w',t')\gg 0 \] holds. This situation is made possible by local minima. In consequence, the usual justification of backpropagation with reference to global minima is incomplete. Therefore the real foundation of backpropagation rests exclusively on practical evidence, not on theory. Finally let us state that, mutatis mutandis, Theorem \ref{theorem 1} is also true for neural network architectures over the complex numbers with polynomial activation functions. This can easily be seen by combining the arguments of the proof of \cite[Theorem 18]{BHM16} with the arguments of the proof of Lemma \ref{lemma perfect algorithm}. \section{Outlook} In textbooks, backpropagation is usually motivated as an attempt to solve by a simple algorithm a particular global minimization problem. Theorem \ref{theorem 1} expresses, under moderate differentiability conditions, the unfeasibility of this purpose. In the case of neural networks with activation functions which are polynomials over the reals (or, more generally, semialgebraic functions), efficient real quantifier elimination procedures may be applied in order to solve the corresponding global minimization problem (see \cite{HRS90}, \cite{R} and in particular \cite[14.2]{BPR}). This way to proceed leads to complexity upper bounds which are singly exponential in the number of parameters of the neural network architecture under consideration. Nevertheless, in this general setting we cannot expect more efficient worst case complexity bounds. On the other hand, the activation functions logistic, tanh and sin are pfaffian. Hence, for learning purposes, it makes sense to consider more generally neural network architectures with arbitrary pfaffian activation functions. In order to proceed in an analogous way as in the polynomial or semialgebraic case for the solution of the underlying global minimization problem, the subpfaffian setup is the most suitable one. The final complexity outcome is similar to that of the polynomial or semialgebraic case (see \cite{GV01} and the survey \cite{GV04}). \bibliographystyle{alphaabbr}
\section{Introduction} \label{sec:introduction} \IEEEPARstart{L}{ast} decade's research in deep learning led to tremendous boosts in predictive performance for various tasks including image classification \cite{krizhevsky2012imagenet}, object recognition \cite{NIPS2015_5638, 44872, DBLP:conf/cvpr/RedmonDGF16}, natural language processing \cite{journals/corr/abs-1301-3781, conf/interspeech/MikolovKBCK10} and speech recognition \cite{hinton2012deep, hannun2014speech}. However, safety remains a great concern when it comes to implementing these models in real-world conditions \cite{DBLP:journals/corr/AmodeiOSCSM16, journals/corr/JanaiGBG17}. Failing to detect possible errors or over-estimating the confidence of a prediction may carry serious repercussions in critical visual-recognition applications such as in autonomous driving, medical diagnosis \cite{medicaldiag2018} or nuclear power plant monitoring \cite{Linda:2009:NNB:1704175.1704190}. Classification with a reject option \cite{Chow1957AnOC,bartlettreject2008,NIPS2016_6336}, also known as \emph{selective classification} \cite{elyaniv10a,NIPS2017_7073}, consists in a scenario where the classifier is given the option to reject an instance instead of predicting its label. Equipped with a reject option, a classifier could decide to stick to the prediction or, on the contrary, to hand over to a human or a back-up system with, \textit{e.g.}\xspace, other sensors, or simply to trigger an alarm. One common approach for tackling the problem is to discriminate with a confidence-based criterion: For an instance $\bm{x}$, along with a prediction $f(\bm{x})$, a scalar value $g(\bm{x})$ that quantifies the confidence of the classifier in its prediction is also provided. Correctly identifying uncertain predictions thanks to low confidence values $g(\bm{x})$ could be beneficial for classification improvements in active learning \cite{pmlr-v70-gal17a} or for efficient exploration in reinforcement learning \cite{Gal:2016:DBA:3045390.3045502}. On a related matter, one would expect the confidence criterion to correlate successful predictions with high values. Some paradigms, such as self-training with pseudo-labeling \cite{lee-icml2013, Li_2019_CVPR}, consist in picking and labeling the most confident samples before retraining the network accordingly. The performance improves by selecting successful predictions thanks to an accurate confidence criterion. A final perspective, linked to failure prediction \cite{DBLP:conf/ivs/HeckerDG18, hendrycks17baseline, NIPS2018_7798}, is the capacity of models to provide a ranking which enables distinguishing correct from erroneous predictions. In each of the previous tasks, obtaining reliable estimates of the predictive confidence is thus of prime importance. Confidence estimation has been explored in a wide variety of applications, including computer vision \cite{hendrycks17baseline,kendall2015bayesian}, speech recognition \cite{latticeRNNspeech1, latticeRNNspeech2, confspeech2011}, reinforcement learning \cite{Gal:2016:DBA:3045390.3045502} or machine translation \cite{Blatz:2004}. A widely used baseline with neural-network classifiers is to take the value of the predicted class' probability, namely the \textit{maximum class probability} (MCP), given by the softmax layer output. Although recent evaluations of MCP with modern deep models reveal reasonable performance~\cite{hendrycks17baseline}, it still suffers from several conceptual drawbacks.
In particular, MCP leads by design to high confidence values, even for erroneous predictions, since the largest softmax output is used. This design tends to make erroneous and correct predictions overlap in terms of confidence and thus limits the capacity to distinguish them. In this work, we identify a better confidence criterion, the \emph{true class probability} (TCP), for deep neural network classifiers with a reject option. For a sample $\bm{x}$, TCP corresponds to the probability assigned by the model to the true class $y$ of that sample, which naturally yields a better-behaved confidence measure. We provide simple guarantees of the quality of this criterion regarding confidence estimation. Since the true class is obviously unknown at test time, we propose a novel approach which consists in designing an auxiliary network specifically dedicated to estimating the confidence of a prediction. Given a trained classifier $f$, this auxiliary network learns the TCP criterion from data. At inference, we use its scalar output as the confidence estimate $g(\bm{x})$ associated with the prediction. When applied to failure prediction, we observe significant improvements over strong baselines. Our approach is also adequate for self-training strategies in unsupervised domain adaptation. To meet the challenge of this task in semantic segmentation, we propose an enhanced architecture with structured output and adopt an adversarial learning scheme which enforces alignment between confidence maps in source and target domains. A thorough analysis of our approach, including relevant variations, ablation studies and qualitative evaluations of confidence estimates, helps to gain insight into its behavior. \smallskip In summary, our contributions are as follows: \begin{itemize} \item We define a novel confidence criterion, the \emph{true class probability}, which exhibits an adequate behavior for confidence estimation; \item We propose to design an auxiliary neural network, coined \emph{ConfidNet}, which aims to learn this confidence criterion from data; \item We apply this approach to the task of failure prediction and to self-training in unsupervised domain adaptation with adequate choices of architecture, loss function and learning scheme; \item We extensively experiment across various benchmarks and backbone networks to validate the relevance of our approach on both tasks. \end{itemize} The paper is organized as follows. In Section~\ref{sec:related_work}, we provide an overview of the most relevant related works on confidence estimation, failure prediction, self-training and unsupervised domain adaptation. Section~\ref{sec:learning_confidence} presents our approach for confidence estimation based on learning an adequate criterion via an auxiliary network. We also describe how it relates to classification with a reject option. In Section~\ref{sec:confidnet}, we adapt our approach to failure prediction by introducing an architecture, a loss function and a learning scheme for this task. Similarly, Section~\ref{sec:conda} details the instantiation of our approach for confidence-based self-training in unsupervised domain adaptation (DA), which we denote as ConDA. In particular, we present two additions, an adversarial loss and a multi-scale confidence architecture, which further help to improve the performance for this task. Finally, we report experimental studies in Section~\ref{sec:experiments}.
This paper extends a previous conference publication \cite{corbiere2019ConfidNet} by introducing: (1) A comprehensive adaptation of the approach to improve the key step of self-training from pseudo-labels in semantic segmentation with DA; (2) An exploration of the classification-with-rejection framework, which strengthens the rationale of the proposed approach. \section{Related work} \label{sec:related_work} \subsection{Confidence estimation} Confidence estimation in machine learning has been around for many decades, firstly linked to the idea of classification with a reject option \cite{Chow1957AnOC}. Subsequent works \cite{bartlettreject2008,NIPS2016_6336, cortes2016, zaragoza1998confidence} explored alternative rejection criteria. In particular, \cite{cortes2016} proposes to jointly learn the classifier and the selection function. El-Yaniv \cite{elyaniv10a} provides an analysis of the risk-coverage trade-off that occurs when classifying with a reject option. More recently, \cite{NIPS2017_7073, Geifman2019SelectiveNetAD} extend the approach to deep neural networks, considering various confidence measures. Since the wide adoption of deep learning methods, confidence estimation has attracted even more interest as recent works reveal that modern neural networks tend to be overconfident \cite{conf/cvpr/NguyenYC15}, non-calibrated~\cite{GuoPSW17,Neumann18c}, sensitive to adversarial attacks~\cite{goodfellow2014explaining, DBLP:journals/corr/SzegedyZSBEGF13} and inadequate to distinguish in- from out-of-distribution examples~\cite{hendrycks17baseline, liang2018enhancing, NIPS2017_7219}. Bayesian neural networks \cite{bnn1996} offer a principled approach for confidence estimation by adopting a Bayesian formalism which models the weight posterior distribution. As the true posterior cannot be evaluated analytically in complex models, various approximations have been developed, such as variational inference \cite{Blundell:2015:WUN:3045118.3045290, NIPS2019_9472, Gal:2016:DBA:3045390.3045502} or expectation propagation \cite{pmlr-v37-hernandez-lobatoc15}. In particular, MC Dropout \cite{Gal:2016:DBA:3045390.3045502} has raised a lot of interest due to the simplicity of its implementation. Predictions are obtained by averaging softmax vectors from multiple feed-forward passes through the network with dropout layers. When applied to regression, the predictive distribution uncertainty can be summarized by computing statistics, \textit{e.g.}\xspace, variance. However, when using MC Dropout for uncertainty estimation in classification tasks, the predictive distribution is averaged to a point-wise softmax estimate before computing standard uncertainty criteria such as entropy. It is worth mentioning that these entropy-based criteria measure the softmax output dispersion, where the uniform distribution has maximum entropy. It is not clear how well these dispersion measures are adapted to distinguishing failures from correct predictions, especially with deep neural networks which output overconfident predictions~\cite{GuoPSW17}: for example, it might be very challenging to discriminate a peaky prediction corresponding to a correct prediction from an incorrect overconfident one. Lakshminarayanan \textit{et al}. \cite{NIPS2017_7219} propose an alternative to Bayesian neural networks by leveraging an ensemble of neural networks to produce well-calibrated uncertainty estimates. However, it requires training multiple classifiers, which incurs a considerable computational cost at training and inference time.
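As an illustration, a minimal sketch of MC Dropout confidence estimation as just described -- dropout kept active at test time, softmax vectors averaged over several stochastic passes, entropy of the averaged distribution used as the uncertainty measure -- could read as follows in PyTorch; the model and the number of passes are placeholders, not a reference implementation.
\begin{verbatim}
import torch

def mc_dropout_confidence(model, x, n_passes=20):
    """Average softmax over stochastic forward passes, then score
    confidence as the negative predictive entropy."""
    model.eval()
    # Re-enable only the dropout layers at test time.
    for m in model.modules():
        if isinstance(m, torch.nn.Dropout):
            m.train()
    with torch.no_grad():
        probs = torch.stack([
            torch.softmax(model(x), dim=-1) for _ in range(n_passes)
        ]).mean(dim=0)                        # point-wise averaged softmax
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=-1)
    return probs.argmax(dim=-1), -entropy     # prediction, confidence
\end{verbatim}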
\begin{figure*}[t] \centering \begin{minipage}[c]{0.45\linewidth} \centering \includegraphics[width=\linewidth]{images/fig1a.png} \subcaption{Maximum Class Probability} \label{fig:density-plot-mcp} \end{minipage}% \hspace{0.3cm} \begin{minipage}{0.45\linewidth} \centering \includegraphics[width=\linewidth]{images/fig1b.png} \subcaption{True Class Probability} \label{fig:density-plot-tcp} \end{minipage} \caption{\textbf{Distributions of different confidence measures over correct and erroneous predictions of a given model.} When ranking the test predictions of a convolutional model trained on CIFAR-10 according to MCP (a), we observe that correct ones (in green) and misclassifications (in red) overlap considerably, making it difficult to distinguish them. On the other hand, ranking samples according to TCP (b) alleviates this issue and allows a much better separation.} \label{fig:density-plot} \end{figure*} \subsection{Failure prediction} In the context of classification, a widely used baseline for failure prediction is to take the value of the predicted class' probability given by the softmax layer output, namely the \emph{maximum class probability} (MCP), suggested by \cite{oldconfmcp1995} and revised by \cite{hendrycks17baseline}. As stated before, MCP presents several limits regarding both failure prediction and out-of-distribution detection, as it outputs unduly high confidence values. Blatz \textit{et al}. \cite{Blatz:2004} introduce a method for confidence estimation in machine translation by solving a binary classification between correct and erroneous predictions. More recently, Jiang \textit{et al.} \cite{NIPS2018_7798} proposed a new confidence measure, `Trust Score', which measures the agreement between the classifier and a modified nearest-neighbor classifier on the test examples. More precisely, the confidence criterion used in Trust Score is the ratio between the distance from the sample to the nearest class different from the predicted class and the distance to the predicted class. One clear drawback of this approach is its lack of scalability, since computing nearest neighbors in large datasets is extremely costly in both computation and memory. Another more fundamental limitation related to the Trust Score itself is that local distance computation becomes less meaningful in high-dimensional spaces~\cite{distance-curse}, which is likely to negatively affect the performance of this method as shown in Section~\ref{subsec:exp_confidnet}. In tasks closely related to failure prediction, Guo \textit{et al}. \cite{GuoPSW17}, for confidence calibration, and Liang \textit{et al}. \cite{liang2018enhancing}, for out-of-distribution detection, proposed to use temperature scaling to calibrate confidence values. However, this does not affect the ranking of the confidence score and therefore the separability between errors and correct predictions. DeVries \textit{et al}. \cite{devries2018learning} share our purpose of learning confidence in neural networks. Their work differs from ours by focusing on out-of-distribution detection and learning jointly a distribution confidence score and classification probabilities. In addition, their criterion is based on an interpolation between output probabilities and target distribution whereas we specifically define a criterion suited to failure prediction.
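For intuition, the distance-ratio criterion at the core of Trust Score can be sketched as below; this simplified version ignores the density-based filtering step of the original method, and the helper names are ours, for illustration only.
\begin{verbatim}
import numpy as np

def trust_score(x, train_feats, train_labels, pred_label):
    """Simplified Trust Score: ratio of the distance to the nearest
    sample of any class other than the predicted one over the
    distance to the nearest sample of the predicted class."""
    d = np.linalg.norm(train_feats - x, axis=1)
    d_pred = d[train_labels == pred_label].min()
    d_other = d[train_labels != pred_label].min()
    return d_other / max(d_pred, 1e-12)   # high score = trustworthy
\end{verbatim}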
\subsection{Self-training in domain adaptation} \textbf{Unsupervised Domain Adaptation (UDA).} UDA has received a lot of attention over the past few years because of its importance for a variety of real-world problems, such as robotics or autonomous driving. Most works in this line of research aim at minimizing the discrepancy between the data distributions in source and target domains. Adopting an adversarial training approach \cite{gradreversal2016} has yielded much of the recent progress in the semantic segmentation task by producing indistinguishable source-target distributions in the space of features extracted by modern convolutional deep neural nets. To cite a few methods: CyCADA~\cite{Hoffman_cycada2017} first stylizes the source-domain images as target-domain images before aligning source and target in the feature space; AdaptSegNet~\cite{Tsai_adaptseg_2018} constructs a multi-level adversarial network to perform output-space domain adaptation at different feature levels; AdvEnt~\cite{vu2018advent} aligns the entropy of the pixel-wise predictions with an adversarial loss; BDL~\cite{Li_2019_CVPR} learns alternatively an image translation model and a segmentation model that promote each other. \smallskip\noindent \textbf{Self-Training.} Semi-supervised learning designates the general problem where a decision rule must be learned from both labeled and unlabeled data. Among the methods applied to address this problem, self-training with pseudo-labeling~\cite{lee-icml2013} is a simple strategy that relies on picking up the current predictions on the unlabeled data and using them as if they were true labels for further training. It is shown in \cite{lee-icml2013} that the effect of pseudo-labeling is equivalent to entropy regularization~\cite{grandvalet-nips2005}. In a UDA setting, the idea is to collect pseudo-labels on the unlabeled target-domain samples in order to have an additional supervision loss in the target domain. To select only reliable pseudo-labels, such that the performance of the adapted semantic segmentation network effectively improves, BDL~\cite{Li_2019_CVPR} resorts to standard selection with MCP. ESL~\cite{saporta2020esl} uses instead the entropy of the prediction as confidence criterion for its pseudo-label selection. CBST~\cite{zou2018unsupervised} proposes an iterative self-training procedure where the pseudo-labels are generated based on a loss minimization. In~\cite{zou2018unsupervised}, the authors also propose a way to balance the classes in their pseudo-labels to avoid the dominance of large classes as well as a way to introduce spatial priors. More recently, the CRST framework~\cite{Zou_2019_ICCV} proposes multiple types of confidence regularization to limit the propagation of errors caused by noisy pseudo-labels. \begin{figure*}[t] \centering \includegraphics[width=0.95\linewidth]{images/fig2.png} \caption{\textbf{Learning confidence approach.} The fixed classification network $F$, with parameters $\mathbf{w} =(\mathbf{w}_{E},\mathbf{w}_{\text{cls}})$, is composed of a succession of convolutional and fully-connected layers (encoder $E$) followed by final classification layers with softmax activation. The auxiliary confidence network $C$, with parameters $\bm{\theta}$, builds upon the feature maps extracted by the encoder $E$, or its fine-tuned version $E'$ with parameters $\mathbf{w}_{\text{E'}}$: they are passed to ConfidNet, a trainable multi-layer module with parameters $\bm{\varphi}$.
The auxiliary model outputs a confidence score $C(\bm{x};\bm{\theta})\in[0,1]$, with $\bm{\theta} = \bm{\varphi}$ in the absence of encoder fine-tuning and $\bm{\theta} =(\mathbf{w}_{E'},\bm{\varphi})$ in case of fine-tuning.} \label{confidnet_network} \end{figure*} \section{Learning a model's confidence with an auxiliary model} \label{sec:learning_confidence} In this section, we first briefly introduce the task of classification with a reject option, along with necessary notations. We then introduce an effective confidence-rate function for neural-net classifiers and we present our approach to learn this target confidence-rate function by means of an auxiliary neural network. For the sake of simplicity, we consider in this section a generic classification task, where the input is raw or transformed signals and the expected output is a predicted category. The semantic segmentation task we address in Section \ref{sec:conda} is in effect a pixel-wise classification of localized features derived from the input image. \subsection{Problem formulation} \label{sec:problem_formulation} Let us consider a dataset $\mathcal{D}= \{ (\bm{x}_n, y_n) \}_{n=1}^N$ composed of $N$ \textit{i.i.d.} training samples, where $\bm{x}_n \in \mathcal{X} \subset \mathbb{R}^D$ is a $D$-dimensional data representation, deep feature maps from an image or the image itself for instance, and $y_n \in \mathcal{Y}=\llbracket 1, K \rrbracket$ is its true class among the $K$ pre-defined categories. These samples are drawn from an unknown joint distribution $P(X,Y)$ over $(\mathcal{X}, \mathcal{Y})$. A \textit{selective classifier} \cite{elyaniv10a, NIPS2017_7073} is a pair $(f,g)$ where $f: \mathcal{X} \rightarrow \mathcal{Y}$ is a \textit{prediction function} and $g: \mathcal{X} \rightarrow \{0,1\}$ is a \textit{selection function} which makes it possible to reject a prediction: \begin{equation} (f,g)(\bm{x}) = \begin{cases} f(\bm{x}), & \text{if}\ g(\bm{x})=1 \, , \\ \text{reject}, &\text{if}\ g(\bm{x})=0 \, . \\ \end{cases} \end{equation} In this work, we focus on classifiers based on artificial neural networks. Given an input $\bm{x}$, such a network $F$ with parameters $\mathbf{w}$ outputs non-negative scores over all classes, which are normalized through softmax. If well trained, this output can be interpreted as the predictive distribution $P(Y \vert \bm{x}, \hat{\mathbf{w}}) = F(\bm{x};\hat{\mathbf{w}}) \in \Delta$, with $\Delta$ the probability $K$-simplex in $\mathbb{R}^{K}$ and $\hat{\mathbf{w}}$ the learned weights. Based on this distribution, the predicted sample class is usually the maximum \textit{a posteriori} estimate: \begin{equation} f(\bm{x}) = \mathrm{arg}\!\max_{k \in \mathcal{Y}}~P(Y = k \vert \bm{x}, \hat{\mathbf{w}}) = \mathrm{arg}\!\max_{k \in \mathcal{Y}} F(\bm{x};\hat{\mathbf{w}})[k]. \label{eq:F2f} \end{equation} \indent We are not interested here in trying to improve the accuracy of the already-trained model $F$, but rather in making its future use more reliable by endowing the system with the ability to recognize when the prediction might be wrong. To this end, a \textit{confidence-rate function} $\kappa_f:\mathcal{X} \rightarrow \mathbb{R}^{+}$ is associated to $f$ so as to assess the degree of confidence of its predictions, the higher the value, the more certain the prediction \cite{elyaniv10a, NIPS2017_7073}. A suitable confidence-rate function should correlate erroneous predictions with low values and successful predictions with high values.
Finally, given a user-defined threshold $\delta \in \mathbb{R}^+$, the selection function $g$ can be simply derived from the confidence rate: \begin{equation} g(\bm{x})= \begin{cases} 1 & \text{if}\ \kappa_f(\bm{x}) \geq \delta \, , \\ 0 & \text{otherwise.} \\ \end{cases} \end{equation} \subsection{TCP, an effective confidence-rate function} For a given input $\bm{x}$, a standard confidence-rate function for a classifier $F$ is the probability associated to the predicted max-score class, that is the \textit{maximum class probability}: \begin{equation} \text{MCP}_F(\bm{x}) = \max_{k \in \mathcal{Y}} P(Y=k \vert \bm{x}, \hat{\mathbf{w}}) = \max_{k \in \mathcal{Y}} F(\bm{x};\hat{\mathbf{w}})[k]. \end{equation} However, by taking the largest softmax probability as confidence estimate, MCP leads to high confidence values for correct and erroneous predictions alike, making it hard to distinguish them, as shown in Figure~\ref{fig:density-plot-mcp}. On the other hand, when the model misclassifies an example, the probability associated to the true class $y$ is lower than the maximum one and likely to be low. Based on this simple observation, we propose to consider instead this \emph{true class probability} as a suitable confidence-rate function. For any admissible input $\bm{x}\in\mathcal{X}$, we assume the \textit{true} class $y(\bm{x})$ is known, which we denote $y$ for simplicity. The TCP confidence rate is defined as \begin{equation} \text{TCP}_F(\bm{x},\,y) = P(Y=y \vert\,\bm{x}, \hat{\mathbf{w}}) = F(\bm{x};\hat{\mathbf{w}})[y]. \end{equation} \smallskip\noindent \textbf{Simple guarantees.} With TCP, the following properties hold (see derivation in Appendix A.1). Given a properly labelled example $(\bm{x},y)$, then: \begin{itemize} \item $\text{TCP}_F(\bm{x},y)> 1/2$ $\Rightarrow$ $f(\bm{x}) = y$, \textit{i.e.}\xspace the example is correctly classified by the model; \item $\text{TCP}_F(\bm{x},y) < 1/K$ $\Rightarrow$ $f(\bm{x}) \neq y$, \textit{i.e.}\xspace the example is wrongly classified by the model, \end{itemize} where class prediction $f(\bm{x})$ is defined by (\ref{eq:F2f}). Both properties follow from the fact that the predicted probabilities sum to one: a class receiving more than half of the total mass cannot be matched by any other class, while a class receiving less than $1/K$ is necessarily exceeded by at least one other class. Within the range $[1/K, 1/2]$, there is no guarantee that correct and incorrect predictions will not overlap in terms of TCP. However, when using deep neural networks, we observe that the actual overlap area is extremely small in practice, as illustrated in Figure~\ref{fig:density-plot-tcp} on the CIFAR-10 dataset. One possible explanation comes from the fact that modern deep neural networks output overconfident predictions and therefore non-calibrated probabilities~\cite{GuoPSW17}. We provide consolidated results and analyses on this aspect in Section~\ref{sec:experiments} and in Appendix A.2. \subsection{Learning to predict TCP with a neural network} Using TCP as a confidence-rate function on a model's output would be of great help when it comes to reliably estimating its confidence. However, the true classes $y$ are obviously not available when estimating confidence on test inputs. We propose to \emph{learn TCP confidence from data}. More formally, for the classification task at hand, we consider a parametric selective classifier $(f,g)$, with $f$ based on an already-trained neural network $F$. We aim at deriving its companion selection function $g$ from a learned estimate of the TCP function of $F$. To this end, we introduce an \textit{auxiliary model} $C$, with parameters $\bm{\theta}$, that is intended to predict $\text{TCP}_F$ and to act as a confidence-rate function for the selection function $g$.
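To fix ideas, both confidence-rate functions and the thresholded selection rule can be written in a few lines; the following sketch assumes PyTorch tensors and illustrative names. Note that TCP requires the true labels and is therefore only computable on annotated data.
\begin{verbatim}
import torch

def mcp(probs):                       # probs: (N, K) softmax outputs
    return probs.max(dim=1).values    # maximum class probability

def tcp(probs, y):                    # y: (N,) true labels
    return probs.gather(1, y.unsqueeze(1)).squeeze(1)

def selective_predict(probs, kappa, delta):
    """Return predictions, rejecting low-confidence ones (-1)."""
    preds = probs.argmax(dim=1)
    return torch.where(kappa >= delta, preds,
                       torch.full_like(preds, -1))
\end{verbatim}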
An overview of the proposed approach is available in Figure~\ref{confidnet_network}. This model is trained such that, at runtime, for an input $\bm{x}\in\mathcal{X}$ with (unknown) true label $y$, we have: \begin{equation} C(\bm{x};\bm{\theta}) \approx \text{TCP}_F(\bm{x},y). \end{equation} \indent In practice, this auxiliary model $C$ will be a neural network trained under full supervision on $\mathcal{D}$ to produce this confidence estimate. To design this network, we can transfer knowledge from the already-trained classification network. Throughout its training, $F$ has indeed learned to extract increasingly-complex features that are fed to its final classification layers. Calling $E$ the encoder part of $F$, a simple way to transfer knowledge consists in defining and training a multi-layer head with parameters $\bm{\varphi}$ that regresses $\mathrm{TCP}_F$ from features encoded by $E$. We call \textit{ConfidNet} this module. As a result of this design, the complete confidence network $C$ is composed of a frozen encoder followed by trained ConfidNet layers. As we shall see in Section \ref{sec:confidnet}, the complete architecture might later be fine-tuned, including the encoder, as in classic transfer learning. In that case, $\bm{\theta}$ will encompass the parameters of both the encoder and the ConfidNet's layers. In the rest of the paper, we detail the different network architectures, loss functions and learning schemes of ConfidNet for two distinct applications: Classification failure prediction and self-training for semantic segmentation with domain adaptation. In both tasks, a ranking of unlabelled samples that allows a clear distinction of correct predictions from erroneous ones is crucial. The proposed auxiliary model offers a new solution to this problem. \section{Application to failure prediction} \label{sec:confidnet} Given a trained model, failure prediction is the task of predicting at run-time whether the model has taken a correct decision or not for a given input. As discussed in Section \ref{sec:related_work}, there are different ways to attack this task, which has many real-world applications, especially in safety-critical systems. With a confidence-rate function in hand, the task can be simply set as thresholding this function, exactly in the same way the selection function works in prediction with a reject option. In this section, we discuss how ConfidNet can be used for that exact purpose in the context of image classification. \subsection{Architecture} State-of-the-art image classification models are composed of convolutional layers followed by one or more fully-connected layers and a final softmax operation. In order to work with such a classification network $F$, we build ConfidNet upon a late intermediate representation of $F$. ConfidNet is designed as a small multilayer perceptron composed of a succession of dense layers with a final sigmoid activation that outputs $C(\bm{x};\bm{\theta})\in[0,1]$. As explained in Section \ref{sec:learning_confidence}, we will train this network in a supervised manner, such that it predicts well the true-class probability assigned by $F$ to the input image. Regarding the capacity of ConfidNet, we have empirically found that increasing its depth further leaves performance unchanged for estimating the confidence of the classification network (see Appendix B.4 for more details).
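One possible instantiation of such a head is sketched below; the number and width of the dense layers are assumptions for illustration, not the exact configuration used in our experiments.
\begin{verbatim}
import torch.nn as nn

class ConfidNetHead(nn.Module):
    """Small MLP regressing a confidence score in [0, 1]
    from the classifier's penultimate features."""
    def __init__(self, feat_dim, hidden_dim=400, n_hidden=4):
        super().__init__()
        layers, d = [], feat_dim
        for _ in range(n_hidden):
            layers += [nn.Linear(d, hidden_dim), nn.ReLU(inplace=True)]
            d = hidden_dim
        layers += [nn.Linear(d, 1), nn.Sigmoid()]
        self.net = nn.Sequential(*layers)

    def forward(self, feats):              # feats: (N, feat_dim)
        return self.net(feats).squeeze(1)  # confidence in [0, 1]
\end{verbatim}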
\subsection{Loss function} As we want to regress a score between $0$ and $1$, we use a mean-square-error (MSE) loss to train the confidence model: \begin{equation} \label{eq:loss-conf} \mathcal{L}_{\text{conf}}(\bm{\theta};\mathcal{D}) = \frac{1}{N} \sum_{n=1}^N \big(C(\bm{x}_n;\bm{\theta}) - \text{TCP}_F(\bm{x}_n,y_n)\big)^2. \end{equation} Since the final task here is the prediction of failures, with confidence prediction being only a means toward it, a more explicit supervision with failure/success information could be considered. In that case, the previous regression loss could still be used, with 0 (failure) and 1 (success) target values instead of TCP. Alternatively, a binary cross entropy loss (BCE) for the error-prediction task using the predicted confidence as a score could be used. Seeing failure detection as a ranking problem, where good predictions must be ranked before erroneous ones according to the predicted confidence, a batch-wise ranking loss can also be utilized~\cite{Mohapatra_2018_CVPR}. We experimentally assessed all these alternative losses, including a focal version \cite{focaloss} of the BCE to focus on hard examples, as discussed in Section~\ref{subsec:learning_variants}. They lead to inferior performance compared to using (\ref{eq:loss-conf}). This might be due to the fact that TCP conveys more detailed information than a mere binary label on the quality of the classifier's prediction for a sample. Hinton \textit{et al}. \cite{distillation} make a similar observation when using soft targets in knowledge distillation. In situations where only very few error samples are available, this fine-grained information improves the performance of the final failure detection (see Section~\ref{subsec:learning_variants}). \begin{figure*}[t] \begin{center} \includegraphics[width=\linewidth]{images/fig3.png} \end{center} \caption{\textbf{Overview of proposed confidence learning for domain adaptation (ConDA) in semantic segmentation}. Given images in source and target domains, we pass them to the encoder part of the segmentation network $F$ to obtain their feature maps. This network $F$ is fixed during this phase and its weights are not updated. The confidence maps are obtained by feeding these feature maps to the trainable head of the confidence network $C$, which includes a multi-scale ConfidNet module. For source-domain images, a regression loss $\mathcal{L}_{\text{conf}}$ (\ref{eq:loss-conf-conda}) is computed to minimize the distance between $\mathsf{C}_{\xso}^{\btheta}$ and the fixed true-class-probability map $\text{TCP}_F(\x_{\so}, \bm{y}_{\text{s}})$. An adversarial training scheme -- based on the discriminator's loss $\mathcal{L}_D(\bm{\psi})$ (Eq.\,\ref{eq:l_Dconf}) and the adversarial part $\mathcal{L}_{\text{adv}}(\bm{\theta})$ of the confidence network's loss (Eq.\,\ref{eq:l_C}) --, is also added to enforce the consistency between the $\mathsf{C}_{\xso}^{\btheta}$'s and $\mathsf{C}_{\xtg}^{\btheta}$'s. Dashed arrows stand for paths that are used only at train time. } \label{fig:confidence_training} \end{figure*} \subsection{Learning scheme} \label{subsec:confidnet-learning} We decompose the parameters of the classification network $F$ into $\mathbf{w} = (\mathbf{w}_{E}, \mathbf{w}_{\text{cls}})$, where $\mathbf{w}_{E}$ denotes its encoder's weights and $\mathbf{w}_{\text{cls}}$ the weights of its last classification layers.
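A minimal sketch of the first training phase that the next paragraph describes -- encoder and classifier frozen, only the ConfidNet head updated to regress TCP with the MSE loss (\ref{eq:loss-conf}) -- might read as follows; the \texttt{encoder}/\texttt{classifier}/\texttt{head} split and the optimizer are illustrative assumptions.
\begin{verbatim}
import torch
import torch.nn.functional as F

def train_confidnet_epoch(encoder, classifier, head, loader, optimizer):
    """Phase 1: encoder and classification layers are frozen;
    only the ConfidNet head is trained to regress TCP."""
    encoder.eval(); classifier.eval(); head.train()
    for x, y in loader:
        with torch.no_grad():
            feats = encoder(x)                               # shared features
            probs = torch.softmax(classifier(feats), dim=1)  # F(x; w)
            target = probs.gather(1, y.unsqueeze(1)).squeeze(1)  # TCP
        conf = head(feats)                                   # C(x; theta)
        loss = F.mse_loss(conf, target)                      # MSE to TCP
        optimizer.zero_grad(); loss.backward(); optimizer.step()
\end{verbatim}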
As in transfer learning, the training of the confidence network $C$ starts by fixing the shared encoder and training only ConfidNet's weights $\bm{\varphi}$. In this phase, the loss (\ref{eq:loss-conf}) is thus minimized only w.r.t. $\bm{\theta} = \bm{\varphi}$. In a second phase, we further fine-tune the complete network $C$, including its encoder which is now untied from the classification encoder $E$ (the main classification model must remain unchanged, by definition of the addressed problem). Denoting $E'$ this now independent encoder, and $\mathbf{w}_{E'}$ its weights, this second training phase optimizes (\ref{eq:loss-conf}) w.r.t. $\bm{\theta} = (\mathbf{w}_{E'}, \bm{\varphi})$ with $\mathbf{w}_{E'}$ initially set to $\mathbf{w}_{E}$. We also deactivate dropout layers in this last training phase and reduce the learning rate to mitigate stochastic effects that may lead the new encoder to deviate too much from the original one used for classification. Data augmentation, on the other hand, can still be used. ConfidNet can be trained using either the original training set or a validation set. The impact of this choice is evaluated in Section~\ref{subsec:learning_variants}. In Section \ref{sec:experiments}, we put this framework at work on several standard image-classification benchmarks and analyse its effectiveness in comparison with alternative approaches. \section{Application to self-training in semantic segmentation with domain adaptation} \label{sec:conda} Unsupervised domain adaptation for semantic segmentation aims to adapt a segmentation model trained on a labeled source domain to a target domain devoid of annotation. Formally, let us consider the annotated source-domain training set $\mathcal{D}_{\text{s}} = \{ (\x_{\so,n}, \bm{y}_{\text{s},n}) \}_{n=1}^{N_{\text{s}}}$, where $\x_{\so,n}$ is a color image of size $(H,W)$ and $\bm{y}_{\text{s},n} \in \mathcal{Y}^{H\times W}$ its associated ground-truth segmentation map. A segmentation network $F$ with parameters $\mathbf{w}$ takes as input an image $\bm{x}$ and returns a predicted \textit{soft}-segmentation map $F(\bm{x};\mathbf{w}) = \mathsf{P}_{\x}^{\bw} \in [0,1]^{H\times W\times K}$, where $\mathsf{P}_{\x}^{\bw}[h,w,:] = P(Y[h,w] \,\vert\, \bm{x};\mathbf{w})\in\Delta$. The final prediction of the network is the segmentation map $f(\bm{x})$ defined pixel-wise as $f(\bm{x})[h,w] = \arg\!\max_{k\in\mathcal{Y}} \mathsf{P}_{\x}^{\bw}[h,w,k]$. This network is learned with full supervision from the source-domain samples in $\mathcal{D}_{\text{s}}$, using a cross-entropy loss, while leveraging a set $\mathcal{D}_{\text{t}}$ of unlabelled target-domain examples. \subsection{Self-training for unsupervised domain adaptation} In UDA, the main challenge is to use the unlabeled target set $\mathcal{D}_{\text{t}} = \{\bm{x}_{\text{t},n}\}_{n=1}^{N_{\text{t}}}$ available during training to learn domain-invariant features on which the segmentation model would behave similarly in both domains. As reviewed in Section \ref{sec:related_work}, a variety of techniques have been proposed to do that, in particular for the task of semantic segmentation. Leveraging automatic pseudo-labeling of target-domain training examples is in particular a simple, yet powerful way to further improve UDA performance with self-training. One key ingredient of such an approach being the selection of the most promising pseudo-labels, the proposed auxiliary confidence-prediction model lends itself particularly well to this task.
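In this notation, the pixel-wise prediction rule and the confidence-based masking of pseudo-labels detailed later in this section reduce to a few lines; the sketch below assumes numpy arrays, and the ignore index is an illustrative convention.
\begin{verbatim}
import numpy as np

def pixelwise_prediction(soft_seg):
    """soft_seg: (H, W, K) soft-segmentation map P; returns f(x)."""
    return soft_seg.argmax(axis=-1)

def collect_pseudo_labels(soft_seg, conf_map, delta, ignore_index=255):
    """Mask out pixels whose confidence falls below the threshold.
    conf_map: (H, W) confidence map with values in [0, 1]."""
    pseudo = pixelwise_prediction(soft_seg)
    pseudo[conf_map < delta] = ignore_index
    return pseudo
\end{verbatim}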
In the rest of this section, we detail how the proposed approach to confidence prediction can be adapted to semantic segmentation, with application to domain adaptation through self-training. The resulting framework, called ConDA, is illustrated in Figure \ref{fig:confidence_training}. A high-level view of self-training for semantic segmentation with UDA is as follows: \begin{enumerate} \item Train a segmentation network for the target domain using a chosen UDA technique;\label{step1} \item Collect pseudo-labels among the predictions that this network makes on the target-domain training images;\label{step2} \item Train a new semantic-segmentation network from scratch using the chosen UDA technique in combination with supervised training on target-domain data with pseudo-labels; \item Possibly, repeat from step~\ref{step2} by collecting better pseudo-labels after each iteration. \end{enumerate} While the general idea of self-training is simple and intuitive, collecting good pseudo-labels is quite tricky: If too many of them correspond to erroneous predictions of the current segmentation network, the performance of the whole UDA scheme can deteriorate. Thus, a measure of confidence should be used in order to only gather reliable predictions as pseudo-labels and to reject the others. \subsection{Selecting pseudo-labels with a confidence model} Following the self-training framework previously described, a confidence network $C$ is learned at step~(\ref{step2}) to predict the confidence of the UDA-trained semantic segmentation network $F$ and used to select only trustworthy pseudo-labels on target-domain images. To this end, the framework proposed in Section \ref{sec:learning_confidence} in an image classification setup, and applied to predicting erroneous image classification in Section \ref{sec:confidnet}, needs to be adapted here to the structured output of semantic segmentation. Semantic segmentation can be seen as a pixel-wise classification problem. Given a target-domain image $\x_{\tg}$, we want to predict both its soft semantic map $F(\x_{\tg};\mathbf{w})$ and, using an auxiliary model with trainable parameters $\bm{\theta}$, its confidence map $C(\x_{\tg};\bm{\theta}) = \mathsf{C}_{\xtg}^{\btheta} \in [0,1]^{H\times W}$. Given a pixel $(h,w)$, if its confidence $\mathsf{C}_{\xtg}^{\btheta}[h,w]$ is above a chosen threshold $\delta$, we label it with its predicted class $f(\x_{\tg})[h,w] = \arg\!\max_{k\in\mathcal{Y}} \mathsf{P}_{\xtg}^{\bw}[h,w,k]$, otherwise it is masked out. Computed over all images in $\mathcal{D}_{\text{t}}$, these incomplete segmentation maps constitute target pseudo-labels that are used to train a new semantic-segmentation network. Optionally, we may repeat from step~(\ref{step2}) and learn alternately a confidence model to collect pseudo-labels and a segmentation network using this self-training. \subsection{Confidence training with adversarial loss} \label{sec:adv-loss} To train the segmentation confidence network $C$, we propose to jointly optimize two objectives. Following the approach proposed in Section \ref{sec:learning_confidence}, the first one supervises the confidence prediction on annotated source-domain examples using the known true class probabilities for the predictions from $F$. Specific to semantic segmentation with UDA, the second one is an adversarial loss that aims at reducing the domain gap between source and target. A complete overview of the approach is provided in Figure \ref{fig:confidence_training}.
\smallskip\noindent\textbf{Confidence loss.} The first objective is a pixel-wise version of the confidence loss in (\ref{eq:loss-conf}). On annotated source-domain images, it requires $C$ to predict at each pixel the score assigned by $F$ to the (known) true class: \begin{equation} \label{eq:loss-conf-conda} \mathcal{L}_{\text{conf}}(\bm{\theta};\mathcal{D}_{\text{s}}) = \frac{1}{N_{\text{s}}} \sum_{n=1}^{N_{\text{s}}} \big\| \mathsf{C}_{\xson}^{\btheta} - \text{TCP}_F(\x_{\so,n},\bm{y}_{\text{s},n}) \big\|^2_{\text{F}}, \end{equation} where $\|\cdot\|_{\text{F}}$ denotes the Frobenius norm and, for an image $\bm{x}$ with true segmentation map $\bm{y}$ and predicted soft one $F(\bm{x};\hat{\mathbf{w}})$, we write \begin{equation} \text{TCP}_F(\bm{x},\bm{y})[h,w] = F(\bm{x};\hat{\mathbf{w}})\Big[h,w,\bm{y}[h,w]\Big] \end{equation} at location $(h,w)$. On a new input image, $C$ should predict at each pixel the score that $F$ will assign to the unknown true class, which will serve as a confidence measure. However, compared to the application in the previous section, we have here the additional problem of the gap between source and target domains, an issue that might affect the training of the confidence model just as it affects the training of the segmentation model. \smallskip\noindent \textbf{Adversarial loss.} The second objective concerns the domain gap. While model $C$ learns to estimate TCP on source-domain images, its confidence estimation on target-domain images may suffer dramatically from this domain shift. As classically done in UDA, we propose an adversarial learning of our auxiliary model in order to address this problem. More precisely, we want the confidence maps produced by $C$ in the source domain to resemble those obtained in the target domain. A discriminator $D:[0,1]^{H \times W} \rightarrow [0,1]$, with parameters $\bm{\psi}$, is trained concurrently with $C$ with the aim to recognize the domain (1 for source, 0 for target) of an image given its confidence map. The following loss is minimized w.r.t. $\bm{\psi}$: \begin{align} \mathcal{L}_D(\bm{\psi};\mathcal{D}_{\text{s}}\cup\mathcal{D}_{\text{t}}) = &\frac{1}{N_{\text{s}}}\sum\limits_{n=1}^{N_{\text{s}}} \mathcal{L}_\text{adv}(\x_{\so,n},1) + \nonumber \\ &\frac{1}{N_{\text{t}}}\sum\limits_{n=1}^{N_{\text{t}}} \mathcal{L}_\text{adv}(\x_{\tg,n},0), \label{eq:l_Dconf} \end{align} where $\mathcal{L}_\text{adv}$ denotes the cross-entropy loss of the discriminator based on confidence maps: \begin{equation} \mathcal{L}_\text{adv}(\bm{x},\lambda) = -\lambda\log\big(D(\mathsf{C}_{\x}^{\btheta};\bm{\psi})\big) - (1-\lambda)\log(1-D\big(\mathsf{C}_{\x}^{\btheta};\bm{\psi})\big), \end{equation} for $\lambda\in\{0,1\}$, which is a function of both $\bm{\psi}$ and $\bm{\theta}$. In alternation with the training of the discriminator using (\ref{eq:l_Dconf}), the adversarial training of the confidence net is conducted by minimizing, w.r.t. $\bm{\theta}$, the following loss: \begin{equation} \mathcal{L}_C(\bm{\theta};\mathcal{D}_{\text{s}}\cup\mathcal{D}_{\text{t}}) = \mathcal{L}_\text{conf}(\bm{\theta}; \mathcal{D}_{\text{s}}) + \frac{\lambda_\text{adv}}{N_{\text{t}}}\sum\limits_{n=1}^{N_{\text{t}}}\mathcal{L}_\text{adv}(\x_{\tg,n},1), \label{eq:l_C} \end{equation} where the second term, weighted by $\lambda_\text{adv}$, encourages $C$ to produce maps in target domain that will confuse the discriminator.
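A compact sketch of one training iteration under this scheme -- alternating the discriminator update (\ref{eq:l_Dconf}) with the confidence-network update (\ref{eq:l_C}) -- is given below; it assumes PyTorch, a discriminator returning per-image source probabilities of shape (B, 1), and all names are illustrative.
\begin{verbatim}
import torch
import torch.nn.functional as F

def adversarial_step(conf_net, disc, x_s, tcp_s, x_t,
                     opt_c, opt_d, lambda_adv):
    """conf_net returns (B, H, W) confidence maps; tcp_s is the
    (B, H, W) TCP map of the source batch."""
    c_s, c_t = conf_net(x_s), conf_net(x_t)

    # Discriminator update: recognize source (1) vs. target (0) maps.
    d_loss = (F.binary_cross_entropy(disc(c_s.detach()),
                                     torch.ones(len(x_s), 1))
              + F.binary_cross_entropy(disc(c_t.detach()),
                                       torch.zeros(len(x_t), 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Confidence-network update: TCP regression on source plus
    # an adversarial term that pushes target maps to look "source".
    c_loss = (F.mse_loss(c_s, tcp_s)
              + lambda_adv * F.binary_cross_entropy(
                    disc(c_t), torch.ones(len(x_t), 1)))
    opt_c.zero_grad(); c_loss.backward(); opt_c.step()
\end{verbatim}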
This adversarial scheme for confidence learning also acts as a regularizer during training, improving the robustness of the estimation of the unknown TCP target confidence. As the training of $C$ may actually be unstable, adversarial training provides an additional learning signal, in particular imposing that confidence estimation be invariant to domain shifts. We empirically observed that this adversarial confidence learning provides better confidence estimates and improves convergence and stability of the training scheme. \begin{table*}[t] \caption{\textbf{Comparison of confidence estimation methods for failure prediction and selective classification}. For each dataset, all methods share the same classification network. For MC Dropout, test accuracy is averaged through random sampling. The first three metrics are percentages and concern failure prediction. The last two (the lower, the better) concern selective classification and their values have been multiplied by $10^3$ for clarity. Scores are averaged over 5 runs, best results are in \textbf{bold}, second best ones are \underline{underlined}.} \label{comparative-results} \centering \begin{adjustbox}{max width=\textwidth} \begin{tabular}{cl|rrr|rr} \toprule Dataset & Model & FPR\,@\,95\%\,TPR\,$\downarrow$& AUPR \,$\uparrow$ & AUROC \,$\uparrow$& AURC \,$\downarrow$& E-AURC\,$\downarrow$ \\ \midrule \multirow{4}{*}{\shortstack[c]{\ubold{MNIST} \\ MLP}} & MCP~\cite{hendrycks17baseline} & 14.88 {\scriptsize $\pm 1.42$} & 47.25 {\scriptsize $\pm 1.67$} & 97.28 {\scriptsize $\pm 0.20$} & 0.83 {\scriptsize $\pm 0.07$} & 0.61 {\scriptsize $\pm 0.06$} \\ & MC Dropout~\cite{Gal:2016:DBA:3045390.3045502} & 15.17 {\scriptsize $\pm 1.08$} & 40.98 {\scriptsize $\pm 1.24$} & 97.10 {\scriptsize $\pm 0.18$} & 0.85 {\scriptsize $\pm 0.07$} & 0.63 {\scriptsize $\pm 0.06$} \\ & Trust Score~\cite{NIPS2018_7798} & \underline{14.80} {\scriptsize $\pm 2.03$} & \underline{52.13} {\scriptsize $\pm 1.79$} & \underline{97.36} {\scriptsize $\pm 0.10$} & \underline{0.82} {\scriptsize $\pm 0.04$} & \underline{0.59} {\scriptsize $\pm 0.03$} \\ & ConfidNet & \ubold{11.61} {\scriptsize $\pm 1.96$} & \ubold{59.72} {\scriptsize $\pm 1.90$} & \ubold{97.89} {\scriptsize $\pm 0.14$} & \ubold{0.70} {\scriptsize $\pm 0.05$} & \ubold{0.47} {\scriptsize $\pm 0.04$} \\ \midrule \multirow{4}{*}{\shortstack[c]{\ubold{MNIST} \\ SmallConvNet}} & MCP~\cite{hendrycks17baseline} & 5.53 {\scriptsize $\pm 1.25$} & 36.08 {\scriptsize $\pm 3.60$} & 98.49 {\scriptsize $\pm 0.07$} & \underline{0.15} {\scriptsize $\pm 0.01$} & \underline{0.12} {\scriptsize $\pm 0.01$} \\ & MC Dropout~\cite{Gal:2016:DBA:3045390.3045502} & \ubold{5.03} {\scriptsize $\pm 0.72$} & \underline{42.12} {\scriptsize $\pm 5.52$} & \underline{98.53} {\scriptsize $\pm 0.12$} & 0.16 {\scriptsize $\pm 0.01$} & \underline{0.12} {\scriptsize $\pm 0.01$} \\ & Trust Score~\cite{NIPS2018_7798} & 9.60 {\scriptsize $\pm 2.69$} & 33.47 {\scriptsize $\pm 3.82$} & 98.20 {\scriptsize $\pm 0.23$} & 0.18 {\scriptsize $\pm 0.03$} & 0.15 {\scriptsize $\pm 0.02$} \\ & ConfidNet & \underline{5.32} {\scriptsize $\pm 1.14$} & \ubold{45.45} {\scriptsize $\pm 3.75$} & \ubold{98.72} {\scriptsize $\pm 0.07$} & \ubold{0.13} {\scriptsize $\pm 0.02$} & \ubold{0.10} {\scriptsize $\pm 0.01$} \\ \midrule \multirow{4}{*}{\shortstack[c]{\ubold{SVHN} \\ SmallConvNet}} & MCP~\cite{hendrycks17baseline} & \underline{32.17} {\scriptsize $\pm 0.91$} & \underline{46.20} {\scriptsize $\pm 0.50$} & \underline{92.93} {\scriptsize $\pm 0.13$} & \underline{5.58}
{\scriptsize $\pm 0.14$} & \underline{4.50} {\scriptsize $\pm 0.09$} \\ & MC Dropout~\cite{Gal:2016:DBA:3045390.3045502} & 33.54 {\scriptsize $\pm 1.06$} & 45.15 {\scriptsize $\pm 1.29$} & 92.84 {\scriptsize $\pm 0.08$} & 5.70 {\scriptsize $\pm 0.11$} & 4.61 {\scriptsize $\pm 0.09$} \\ & Trust Score~\cite{NIPS2018_7798} & 34.01 {\scriptsize $\pm 1.11$} & 44.77 {\scriptsize $\pm 1.30$} & 92.65 {\scriptsize $\pm 0.29$} & 5.72 {\scriptsize $\pm 0.11$} & 4.64 {\scriptsize $\pm 0.12$} \\ & ConfidNet & \ubold{29.90} {\scriptsize $\pm 0.76$} & \ubold{48.64} {\scriptsize $\pm 1.08$} & \ubold{93.15} {\scriptsize $\pm 0.15$} & \ubold{5.51} {\scriptsize $\pm 0.09$} & \ubold{4.43} {\scriptsize $\pm 0.08$} \\ \midrule \multirow{4}{*}{\shortstack[c]{\ubold{CIFAR-10} \\ VGG16}} & MCP~\cite{hendrycks17baseline} & \underline{49.19} {\scriptsize $\pm 1.42$} & \underline{48.37} {\scriptsize $\pm 0.69$} & \underline{91.18} {\scriptsize $\pm 0.32$} & \underline{12.66} {\scriptsize $\pm 0.61$} & \underline{8.71} {\scriptsize $\pm 0.50$} \\ & MC Dropout~\cite{Gal:2016:DBA:3045390.3045502} & 49.67 {\scriptsize $\pm 2.66$} & 48.08 {\scriptsize $\pm 0.99$} & 90.70 {\scriptsize $\pm 1.96$} & 13.31 {\scriptsize $\pm 2.63$} & 9.46 {\scriptsize $\pm 2.41$} \\ & Trust Score~\cite{NIPS2018_7798} & 54.37 {\scriptsize $\pm 1.96$} & 41.80 {\scriptsize $\pm 1.97$} & 87.87 {\scriptsize $\pm 0.41$} & 17.97 {\scriptsize $\pm 0.45$} & 14.02 {\scriptsize $\pm 0.34$} \\ & ConfidNet & \ubold{45.08} {\scriptsize $\pm 1.58$} & \ubold{53.72} {\scriptsize $\pm 0.55$} & \ubold{92.05} {\scriptsize $\pm 0.34$} & \ubold{11.78} {\scriptsize $\pm 0.58$} & \ubold{7.88} {\scriptsize $\pm 0.44$} \\ \midrule \multirow{4}{*}{\shortstack[c]{\ubold{CIFAR-100} \\ VGG16}} & MCP~\cite{hendrycks17baseline} & 66.55 {\scriptsize $\pm 1.56$} & 71.30 {\scriptsize $\pm 0.41$} & 85.85 {\scriptsize $\pm 0.14$} & 113.23 {\scriptsize $\pm 2.98$} & 51.93 {\scriptsize $\pm 1.20$} \\ & MC Dropout~\cite{Gal:2016:DBA:3045390.3045502} & \underline{63.25} {\scriptsize $\pm 0.66$} & \underline{71.88} {\scriptsize $\pm 0.72$} & \underline{86.71} {\scriptsize $\pm 0.30$} & \ubold{101.41} {\scriptsize $\pm 3.45$} & \ubold{46.45} {\scriptsize $\pm 1.91$} \\ & Trust Score~\cite{NIPS2018_7798} & 71.90 {\scriptsize $\pm 0.93$} & 66.77 {\scriptsize $\pm 0.52$} & 84.41 {\scriptsize $\pm 0.15$} & 119.41 {\scriptsize $\pm 2.94$} & 58.10 {\scriptsize $\pm 1.09$} \\ & ConfidNet & \ubold{62.70} {\scriptsize $\pm 1.04$} & \ubold{73.55} {\scriptsize $\pm 0.57$} & \ubold{87.17} {\scriptsize $\pm 0.21$} & \underline{108.46} {\scriptsize $\pm 2.62$} & \underline{47.15} {\scriptsize $\pm 0.95$} \\ \bottomrule \end{tabular} \end{adjustbox} \end{table*} \subsection{Multi-scale ConfidNet architecture} \label{sec:confidnet-multi} In semantic segmentation, models consist of fully convolutional networks where hidden representations are 2D feature maps. This is in contrast with the architecture of classification models considered in Section \ref{sec:confidnet}. As a result, ConfidNet module must have a different design here: Instead of fully-connected layers, it is composed of $1\!\times\!1$ convolutional layers with the adequate number of channels. In many segmentation datasets, the existence of objects at multiple scales may complicate confidence estimation. As in recent works dealing with varying object sizes~\cite{ChenPK0Y16}, we further improve our confidence network $C$ by adding a multi-scale architecture based on spatial pyramid pooling. 
It consists of a computationally efficient scheme to re-sample a feature map at different scales, and then to aggregate the confidence maps. From a feature map, we apply parallel atrous convolutional layers with $3\!\times\!3$ kernel size and different sampling rates, each of them followed by a series of 4 standard convolutional layers with $3\!\times\!3$ kernel size. In contrast with convolutional layers with large kernels, atrous convolution layers enlarge the field of view of filters and help to incorporate a larger context without increasing the number of parameters and the computation time. The resulting features are then summed before upsampling to the original image size of $H\times W$. We apply a final sigmoid activation to output a confidence map with values between 0 and 1. The whole architecture of the confidence model $C$ is represented in the orange block of Figure~\ref{fig:confidence_training}, along with its training given a fixed segmentation model $F$ (blue block) with which it shares the encoder. As in the previous section, fine-tuning the encoder within $C$ is also possible, although we did not explore the option in this semantic segmentation context due to the excessive memory overhead it implies. \section{Experiments} \label{sec:experiments} We evaluate our approach on the two tasks presented in the previous sections: Failure prediction in classification settings and semantic segmentation with domain adaptation. \subsection{Failure prediction} \label{subsec:exp_confidnet} In this section, we present comparative experiments against state-of-the-art confidence-estimation approaches and Bayesian methods on various datasets. Then, we study the effect of learning variants on our approach. \subsubsection{Experimental setup} The experiments are conducted on image datasets of varying scale and complexity: MNIST \cite{lecun-mnisthandwrittendigit} and SVHN \cite{svhn-dataset} datasets provide small and relatively simple images of digits (10 classes). CIFAR-10 and CIFAR-100 \cite{Krizhevsky09} propose more complex object-recognition tasks on low-resolution images. \begin{figure*}[t] \centering \includegraphics[width=0.7\linewidth]{images/fig4.png} \caption{\textbf{Limitations of MC Dropout's confidence measure.} Two test samples from the SVHN dataset, which are respectively misclassified (left) and correctly classified (right) by a given model $F$, illustrate these limits. The entropies of the predicted class distributions (averaged over Monte Carlo dropout passes and displayed as histograms) are equally high, at around 0.79, resulting in equally low MC Dropout confidence estimates. In contrast, both MCP and TCP approximated by ConfidNet clearly differ as expected for the two examples.
Yet, ConfidNet has the best behavior, being the lowest for the model's erroneous prediction and the highest for the correct one.} \label{visu-entropy} \end{figure*} \begin{table*}[t] \caption{\textbf{Impact of the choice of training data on the error-prediction performance of ConfidNet.} Comparison in AUPR between training on the model's train set and on a validation set.} \centering \label{validationset} \begin{adjustbox}{max width=\textwidth} \begin{tabular}{lcccccc} \toprule Variant & \ubold{MNIST} & \ubold{MNIST} & \ubold{SVHN} & \ubold{CIFAR-10} & \ubold{CIFAR-100} \\ & MLP & SmallConvNet & SmallConvNet & VGG-16 & VGG-16 \\ \midrule ConfidNet-\emph{train} & 59.72\% {\scriptsize $\pm 1.90$} & 45.45\% {\scriptsize $\pm 3.75$} & 48.64\% {\scriptsize $\pm 1.08$} & 53.72\% {\scriptsize $\pm 0.55$} & 73.55\% {\scriptsize $\pm 0.57$} \\ ConfidNet-\emph{val} & 38.22\% {\scriptsize $\pm 2.26$} & 31.90\% {\scriptsize $\pm 2.42$} & 43.15\% {\scriptsize $\pm 1.69$} & 53.01\% {\scriptsize $\pm 1.06$} & 73.82\% {\scriptsize $\pm 0.76$} \\ \bottomrule \end{tabular} \end{adjustbox} \end{table*} The classification models range from small convolutional networks for MNIST and SVHN to the larger VGG-16 architecture for the CIFAR datasets. We also consider a multi-layer perceptron (MLP) with one hidden layer to investigate performance on small models. ConfidNet is attached to the penultimate layer of the convolutional neural network. Further details about datasets, architectures, training and metrics can be found in Appendix B. We measure the quality of failure prediction following standard metrics used in the literature~\cite{hendrycks17baseline}: AUROC, the area under the receiver operating characteristic; FPR\,@\,95\%\,TPR, the false-positive rate measured when the true-positive rate is 95\%; and AUPR, the area under the precision-recall curve, using the model's incorrect predictions as positive detection samples (see details in Appendix B.2). Among these metrics, AUPR is the most directly related to the failure detection task, and is thus the prevalent one in our assessment. As an additional, indirect way to assess the quality of the predicted classifier's confidence, we also consider the selective classification problem that was discussed in Section \ref{sec:problem_formulation}. In this setup, the predictions by the classifier $F$ that get a predicted confidence below a defined threshold are rejected. Given a coverage rate (the fraction of examples that are not rejected), the performance of the classifier should improve. The impact of this selection, and hence of the underlying confidence-rate function, is measured on average with the area under the risk-coverage curve (AURC) and its normalized variant \emph{Excess}-AURC (E-AURC) \cite{geifman2018biasreduced}. \subsubsection{Comparative results} Along with our approach, we implemented competitive confidence and uncertainty estimation methods including MCP \cite{hendrycks17baseline}, Trust Score \cite{NIPS2018_7798}, and Monte-Carlo Dropout (MC Dropout) \cite{Gal:2016:DBA:3045390.3045502}. Comparative results are summarized in Table~\ref{comparative-results}. We observe that our approach outperforms the other methods in every setting, with a significant gap on small models/datasets. This confirms that TCP is an adequate confidence criterion for failure prediction and that our approach ConfidNet is able to learn it. Trust Score also delivers good results on small datasets/models such as MNIST.
While ConfidNet still performs well on more complex datasets, Trust Score's performance drops, which might be explained by high-dimensionality issues with distances. Regarding selective classification results (AURC and E-AURC), we also provide risk-coverage curves in Appendix B.8. We also improve over the state-of-the-art performance of MC Dropout. While MC Dropout leverages ensembling based on dropout layers, taking as confidence measure the entropy on the average softmax distribution may not always be adequate. In Figure~\ref{visu-entropy}, we show side-by-side samples with similar distribution entropy. The left image is misclassified while the right one enjoys a correct prediction. In fact, the entropy is a permutation-invariant measure on discrete probability distributions: A correct 3-class prediction with score vector $[0.65, 0.34, 0.01]$ has the same entropy-based confidence as an incorrect one with probabilities $[0.34, 0.65, 0.01]$. In contrast, our approach can discriminate between an incorrect and a correct prediction, despite both having similarly-spread distributions. Note that, while fine-tuning ConfidNet doubles the overall computational complexity by using an auxiliary network with its own encoder, our approach is still better at failure prediction than a two-model ensemble (see Appendix B.7). \subsubsection{Effect of learning variants} \label{subsec:learning_variants} We analyse in Table~\ref{cloning-analysis} the effect of the encoder fine-tuning that is described in Section~\ref{subsec:confidnet-learning}. Learning only ConfidNet on top of the pre-trained encoder $E$ (that is, $\bm{\theta} = \bm{\varphi}$), our confidence network already achieves significant improvements w.r.t. the baselines. With a subsequent fine-tuning of both modules (that is, $\bm{\theta} = (\mathbf{w}_{E'},\bm{\varphi})$), its performance is further boosted in every setting, by around 1-2\%. Note that using a vanilla fine-tuning without the deactivation of the dropout layers did not bring any improvement. \begin{table}[ht] \centering \caption{\textbf{Impact of the encoder fine-tuning on the error-prediction performance of ConfidNet}. Comparison in AUPR on two benchmarks with different backbones.} \label{cloning-analysis} \begin{adjustbox}{max width=\textwidth} \begin{tabular}{lcc} \toprule & \ubold{MNIST} & \ubold{CIFAR-100} \\ & SmallConvNet & VGG-16\\ \midrule Confidence training & 44.54\% & 71.30\% \\ ~~+ Encoder fine-tuning & 45.45\% & 73.55\%\\ \bottomrule \end{tabular} \end{adjustbox} \end{table} \begin{table*}[t] \centering \caption{\textbf{Comparative performance on semantic segmentation with synth-to-real unsupervised domain adaptation.} Results in per-class IoU and class-averaged mIoU on GTA5\,$\triangleright$\,Cityscapes.
All methods are based on a DeepLabv2 backbone.} \resizebox{\textwidth}{!}{% \begin{tabular}{l | c | c c c c c c c c c c c c c c c c c c c|c} \toprule \multicolumn{22}{c}{GTA5\,$\triangleright$\,Cityscapes}\\ \toprule Method & \rotatebox{90}{Self-Train.} & \rotatebox{90}{road} & \rotatebox{90}{sidewalk} & \rotatebox{90}{building} & \rotatebox{90}{wall} & \rotatebox{90}{fence} & \rotatebox{90}{pole} & \rotatebox{90}{light} & \rotatebox{90}{sign} & \rotatebox{90}{veg} & \rotatebox{90}{terrain} & \rotatebox{90}{sky} & \rotatebox{90}{person} & \rotatebox{90}{rider} & \rotatebox{90}{car} & \rotatebox{90}{truck} & \rotatebox{90}{bus} & \rotatebox{90}{train} & \rotatebox{90}{mbike} & \rotatebox{90}{bike} & mIoU \\ \midrule AdaptSegNet~\cite{Tsai_adaptseg_2018} & & 86.5 & 25.9 & 79.8 & 22.1 & 20.0 & 23.6 & 33.1 & 21.8 & 81.8 & 25.9 & 75.9 & 57.3 & 26.2 & 76.3 & 29.8 & 32.1 & 7.2 & \textbf{29.5} & 32.5 & 41.4 \\ CyCADA~\cite{Hoffman_cycada2017} & & 86.7 & 35.6 & 80.1 & 19.8 & 17.5 & \textbf{38.0} & \textbf{39.9} & \textbf{41.5} & 82.7 & 27.9 & 73.6 & \textbf{64.9} & 19.0 & 65.0 & 12.0 & 28.6 & 4.5 & 31.1 & 42.0 & 42.7 \\ DISE~\cite{chang2019all} & & 91.5 & 47.5 & 82.5 & 31.3 & 25.6 & 33.0 & 33.7 & 25.8 & 82.7 & 28.8 & 82.7 & 62.4 & 30.8 & 85.2 & 27.7 & 34.5 & 6.4 & 25.2 & 24.4 & 45.4 \\ AdvEnt~\cite{vu2018advent} & & 89.4 & 33.1 & 81.0 & 26.6 & 26.8 & 27.2 & 33.5 & 24.7 & 83.9 & 36.7 & 78.8 & 58.7 & 30.5 & 84.8 & 38.5 & 44.5 & 1.7 & 31.6 & 32.4 & 45.5 \\ \midrule CBST~\cite{zou2018unsupervised} & \checkmark & 91.8 & 53.5 & 80.5 & 32.7 & 21.0 & 34.0 & 28.9 & 20.4 & 83.9 & 34.2 & 80.9 & 53.1 & 24.0 & 82.7 & 30.3 & 35.9 & 16.0 & 25.9 & \textbf{42.8} & 45.9 \\ MRKLD~\cite{Zou_2019_ICCV} & \checkmark & 91.0 & 55.4 & 80.0 & 33.7 & 21.4 & 37.3 & 32.9 & 24.5 & 85.0 & 34.1 & 80.8 & 57.7 & 24.6 & 84.1 & 27.8 & 30.1 & \textbf{26.9} & 26.0 & 42.3 & 47.1 \\ BDL~\cite{Li_2019_CVPR} & \checkmark & 91.0 & 44.7 & 84.2 & 34.6 & \textbf{27.5} & 30.2 & 36.0 & 36.0 & 85.0 & \textbf{43.6} & 83.0 & 58.6 & \textbf{31.6} & 83.3 & 35.3 & 49.7 & 3.3 & 28.8 & 35.6 & 48.5 \\ ESL~\cite{saporta2020esl} & \checkmark & 90.2 & 43.9 & 84.7 & 35.9 & 28.5 & 31.2 & 37.9 & 34.0 & 84.5 & 42.2 & 83.9 & 59.0 & 32.2 & 81.8 & 36.7 & 49.4 & 1.8 & 30.6 & 34.1 & 48.6 \\ \rowcolor{Gray} ConDA & \checkmark & \textbf{93.5} & \textbf{56.9} & \textbf{85.3} & \textbf{38.6} & 26.1 & 34.3 & 36.9 & 29.9 & \textbf{85.3} & 40.6 & \textbf{88.3} & 58.1 & 30.3 & \textbf{85.8} & \textbf{39.8} & \textbf{51.0} & 0.0 & 28.9 & 37.8 & \textbf{49.9} \\ \bottomrule \end{tabular} } \label{tab:conda-gta2cityscapes} \end{table*}

Given the small number of erroneous-prediction samples that are available due to deep neural network over-fitting, we also experimented with confidence training on a hold-out dataset. We report the results on all datasets in Table~\ref{validationset} for validation sets with 10\% of samples. We observe a general performance drop when using a validation set for training TCP confidence. The drop is especially pronounced for small datasets (MNIST), where models reach more than 97\% train and validation accuracy. Consequently, with such a high accuracy and a small validation set, the absolute number of errors available for confidence training is even smaller on a hold-out set than on the train set. One solution would be to increase the validation-set size, but this would damage the model's prediction performance.
By contrast, we took care in our experiments to base our confidence estimation on models whose test predictive performance is similar to that of the baselines. On CIFAR-100, the gap between train accuracy and validation accuracy is substantial (95.56\% vs. 65.96\%), which may explain the slight improvement for confidence estimation using a validation set (+0.17\%). We think that training ConfidNet on a validation set with models reporting low/medium test accuracies could improve the approach.

\begin{table}[ht] \caption{\textbf{Effect of the loss on the error-detection performance of ConfidNet.} Comparison in AUPR between the proposed MSE loss and three alternatives.} \centering \label{loss-analysis2} \begin{tabular}{crrrr} \toprule Dataset & MSE & BCE & Focal & Ranking \\ \midrule \ubold{SVHN} & \ubold{50.72\%} & 50.00\% & 49.96\% & 48.11\% \\ \ubold{CIFAR-10} & \ubold{49.94\%} & 47.95\% & 47.76\% & 44.04\% \\ \bottomrule \end{tabular} \end{table}

In Table~\ref{loss-analysis2}, we compare training ConfidNet with the MSE loss (\ref{eq:loss-conf}) to training with a binary-classification cross-entropy loss (BCE), a focal BCE loss and a batch-wise approximate ranking loss. Even though BCE specifically addresses the failure-prediction task, it achieves lower performance on the CIFAR-10 and SVHN datasets. Similarly, the focal and ranking losses yield results below TCP's performance in every tested benchmark. Our intuition is that TCP regularizes the training by providing finer-grained information about the quality of the classifier's predictions. This is especially important in the difficult learning configuration where only very few error samples are available due to the good performance of the classifier.

\subsection{Unsupervised domain adaptation in semantic segmentation} \label{subsec:exp_conda}

\begin{figure*}[t] \centering \includegraphics[width=\linewidth]{images/fig5.png} \caption{\textbf{Qualitative results of pseudo-label selection for semantic-segmentation adaptation.} The first three columns present target-domain images of the GTA5\,$\triangleright$\,Cityscapes benchmark (a) along with their ground-truth segmentation maps (b) and the predicted maps before self-training (c). We compare pseudo-labels collected with MCP (d) and with ConDA (e). Green (resp. red) pixels are correct (resp. erroneous) predictions selected by the method and black pixels are discarded predictions. ConDA retains fewer errors while preserving approximately the same amount of correct predictions.} \label{fig:qualitative_results} \end{figure*}

In this section, we analyse the performance of ConDA, our approach to domain adaptation with confidence-based self-training, on several semantic segmentation benchmarks. We report comparisons with state-of-the-art methods on each benchmark. We also further analyse the quality of ConDA's pseudo-labelling and demonstrate via an ablation study the importance of each of its components.

\subsubsection{Experimental setup}

As in many UDA works for semantic segmentation, we consider the specific task of adapting from synthetic to real data in urban scenes. We present in particular experiments in the common set-up, denoted GTA5\,$\triangleright$\,Cityscapes, where GTA5~\cite{richter-eecv2016} is the synthetic source dataset while the real-world target dataset is Cityscapes~\cite{cordts-cvpr2016}.
We also validate our approach on two other benchmarks -- SYNTHIA\,$\triangleright$\,Cityscapes and SYNTHIA\,$\triangleright$\,Mapillary Vistas~\cite{neuhold-iccv2017} -- in Appendix C.3. The GTA5~\cite{richter-eecv2016} dataset is composed of 24,966 images extracted from the eponymous game, of dimension $1914 \times 1052$ and semantically annotated with 19 classes in common with Cityscapes~\cite{cordts-cvpr2016}. Cityscapes~\cite{cordts-cvpr2016} is a dataset of real street-level images. For domain adaptation, we use its training set, composed of 2,975 images of dimension $2048 \times 1024$, as the target dataset during training. All results are reported in terms of intersection over union (IoU) per class or mean IoU over all classes (mIoU); the higher this percentage, the better.

We evaluate the proposed self-training method on AdvEnt~\cite{vu2018advent}, a state-of-the-art UDA approach. AdvEnt~\cite{vu2018advent} proposes an adversarial learning framework for domain adaptation: instead of the softmax output predictions, AdvEnt aligns the entropy of the pixel-wise predictions. All the implementations are done with the PyTorch framework~\cite{NEURIPS2019_bdbca288}. The semantic segmentation models are initialized with DeepLabv2 backbones pretrained on ImageNet~\cite{krizhevsky2012imagenet}. Due to computational constraints, we only train the multi-scale ConfidNet without encoder fine-tuning. Further information about architectures and implementation details of training and metrics can be found in Appendix C.1.

\subsubsection{Comparison with state of the art}

The results of semantic segmentation on the Cityscapes validation set using GTA5 as source domain are available in Table~\ref{tab:conda-gta2cityscapes}. All the methods rely on DeepLabv2 as their segmentation backbone. We first notice that the self-training-based methods from the literature are superior on this benchmark, with performance reaching up to $48.6\%$ mIoU with ESL~\cite{saporta2020esl}. ConDA outperforms all those methods by reaching $49.9\%$ mIoU.

\subsubsection{Analysis}

\noindent \textbf{Ablation Study.} To study the effect of the adversarial training and of the multi-scale confidence architecture on the confidence model, we perform an ablation study on the GTA5\,$\triangleright$\,Cityscapes benchmark. The results on domain adaptation after re-training the segmentation network using the collected pseudo-labels are reported in Table~\ref{tab:ablationstudy}. In this table, ``ConfidNet'' refers to the simple network architecture defined in Section~\ref{sec:confidnet} (adapted to segmentation by replacing the fully connected layers by $1\!\times\! 1$ convolutions of suitable width); ``Adv. ConfidNet'' denotes the same architecture but with the adversarial loss from Section \ref{sec:adv-loss} added to its learning scheme; ``Multi-scale ConfidNet'' stands for the architecture introduced in Section \ref{sec:confidnet-multi}; finally, the full method, ``ConDA'', amounts to having both this architecture and the adversarial loss. We notice that adding the adversarial learning achieves significantly better performance, for both ConfidNet and Multi-scale ConfidNet, with $+1.4$ and $+0.8$ point increases respectively. Multi-scale ConfidNet (resp. Adv. Multi-scale ConfidNet) also improves performance by up to $+0.9$ point (resp. $+0.3$) over its ConfidNet counterpart. These results stress the importance of both components of the proposed confidence model.
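For illustration, the sketch below shows one way such a multi-scale confidence head can be written in PyTorch: a small stack of $1\!\times\!1$ convolutions per scale applied to encoder features, with the per-scale maps upsampled and fused into a single confidence map. The channel widths, the fusion by averaging and the final sigmoid are assumptions of this sketch, not the exact architecture used in our experiments.

\begin{verbatim}
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiScaleConfidNet(nn.Module):
    # Illustrative multi-scale confidence head for segmentation:
    # one small 1x1-convolution stack per scale, upsampled and
    # averaged into a single confidence map in [0, 1].
    def __init__(self, in_channels=(256, 512, 1024), hidden=128):
        super().__init__()
        self.heads = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(c, hidden, kernel_size=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(hidden, 1, kernel_size=1),
            )
            for c in in_channels
        )

    def forward(self, features, out_size):
        # features: one tensor of shape (B, C_i, H_i, W_i) per scale
        maps = [
            F.interpolate(head(f), size=out_size,
                          mode="bilinear", align_corners=False)
            for head, f in zip(self.heads, features)
        ]
        # Fuse the scales and squash to [0, 1], like a TCP target.
        return torch.sigmoid(torch.stack(maps, dim=0).mean(dim=0))
\end{verbatim}

In this sketch each scale contributes equally to the fused map; the fusion scheme itself is a design choice.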
\begin{table}[ht] \centering \resizebox{0.95\linewidth}{!}{% \begin{tabular}{l|cc|c} \toprule Model & Multi-scale & Adv. & mIoU\\ \midrule ConfidNet & & & 47.6 \\ Multi-scale ConfidNet & \checkmark & & 48.5\\ Adv. ConfidNet & & \checkmark & 49.0 \\ ConDA (Adv. Multi-scale ConfidNet) & \checkmark & \checkmark & \ubold{49.9} \\ \bottomrule \end{tabular} } \caption{\textbf{Ablation study on semantic segmentation with pseudo-labelling-based adaptation.} The full-fledged ConDA approach is compared on GTA5\,$\triangleright$\,Cityscapes to stripped-down variants (with/without multi-scale architecture in ConfidNet, with/without adversarial learning).} \label{tab:ablationstudy} \end{table}

\smallskip\noindent \textbf{Quality of pseudo-labels.} Here we analyse the effectiveness of MCP and ConDA as confidence measures to select relevant pseudo-labels in the target domain. For a given fraction of retained pseudo-labels (coverage) on target-domain training images, we compare in Figure~\ref{fig:conda_analysis} the proportion of those labels that are correct (precision). ConDA outperforms MCP at all coverage levels, meaning it selects significantly fewer erroneous predictions for the next round of segmentation-model training. Along with the segmentation adaptation improvements presented earlier, these coverage results demonstrate that reducing the amount of noise in the pseudo-labels is key to learning a better segmentation adaptation model. Figure~\ref{fig:qualitative_results} presents qualitative results of these pseudo-labelling methods. We find again that MCP and ConDA seem to select around the same amount of correct predictions in their pseudo-labels, but with ConDA picking out far fewer erroneous ones.

\begin{figure}[ht] \centering \includegraphics[width=0.9\linewidth,trim={0 0cm 0 0},clip]{images/fig6.png} \vspace{-0.3cm}\caption{\textbf{Comparative quality of selected pseudo-labels}. Proportion of correct pseudo-labels (precision) for different coverages on GTA5\,$\triangleright$\,Cityscapes, for MCP and ConDA.} \label{fig:conda_analysis} \vspace{-0.5cm} \end{figure}

\section{Conclusion} \label{sec:conclusion}

In this paper, we defined a new confidence criterion, TCP, which enjoys simple guarantees and empirical evidence of improving the confidence estimation of classifiers with a reject option. We proposed a specific method to learn this criterion with an auxiliary neural network built upon the encoder of the model that is monitored. Applied to failure prediction, this learning scheme consists in training the auxiliary network and then fine-tuning its encoder (the encoder of the monitored classifier remains frozen). In each image classification experiment, we were able to improve the capacity of the model to distinguish correct from erroneous samples and to achieve better selective classification. Besides failure prediction, other applications can benefit from this improved confidence estimation. In particular, we showed that, applied to self-training with pseudo-labels, our approach reaches state-of-the-art results on three synthetic-to-real unsupervised-domain-adaptation benchmarks (GTA5\,$\triangleright$ Cityscapes, SYNTHIA\,$\triangleright$ Cityscapes and SYNTHIA\,$\triangleright$ Mapillary Vistas). To achieve these results, we equipped the auxiliary model with a multi-scale confidence architecture and supplemented the confidence loss with an adversarial training scheme to enforce alignment between confidence maps in the source and target domains.
One limitation of this approach is the small number of errors available during training. Further work includes exploring methods to artificially generate errors, such as aggressive data augmentation. \ifCLASSOPTIONcaptionsoff \newpage \fi \bibliographystyle{IEEEtran}
\section{Introduction}
Dynamical mass generation (DMG) \cite{Brauner:2005hw,Maris:1999nt,Fischer:2003rp,Aguilar:2005sb,Bowman:2005vx,Aguilar:2010cn,Cloet:2013jya,Mitter:2014wpa,Binosi:2016wcx,Ayala:2006sv,Libanov:2005vu,Benes:2008ir} is, by definition, a non-perturbative phenomenon. Historically, it has mostly been relevant to QCD physics \cite{Cornwall:1981zr}, particularly to the lightest family of quarks, due to the limited capacity of QCD interactions to generate large dynamical masses. In new physics scenarios, the phenomenon is equally relevant for masses considerably lighter than the electroweak scale. As the existence of a scalar sector in nature has become a real possibility after the experimental discovery of the Higgs boson \cite{pdg,Carena:2002es,Aad:2012tfa,Chatrchyan:2012xdj,MalbertionbehalfoftheCMSCollaboration:2018eqs}, the importance of ascertaining the extent of the dynamical masses of new scalar fields due to different interactions \cite{Bezrukov:2021mio,Bezrukov:2013fka,Mukhanov:2005sc,Lee:2017qve,Bento:2000ah,Bertolami:2016ywc,Munoz:2017ezd,Martin:1997ns,Lee:2017qve} cannot be overstated.
\par The Yukawa interaction vertex is among the phenomenologically interesting vertices \cite{Schwartz:2013pla,Kaku:1993ym,Peskin:1995ev}. A model containing this vertex and the standard model (SM) Higgs field serves as an important avenue for exploring the scalar sector. Two Higgs doublet models (2HDM) \cite{Gunion:2002zf,Cabrera:2020lmg,Altmannshofer:2020shb} are among the most widely studied models, for various reasons. The model has its own strengths in the scalar sector, i.e. it presents two important scenarios once the SM Higgs \cite{Aad:2012tfa,Chatrchyan:2012xdj} is introduced into the theory. First, the model presents a mock-up scenario of fermions interacting with the SM Higgs. Second, the model presents an opportunity to investigate how the same kind of particles from two different families interact with each other. A highly interesting aspect of both scenarios is that the physics of dark matter in the scalar sector, arising from possible Higgs--ultralight-scalar ($m_{s}=O(10^{-22})$ eV) interactions \cite{Rindler-Daller:2013zxa,Harko:2014vya,Chavanis:2011zi,Huang:2013oqa,Marsh:2015xka,Hui:2016ltb,Primack:2009jr,Hu:2000ke}, can also be studied in the model. This immediately justifies investigating dynamical mass generation in the model.
\par This paper is an extension of the Wick--Cutkosky (WC) model \cite{Darewych:1998mb,Sauli:2002qa,Efimov:2003hs,Nugaev:2016uqd,Darewych:2009wk} to incorporate two complex doublet fields, termed the SM Higgs and the second Higgs fields for convenience, which interact with each other only through a real singlet scalar field. The paper is a continuation of the study of a two Higgs doublet model \cite{Mufti:2021lit}. One of the motives for the study is to understand the extent of dynamical mass generation for a scalar singlet field in different regions of the parameter space of the model. The scalar singlet acts as a mediator field between the two different, and otherwise non-interacting, complex doublet fields under the Yukawa interactions. The renormalized \footnote{The terms renormalized mass and physical mass are used interchangeably throughout the paper for the SM Higgs. The term scalar field is reserved only for the singlet scalar.} masses of the two complex doublet scalars are kept at their physical masses in order to study the phenomenon of DMG from the perspective of phenomenology.
The physical mass of the SM Higgs is kept at its experimentally known value \cite{pdg}, while the renormalized mass of the second Higgs is chosen according to the different scenarios mentioned above. A certain advantage of fixing the masses is the reduction of the parameter space to be explored: it effectively leaves the model with only a three-dimensional parameter space in the Lagrangian. This greatly facilitates training an algorithm \cite{Alpaydin14}, which is not a novel approach in quantum field theory \cite{Bachtis:2021xoh,Halverson:2020trp,Akutagawa:2020yeo}, on the sample of calculated dynamical masses in different regions of the parameter space; the details are given in the next section.
\par The approach of Dyson-Schwinger equations (DSEs) \cite{Schwinger:1951ex,Schwinger:1951hq,Swanson:2010pw,Roberts:1994dr,Rivers:1987hi} is used for the study. The DSEs for the three field propagators are considered, while the interaction vertices are fixed at their tree-level form up to certain renormalization-dependent terms; the details are given in the next section.
\par There exist other renormalizable vertices with which the model could be further extended \cite{Schwartz:2013pla,Kaku:1993ym,Peskin:1995ev}. However, such extensions may not be suitable for investigations using the approach of DSEs due to possible effects of further truncations and ans\"atze \cite{Roberts:1994dr} in the model. An extended model, for which another non-perturbative method \cite{Ruthe:2008rut} is employed, is to be reported elsewhere \cite{Mufti:2022abc}.
\section{Technical Details}
The Euclidean version of the Lagrangian \footnote{The technical details, a significant part of which can also be found elsewhere, are generously included to keep the section self-contained.} with the counter terms is given by
\begin{equation} \label{Lagrangian:eq} \begin{split} L = \frac{1}{2}(1+A) \partial_{\mu} \phi(x) \partial^{\mu} \phi(x) + \frac{1}{2} (m_{s}^{2}+B) \phi^{2}(x) + (1+\alpha) \partial_{\mu} h^{\dagger}(x) \partial^{\mu} h(x) \\ + (m_{h}^{2}+\beta) h^{\dagger}(x) h(x) + (1+a) \partial_{\mu} H^{\dagger}(x) \partial^{\mu} H(x) + (m_{H}^{2}+b) H^{\dagger}(x) H(x) \\ + (\lambda_{1}+C_{1}) \phi(x) h^{\dagger}(x) h(x) + (\lambda_{2}+C_{2}) \phi(x) H^{\dagger}(x) H(x) \end{split} \end{equation}
where $A$, $B$, $\alpha$, $\beta$, $a$, $b$, $C_{1}$, and $C_{2}$ are coefficients due to the counter terms in the Lagrangian \footnote{The bare scalar singlet mass ($m_{s}$) is kept in equation \ref{Lagrangian:eq} for the sake of clarity.}. The real singlet scalar field is represented by $\phi(x)$, $h(x)$ is designated for the SM Higgs boson, while $H(x)$ represents the second Higgs boson.
The resulting DSEs for the field propagators are given below:
\begin{equation} \label{hdse:eq} \begin{split} D_{h}^{-1}(p)=(1+\alpha) p^{2} + m^{2}_{h} (1+\alpha) + 2 (1+A) (1+\alpha) (1+a) \sigma_{h} + \\ (\lambda_{1}+C_{1}) \int_{-\Lambda}^{\Lambda} \frac{d^{4}q}{(2\pi)^{4}} D_{s}(q) \Gamma_{1}(-p,q) D_{h}(q-p) \end{split} \end{equation}
\begin{equation} \label{Hdse:eq} \begin{split} D_{H}^{-1}(p)=(1+a) p^{2} + m^{2}_{H} (1+a) + 2 (1+A) (1+\alpha) (1+a) \sigma_{H} + \\ (\lambda_{2}+C_{2}) \int_{-\Lambda}^{\Lambda} \frac{d^{4}q}{(2\pi)^{4}} D_{s}(q) \Gamma_{2}(-p,q) D_{H}(q-p) \end{split} \end{equation}
\begin{equation} \label{sdse:eq} \begin{split} D_{s}^{-1}(p)=(1+A) p^{2} + m^{2}_{s} (1+A) + 2 (1+A) (1+\alpha) (1+a) \sigma_{s} + \\ (\lambda_{1}+C_{1}) \int_{-\Lambda}^{\Lambda} \frac{d^{4}q}{(2\pi)^{4}} D_{h}(q) \Gamma_{1}(q,-p) D_{h}(q-p)+ \\ (\lambda_{2}+C_{2}) \int_{-\Lambda}^{\Lambda} \frac{d^{4}q}{(2\pi)^{4}} D_{H}(q) \Gamma_{2}(q,-p) D_{H}(q-p) \end{split} \end{equation}
where the following definitions are used:
\begin{subequations} \label{mterms:eq} \begin{align} \beta = \alpha m^{2}_{h} + 2(1+A) (1+\alpha) (1+a) \sigma_{h} \\ b = a m^{2}_{H} + 2(1+A) (1+\alpha) (1+a) \sigma_{H} \\ B = A m^{2}_{s} + 2(1+A) (1+\alpha) (1+a) \sigma_{s} \end{align} \end{subequations}
Here $\sigma_{h}$, $\sigma_{H}$, and $\sigma_{s}$ are terms to be determined during a computation. Due to their nature, the above definitions do not impose any constraints on the equations. The definitions of the two vertices used during computations are given below:
\begin{subequations} \label{vers:eq} \begin{align} \Gamma_{1}(u,v)=(1+A) (1+\alpha) (1+a) \tilde{\Gamma}_{1}(u,v) \\ \Gamma_{2}(u,v)=(1+A) (1+\alpha) (1+a) \tilde{\Gamma}_{2}(u,v) \end{align} \end{subequations}
Hence, the DSEs for the three field propagators become
\begin{equation} \label{hfdse:eq} \begin{split} D^{-1}_{h}(p)=(1+\alpha) [\ p^{2} + \frac{m^{2}_{h,r}}{(1+\alpha)} + (\lambda_{1}+C_{1})(1+A)(1+a) \\ \int_{-\Lambda}^{\Lambda} \frac{d^{4}q}{(2\pi)^{4}} D_{s}(q) \tilde{\Gamma}_{1}(-p,q) D_{h}(q-p) ]\ \end{split} \end{equation}
\begin{equation} \label{Hfdse:eq} \begin{split} D^{-1}_{H}(p)=(1+a) [\ p^{2} + \frac{m^{2}_{H,r}}{1+a} + (\lambda_{2}+C_{2})(1+A)(1+\alpha) \\ \int_{-\Lambda}^{\Lambda} \frac{d^{4}q}{(2\pi)^{4}} D_{s}(q) \tilde{\Gamma}_{2}(-p,q) D_{H}(q-p) ]\ \end{split} \end{equation}
\begin{equation} \label{sfdse:eq} \begin{split} D^{-1}_{s}(p)=(1+A) [\ p^{2} + 2 (1+a) (1+\alpha) \sigma_{s} + (\lambda_{1}+C_{1})(1+a)(1+\alpha) \\ \int_{-\Lambda}^{\Lambda} \frac{d^{4}q}{(2\pi)^{4}} D_{h}(q) \tilde{\Gamma}_{1}(q,-p) D_{h}(q-p) + (\lambda_{2}+C_{2})(1+a)(1+\alpha) \\ \int_{-\Lambda}^{\Lambda} \frac{d^{4}q}{(2\pi)^{4}} D_{H}(q) \tilde{\Gamma}_{2}(q,-p) D_{H}(q-p) ]\ \end{split} \end{equation}
where in equation \ref{hfdse:eq} the renormalized mass of the SM Higgs ($ m_{h,r} $) is fixed at 125.09 GeV during the entire study, while the renormalized mass of the second Higgs boson is fixed during each computation \footnote{The definition of the squared physical mass of the SM Higgs is $m^{2}_{h,r}=m^{2}_{h}+\beta$, and the definition of the squared renormalized mass of the second Higgs is $m^{2}_{H,r}=m^{2}_{H}+b$.}. Equations \ref{hfdse:eq}-\ref{sfdse:eq} are the three DSEs considered for the study. The bare mass of the scalar field is set to zero in equation \ref{sfdse:eq} in order to investigate the dynamical masses of the singlet scalar field.
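\par To make the structure of these loop integrals concrete, the sketch below evaluates a generic one-loop self-energy of the type appearing in equations \ref{hfdse:eq}-\ref{sfdse:eq} in four-dimensional hyperspherical coordinates, with the vertex absorbed into a constant coupling. This is only an illustrative Python rendering under these assumptions; the actual computations are performed with Gaussian quadrature in a C++ environment, as described later in this section.
\begin{verbatim}
import numpy as np

def self_energy(p, D_s, D_h, coupling, cutoff, n_rad=64, n_ang=32):
    # Illustrative evaluation of a one-loop self-energy
    #   coupling * int d^4q/(2 pi)^4  D_s(q^2) D_h((q-p)^2)
    # in 4D hyperspherical coordinates; the two trivial angles give
    # a factor 4*pi and the polar angle carries sin^2(theta).
    xq, wq = np.polynomial.legendre.leggauss(n_rad)
    q = 0.5 * cutoff * (xq + 1.0)        # radial nodes on [0, cutoff]
    wq = 0.5 * cutoff * wq
    xt, wt = np.polynomial.legendre.leggauss(n_ang)
    t = 0.5 * np.pi * (xt + 1.0)         # angular nodes on [0, pi]
    wt = 0.5 * np.pi * wt

    total = 0.0
    for qi, wqi in zip(q, wq):
        for ti, wti in zip(t, wt):
            qmp2 = qi**2 + p**2 - 2.0 * qi * p * np.cos(ti)  # (q-p)^2
            total += (wqi * wti * qi**3 * np.sin(ti)**2
                      * D_s(qi**2) * D_h(qmp2))
    return coupling * 4.0 * np.pi * total / (2.0 * np.pi)**4

# Example with tree-level propagators (masses in GeV):
# sigma = self_energy(1.0, lambda q2: 1.0 / q2,
#                     lambda q2: 1.0 / (q2 + 125.09**2),
#                     coupling=1.0, cutoff=1.0e4)
\end{verbatim}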
Lastly, the quantities $\tilde{\Gamma}_{1}(u,v)$ and $\tilde{\Gamma}_{2}(u,v)$ are fixed at $\lambda_{1}$ and $\lambda_{2}$, respectively \cite{Roberts:1994dr}. However, in the current investigation the vertices can still change depending upon the contributions from the coefficients in the counter terms; see equations \ref{vers:eq}.
\par For each of the propagators, the following renormalization conditions are used.
\begin{equation} \label{hcond:eq} D_{h}^{ij}(p) |_{p^{2}=m^{2}_{h,r}} = \frac{\delta ^{ij}}{p^{2}+m^{2}_{h,r}} |_{p^{2}=m^{2}_{h,r}} \end{equation}
\begin{equation} \label{Hcond:eq} D_{H}^{ij}(p) |_{p^{2}=m^{2}_{H,r}} = \frac{\delta ^{ij}}{p^{2}+m^{2}_{H,r}} |_{p^{2}=m^{2}_{H,r}} \end{equation}
\begin{equation} \label{scond:eq} D_{s}(p) |_{p=1} = \frac{1}{p^{2}} |_{p=1} \end{equation}
The following two conditions are also imposed in order to numerically compute the correlation functions and the other quantities which are introduced for the counter terms.
\begin{equation} \label{hleast:eq} \begin{split} \int_{-\Lambda}^{\Lambda} (\ -D^{-1}_{h}(p)+ (1+\alpha) [\ p^{2} + \frac{m^{2}_{h,r}}{(1+\alpha)} + (\lambda_{1}+C_{1})(1+A)(1+a) \\ \int_{-\Lambda}^{\Lambda} \frac{d^{4}q}{(2\pi)^{4}} D_{s}(q) \tilde{\Gamma}_{1}(-p,q) D_{h}(q-p) ]\ )\ ^{2} dp =0 \end{split} \end{equation}
\begin{equation} \label{Hleast:eq} \begin{split} \int_{-\Lambda}^{\Lambda} (\ -D^{-1}_{H}(p)+(1+a) [\ p^{2} + \frac{m^{2}_{H,r}}{1+a} + (\lambda_{2}+ C_{2})(1+A)(1+\alpha) \\ \int_{-\Lambda}^{\Lambda} \frac{d^{4}q}{(2\pi)^{4}} D_{s}(q) \tilde{\Gamma}_{2}(-p,q) D_{H}(q-p) ]\ )\ ^{2} dp =0 \end{split} \end{equation}
Equations \ref{hleast:eq}-\ref{Hleast:eq} are in fact an implementation of the least squares method, with the errors $E_{1}$ and $E_{2}$ defined below.
\begin{equation} \label{herr:eq} \begin{split} E_{1}= \int_{-\Lambda}^{\Lambda} (\ -D^{-1}_{h}(p)+ (1+\alpha) [\ p^{2} + \frac{m^{2}_{h,r}}{(1+\alpha)} + (\lambda_{1}+C_{1})(1+A)(1+a) \\ \int_{-\Lambda}^{\Lambda} \frac{d^{4}q}{(2\pi)^{4}} D_{s}(q) \tilde{\Gamma}_{1}(-p,q) D_{h}(q-p) ]\ )\ ^{2} dp \end{split} \end{equation}
\begin{equation} \label{Herr:eq} \begin{split} E_{2}=\int_{-\Lambda}^{\Lambda} (\ -D^{-1}_{H}(p)+(1+a) [\ p^{2} + \frac{m^{2}_{H,r}}{1+a} + (\lambda_{2}+C_{2})(1+A)(1+\alpha) \\ \int_{-\Lambda}^{\Lambda} \frac{d^{4}q}{(2\pi)^{4}} D_{s}(q) \tilde{\Gamma}_{2}(-p,q) D_{H}(q-p) ]\ )\ ^{2} dp \end{split} \end{equation}
With the imposition of these constraints, the problem at hand becomes one of optimization, in which solutions satisfying equations \ref{hleast:eq}-\ref{Hleast:eq} are sought.
\par An additional condition, given below, is also imposed in order to ensure positivity of the renormalized squared dynamical scalar mass and to evade unwanted numerical fluctuations, which may arise due to the difference between the fixed renormalized masses and the dynamically generated masses, as well as the ever-present limitation in momentum resolution.
\begin{equation} \label{mass2cond:eq} \begin{split} m^{2}_{s,r} = (1 + A) (\ m^{2}_{s} + 2(1+\alpha) (1+a) \sigma_{s} )\ \geq 0 \end{split} \end{equation}
\par In order to further suppress numerical fluctuations, the SM Higgs propagator is expanded in the form given below:
\begin{equation} \label{hexp1:eq} D^{ij}_{h}(p)= \delta^{ij} \frac{1}{c(p^{2}+d+f(p))} \end{equation}
with $f(p)$ given by
\begin{equation} \label{hexp2:eq} f(p) = \frac{\displaystyle \sum_{l=0}^{N} a_{l} p^{2l}}{\displaystyle \sum_{l=0}^{N} b_{l} p^{2l}} \end{equation}
In equations \ref{hexp1:eq}-\ref{hexp2:eq}, $c$, $d$, $a_{l}$, and $b_{l}$ are the coefficients to be determined during a computation. A similar expansion with different coefficients is used for the second Higgs propagator. Besides stability, these expansions are also time efficient when performing renormalization and updating the SM and second Higgs propagators.
\par The computation starts with $\sigma_{H}=\sigma_{s}=C_{1}=C_{2}=0$, i.e. with no contribution by the counter terms to the renormalized masses and the two Yukawa couplings. Both Higgs propagators are also initialized to their respective tree-level structures. For the SM Higgs in equations \ref{hexp1:eq}-\ref{hexp2:eq}, $c=1$ and $d=m^{2}_{h}$ \footnote{$m^{2}_{h}=m^{2}_{h,r}$ is used throughout the study.} while all the other coefficients are zero. A similar setup of coefficients is used for the second Higgs. The terms $1+\alpha$ and $1+a$ are calculated from the renormalization conditions in equations \ref{hcond:eq} and \ref{Hcond:eq}. The scalar propagator then takes its values from equation \ref{sdse:eq}, and the quantity $1+A$ is calculated from the renormalization condition \ref{scond:eq} \footnote{The scalar propagator is calculated without the term $1+A$, and then the renormalization condition sets the value of the term $1+A$.}.
\par An iteration involves updating the correlation functions and parameters. During an iteration, first $\sigma_{s}$, $C_{1}$, and $C_{2}$ are updated, in that order. The update of each of these quantities is performed using the Newton-Raphson method with the criterion imposed by the least squares method in equations \ref{hleast:eq}-\ref{Hleast:eq}. An updated value is accepted only when both of the errors $E_{1}$ and $E_{2}$ decrease; see equations \ref{herr:eq}-\ref{Herr:eq}.
\par This is followed by the update of the SM Higgs propagator, for which the coefficients in equations \ref{hexp1:eq}-\ref{hexp2:eq} are updated with the above-mentioned acceptance criterion. Upon each change, the SM Higgs propagator is calculated from equations \ref{hexp1:eq}-\ref{hexp2:eq} and renormalized using equation \ref{hcond:eq}.
\par Lastly, the second Higgs propagator is updated using the same procedure as described above for the SM Higgs, but using equation \ref{Hcond:eq} for renormalization. Upon every change, the scalar propagator is calculated from equation \ref{sfdse:eq} and renormalized using equation \ref{scond:eq}, as mentioned earlier.
\par A computation concludes only when either no further updates reduce the errors $E_{1}$ and $E_{2}$, or both of these errors are at or below the preset tolerance. The tolerance is set at $10^{-20}$.
\par A Gaussian quadrature algorithm is used for the numerical integration in the DSEs. The algorithms are developed in a C++ environment.
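\par The update-acceptance pattern of this procedure can be summarized compactly. The sketch below is an illustrative Python rendering (the production code is written in C++); the callables \texttt{compute\_errors} and \texttt{newton\_step}, which evaluate $E_{1}$ and $E_{2}$ from equations \ref{herr:eq}-\ref{Herr:eq} and produce the Newton-Raphson proposals, are assumed interfaces of this sketch rather than part of the actual implementation.
\begin{verbatim}
def accept_if_errors_drop(params, key, step, compute_errors):
    # Propose a change to one quantity and keep it only if both
    # least-squares errors E1 and E2 decrease.
    e1_old, e2_old = compute_errors(params)
    trial = dict(params)
    trial[key] = params[key] + step
    e1_new, e2_new = compute_errors(trial)
    if e1_new < e1_old and e2_new < e2_old:
        return trial, (e1_new, e2_new), True
    return params, (e1_old, e2_old), False

def iterate(params, compute_errors, newton_step,
            tol=1e-20, max_sweeps=10000):
    # Sweep over sigma_s, C1 and C2 (the propagator coefficients are
    # updated in the same way) until no update is accepted or both
    # errors reach the preset tolerance.
    for _ in range(max_sweeps):
        improved = False
        for key in ("sigma_s", "C1", "C2"):
            step = newton_step(params, key)  # Newton-Raphson proposal
            params, (e1, e2), accepted = accept_if_errors_drop(
                params, key, step, compute_errors)
            improved = improved or accepted
        if (e1 <= tol and e2 <= tol) or not improved:
            break
    return params
\end{verbatim}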
\par The solutions presented are unique in the sense that the order of the updates performed on the propagators and other quantities practically does not affect the quantities being computed.
\par It is assumed that the model is not trivial. The assumption is supported by the fact that, despite $\phi^{4}$ theory \cite{Hasenfratz:1988kr,Gliozzi:1997ve,Weber:2000dp} being found to be trivial \cite{Jora:2015yga,Aizenman:1981zz,Weisz:2010xx,Siefert:2014ela,Hogervorst:2011zw}, the Higgs interaction with gauge bosons does not render the model trivial \cite{Maas:2013aia,Maas:2014pba}.
\par A peculiar feature of the model is the possibility of a negative Hamiltonian due to certain paths \cite{Rivers:1987hi}. However, it has been noticed in a different investigation of scalar interactions \cite{Mufti:2022abc,Ruthe:2008rut} that in the presence of the SM Higgs mass the corresponding histories do not contribute during simulations \footnote{In the potential part of the model, the dominant contribution comes from the term containing the SM (squared) Higgs mass ($125.09$ GeV), while only $\phi(x)$ may force the potential term below zero. Hence, $\phi(x)$ must take significantly large values at all points of the spacetime to effectively compete with the term containing $m_{h}$. Numerically, this is not favored in Monte Carlo simulations due to precision-related issues and the fact that the fields are assigned values from certain (usually Gaussian) distributions. Another argument is that a Monte Carlo simulation proceeds towards smaller action values which, depending upon the model, are favored by fields with relatively lower values.}. The presence of a large $m_{H}$ in the model further diminishes the possibility that the potential term will cause a negative Hamiltonian. Hence, it is assumed here that the results are not affected by the above-mentioned feature of the potential term.
\par Once the scalar dynamical masses in the parameter space are calculated, knowledge from machine learning \cite{Alpaydin14} is employed to represent the results, in an attempt to understand how the scalar dynamical masses manifest in the parameter space and to estimate the critical coupling value if possible. There are three free parameters in the model. Hence, the scalar dynamical mass is expanded in terms of the second Higgs mass $m_{H}$ and the two Yukawa couplings ($\lambda_{1}$ and $\lambda_{2}$) as given below:
\begin{equation} \label{sDMGexpansion:eq} m_{s,f} (m_{H},\lambda_{1},\lambda_{2}) = \sum_{i=0}^{i=6} \sum_{j=0}^{j=6} \sum_{k=0}^{k=6} a_{ijk} m_{H}^{i} \lambda_{1}^{j} \lambda_{2}^{k} \end{equation}
where $m_{s,f}$ is the function representing the scalar dynamical mass in the parameter space. The error value $E_{f}$ is described by the following expression:
\begin{equation} \label{sdmgerror:eq} E_{f} =\frac{1}{N}\sqrt{\sum_{i=0}^{i=N}(m_{s,r} - m_{s,f})_{i}^{2}} \end{equation}
where $N$ is the number of explored points in the parameter space. The model is studied on 64 points for each of the cutoff values, set at $10$ TeV and $100$ TeV. The procedure of training at a fixed cutoff value starts with all weights $a_{ijk}$ set to zero. The results from the DSE calculations are successively introduced to the algorithm from higher to lower couplings; hence the number of DSE results being examined increases with every introduction. The weights are varied using the Newton-Raphson method \footnote{Other methods are also used for this purpose; see \cite{Alpaydin14} for instance.} such that the error value in equation \ref{sdmgerror:eq} decreases.
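\par For concreteness, the sketch below shows how the surrogate of equation \ref{sDMGexpansion:eq} and the error of equation \ref{sdmgerror:eq} may be evaluated; the layout of the $7\times 7\times 7$ weight array and the clipping used to impose $0 \leq m_{s,f}$ are assumptions of this illustration. The acceptance rule for each proposed weight change is described next.
\begin{verbatim}
import numpy as np

def m_s_fit(a, m_H, lam1, lam2):
    # Surrogate m_{s,f}: the triple polynomial sum of the expansion
    # above; `a` is assumed to be a 7x7x7 array of weights a_ijk.
    total = 0.0
    for i in range(7):
        for j in range(7):
            for k in range(7):
                total += a[i, j, k] * m_H**i * lam1**j * lam2**k
    return max(total, 0.0)   # one way to impose 0 <= m_{s,f}

def fit_error(a, samples):
    # Error E_f over the N explored points; each sample is a tuple
    # (m_H, lambda1, lambda2, m_{s,r} from the DSE computation).
    n = len(samples)
    sq = sum((m_sr - m_s_fit(a, mH, l1, l2)) ** 2
             for mH, l1, l2, m_sr in samples)
    return np.sqrt(sq) / n
\end{verbatim}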
If the error value does not decrease, the change in the weight is not accepted. All weights are examined during each iteration. Once the algorithm has reached the point where it either cannot further improve the weights or the error value has reached its tolerance, further results are introduced to the algorithm and the weights are re-examined; hence the algorithm is retrained. The tolerance is set at the value $10^{-16}$. In short, the scalar dynamical mass is fitted by the expansion in equation \ref{sDMGexpansion:eq} while successively introducing the results from the DSE calculations and readjusting the weights $a_{ijk}$. Throughout the training, $0 \leq m_{s,f} (m_{H},\lambda_{1},\lambda_{2}) $ is imposed. In order to remove any personal bias, no values of the scalar dynamical mass were treated as outliers.
\par As is commonly known, training algorithms on finite data (and for a finite duration) may not deliver extreme accuracy. In addition, the scalar dynamical masses and the other calculated quantities (from the DSEs) may also suffer from numerical fluctuations. However, a relatively smooth description of the data immensely helps in understanding the overall features of the model and in making nontrivial estimates. Hence, the algorithm training was employed to assist in our conclusions regarding the critical coupling value in the model.
\section{Field Propagators}
\begin{figure} \centering \includegraphics[width=\linewidth]{h1prs1.eps} \caption{\label{fig:h1prs1} The SM Higgs propagators with $m_{H}=0.001$ GeV and $m_{H}=1.0$ GeV are plotted for cutoff values at $10$ TeV and $100$ TeV. The parameters in the legend are given as $(m_{H},\lambda_{1},\lambda_{2})$ with all of the parameters mentioned in GeV. For the same cutoff, every two consecutive propagators are $1.0$ TeV apart on the momentum axis for the sake of clarity.} \end{figure}
\begin{figure} \centering \includegraphics[width=\linewidth]{h1prs2.eps} \caption{\label{fig:h1prs2} The SM Higgs propagators with $m_{H}=100$ GeV and $m_{H}=1000$ GeV are plotted for cutoff values at $10$ TeV and $100$ TeV. The parameters in the legend are given as $(m_{H},\lambda_{1},\lambda_{2})$ with all of the parameters mentioned in GeV. For the same cutoff, every two consecutive propagators are $1.0$ TeV apart on the momentum axis for the sake of clarity.} \end{figure}
\begin{figure} \centering \includegraphics[width=\linewidth]{h2prs1.eps} \caption{\label{fig:h2prs1} The second Higgs propagators with $m_{H}=0.001$ GeV and $m_{H}=1.0$ GeV are plotted for cutoff values at $10$ TeV and $100$ TeV. The parameters in the legend are given as $(m_{H},\lambda_{1},\lambda_{2})$ with all of the parameters mentioned in GeV. For the same cutoff, every two consecutive propagators are $1.0$ TeV apart on the momentum axis for the sake of clarity.} \end{figure}
\begin{figure} \centering \includegraphics[width=\linewidth]{h2prs2.eps} \caption{\label{fig:h2prs2} The second Higgs propagators with $m_{H}=100$ GeV and $m_{H}=1000$ GeV are plotted for cutoff values at $10$ TeV and $100$ TeV. The parameters in the legend are given as $(m_{H},\lambda_{1},\lambda_{2})$ with all of the parameters mentioned in GeV. For the same cutoff, every two consecutive propagators are $1.0$ TeV apart on the momentum axis for the sake of clarity.} \end{figure}
\begin{figure} \centering \includegraphics[width=\linewidth]{sprs1.eps} \caption{\label{fig:sprs1} The scalar propagators with $m_{H}=0.001$ GeV and $m_{H}=1.0$ GeV are plotted for cutoff values at $10$ TeV and $100$ TeV.
The parameters in the legend are given as $(m_{H},\lambda_{1},\lambda_{2})$ with all of the parameters mentioned in GeV. For the same cutoff, every two consecutive propagators are $1.0$ TeV apart on the momentum axis for the sake of clarity.} \end{figure}
\begin{figure} \centering \includegraphics[width=\linewidth]{sprs2.eps} \caption{\label{fig:sprs2} The scalar propagators with $m_{H}=100$ GeV and $m_{H}=1000$ GeV are plotted for cutoff values at $10$ TeV and $100$ TeV. The parameters in the legend are given as $(m_{H},\lambda_{1},\lambda_{2})$ with all of the parameters mentioned in GeV. For the same cutoff, every two consecutive propagators are $1.0$ TeV apart on the momentum axis for the sake of clarity.} \end{figure}
Unlike the vertices in the model, the field propagators are allowed considerable freedom to have momentum dependence while conforming to the renormalization conditions. The scalar singlet field propagator has the most freedom, since it receives contributions from interactions with both Higgs fields. However, such freedom may also allow contributions from numerical fluctuations of both Higgs propagators as well as the other involved quantities in the model.
\par The SM Higgs propagators are given in figures \ref{fig:h1prs1}-\ref{fig:h1prs2}. An immediate observation is that cutoff effects are not significant throughout the explored parameter space, particularly for the cases with $\lambda_{1} < 1$. The deviations are mostly due to the multiplicative constant which is computed using the renormalization condition \ref{hcond:eq}.
\par In the region of the parameter space with $\lambda_{1}$ as low as $10^{-6}$, there is no significant dependence of the propagators on the coupling. However, for higher couplings a slight enhancement is observed. Considerable changes occur only at $\lambda_{1}=1.0$. The propagators are found to have similar qualitative behavior for all the renormalized masses of the second Higgs, which indicates that the second Higgs field does not influence the SM Higgs propagators, as could have been possible through the scalar propagator. Overall, there are no significant changes in the SM Higgs propagators in the parameter space of the model. Hence, it is deduced that if there is any non-trivial structure in the phase space, it does not influence the SM Higgs propagators \cite{Maas:2013aia}, and possibly not the other field propagators either, since such effects can also translate to other quantities through the coupled DSEs.
\par In the model, the second Higgs field differs from the SM Higgs field due to different renormalized masses and couplings. Hence, particularly for masses in the vicinity of the SM Higgs mass, it is expected to have similar behavior within numerical fluctuations. The propagators are shown in figures \ref{fig:h2prs1}-\ref{fig:h2prs2}. There are no considerable cutoff effects, as is the case with the SM Higgs propagators. However, a certain dependence on $m_{H}$ is observed in the propagators, which is relatively stronger for $m_{H} < m_{h}$ and weakens as $m_{H}$ approaches $m_{h}$. The two Higgs propagators have similar behavior for $m_{H} \simeq m_{h}$, which validates the implementation of the algorithms. For $m_{H} = 1$ TeV, the dependence on the couplings is lost since the bare mass of the second Higgs dominates the contributions to the propagator, hence rendering it a tree-level structure up to the renormalization-dependent term. The propagators are suppressed for such a large renormalized mass, which is expected when the tree-level contribution dominates the propagator.
\par The scalar singlet propagators are shown in figures \ref{fig:sprs1}-\ref{fig:sprs2}. As argued above, cutoff effects are evident in the propagators \footnote{There may also be contributions from the numerical interpolation performed during renormalization of the scalar propagators. In order to suppress these effects, the resolution in momentum is kept at its highest feasible value.}. The overall behavior is that for higher couplings, particularly $\lambda_{2}$ since $m_{h}$ is kept fixed, the propagators are enhanced depending upon the second Higgs mass $m_{H}$. As $m_{H}$ increases, this effect tends to disappear in favor of a tree-level structure, as is the case for the other field propagators. The scalar propagators suffer strongly from the cutoff effects despite the fact that both Higgs propagators are not affected to such an extent. This implies that the cutoff effects must show up in at least one of the calculated quantities other than the two Higgs propagators.
\section{Dynamical Scalar Masses}
\begin{figure} \centering \includegraphics[width=\linewidth]{smasses10TeV.eps} \caption{\label{fig:smasses10TeV} Dynamical scalar masses are plotted against the second Higgs mass $m_{H}$ for various couplings at a $10$ TeV cutoff. The couplings (in GeV) are shown in the legend as $\lambda_{1},\lambda_{2}$. Every consecutive pair of couplings is slightly displaced along the x-axis for clarity. The error bars represent the difference between the values of the scalar dynamical masses obtained by computation (the plotted values) and by the training of the algorithms.} \end{figure}
\begin{figure} \centering \includegraphics[width=\linewidth]{smasses100TeV.eps} \caption{\label{fig:smasses100TeV} Dynamical scalar masses are plotted against the second Higgs mass $m_{H}$ for various couplings at a $100$ TeV cutoff. The couplings (in GeV) are shown in the legend as $\lambda_{1},\lambda_{2}$. Every consecutive pair of couplings is slightly displaced along the x-axis for clarity. The error bars represent the difference between the values of the scalar dynamical masses obtained by computation (the plotted values) and by the training of the algorithms.} \end{figure}
\begin{figure} \centering \includegraphics[width=\linewidth]{fittedsDMG10.eps} \caption{\label{fig:smassfit10TeV} Dynamical scalar masses obtained by training algorithms are shown against the two couplings $\lambda_{1}$ and $\lambda_{2}$, for various second Higgs masses $m_{H}$ in GeV and at cutoff $\Lambda=10$ TeV, as indicated in the legend.} \end{figure}
\begin{figure} \centering \includegraphics[width=\linewidth]{fittedsDMG100.eps} \caption{\label{fig:smassfit100TeV} Dynamical scalar masses obtained by training algorithms are shown against the two couplings $\lambda_{1}$ and $\lambda_{2}$, for various second Higgs masses $m_{H}$ in GeV and at cutoff $\Lambda=100$ TeV, as indicated in the legend.} \end{figure}
Since the scalar propagator also contains its dynamically generated mass, the arguments related to the cutoff effects and numerical artifacts translate into the dynamical masses. However, there are certain vivid features in the masses shown in figures \ref{fig:smasses10TeV}-\ref{fig:smasses100TeV}.
\par Considering the dynamical masses for the 100 TeV cutoff in figure \ref{fig:smasses100TeV}, the extent of mass production is relatively higher for second Higgs masses in the MeV range and a high second coupling in a number of cases.
For very small couplings, a large difference between the computed value and $m_{s,f}$, possibly due to numerical fluctuations, suggests that these two points could have been taken as outliers. Hence, it is concluded with confidence that the dynamically generated scalar mass in the model is restricted to well below 200 MeV. It is clear that the Yukawa interactions in the scalar sector have a shortcoming when it comes to producing dynamical masses in the GeV range.
\par It is also evident from figures \ref{fig:smasses10TeV}-\ref{fig:smasses100TeV} that cutoff effects are far less severe for the cutoff of hundreds of TeV. For the higher cutoff, the model produces scalar masses with less dependence on the second Higgs mass. Furthermore, the production of scalar mass is significantly lower if one of the couplings is as low as $10^{-6}$ GeV, unless the other coupling reaches $1.0$ GeV; see figure \ref{fig:smasses100TeV}. Hence, the model does possess a demarcation over the coupling values in the vicinity of $1.0$ and $10^{-6}$ GeV. For the case of both couplings at $10^{-6}$ GeV, the scalar mass practically loses its dependence on the two couplings. This is taken as a sign of the existence of a critical coupling value between $10^{-3}$ GeV and $10^{-6}$ GeV.
\par An interesting feature in the low-coupling region is that for higher cutoff values the model produces smaller masses, which also tend to decrease with the two couplings. One may expect that this behavior persists down to the critical coupling value. If this is indeed the case, the model may be useful for studying ultralight scalar interactions \cite{Rindler-Daller:2013zxa,Harko:2014vya,Chavanis:2011zi,Huang:2013oqa,Marsh:2015xka,Hui:2016ltb,Primack:2009jr,Hu:2000ke}, though it may undoubtedly be a daunting task from the perspective of numerical precision.
\par The scalar dynamical masses obtained by the trained algorithm are plotted in figures \ref{fig:smassfit10TeV}-\ref{fig:smassfit100TeV}. The weights of the expansion in equation \ref{sDMGexpansion:eq} are given in appendix A. An immediate observation is the existence of a unique value of the critical coupling in the model below $10^{-3}$ GeV. Determining the exact value is hampered by the ever-present limitation of the data in machine learning. However, it is evident that the model strongly favors a critical coupling in the region $10^{-6} < \lambda_{i} < 10^{-3}$ GeV.
\par Despite the limitations in the training of the algorithms and the numerical fluctuations in the exposed data, the weights $a_{ijk}$ are found to have a particular behavior. Firstly, there are strong contributions from the terms with $k = 0$ and $i,j \neq 0$, while there are practically no contributions from $a_{ijk}$ with $i,j=0$, irrespective of the value of $k$. Furthermore, increasing the cutoff suppresses $a_{ijk}$ for most of the $i$ and $k$ values at $j=0$. Since the model has significant cutoff effects which become milder as the cutoff is raised, it is expected that the scalar dynamical mass may not receive significant contributions from terms involving $j=0$ at high cutoff values. Since the mass of the SM Higgs boson is kept fixed throughout the investigation, these observations suggest that the (magnitude of the) contributions to the dynamical scalar mass favor interactions involving the SM Higgs boson mass.
\section{Conclusion}
In the presence of an SM Higgs, the two Higgs doublet model has the capacity to dynamically produce a scalar mass significantly larger than that of the lightest quarks and leptons.
As the cutoff is raised to $100$ TeV, the mass stabilizes with respect to the second Higgs mass, with a magnitude below $200$ MeV. It is expected that at a cutoff much higher than $100$ TeV, a single value of the dynamical mass is favored by the model irrespective of the second Higgs mass.
\par However, the dynamical mass is sensitive to the couplings; this sensitivity diminishes as the couplings are reduced below $10^{-3}$ GeV, with the minimum mass being produced there. This strongly implies the existence of a critical coupling between $10^{-3}$ GeV and $10^{-6}$ GeV. It presents an opportunity to investigate the model in order to understand new physics involving particles considerably lighter than $1$ GeV, such as ultra-light scalars. At the same time, it invites further study of richer models which contain higher renormalizable vertices, in an attempt to explore the existence of critical couplings and the extent of mass generation due to the diversity of interactions.
\par The role of cutoff effects cannot be neglected in the model. The scalar propagator and the scalar dynamical mass suffer the most, while for masses larger than $100$ GeV the field propagators are relatively less affected, as was observed for the two Higgs propagators. Since the dynamical masses are less than $1$ GeV, numerical fluctuations and cutoff effects hamper finding an accurate description of how the masses behave in the parameter space. However, by considering a sufficient number of points in the parameter space, a mathematical description of the dynamical masses was still found which concurs with the deduction regarding the critical coupling in the model.
\par A model with the capacity to dynamically produce masses considerably larger than those of the lightest quarks and leptons can certainly not be ruled out in scalar interactions. The study invites extensions, such as including richer interactions or other fields, to gain a better understanding of how (particularly light) scalars play a role at the fundamental level in our universe.
\section{Acknowledgments}
This work was supported by the Lahore University of Management Sciences, Pakistan, for developing the algorithms and performing the computations.
\section{Appendix A} The values of the weights $a_{ijk}$ in equation \ref{sDMGexpansion:eq} for cutoff values at $\Lambda=10$ TeV and $\Lambda=100$ TeV are given below: \begin{center} \begin{longtable}{ | m{0.5cm} | m{0.5cm}| m{0.5cm} | m{3.5cm} | m{3.5cm} | } \hline $i$ & $j$ & $k$ & $a_{ijk}$ (10 TeV) & $a_{ijk}$ (100 TeV) \\ \hline 0 & 0 & 0 & 0.062470119000000 & -0.005102152199999 \\ \hline 1 & 0 & 0 & 0.432639286999998 & 0.067154687499999 \\ \hline 0 & 1 & 0 & 0.431424060300000 & 0.276498992400000\\ \hline 0 & 0 & 1 & 0.000127544300000 & 0.000023271900000\\ \hline 2 & 0 & 0 & -0.196267245799999 & 0.826964182800007\\ \hline 1 & 1 & 0 & 3.483278381999988 & -2.646594633199990\\ \hline 1 & 0 & 1 & 0.001256076799999 & 0.000786048000000\\ \hline 0 & 2 & 0 & -2.426385300799987 & -0.005208820599999\\ \hline 0 & 1 & 1 & -0.001780857000000 & -0.000014844700000\\ \hline 0 & 0 & 2 & -0.0000001083 & -0.000000023100000\\ \hline 3 & 0 & 0 & -0.595301560000001 & -0.158007649599999\\ \hline 2 & 1 & 0 & 0.311572103699999 & 5.428065021000011\\ \hline 2 & 0 & 1 & -0.001414899099999 & -0.002876003599999\\ \hline 1 & 2 & 0 & -2.967471651800008 & -3.556474306000007\\ \hline 1 & 1 & 1 & -0.011901846299999 & 0.013548386599999\\ \hline 1 & 0 & 2 & -0.0000018441 & -0.000000603000000\\ \hline 0 & 3 & 0 & 3.360127912999990 & -0.928262618399999\\ \hline 0 & 2 & 1 & 0.004300518299999 & -0.002201626699999\\ \hline 0 & 1 & 2 & 0.000000999000000 & -0.000000020900000\\ \hline 0 & 0 & 3 & 0.0 & 0.0\\ \hline 4 & 0 & 0 & 1.045801962500008 & -0.402565733399999\\ \hline 3 & 1 & 0 & -1.420411216799999 & -0.572378273000001\\ \hline 3 & 0 & 1 & 0.003927822499999 & -0.000554230599999\\ \hline 2 & 2 & 0 & -1.122774422700004 & -0.529377973400000\\ \hline 2 & 1 & 1 & 0.000976498800000 & -0.005964260699999\\ \hline 2 & 0 & 2 & 0.0000004915 & 0.000001401500000\\ \hline 1 & 3 & 0 & 0.322474038600001 & 3.267544194000007\\ \hline 1 & 2 & 1 & 0.005696769799999 & -0.001591530000000\\ \hline 1 & 1 & 2 & 0.0000093254 & -0.000008751100000\\ \hline 1 & 0 & 3 & 0.0 & 0.0\\ \hline 0 & 4 & 0 & -3.398481779199986 & 1.658322491499994\\ \hline 0 & 3 & 1 & -0.002802221199999 & 0.001725826300000\\ \hline 0 & 2 & 2 & -0.0000000495 & 0.000000259100000\\ \hline 0 & 1 & 3 & 0.0 & 0.0\\ \hline 0 & 0 & 4 & 0.0 & 0.0\\ \hline 5 & 0 & 0 & -0.831871660400005 & 0.132814269100000\\ \hline 4 & 1 & 0 & 0.949120657999995 & -1.788138425800004\\ \hline 4 & 0 & 1 & -0.003439259299999 & 0.001322541799999\\ \hline 3 & 2 & 0 & -1.903333068200005 & -4.520240236500010\\ \hline 3 & 1 & 1 & -0.005681518600000 & 0.006113348800000\\ \hline 3 & 0 & 2 & 0.0000018543 & 0.000000118100000\\ \hline 2 & 3 & 0 & 5.737533950000011 & 2.994909360000007\\ \hline 2 & 2 & 1 & 0.004100454999999 & -0.003434122199999\\ \hline 2 & 1 & 2 & 0.0000045994 & -0.000004740900000\\ \hline 2 & 0 & 3 & 0.0 & 0.0\\ \hline 1 & 4 & 0 & -4.607335835299982 & 3.560205072699987\\ \hline 1 & 3 & 1 & -0.0017640611 & 0.000632370000000\\ \hline 1 & 2 & 2 & -0.000002936 & 0.000001718100000\\ \hline 1 & 1 & 3 & 0.0 & 0.0\\ \hline 1 & 0 & 4 & 0.0 & 0.0\\ \hline 0 & 5 & 0 & 2.476695333800004 & -1.438056151699995\\ \hline 0 & 4 & 1 & -0.001170340799999 & 0.000608526900000\\ \hline 0 & 3 & 2 & -0.0000001277 & 0.000000238100000\\ \hline 0 & 2 & 3 & 0.0 & 0.0\\ \hline 0 & 1 & 4 & 0.0 & 0.0\\ \hline 0 & 0 & 5 & 0.0 & 0.0\\ \hline 6 & 0 & 0 & 0.250543694400000 & -0.303845693199999\\ \hline 5 & 1 & 0 & -0.872865771300006 & -0.570790623500001\\ \hline 5 & 0 & 1 & -0.0014202491 & -0.000099293799999\\ \hline 4 & 2 & 0 & 
-2.629992650700005 & -4.544265494000010\\ \hline 4 & 1 & 1 & -0.002726043499999 & 0.004424348300000\\ \hline 4 & 0 & 2 & 0.0000007263 & 0.000000419100000\\ \hline 3 & 3 & 0 & 4.539023046400008 & -1.631924930300001\\ \hline 3 & 2 & 1 & 0.004210737 & 0.000127814799999\\ \hline 3 & 1 & 2 & 0.0000038225 & -0.000004546200000\\ \hline 3 & 0 & 3 & 0.0 & 0.0\\ \hline 2 & 4 & 0 & -2.361741260400014 & 7.959941882000007\\ \hline 2 & 3 & 1 & 0.0004258372 & -0.003437821899999\\ \hline 2 & 2 & 2 & -0.0000105292 & 0.000010411400000\\ \hline 2 & 1 & 3 & 0.0 & 0.0\\ \hline 2 & 0 & 4 & 0.0 & 0.0\\ \hline 1 & 5 & 0 & 2.65929292970001 & -3.122536580600026\\ \hline 1 & 4 & 1 & 0.007337953699999 & -0.008389909299999\\ \hline 1 & 3 & 2 & -0.0000050914 & 0.000004083200000\\ \hline 1 & 2 & 3 & 0.0 & 0.0\\ \hline 1 & 1 & 4 & 0.0 & 0.0\\ \hline 1 & 0 & 5 & 0.0 & 0.0\\ \hline 0 & 6 & 0 & -0.408070357499999 & 0.605073464699998\\ \hline 0 & 5 & 1 & 0.0012935581 & -0.001141694000000\\ \hline 0 & 4 & 2 & -0.0000006819 & 0.000000429700000\\ \hline 0 & 3 & 3 & 0.0 & 0.0\\ \hline 0 & 2 & 4 & 0.0 & 0.0\\ \hline 0 & 1 & 5 & 0.0 & 0.0\\ \hline 0 & 0 & 6 & 0.0 & 0.0\\ \hline \end{longtable} \end{center} \bibliographystyle{plain}
\section{Introduction} The following model has been proposed for the description of the steady-state of a simple Electrostatic MEMS device: \begin{equation} \label{MEMS.1} \arraycolsep=1.5pt \left\{ \begin{array}{ll} \alpha \Delta^2 u = \left( \beta \int_\Omega | \nabla u|^2 dx + \gamma \right) \Delta u + \frac{ \lambda f(x)}{ (1-u)^2 \left( 1 + \chi \int_\Omega \frac{dx}{(1-u)^2} \right)} &\quad \hbox{in }\Omega \\ 0<u<1 &\quad \hbox{in } \Omega \\ u=\alpha \partial_\nu u =0 &\quad \hbox{on } \partial \Omega , \quad \end{array} \right. \end{equation} where $ \alpha, \beta, \gamma, \chi \ge 0$, $ f \in C( \overline{\Omega},[0,1])$ are fixed, $ \Omega$ is a bounded domain in $ {\mathbb{R}}^N$ and $ \lambda \ge 0$ is a varying parameter (see for example Bernstein and Pelesko \cite{BP}). The function $ u(x)$ denotes the height above a point $ x \in \Omega\subset {\mathbb{R}}^N$ of a dielectric membrane clamped on $ \pOm$, once it deflects towards a ground plate fixed at height $z=1$, whenever a positive voltage -- proportional to $\lambda$ -- is applied. \medskip \noindent In studying this problem, one typically makes various simplifying assumptions on the parameters $ \alpha, \beta, \gamma, \chi$, and the first approximation of (\ref{MEMS.1}) that has been studied extensively so far is the equation \begin{eqnarray*} \hskip 150pt \left\{ \begin{array}{ll} -\Delta u= \lambda \frac{f(x)}{(1-u)^2} &\text{in } \Omega\\ \hfill 0<u<1 \quad \quad &\text{in } \Omega\hskip 150pt (S)_{ \lambda,f} \\ \hfill u=0 \quad \quad \quad &\text{on }\partial \Omega, \end{array} \right. \end{eqnarray*} where we have set $ \alpha = \beta = \chi=0$ and $ \gamma=1$ (see for example \cite{Esp,EGG,GG} and the monograph \cite{EGG.book}). This simple model, which lends itself to the vast literature on second order semilinear eigenvalue problems, is already a rich source of interesting mathematical problems. The case when the ``permittivity profile'' $f$ is constant ($f=1$) on a general domain was studied in \cite{MP}, following the pioneering work of Joseph and Lundgren \cite{JL}, who had considered the radially symmetric case. The case of a non-constant permittivity profile $ f$ was advocated by Pelesko \cite{P}, taken up by \cite{GPW}, and studied in depth in \cite{Esp,EGG,GG}. The starting point of the analysis is the existence of a pull-in voltage $\lambda^*(\Omega, f)$, defined as $$ \lambda^*(\Omega, f):= \sup \Big\{ \lambda >0: \hbox{there exists a classical solution of } (S)_{\lambda, f} \Big\}.$$ It is then shown that for every $ 0 < \lambda < \lambda^*$, there exists a smooth minimal (smallest) solution of $(S)_{\lambda, f}$, while for $ \lambda > \lambda^*$ there is no solution even in a weak sense. Moreover, the branch $ \lambda \mapsto u_\lambda(x)$ is increasing for each $ x \in \Omega$, and therefore the function $u^*(x):= \lim_{\lambda \nearrow \lambda^*} u_\lambda(x)$ can be considered as a generalized solution that corresponds to the pull-in voltage $\lambda^*$. Now the issue of the regularity of this extremal solution -- which, by elliptic regularity theory, is equivalent to whether $ \sup_\Omega u^*<1$ -- is an important question for many reasons, not the least of which is the fact that it decides whether the set of solutions stops there, or whether a new branch of solutions emanates from a bifurcation state $(u^*,\lambda^*)$. This issue turned out to depend closely on the dimension and on the permittivity profile $f$.
Concerning this dependence, it was shown in \cite{GG} that $u^*$ is regular in dimensions $1\leq N\leq 7$, while it is not necessarily the case for $N\geq 8$. In other words, the dimension $N=7$ is critical for equation $(S)_\lambda$ (when $f=1$, we simplify the notation $(S)_{\lambda,1}$ into $(S)_\lambda$). On the other hand, it is shown in \cite{EGG} that the regularity of $u^*$ can be restored in any dimension, provided we allow for a power law profile $|x|^\eta$ with $\eta$ large enough. \medskip \noindent The case where $ \beta = \gamma = \chi=0$ (and $ \alpha=1$) in the above model, that is when we are dealing with the following fourth order analog of $(S)_\lambda$ \begin{eqnarray*} \hskip 150pt \left\{ \begin{array}{ll} \Delta^2 u= \frac{\lambda}{(1-u)^2} &\text{in } \Omega\\ 0<u<1 &\text{in } \Omega \hskip 150pt (P)_\lambda \\ u=\partial_\nu u=0 &\text{on }\partial \Omega, \end{array} \right. \end{eqnarray*} was also considered in \cite{CDG,LY}, but with limited success. One of the reasons is the lack of a ``maximum principle'' which plays such a crucial role in developing the theory for the Laplacian. Indeed, it is a well-known fact that such a principle does not normally hold for general domains $ \Omega$ (at least for the clamped boundary conditions $ u = \partial_\nu u =0$ on $ \pOm$) unless one restricts attention to the unit ball $ \Omega =B$ in $ {\mathbb{R}}^N$, where one can exploit a positivity preserving property of $\Delta^2$ due to T. Boggio \cite{Boggio}. This is precisely what was done in the references mentioned above, where a theory of the minimal branch associated with $(P)_\lambda$ is developed along the same lines as for $(S)_\lambda$. The second obstacle is the well-known difficulty of extracting energy estimates for solutions of fourth order problems from their stability properties. This means that the methods used to analyze the regularity of the extremal solution for $(S)_\lambda$ could not be carried over to the corresponding problem for $(P)_\lambda$. \medskip \noindent This is the question we address in this paper as we eventually show the following result. \begin{thm} The unique extremal solution $u^*$ for $(P)_{\lambda^*}$ in $B$ is regular in dimension $1\leq N \le 8$, while it is singular (i.e., $ \sup_B u^*=1$) for $ N \ge 17$. \end{thm} \noindent Actually, we believe that the critical dimension for $(P)_\lambda$ in $B$ is $N=8$, as opposed to being equal to $7$ in $(S)_\lambda$. We add that our methods are heavily inspired by the recent paper of Davila et al. \cite{DDGM} where it is shown that $N=12$ is the critical dimension for the fourth order nonlinear eigenvalue problem $$\left\{\begin{array}{ll} \Delta^2 u= \lambda e^u &\text{in } B\\ u=\partial_\nu u=0 &\text{on }\partial B, \end{array} \right. $$ while the critical dimension for its second order counterpart (i.e., the Gelfand problem) is $N=9$. \medskip \noindent Throughout this paper, we will always consider problem $(P)_\lambda$ on the unit ball $B$. We start by recalling some of the results from \cite{CDG} concerning $(P)_\lambda$ that will be needed in the sequel. We define $$\lambda^*:= \sup\Big\{ \lambda> 0: \hbox{ there exists a classical solution of }(P)_\lambda \Big\},$$ and note that we are not restricting our attention to radial solutions. We will deal also with weak solutions: \begin{dfn} We say that $u$ is a weak solution of $(P)_\lambda $ if $ 0 \le u \le 1$ a.e.
in $B$, $ \frac{1}{(1-u)^2} \in L^1(B)$ and \[ \int_B u \Delta^2 \phi = \lambda \int_B \frac{\phi}{(1-u)^2}, \qquad \forall \phi \in C^4(\bar B) \cap H_0^2(B).\] We say that $ u $ is a weak super-solution (resp. weak sub-solution) of $(P)_\lambda$ if the equality is replaced with the inequality $ \ge $ (resp. $ \le $) for all $\phi \in C^4(\bar B) \cap H_0^2(B)$ with $\phi \ge 0$. \end{dfn} \noindent We also introduce notions of regularity and stability. \begin{dfn} We say that a weak solution $u$ of $(P)_\lambda $ is regular (resp. singular) if $\|u\|_\infty<1$ (resp. $\|u\|_\infty=1$) and stable (resp. semi-stable) if $$\mu_1(u)=\inf \left\{ \int_B ( \Delta \phi)^2 -2 \lambda \int_B \frac{ \phi^2}{(1-u)^3}: \phi \in H_0^2(B), \| \phi \|_{L^2}=1 \right\}$$ is positive (resp. non-negative). \end{dfn} \noindent The following extension of Boggio's principle will be frequently used in the sequel (see \cite[Lemma 16]{AGGM} and \cite[Lemma 2.4]{DDGM}): \begin{lemma}[Boggio's Principle]\label{boggio} Let $u\in L^1(B)$. Then $u\geq 0$ a.e. in $B$, provided one of the following conditions holds: \begin{enumerate} \item $u\in C^4(\overline{B})$, $\Delta^2 u\geq 0$ on $B$, and $u=\frac{\partial u}{\partial n}= 0$ on $\partial B$. \item $\int_{B} u\Delta^2\phi\,dx\geq 0$ for all $0\leq \phi \in C^4(\overline{B})\cap H_0^2(B)$. \item $u\in H^2(B)$, $u=0$ and $\frac{\partial u}{\partial n} \leq 0$ on $\partial B$, and $\int_{B} \Delta u \Delta \phi \geq 0$ for all $0\leq \phi \in H^2_0(B)$. \end{enumerate} Moreover, either $u\equiv 0$ or $u>0$ a.e. in $B$. \end{lemma} \noindent The following theorem summarizes the main results in \cite{CDG} that will be needed in the sequel: \begin{thm}\label{CdG} The following assertions hold: \begin{enumerate} \item For each $ 0 < \lambda < \lambda^*$ there exists a classical minimal solution $u_\lambda$ of $ (P)_\lambda$. Moreover $ u_\lambda $ is radial and radially decreasing. \item For $ \lambda > \lambda^*$, there are no weak solutions of $(P)_\lambda$. \item For each $ x \in B$ the map $ \lambda \mapsto u_\lambda(x)$ is strictly increasing on $ (0,\lambda^*)$. \item The pull-in voltage $ \lambda^*$ satisfies the following bounds: \[ \max \left\{ \frac{ 32(10N-N^2-12)}{27}, \frac{128-240N+72N^2}{81} \right\} \le \lambda^* \le \frac{4 \nu_1}{27}\] where $ \nu_1$ denotes the first eigenvalue of $ \Delta^2 $ in $H_0^2(B)$. \item For each $ 0 < \lambda < \lambda^*$, $u_\lambda$ is a stable solution (i.e., $ \mu_1(u_\lambda)>0$). \end{enumerate} \end{thm} \noindent Using the stability of $ u_\lambda $, it can be shown that $ u_\lambda $ is uniformly bounded in $H_0^2(B)$ and that $ \frac{1}{1-u_\lambda} $ is uniformly bounded in $L^3(B)$. Since $ \lambda \mapsto u_\lambda(x)$ is increasing, the function $ u^*(x):= \lim_{ \lambda \nearrow \lambda^*} u_\lambda(x)$ is well defined (in the pointwise sense), $ u^* \in H_0^2(B)$, $ \frac{1}{1-u^*} \in L^3(B)$ and $ u^*$ is a weak solution of $ (P)_{\lambda^*}$. Moreover $ u^*$ is the unique weak solution of $(P)_{\lambda^*}$. \medskip \noindent The second result we list from \cite{CDG} is critical in identifying the extremal solution. \begin{thm} If $ u \in H_0^2(B)$ is a singular weak solution of $(P)_\lambda$, then $ u $ is semi-stable if and only if $ (u, \lambda) =(u^*,\lambda^*)$.
\end{thm} \section{The effect of boundary conditions on the pull-in voltage} As in \cite{DDGM}, we are led to examine problem $(P)_\lambda$ with non-homogeneous boundary conditions such as $$\hskip 150pt \left\{\begin{array}{ll} \Delta^2 u= \frac{ \lambda}{(1-u)^2} &\hbox{in } B \\ \alpha<u<1 &\hbox{in }B \hskip 150 pt (P)_{\lambda, \alpha, \beta}\\ u= \alpha\:,\:\:\partial_\nu u = \beta &\hbox{on } \partial B, \end{array}\right. $$ where $\alpha, \beta$ are given. \medskip \noindent Notice first that some restrictions on $ \alpha $ and $ \beta$ are necessary. Indeed, letting $\Phi(x):=( \alpha - \frac{\beta}{2} ) + \frac{\beta}{2} |x|^2 $ denote the unique solution of \begin{equation} \label{Phi} \left\{ \begin{array}{ll} \Delta^2 \Phi = 0 &\hbox{in } B \\ \Phi = \alpha\:,\:\: \partial_\nu \Phi = \beta&\hbox{on }\partial B, \end{array}\right. \end{equation} we infer immediately from Lemma \ref{boggio} that the function $u-\Phi$ is positive in $B$, which yields $$\sup_B \Phi<\sup_B u\leq 1.$$ To ensure that $\Phi$ is a classical sub-solution of $(P)_{\lambda,\alpha,\beta}$, we impose $\alpha \not= 1$ and $\beta \leq 0$, and the condition $\displaystyle \sup_B \Phi<1$ then reads $\alpha-\frac{\beta}{2} < 1$. We will then say that the pair $ (\alpha, \beta)$ is {\it admissible} if $\beta \leq 0$ and $\alpha-\frac{\beta}{2} < 1$. \medskip \noindent This section will be devoted to obtaining results for $ (P)_{ \lambda, \alpha ,\beta}$ when $ (\alpha, \beta)$ is an admissible pair, which are analogous to those for $ (P)_\lambda$. To cut down on notation, we shall sometimes drop $ \alpha $ and $ \beta$ from our expressions whenever such an emphasis is not needed. For example in this section $ u_\lambda $ and $ u^*$ will denote the minimal and extremal solution of $ (P)_{\lambda, \alpha , \beta}$. \medskip \noindent We now introduce a notion of weak solution for $(P)_{\lambda,\alpha,\beta}$. \begin{dfn} We say that $u$ is a weak solution of $(P)_{\lambda,\alpha,\beta}$ if $\alpha \leq u \le 1$ a.e. in $B$, $ \frac{1}{(1-u)^2} \in L^1(B)$ and if \[ \int_B (u-\Phi) \Delta^2 \phi = \lambda \int_B \frac{\phi}{(1-u)^2}, \qquad \forall \phi \in C^4(\bar B) \cap H_0^2(B),\] where $\Phi$ is given in (\ref{Phi}). We say that $ u $ is a weak super-solution (resp. weak sub-solution) of $(P)_{\lambda,\alpha,\beta}$ if the equality is replaced with the inequality $ \ge $ (resp. $ \le $) for $ \phi \ge 0$. \end{dfn} \noindent We now define as before \[ \lambda^*:= \sup \{ \lambda > 0: (P)_{\lambda, \alpha, \beta} \; \mbox{ has a classical solution} \}\] and \[\lambda_*:= \sup \{ \lambda > 0: (P)_{\lambda, \alpha, \beta} \; \mbox{ has a weak solution} \}.\] Observe that by the Implicit Function Theorem, one can always solve $(P)_{\lambda,\alpha,\beta}$ for small $\lambda$'s. Therefore, $\lambda^*$ (and also $\lambda_*$) is well defined. \medskip \noindent Let now $U$ be a weak super-solution of $(P)_{\lambda,\alpha,\beta}$. Recall the following standard existence result. \begin{thm} [\cite{AGGM}] \label{exist} For every $0\leq f \in L^1(B)$, there exists a unique $0\leq u \in L^1(B)$ which satisfies $$\int_B u \Delta^2 \phi=\int_B f \phi$$ for all $\phi \in C^4(\bar B) \cap H_0^2(B)$. \end{thm} \noindent We can now introduce the following ``weak iterative scheme'': Start with $u_0=U$ and (inductively) let $u_n$, $n \geq 1$, be the solution of $$\int_B (u_n-\Phi) \Delta^2 \phi=\lambda \int_B \frac{\phi}{(1-u_{n-1})^2}\qquad\:\forall \: \phi \in C^4(\bar B) \cap H_0^2(B)$$ given by Theorem \ref{exist}.
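\medskip \noindent Both this scheme and the increasing ``classical'' scheme used further below amount to repeatedly inverting the biharmonic operator with clamped conditions. As a minimal numerical sketch (our illustration, for $N=1$ and $\alpha=\beta=0$, so that $\Phi\equiv0$), the iteration for $u''''=\lambda/(1-u)^2$ on $(-1,1)$ with $u(\pm1)=u'(\pm1)=0$ can be implemented as follows; the ghost-point closure of the finite-difference stencil encodes the clamped boundary conditions.
\begin{verbatim}
import numpy as np

# Fourth-difference matrix for u'''' on (-1,1), clamped: u = u' = 0 at +-1.
n, lam = 300, 1.0
h = 2.0 / (n + 1)
D4 = (6.0 * np.eye(n)
      - 4.0 * np.eye(n, k=1) - 4.0 * np.eye(n, k=-1)
      + np.eye(n, k=2) + np.eye(n, k=-2))
D4[0, 0] += 1.0    # ghost point u_{-1} = u_1 enforcing u'(-1) = 0
D4[-1, -1] += 1.0  # same closure at the right endpoint
D4 /= h**4
D4inv = np.linalg.inv(D4)

# Increasing iteration u_k = D4^{-1}(lam / (1 - u_{k-1})^2), u_0 = 0.
u = np.zeros(n)
for _ in range(500):
    u = D4inv @ (lam / (1.0 - u)**2)
    assert u.max() < 1.0, "iterates touched the plate: lam exceeds lam*"
print("sup of the approximate minimal solution:", u.max())
\end{verbatim}
For $\lambda$ below the pull-in value of this one-dimensional problem the iterates increase monotonically to the minimal solution, in line with the arguments that follow.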
Since $0$ is a sub-solution of $(P)_{\lambda,\alpha,\beta}$, one can easily show inductively by using Lemma \ref{boggio} that $\alpha \leq u_{n+1}\leq u_n \leq U$ for every $n \geq 0$. Since $$(1-u_n)^{-2}\leq (1-U)^{-2} \in L^1(B),$$ we get by Lebesgue's dominated convergence theorem that the function $u=\displaystyle \lim_{n \to +\infty} u_n$ is a weak solution of $(P)_{\lambda,\alpha,\beta}$ such that $\alpha \leq u\leq U$. In other words, the following result holds. \begin{thm} \label{super} Assume the existence of a weak super-solution $U$ of $(P)_{\lambda,\alpha,\beta}$. Then there exists a weak solution $u$ of $(P)_{\lambda,\alpha,\beta}$ so that $\alpha \leq u \leq U$ a.e. in $B$. \end{thm} \noindent In particular, we can find a weak solution of $(P)_{\lambda,\alpha,\beta}$ for every $\lambda \in (0,\lambda_*)$. We now show that the same holds for regular weak solutions. \begin{thm} \label{cch} Let $ (\alpha, \beta)$ be an admissible pair and let $u$ be a weak solution of $(P)_{\lambda,\alpha,\beta}$. Then for every $0<\mu<\lambda$, there is a regular solution for $(P)_{\mu,\alpha,\beta}$. \end{thm} \begin{proof} Let $ \E\in (0,1)$ be given and let $ \bar u=(1-\E)u+\E \Phi$, where $\Phi$ is given in (\ref{Phi}). We have that $$\sup_B \bar u\leq (1-\E)+\E \sup_B \Phi<1\:,\quad \inf_B \bar u\geq (1-\E)\alpha +\E \inf_B \Phi=\alpha,$$ and for every $0\leq \phi \in C^4(\bar B) \cap H_0^2(B)$ there holds: \begin{eqnarray*} \int_B (\bar u-\Phi) \Delta^2 \phi &=& (1-\E) \int_B (u-\Phi)\Delta^2 \phi = (1-\E)\lambda \int_B \frac{\phi}{(1-u)^2}\\ &=& (1-\E)^3 \lambda \int_B \frac{\phi}{(1-\bar u+\E (\Phi-1))^2} \geq (1-\E)^3 \lambda \int_B \frac{\phi}{(1-\bar u)^2}. \end{eqnarray*} Note that $ 0 \le (1-\E)(1-u)=1 - \bar{u}+\E (\Phi -1) <1-\bar u$. So $ \bar{u}$ is a weak super-solution of $ (P)_{ (1-\E)^3 \lambda, \alpha , \beta}$ satisfying $\displaystyle \sup_B \bar u<1$. From Theorem \ref{super} we get the existence of a weak solution $w$ of $ (P)_{ (1-\E)^3 \lambda, \alpha , \beta}$ so that $\alpha \leq w\leq \bar u$. In particular, $\displaystyle \sup_B w<1$ and $w$ is a regular weak solution. Since $\E \in (0,1)$ is arbitrarily chosen, the proof is complete. \end{proof} \noindent Theorem \ref{cch} implies in particular the existence of a regular weak solution $U_\lambda$ for every $\lambda \in (0,\lambda_*)$. Introduce now a ``classical'' iterative scheme: $u_0=0$ and (inductively) $u_n=v_n +\Phi$, $n \geq 1$, where $v_n \in H_0^2(B)$ is the (radial) solution of \begin{equation} \label{pranzo} \Delta^2 v_n=\Delta^2(u_n-\Phi)= \frac{\lambda}{(1-u_{n-1})^2} \qquad\hbox{in }B. \end{equation} Since $v_n \in H_0^2(B)$, $u_n$ is also a weak solution of (\ref{pranzo}), and by Lemma \ref{boggio} we know that $\alpha \leq u_n\leq u_{n+1} \leq U_\lambda$ for every $n \geq 0$. Since $\displaystyle \sup_B u_n \leq \displaystyle \sup_B U_\lambda<1$ for $n\geq 0$, we get that $(1-u_{n-1})^{-2} \in L^2(B)$ and the existence of $v_n$ is guaranteed. Since $v_n$ is easily seen to be uniformly bounded in $H_0^2(B)$, the limit $u_\lambda:=\displaystyle \lim_{n \to +\infty}u_n$ exists pointwise and weakly in $H^2(B)$. By dominated convergence, $u_\lambda$ is a radial weak solution of $(P)_{\lambda,\alpha,\beta}$ so that $\displaystyle \sup_B u_\lambda\leq \displaystyle \sup_B U_\lambda<1$. By elliptic regularity theory \cite{ADN} $u_\lambda \in C^\infty(\bar B)$ and $u_\lambda-\Phi=\partial_\nu(u_\lambda-\Phi)=0$ on $\partial B$.
So we can integrate by parts to get $$\int_B \Delta^2 u_\lambda \phi =\int_B \Delta^2(u_\lambda-\Phi) \phi=\int_B (u_\lambda-\Phi)\Delta^2 \phi=\lambda \int_B \frac{\phi}{(1-u_\lambda)^2}$$ for every $\phi \in C^4(\bar B) \cap H_0^2(B)$. Hence, $u_\lambda$ is a radial classical solution of $(P)_{\lambda,\alpha,\beta}$ showing that $\lambda^*=\lambda_*$. Moreover, since $\Phi$ and $v_\lambda:=u_\lambda-\Phi$ are radially decreasing in view of \cite{Sor}, we get that $u_\lambda$ is radially decreasing too. Since the argument above shows that $u_\lambda<U$ for any other classical solution $U$ of $(P)_{\mu,\alpha,\beta}$ with $\mu \geq \lambda$, we have that $u_\lambda$ is exactly the minimal solution and $u_\lambda$ is strictly increasing as $\lambda \uparrow \lambda^*$. In particular, we can define $ u^*$ in the usual way: $ u^*(x)= \displaystyle \lim_{\lambda \nearrow \lambda^*} u_\lambda(x)$. \medskip \noindent Finally, we show the finiteness of the pull-in voltage. \begin{thm} If $ (\alpha, \beta)$ is an admissible pair, then $\lambda^*(\alpha, \beta) <+\infty$. \end{thm} \begin{proof} Let $u$ be a classical solution of $ (P)_{\lambda, \alpha, \beta}$ and let $ (\psi, \nu_1)$ denote the first eigenpair of $ \Delta^2$ in $H_0^2(B)$ with $ \psi >0$. Now, let $ C $ be such that \[ \int_{\partial B} (\beta \Delta \psi - \alpha \partial_\nu \Delta \psi) = C \int_B \psi. \] Multiplying $ (P)_{\lambda,\alpha,\beta}$ by $ \psi$ and then integrating by parts one arrives at \[ \int_B \left( \frac{ \lambda}{(1-u)^2} - \nu_1 u -C \right) \psi =0. \] Since $ \psi>0$ there must exist a point $\bar x \in B$ where $\frac{ \lambda}{(1-u(\bar x))^2} - \nu_1 u(\bar x) -C \le 0.$ Since $\alpha<u(\bar x)<1$, one can conclude that $ \lambda \le \sup_{\alpha< u <1} ( \nu_1 u +C)(1-u)^2$, which shows that $ \lambda^*<+\infty$. \end{proof} \noindent The following summarizes what we have shown so far. \begin{thm} If $(\alpha,\beta)$ is an admissible pair, then $\lambda^* \in (0,+\infty)$ and the following hold: \begin{enumerate} \item For each $ 0 < \lambda < \lambda^*$ there exists a classical, minimal solution $u_\lambda$ of $(P)_{\lambda,\alpha,\beta}$. Moreover $ u_\lambda $ is radial and radially decreasing. \item For each $ x \in B$ the map $ \lambda \mapsto u_\lambda(x)$ is strictly increasing on $ (0,\lambda^*)$. \item For $ \lambda > \lambda^*$ there are no weak solutions of $(P)_{\lambda,\alpha,\beta}$. \end{enumerate} \label{quasi} \end{thm} \subsection{Stability of the minimal branch of solutions} This section is devoted to the proof of the following stability result for minimal solutions. We shall need yet another notion of $H^2(B)-$weak solutions, which is an intermediate class between classical and weak solutions. \begin{dfn} We say that $u$ is a $H^2(B)-$weak solution of $(P)_{\lambda,\alpha,\beta}$ if $u -\Phi \in H_0^2(B)$, $ \alpha \le u \le 1$ a.e. in $B$, $ \frac{1}{(1-u)^2} \in L^1(B)$ and if \[ \int_B \Delta u \Delta \phi = \lambda \int_B \frac{\phi}{(1-u)^2}, \qquad \forall \phi \in C^4(\bar B) \cap H_0^2(B),\] where $\Phi$ is given in (\ref{Phi}). We say that $u$ is a $H^2(B)-$weak super-solution (resp. $H^2(B)-$weak sub-solution) of $(P)_{\lambda,\alpha,\beta}$ if for $ \phi \ge 0$ the equality is replaced with $ \ge$ (resp. $ \le $) and $u\geq \alpha$ (resp. $\leq$), $\partial_\nu u \leq \beta$ (resp. $\geq$) on $\partial B$. \end{dfn} \begin{thm} \label{stable} Suppose $ (\alpha,\beta)$ is an admissible pair. 
\begin{enumerate} \item The minimal solution $ u_\lambda $ is then stable and is the unique semi-stable $H^2(B)-$weak solution of $(P)_{\lambda,\alpha,\beta}$. \item The function $ u^*:= \displaystyle \lim_{\lambda \nearrow \lambda^*} u_\lambda$ is a well-defined semi-stable $H^2(B)-$weak solution of $(P)_{\lambda^*,\alpha, \beta}$. \item When $u^*$ is a classical solution, then $\mu_1(u^*)=0$ and $u^*$ is the unique $H^2(B)-$weak solution of $(P)_{\lambda^*,\alpha,\beta}$. \item If $ v$ is a singular, semi-stable $H^2(B)-$weak solution of $ (P)_{ \lambda, \alpha, \beta}$, then $ v=u^*$ and $ \lambda = \lambda^*$. \end{enumerate} \end{thm} \noindent The crucial tool is a comparison result which is valid exactly in this class of solutions. \begin{lemma} \label{shi} Let $ (\alpha, \beta)$ be an admissible pair and $u$ be a semi-stable $H^2(B)-$weak solution of $(P)_{\lambda, \alpha, \beta}$. Assume $ U $ is a $H^2(B)-$weak super-solution of $(P)_{\lambda, \alpha, \beta}$ so that $U-\Phi \in H_0^2(B)$. Then \begin{enumerate} \item $ u \le U$ a.e. in $B$; \item If $ u$ is a classical solution and $ \mu_1(u)=0$ then $ U=u$. \end{enumerate} \end{lemma} \begin{proof} (i) Define $ w:= u-U$. Then by the Moreau decomposition \cite{M} for the biharmonic operator, there exist $ w_1,w_2 \in H_0^2(B)$, with $ w=w_1 + w_2$, $ w_1 \ge 0$ a.e., $\Delta^2 w_2 \le 0 $ in the $H^2(B)-$weak sense and $\int_B \Delta w_1 \Delta w_2=0$. By Lemma \ref{boggio}, we have that $w_2 \le 0$ a.e. in $B$.\\ Given now $ 0 \le \phi \in C_c^\infty(B)$, we have that \[ \int_B \Delta w \Delta \phi \leq \lambda \int_B (f(u) - f(U)) \phi, \] where $ f(u):= (1-u)^{-2}$. Since $ u$ is semi-stable, one has \begin{eqnarray*} \lambda \int_B f'(u) w_1^2 \le \int_B (\Delta w_1)^2 = \int_B \Delta w \Delta w_1 \le \lambda \int_B ( f(u) - f(U)) w_1. \end{eqnarray*} Since $ w_1 \ge w$ one also has \[ \int_B f'(u) w w_1 \le \int_B (f(u)-f(U)) w_1,\] which once re-arranged gives \[ \int_B \tilde{f} w_1 \geq 0,\] where $ \tilde{f}(u)= f(u) - f(U) -f'(u)(u-U)$. The strict convexity of $f$ gives $ \tilde{f} \le 0$ and $ \tilde{f}< 0 $ whenever $u \not= U$. Since $w_1 \ge 0$ a.e. in $B$ one sees that $ w \le 0 $ a.e. in $B$. The inequality $ u \le U$ a.e. in $B$ is then established. \medskip \noindent (ii) Since $u$ is a classical solution, it is easy to see that the infimum in $\mu_1(u)$ is attained at some $\phi$. The function $\phi$ is then the first eigenfunction of $\Delta^2-\frac{2\lambda}{(1-u)^3}$ in $H_0^2(B)$. Now we show that $ \phi$ is of fixed sign. Using the above decomposition, one has $ \phi= \phi_1 + \phi_2$ where $ \phi_i \in H_0^2(B)$ for $i=1,2$, $ \phi_1 \ge 0$, $ \int_B \Delta \phi_1 \Delta \phi_2=0$ and $ \Delta^2 \phi_2 \le 0$ in the $H^2_0(B)-$weak sense. If $ \phi$ changes sign, then $ \phi_1 \not\equiv 0$ and $ \phi_2 <0$ in $B$ (recall that either $\phi_2<0$ or $\phi_2=0$ a.e. in $B$). We can now write: \begin{eqnarray*} 0 = \mu_1(u) \le \frac{ \int_B (\Delta (\phi_1 -\phi_2))^2 - \lambda f'(u) ( \phi_1 - \phi_2)^2}{ \int_B ( \phi_1 - \phi_2)^2} < \frac{ \int_B ( \Delta \phi)^2 - \lambda f'(u) \phi^2 }{ \int_B \phi^2} =\mu_1(u) \end{eqnarray*} in view of $\phi_1 \phi_2<-\phi_1\phi_2$ on a set of positive measure, leading to a contradiction.\\ So we can assume $ \phi \ge 0$, and by Boggio's principle we have $\phi>0$ in $B$. For $ 0 \le t \le 1$ define $$g(t)=\int_B \Delta \left[t U+(1-t)u \right] \Delta \phi - \lambda \int_B f( tU+(1-t)u) \phi,$$ where $\phi$ is the above first eigenfunction.
Since $ f$ is convex one sees that $$g(t)\geq \lambda \int_B \left[t f(U)+(1-t)f(u)-f(tU+(1-t)u)\right]\phi \geq 0$$ for every $t \geq 0$. Since $ g(0) =0$ and $$ g'(0)= \int_B \Delta (U-u) \Delta \phi-\lambda f'(u)(U-u)\phi=0 ,$$ we get that \[ g''(0)=- \lambda \int_B f''(u) (U-u)^2 \phi\geq 0.\] Since $f''(u)\phi>0$ in $B$, we finally get that $ U=u$ a.e. in $B$. \end{proof} \noindent Based again on Lemma \ref{boggio}(3), we can show a more general version of the above Lemma \ref{shi}. \begin{lemma} \label{poo} Let $ (\alpha,\beta)$ be an admissible pair and $\beta'\leq 0$. Let $u$ be a semi-stable $H^2(B)-$weak sub-solution of $(P)_{\lambda, \alpha,\beta}$ with $u=\alpha$, $\partial_\nu u=\beta' \geq \beta$ on $\partial B$. Assume that $U$ is a $H^2(B)-$weak super-solution of $(P)_{\lambda, \alpha ,\beta}$ with $U=\alpha$, $\partial_\nu U=\beta$ on $\partial B$. Then $ U \ge u$ a.e. in $B$. \end{lemma} \begin{proof} Let $ \tilde{u} \in H_0^2(B)$ denote a weak solution to $ \Delta^2 \tilde{u}= \Delta^2 (u-U)$ in $B$. Since $\tilde{u}-u+U=0$ and $\partial_\nu(\tilde{u}-u+U)\leq 0$ on $\partial B$, by Lemma \ref{boggio} one has that $ \tilde{u} \ge u-U $ a.e. in $B$. Again by the Moreau decomposition \cite{M}, we may write $\tilde u$ as $ \tilde{u} = w+v $, where $ w,v \in H_0^2(B)$, $ w \ge 0 $ a.e. in $B$, $ \Delta^2 v \le 0$ in a $H^2(B)-$weak sense and $\int_B \Delta w \Delta v=0$. Then for $ 0 \le \phi \in C^4 (\bar B)\cap H_0^2(B)$ one has \[ \int_B \Delta \tilde{u} \Delta \phi =\int_B \Delta(u-U) \Delta \phi \leq \lambda \int_B (f(u)- f( U)) \phi .\] In particular, we have that \[ \int_B \Delta \tilde{u} \Delta w \le \lambda \int_B ( f(u)-f(U)) w.\] Since by semi-stability of $u$ \begin{eqnarray*} \lambda \int_B f'(u) w^2\leq \int_B ( \Delta w)^2 = \int_B \Delta \tilde{u} \Delta w , \end{eqnarray*} we get that \[ \int_B f'(u) w^2 \le \int_B ( f(u)-f(U)) w.\] By Lemma \ref{boggio} we have $v\leq 0$ and then $ w \ge \tilde{u} \ge u -U$ a.e. in $B$. So we see that \[ 0 \le \int_B \left( f(u)-f(U)-f'(u)(u-U) \right) w.\] The strict convexity of $ f$ implies as in Lemma \ref{shi} that $ U \ge u $ a.e. in $B$. \end{proof} \noindent We shall need the following a-priori estimates along the minimal branch $u_\lambda$. \begin{lemma} \label{extremalsol} Let $ (\alpha, \beta)$ be an admissible pair. Then one has \[ 2 \int_B \frac{( u_\lambda - \Phi)^2}{(1-u_\lambda)^3} \le \int_B \frac{ u_\lambda - \Phi}{(1-u_\lambda)^2},\] where $ \Phi$ is given in (\ref{Phi}). In particular, there is a constant $C>0$ so that for every $\lambda \in (0,\lambda^*)$, we have \begin{equation} \label{tardi} \int_B (\Delta u_\lambda)^2+\int_B \frac{1}{(1-u_\lambda)^3} \leq C. \end{equation} \end{lemma} \begin{proof} Testing $ (P)_{\lambda, \alpha , \beta}$ on $ u_\lambda - \Phi \in C^4(\bar B) \cap H^2_0(B)$, we see that \begin{eqnarray*} \lambda \int_B \frac{ u_\lambda - \Phi}{(1-u_\lambda)^2} = \int_B \Delta u_\lambda \Delta( u_\lambda - \Phi) =\int_B ( \Delta (u_\lambda - \Phi))^2 \ge 2 \lambda \int_B \frac{ (u_\lambda- \Phi)^2}{( 1-u_\lambda)^3} \end{eqnarray*} in view of $\Delta^2 \Phi=0$. 
In particular, for $\delta>0$ small we have that \begin{eqnarray*} \int_{\{|u_\lambda-\Phi| \geq \delta \}}\frac{1}{(1-u_\lambda)^3}&\leq & \frac{1}{\delta^2} \int_{\{|u_\lambda-\Phi| \geq \delta \}}\frac{(u_\lambda-\Phi)^2}{(1-u_\lambda)^3} \leq \frac{1}{\delta^2} \int_B \frac{1}{(1-u_\lambda)^2}\\ &\leq &\delta \int_{\{|u_\lambda-\Phi| \geq \delta \}}\frac{1}{(1-u_\lambda)^3}+C_\delta \end{eqnarray*} by means of Young's inequality. Since for $\delta$ small, $$\int_{\{|u_\lambda-\Phi| \leq \delta \}}\frac{1}{(1-u_\lambda)^3}\leq C'$$ for some $C'>0$, we can deduce that for every $\lambda \in (0,\lambda^*)$, $$\int_B \frac{1}{(1-u_\lambda)^3} \leq C$$ for some $C>0$. By Young's and H\"older's inequalities, we now have $$\int_B (\Delta u_\lambda)^2=\int_B \Delta u_\lambda \Delta \Phi+\lambda \int_B \frac{u_\lambda -\Phi}{(1-u_\lambda)^2}\leq \delta \int_B (\Delta u_\lambda)^2 +C_\delta+C \left(\int_B \frac{1}{(1-u_\lambda)^3} \right)^{\frac{2}{3}}$$ and estimate (\ref{tardi}) is therefore established.\end{proof} \medskip \noindent We are now ready to establish Theorem \ref{stable}.\\ {\bf Proof (of Theorem \ref{stable}):} (1)\, Since $\|u_\lambda\|_\infty <1$, the infimum defining $\mu_1(u_\lambda)$ is achieved at a first eigenfunction for every $\lambda \in (0,\lambda^*)$. Since $\lambda \mapsto u_\lambda(x)$ is increasing for every $x \in B$, it is easily seen that $\lambda \mapsto \mu_1( u_\lambda)$ is an increasing, continuous function on $ (0, \lambda^*)$. Define \[ \lambda_{**}:= \sup\{ 0 <\lambda < \lambda^*: \: \mu_1( u_\lambda) >0 \} .\] We have that $ \lambda_{**}= \lambda^*$. Indeed, otherwise we would have that $ \mu_1(u_{ \lambda_{**}}) =0$, and for every $ \mu \in ( \lambda_{**}, \lambda^*)$ $ u_{\mu}$ would be a classical super-solution of $ (P)_{ \lambda_{**},\alpha, \beta}$. A contradiction arises since Lemma \ref{shi} implies $u_{\mu} = u_{\lambda_{**}}$.\\ Finally, Lemma \ref{shi} guarantees uniqueness in the class of semi-stable $H^2(B)-$weak solutions.\\ (2) \, By estimate (\ref{tardi}) it follows that $u_\lambda \to u^*$ in a pointwise sense and weakly in $H^2(B)$, and $ \frac{1}{1-u^*} \in L^3(B)$. In particular, $u^*$ is a $H^2(B)-$weak solution of $(P)_{ \lambda^*, \alpha ,\beta}$ which is also semi-stable as limiting function of the semi-stable solutions $\{u_\lambda\}$.\\ (3) Whenever $\|u^*\|_\infty<1$, the function $u^*$ is a classical solution, and by the Implicit Function Theorem we have that $\mu_1(u^*)=0$ to prevent the continuation of the minimal branch beyond $\lambda^*$. By Lemma \ref{shi} $u^*$ is then the unique $H^2(B)-$weak solution of $(P)_{\lambda^*,\alpha,\beta}$. An alternative approach --which we do not pursue here-- based on the very definition of the extremal solution $u^*$ is available in \cite{CDG} when $\alpha=\beta=0$ (see also \cite{Mar}) to show that $u^*$ is the unique weak solution of $(P)_{\lambda^*}$, regardless of whether $u^*$ is regular or not.\\ (4) \, If $ \lambda < \lambda^*$, by uniqueness $v=u_\lambda $. So $v$ is not singular and a contradiction arises. \medskip \noindent By Theorem \ref{quasi}(3) we have that $ \lambda = \lambda^*$. Since $ v $ is a semi-stable $H^2(B)-$weak solution of $ (P)_{ \lambda^*, \alpha, \beta}$ and $ u^*$ is a $H^2(B)-$weak super-solution of $ (P)_{\lambda^*, \alpha , \beta}$, we can apply Lemma \ref{shi} to get $ v \le u^*$ a.e. in $B$. Since $u^*$ is a semi-stable solution too, we can reverse the roles of $ v$ and $ u^*$ in Lemma \ref{shi} to see that $ v \ge u^*$ a.e. in $B$. 
So equality $v=u^*$ holds and the proof is done. \section{Regularity of the extremal solution for $ 1 \le N \le 8$ } We now return to the issue of the regularity of the extremal solution in problem $(P)_\lambda$. Unless stated otherwise, $ u_\lambda $ and $ u^*$ refer to the minimal and extremal solutions of $ (P)_\lambda$. We shall show that the extremal solution $ u^*$ is regular provided $ 1 \le N \le 8$. We begin by showing that it is indeed the case in small dimensions: \begin{thm} $ u^*$ is regular in dimensions $ 1 \le N \le 4$. \label{regular1} \end{thm} \begin{proof} As already observed, estimate (\ref{tardi}) implies that $f(u^*)=(1-u^*)^{-2} \in L^{\frac{3}{2}}(B)$. Since $u^*$ is radial and radially decreasing, we need to show that $ u^*(0)<1$ to get the regularity of $ u^*$. The integrability of $f(u^*)$ along with elliptic regularity theory shows that $ u^* \in W^{4, \frac{3}{2}}(B)$. By the Sobolev imbedding theorem we get that $u^*$ is a Lipschitz function in $B$.\\ Now suppose $ u^*(0)=1$ and $ 1 \le N \le 3$. Since $$\frac{1}{1-u^*} \ge \frac{C}{|x|}\qquad \hbox{in }B$$ for some $ C>0$, one sees that \[ \infty = C^3 \int_B \frac{1}{|x|^3} \le \int_B \frac{1}{(1-u^*)^3} < \infty.\] A contradiction arises and hence $u^*$ is regular for $ 1 \le N \le 3$.\\ For $N=4$ we need to be more careful and observe that $u^* \in C^{1, \frac{1}{3}}(\bar B)$ by the Sobolev imbedding theorem. If $ u^*(0)=1$, then $ \nabla u^*(0)=0$ and \[ \frac{1}{1-u^*} \ge \frac{C}{|x|^\frac{4}{3}} \qquad \hbox{in }B \] for some $ C>0$. We now obtain a contradiction exactly as above. \end{proof} \noindent We now tackle the regularity of $ u^*$ for $ 5 \le N \le 8$. We start with the following crucial result: \begin{thm} Let $ N \ge 5$ and $ (u^*, \lambda^*)$ be the extremal pair of $(P)_\lambda$. When $u^*$ is singular, then \[ 1-u^*(x) \le C_0 |x|^\frac{4}{3} \qquad \hbox{in }B,\] where $ C_0:= \left( \frac{\lambda^*}{\overline{\lambda}}\right)^\frac{1}{3}$ and $ \bar{\lambda}:= \frac{8 (N-\frac{2}{3}) (N- \frac{8}{3})}{9}$. \label{touchdown}\end{thm} \begin{proof} First note that Theorem \ref{CdG}(4) gives the lower bound: \begin{equation}\label{lowbound} \lambda^* \geq \bar \lambda= \frac{128-240N+72N^2}{81}. \end{equation} For $ \delta >0$, we define $ u_\delta(x):=1-C_\delta |x|^\frac{4}{3}$ with $ C_\delta:= \left( \frac{\lambda^*}{\bar \lambda}+\delta \right)^\frac{1}{3}>1$. Since $N\geq 5$, we have that $ u_\delta \in H^2_{loc}({\mathbb{R}}^N)$, $ \frac{1}{1-u_\delta} \in L^3_{loc}({\mathbb{R}}^N)$ and $ u_\delta $ is a $H^2-$weak solution of \[ \Delta^2 u_\delta = \frac{ \lambda^* + \delta \bar{ \lambda}}{ (1-u_\delta)^2} \qquad \mbox{ in } {\mathbb{R}}^N.\] We claim that $u_\delta \leq u^*$ in $B$, which will finish the proof by just letting $\delta \to 0$. \medskip \noindent Assume by contradiction that the set $\Gamma:=\{ r \in (0,1):u_\delta(r) >u^*(r) \}$ is non-empty, and let $r_1=\displaystyle \sup \:\Gamma$.
Since \[ u_\delta(1) = 1 - C_\delta<0=u^*(1),\] we have that $0 < r_1 < 1$ and one infers that \[ \alpha:= u^*(r_1)=u_\delta(r_1) \:, \quad \beta:=( u^*)'(r_1) \geq u_\delta'(r_1) .\] Setting $u_{\delta,r_1}(r)=r_1^{-\frac{4}{3}}\left(u_\delta(r_1 r)-1 \right) +1$, we easily see that $u_{\delta,r_1}$ is a $H^2(B)-$weak super-solution of $(P)_{\lambda^*+\delta \bar \lambda,\alpha',\beta'}$, where $$\alpha':= r_1^{-\frac{4}{3}}( \alpha-1) +1\:,\quad \beta':= r_1^{-\frac{1}{3}} \beta.$$ \medskip \noindent Similarly, let us define $u^*_{r_1}(r)= r_1^{-\frac{4}{3}}\left( u^*(r_1 r)-1\right) +1$. The dilation map \begin{equation} w \to w_{r_1}(r)=r_1^{-\frac{4}{3}}\left( w(r_1 r)-1\right) +1 \end{equation} is a correspondence between solutions of $(P)_{\lambda}$ on $B$ and of $(P)_{\lambda,1-r_1^{-\frac{4}{3}},0}$ on $B_{r_1^{-1}}$ which preserves the $H^2-$integrability. In particular, $(u^*_{r_1},\lambda^*)$ is the extremal pair of $(P)_{\lambda,1-r_1^{-\frac{4}{3}},0}$ on $B_{r_1^{-1}}$ (defined in the obvious way). Moreover, $u^*_{r_1}$ is a singular semi-stable $H^2(B)-$weak solution of $(P)_{\lambda^*,\alpha',\beta'}$. \medskip \noindent Since $u^*$ is radially decreasing, we have that $ \beta' \le 0$. Define the function $w$ as $w(x):= ( \alpha' - \frac{\beta'}{2}) + \frac{ \beta'}{2} |x|^2 + \gamma(x) $, where $ \gamma $ is a solution of $ \Delta^2 \gamma= \lambda^* $ in $B$ with $ \gamma = \partial_\nu \gamma =0 $ on $ \partial B$. Then $ w$ is a classical solution of $$ \left\{ \begin{array}{ll} \Delta^2 w = \lambda^* &\hbox{in } B \\ w = \alpha'\:, \quad \partial_\nu w = \beta' &\hbox{on } \partial B. \end{array} \right.$$ Since $\frac{\lambda^*}{(1-u^*_{r_1})^2}\geq \lambda^*$, by Lemma \ref{boggio} we have $ u^*_{r_1} \ge w $ a.e. in $B$. Since $ w(0) = \alpha' - \frac{\beta'}{2}+ \gamma(0)$ and $ \gamma(0)>0$, the bound $ u^*_{r_1} \le 1$ a.e. in $B$ yields $ \alpha' - \frac{\beta'}{2}<1$. Namely, $ (\alpha', \beta')$ is an admissible pair and by Theorem \ref{stable}(4) we get that $(u^*_{r_1},\lambda^*)$ coincides with the extremal pair of $(P)_{ \lambda, \alpha', \beta'}$ in $B$. \medskip \noindent Since $(\alpha',\beta')$ is an admissible pair and $u_{\delta,r_1}$ is a $H^2(B)-$weak super-solution of $(P)_{\lambda^*+\delta \bar \lambda,\alpha',\beta'}$, by Theorem \ref{super} we get the existence of a weak solution of $(P)_{\lambda^*+\delta \bar \lambda,\alpha',\beta'}$. Since $\lambda^*+\delta \bar \lambda>\lambda^*$, we contradict the fact that $\lambda^*$ is the extremal parameter of $(P)_{\lambda,\alpha',\beta'}$. \end{proof} \noindent Thanks to this lower estimate on $u^*$, we get the following result. \begin{thm} If $ 5 \le N \le 8$, then the extremal solution $u^*$ of $ (P)_\lambda$ is regular. \label{regular2} \end{thm} \begin{proof} Assume that $ u^*$ is singular. For $ \E>0$ set $\psi(x):= |x|^{ \frac{4-N}{2}+\E}$ and note that \[ (\Delta \psi)^2 = (H_N +O( \E)) |x|^{-N+2\E}, \qquad \mbox{ where}\qquad H_N:= \frac{N^2 (N-4)^2}{16}.\] Given $\eta \in C_0^\infty(B)$, and since $N\geq 5$, we can insert the test function $\eta \psi \in H_0^2(B)$ into the stability inequality to obtain \[ 2 \lambda \int_B \frac{\psi^2}{(1-u^*)^3} \le \int_B (\Delta \psi)^2 +O(1), \] where $O(1)$ is a bounded function as $ \E \searrow 0$.
By Theorem \ref{touchdown} we find that \[ 2 \bar \lambda \int_B \frac{\psi^2}{|x|^4} \le \int_B (\Delta \psi)^2 +O(1),\] and then \[ 2 \bar \lambda \int_B |x|^{-N+2\E} \le (H_N +O(\E)) \int_B |x|^{-N+2\E} +O(1).\] Computing the integrals one arrives at \[ 2 \bar \lambda \le H_N +O(\E).\] As $ \E \to 0$ we finally obtain $2 \bar \lambda \le H_N$, and a direct inspection of this inequality shows that it forces $ N \ge 9$. \end{proof} \noindent We can now slightly improve the lower bound (\ref{lowbound}). \begin{cor} \label{lambda.bar} In any dimension $N\geq 1$, we have \begin{equation}\label{lower} \lambda^*>\bar \lambda=\frac{8 (N-\frac{2}{3}) (N- \frac{8}{3})}{9}. \end{equation} \end{cor} \begin{proof} The function $\bar{u}:=1-|x|^\frac{4}{3}$ is a $H^2(B)-$weak solution of $(P)_{\bar \lambda,0,-\frac{4}{3}}$. If by contradiction $\lambda^*=\bar \lambda$, then $\bar u$ is a $H^2(B)-$weak super-solution of $(P)_\lambda$ for every $\lambda \in (0,\lambda^*)$. By Lemma \ref{shi} we get that $u_\lambda \le \bar{u}$ for all $ \lambda < \lambda^*$, and then $u^*\le \bar{u}$ a.e. in $B$. \medskip \noindent If $1\leq N \leq 8$, $u^*$ is then regular by Theorems \ref{regular1} and \ref{regular2}. By Theorem \ref{stable}(3) there holds $\mu_1(u^*)=0$. Lemma \ref{shi} then yields that $u^*=\bar u$, which is a contradiction since then $u^*$ would not satisfy the boundary conditions. \medskip \noindent If now $N\geq 9$ and $ \bar{\lambda} = \lambda^*$, then $C_0=1$ in Theorem \ref{touchdown}, and we then have $ u^* \geq \bar{u}$. This again forces $u^*=\bar u$, a contradiction that completes the proof. \end{proof} \section{The extremal solution is singular for $N \ge 17$} In this section, we will need the following improved Hardy-Rellich inequality, which is valid for $N \ge 5$ (see \cite{GM} and references therein): \[ \int_B (\Delta \psi)^2 \ge H_N \int_B \frac{\psi^2}{|x|^4} + C \int_B \psi^2 \qquad \forall \; \psi \in H_0^2(B) \] where $ H_N:= \frac{ N^2(N-4)^2}{16} $ is optimal and $C>0$. As in the previous section $(u^*,\lambda^*)$ denotes the extremal pair of $ (P)_\lambda$. We first show the following upper bound on $u^*$. \begin{lemma} \label{blow} If $ N \ge 9$, then $ u^* \le 1 - |x|^\frac{4}{3}$ in $B$. \end{lemma} \begin{proof} Recall that $ \bar{\lambda}:=\frac{8 (N-\frac{2}{3}) (N- \frac{8}{3})}{9} \leq \lambda^*$. If $ \bar{\lambda} = \lambda^*$, then by the proof of Corollary \ref{lambda.bar}, we know that $u^*\leq \bar u$.\\ Suppose now that $ \bar{\lambda} < \lambda^*$. We claim that $ u_\lambda \le \bar{u}$ for all $ \lambda \in ( \bar{\lambda}, \lambda^*)$. Indeed, fix $ \lambda $ and assume by contradiction that \[ R_1:= \inf \{ 0 \le R \le 1: u_\lambda < \bar{u} \mbox{ in } (R,1) \}>0.\] From the boundary conditions, one has that $ u_\lambda(r) < \bar{u}(r)$ as $ r\to 1^-$. Hence, $0<R_1<1$, $ \alpha:=u_\lambda(R_1)=\bar{u}(R_1)$ and $ \beta:=u_\lambda'(R_1) \le \bar{u}'(R_1)$. Introduce, as in the proof of Theorem \ref{touchdown}, the functions $(u_\lambda)_{R_1}$ and $(\bar u)_{R_1}$. We have that $(u_\lambda)_{R_1}$ is a classical super-solution of $(P)_{\bar \lambda,\alpha',\beta'}$, where $$\alpha':= R_1^{-\frac{4}{3}}( \alpha-1) +1\:,\quad \beta':= R_1^{-\frac{1}{3}} \beta.$$ Note that $(\bar u)_{R_1}$ is a $H^2(B)-$weak sub-solution of $(P)_{\bar \lambda,\alpha',\beta'}$ which is also semi-stable in view of $2\bar \lambda \leq H_N$ and the Hardy-Rellich inequality. By Lemma \ref{poo}, we deduce that $(u_\lambda)_{R_1}\geq (\bar u)_{R_1}$ in $B$.
Note that, arguing as in the proof of Theorem \ref{touchdown}, $(\alpha',\beta')$ is an admissible pair. \medskip \noindent We have therefore shown that $u_\lambda \geq \bar u$ in $B_{R_1}$ and a contradiction arises in view of the fact that $\displaystyle \lim_{x \to 0} \bar u(x)=1$ and $\|u_\lambda\|_\infty<1$. It follows that $u_\lambda \leq \bar u$ in $B$ for every $\lambda \in (\bar \lambda, \lambda^*)$, and in particular $u^*\leq \bar u$ in $B$. \end{proof} \noindent Our approach for showing that $ u^*$ is singular for large dimensions will depend on the sign of $ H_N - 2 \lambda^*$. \begin{thm} If $ N \ge 9$ and $\lambda^* \le \frac{H_N}{2}$, then the extremal solution $ u^*$ of $(P)_\lambda$ is singular. \label{kkl} \end{thm} \begin{proof} Let $ \psi \in C_c^\infty(B)$ with $ \int_B \psi^2 =1$. By Lemma \ref{blow} and the improved Hardy-Rellich inequality (see \cite{GM}), one then has \begin{eqnarray*} \int_B (\Delta \psi)^2 - 2 \lambda^* \int_B \frac{ \psi^2}{(1-u^*)^3} \ge \int_B (\Delta \psi)^2 - H_N \int_B \frac{\psi^2}{|x|^4} \ge C. \end{eqnarray*} It follows that $ \mu_1(u^*)>0$ and $ u^*$ must be singular, since otherwise, one could use the Implicit Function Theorem to continue the minimal branch beyond $ \lambda^*$. \end{proof} \noindent We can now show the following result about the extremal solution. \begin{thm} \label{hjh} The following upper bounds on $\lambda^*$ hold in large dimensions. \begin{enumerate} \item If $N \ge 31$, then $\lambda^* \le 27 \bar{ \lambda}\leq \frac{H_N}{2}$. \item If $17 \le N \le 30$, then $\lambda^* \le \frac{H_N}{2}$. \end{enumerate} The extremal solution is therefore singular for dimension $N\geq 17$. \end{thm} \begin{proof} Consider for any $m>\frac{4}{3}$ the following function: \begin{equation} w_m:=1-\frac{3m}{3m-4}\,r^{4/3}+\frac{4}{3m-4}\,r^m. \end{equation} Assume first that $N \ge 31$, so that $27 \bar{ \lambda} \le \frac{H_N}{2}$. We shall show that $w_2$ is a singular $H^2(B)-$weak sub-solution of $(P)_{27 \bar \lambda} $ that is semi-stable. Indeed, write \[ w_2:=1-|x|^{\frac{4}{3}}-2(|x|^\frac{4}{3}-|x|^2)=\bar{u}-\phi_0, \] where $ \phi_0:=2(|x|^\frac{4}{3}-|x|^2)$, and note that $w_2\in H_0^2(B)$, $ \frac{1}{1-w_2} \in L^3(B)$, $ 0 \le w_2 \le 1$ in $B$, and \[ \Delta^2 w_2 \le \frac{ 27 \bar{\lambda}}{(1-w_2)^2} \qquad \hbox{in }B\setminus \{0\}. \] So $w_2$ is a $H^2(B)-$weak sub-solution of $(P)_{27 \bar \lambda} $. Moreover, since $ 27 \bar{ \lambda} \le \frac{H_N}{2}$, and since $\phi_0\geq 0$, we get that \begin{eqnarray*} 54 \bar \lambda \int_B \frac{ \psi^2}{(1-w_2)^3} \le H_N \int_B \frac{ \psi^2}{(|x|^\frac{4}{3}+\phi_0)^3} \le H_N \int_B \frac{\psi^2}{|x|^4} \le \int_B (\Delta \psi)^2 \end{eqnarray*} for all $ \psi \in H_0^2(B)$. Hence $w_2$ is also semi-stable. If now $ 27 \bar{ \lambda} < \lambda^*$, then by Lemma \ref{poo}, $ w_2$ is necessarily below the minimal solution $ u_{ 27 \bar{ \lambda}}$, which is impossible since $w_2$ is singular while $u_{27 \bar{\lambda}}$ is regular. Hence $ \lambda^* \leq 27 \bar{ \lambda} \le \frac{H_N}{2}$. \medskip \noindent Now consider the function \[ w_3:=1- \frac{9}{5} r^\frac{4}{3} + \frac{4}{5} r^3. \] We show that it is a singular $H^2(B)-$weak sub-solution of $ (P)_\frac{H_N}{2}$ that is semi-stable. Indeed, we clearly have that $ 0 \le w_3 \le 1$ a.e. in $B$, $ w_3 \in H_0^2(B)$ and $ \frac{1}{1-w_3} \in L^3(B)$.
To show the stability condition, we consider $ \psi \in C_c^\infty(B)$ and write \begin{eqnarray*} H_N \int_B \frac{ \psi^2}{(1-w_3)^3} &=& 125 H_N \int_B \frac{ \psi^2}{ (9r^\frac{4}{3}-4r^3)^3} \le 125 H_N \sup_{0<r<1} \frac{1}{(9-4r^{\frac{5}{3}})^3} \int_B \frac{\psi^2}{r^4} \\ &=& H_N \int_B \frac{ \psi^2}{r^4} \le \int_B (\Delta \psi)^2. \end{eqnarray*} An easy computation shows that \begin{eqnarray*} \frac{H_N}{2(1-w_3)^2} - \Delta^2 w_3 &=& \frac{ 25 H_N}{2 ( 9r^\frac{4}{3}-4r^3)^2} - \frac{ 9 \bar{\lambda}}{5 r^\frac{8}{3}} - \frac{12}{5}\frac{N^2-1}{r}\\ &=& \frac{25 N^2 (N-4)^2 }{32 ( 9 r^\frac{4}{3}-4r^3)^2} - \frac{8 ( N-\frac{2}{3})(N-\frac{8}{3})}{ 5 r^\frac{8}{3}} - \frac{12}{5}\frac{N^2-1}{r} \end{eqnarray*} and by using Maple, one can verify that this final quantity is nonnegative on $(0,1)$ whenever $ 17 \le N \le 30$; hence $ w_3 $ is a subsolution of $ (P)_\frac{H_N}{2}$. If now $\frac{H_N}{2}<\lambda^*$, then Lemma \ref{poo} would imply that the minimal solution $ u_\lambda$ is larger than $w_3$ and hence is singular for $ \frac{H_N}{2} < \lambda < \lambda^*$, which is a contradiction. \end{proof} \begin{remark} \rm We believe that the extremal solution is singular for all $N\geq 9$ and, for that, one would need to construct, for the remaining cases $9\leq N\leq 16$ (when $\frac{H_N}{2} < \lambda^*$), a singular $H^2(B)-$weak sub-solution of $ (P)_\frac{H_N}{2}$ that is semi-stable. However, one can show that (at least for $N=9$) such a sub-solution cannot be obtained by simply perturbing $\bar u$ with a function of the form $\phi_0=\frac{4}{3} \beta r^\alpha(1-r^\beta)$.\\ \noindent The construction of such a sub-solution for the remaining cases, i.e. when $9\leq N\leq 16$ and $\frac{H_N}{2} < \lambda^*$, will therefore very likely require a computer-assisted proof, which we leave open to the interested reader. \end{remark}
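\medskip \noindent The dimension counts entering the last two proofs are elementary to check by machine. The following sketch (ours, a stand-in for the Maple computation quoted above) verifies the identity between the two expressions for $\bar\lambda$, the thresholds $2\bar\lambda\le H_N$ for $N\ge9$ and $27\bar\lambda\le H_N/2$ for $N\ge31$, and the nonnegativity of $\frac{H_N}{2(1-w_3)^2}-\Delta^2 w_3$ on a grid in $(0,1)$ for $17\le N\le 30$.
\begin{verbatim}
import numpy as np

H  = lambda N: N**2 * (N - 4)**2 / 16.0            # H_N
lb = lambda N: 8.0 * (N - 2/3) * (N - 8/3) / 9.0   # bar-lambda

# bar-lambda = (128 - 240 N + 72 N^2)/81 for all N.
for N in range(1, 50):
    assert abs(lb(N) - (128 - 240*N + 72*N**2) / 81.0) < 1e-9

print(min(N for N in range(5, 100) if 2*lb(N) <= H(N)))     # prints 9
print(min(N for N in range(5, 100) if 27*lb(N) <= H(N)/2))  # prints 31

# Sub-solution inequality for w_3 = 1 - (9/5) r^(4/3) + (4/5) r^3:
# H_N/(2(1-w_3)^2) - Delta^2 w_3 >= 0 on (0,1) for 17 <= N <= 30.
r = np.linspace(1e-4, 1.0, 100000)
for N in range(17, 31):
    gap = (25.0 * H(N) / (2.0 * (9.0*r**(4/3) - 4.0*r**3)**2)
           - (9.0/5.0) * lb(N) / r**(8/3)
           - (12.0/5.0) * (N**2 - 1) / r)
    assert gap.min() >= 0.0, N
print("w_3 sub-solution inequality verified for N = 17..30")
\end{verbatim}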
\section{Introduction} The coherent conversion of quantum information between mobile photonic qubits for communication and stationary material qubits for storage and data processing is an important building block of quantum networks. In atomic systems several ideas to realize such a \emph{quantum interface} have been suggested and experimentally demonstrated in recent years (see \cite{Kim08} for a review). For semiconductor quantum dots (QD) proposals for interfaces in analogy to the cavity-based atomic schemes have been put forward \cite{IAB+99}, \cite{YLS05} and major prerequisites such as strong coupling to a nano-cavity \cite{HBW+07} have been realized (see \cite{HaAw08} for a review). Here we will show how to realize a QD-based quantum interface between the \emph{nuclear spins} in a QD and the optical field. The read-out we propose maps the nuclear state to the output mode of the cavity directly, while the write-in proceeds by deterministic creation of entanglement between the nuclear spins and the cavity output-mode and subsequent teleportation. Our scheme has several attractive features: the very long nuclear spin lifetimes make the nuclei attractive for storing quantum information \cite{TIL03} and the use of collective states makes it possible to map not just qubits but also multi-photon states. In addition, typical electron spin decoherence processes will be suppressed: the major such process --hyperfine interaction with the lattice nuclear spins \cite{SKL03}-- is harnessed to achieve the desired coupling and the influence of other processes is weakened since the electronic states can be adiabatically eliminated from the dynamics. The price for this is a reduction in the speed of the mapping process and the necessity to initialize the nuclear spin ensemble in a highly polarized state. In view of the high nuclear polarization of above 80\% reported recently \cite{Maletinsky2008} the proposed protocol enables the high-fidelity mapping between a (traveling) optical field and the nuclear spin ensemble in a realistic setup. The paper is organized as follows: First, we introduce the system in Sec.~\ref{sec:system}. In Sec.~\ref{sec:adiabatic} we sketch the adiabatic elimination that yields the Hamiltonians that describe the effective coupling between light and nuclear spins (for a detailed derivation see App.~\ref{app:adiabatic}). Next, we explain the interface protocol in Sec.~\ref{sec:interface} and finally give an example for the implementation of the protocol in Sec.~\ref{sec.impl}. \begin{figure}[ht] \centering \includegraphics[scale=0.68]{systempaperkurz6neu.eps} \caption{\label{fig:1} (a) Singly charged QD coupled to high-Q optical cavity. (b) Level scheme of the QD. Optical and hyperfine transitions.} \end{figure} \section{System}\label{sec:system} We consider a self-assembled QD charged with a single conduction-band electron, whose spin-states $\ket{\uparrow},\ket{\downarrow}$ are split in a magnetic field. For clarity we first consider a simplified model, in which both electronic states are coupled by electric dipole transitions to the same charged exciton (trion) state $\ket{X}$ in a $\Lambda$-configuration, cf. Fig.~\ref{fig:1}. Note that the selection rules in QDs often make it necessary to consider more complicated level schemes. After introducing our protocol using this simplified model, we will present a setting to realize the required coupling and discuss the effect of corrections to Eq.~(\ref{eq:optical}) in Sec.~\ref{sec.impl}. 
The QD is strongly coupled to a high-Q nano-cavity \cite{HBW+07}. The two transitions are, respectively, off-resonantly driven by the cavity mode (frequency $\omega_c$) and a laser of frequency $\omega_l$, cf. Fig.~\ref{fig:1}, described by the Hamiltonian \begin{align}\label{eq:optical} H_\mr{opt}=&\frac{\Omega_c}{2}\,a^{\dagger}\, \ketbra{\downarrow}{X} + \frac{\Omega_l}{2}\,e^{+i\omega_l t}\ketbra{\uparrow}{X}+\textrm{h.c.}\nonumber\\\,&+\omega_c\, a^{\dagger}a+\omega_{X}\proj{X}+ \omega_z S^z , \end{align} where $\hbar=1$, $\Omega_l, \Omega_c$ are the Rabi frequencies of laser and cavity fields, $a^{\dagger}$, $a$ are the cavity photons, $\omega_X$ denotes the trion energy, $\omega_z$ the Zeeman splitting of the electronic states and $S^z=1/2(\ketbra{\uparrow}{\uparrow}-\ketbra{\downarrow}{\downarrow})$. In Sec.~\ref{sec.impl}, we discuss how to effectively realize such a three-level system in a quantum dot. Cavity decay (at a rate $\ll\Omega_l, \Omega_c$) will be discussed in detail later on. As already mentioned, in most QDs the electron spin also has a strong hyperfine interaction with $N\sim10^4$-$10^6$ lattice nuclear spins \cite{SKL03}. For s-type electrons it is dominated\footnote{We neglect the non-contact parts of the hyperfine interaction \cite{FTCL09} and other small nuclear interactions such as the nuclear Zeeman term and the interaction between the nuclear spins.} by the Fermi contact term \begin{equation}\label{eqn:hfneu} H_\mathrm{hf}=\frac{A}{2}(S^+A^-+\mr{h.c.}) + A S^zA^z, \end{equation} where $A$ is the hyperfine coupling constant, $S^{\pm}$ are the electron spin operators and $A^{\pm,z}=\sum_j\,\alpha_j I_j^{\pm,z}$ are the collective nuclear spin operators (we consider spin-1/2 nuclei for simplicity). The individual coupling constants $\alpha_j$ are proportional to the electron wave function at site $j$ and normalized to $\sum_j\alpha_j=1$. A prerequisite for using nuclear spins as a quantum memory is to initialize them in a highly polarized state which also satisfies $A^-\ket{\psi_0}=0$, i.e. is decoupled from the electron in state $\ket{\hspace{-1pt}\downarrow}$ (``dark state''). Recently, nuclear polarization $P=\left<A^z\right>/(-1/2)$ exceeding $80\%$ has been reported \cite{Maletinsky2008} (see also \cite{BSG+05,SNM+08}). The dark state condition is the natural consequence of using $H_\mr{hf}$ to polarize the nuclei \cite{IKTZ03}, but has not yet been verified experimentally. It is useful to separate the large expectation value of $A^z$, which describes the effective magnetic field experienced by the electron spin due to the nuclei, and write $A^z = \left<A^z\right>_{\psi_0}+\delta A^z$. Henceforth we include the first term in $H_\mr{opt}$ by introducing $\tilde\omega_z = \omega_z+A\langle A^z\rangle_{\psi_0}$.\\ In the high-polarization regime $1-P \ll 1$ a very convenient \textit{bosonic description} for the nuclear spins becomes available: all excitations out of the fully polarized state and in particular the collective spin operator $A^+$ are approximated by bosonic creation operators applied to the $N$-mode vacuum state \cite{Christ2008,KST+Fl09}.
Replacing $A^-\to(\sum_j\alpha_j^2)^{1/2}b$ and $A^z\to(-\frac{1}{2}+\frac{1}{N}b^\dag b)$, Eq.~(\ref{eqn:hfneu}) reads (small corrections omitted in these replacements are discussed in Appendix~\ref{app:BosonicDesc}) \begin{equation} \label{eq:H3} \tilde{H}_{\textrm{hf}} = \frac{g_n}{2}(b^{\dagger}S^-+S^+b) + \frac{A}{N}S^z\left(b^\dagger b -\frac{N}{2}\right), \end{equation} where $g_n = A\sqrt{\sum_j\alpha_j^2}$. The expression $N_1=(\sum_j\alpha_j^2)^{-1}$ can be seen as the effective number of nuclear spins to which the electron couples. In the homogeneous case $\alpha_j=\mr{const}$ we have $N_1=N$. Neglecting very weakly coupled nuclei we have $N_1\approx N$ and we will just use $N$ in the following. The bosonic description emphasizes the relation to quantum optical schemes, gives access to the toolbox for Gaussian states and operations and allows a more transparent treatment of the corrections to the ideal Jaynes-Cummings-like coupling of Eq.~(\ref{eq:H3}); we will make use of this description later on. \section{Coupling cavity and nuclear spins}\label{sec:adiabatic} Our aim is to obtain from $H=H_\mr{opt}+H_\mr{hf}$ a direct coupling between nuclear spins and light. The Hamiltonian $H$ describes a complicated coupled dynamics of cavity, nuclei and quantum dot. Instead of making use of the full Hamiltonian (and deriving the desired mapping, e.g., in the framework of optimal control theory) we aim for a simpler, more transparent approach. To this end, we adiabatically eliminate \cite{BPM06} the trion and the electronic degrees of freedom, which leads to a Hamiltonian $H_{el}$ that describes a direct coupling between nuclear spins and light. As explained later, this can be achieved if the couplings (the Rabi frequency of the laser/cavity, the hyperfine coupling, respectively) are much weaker than the detunings to the corresponding transition: \begin{subequations} \begin{align} &\Delta'\gg\Omega_l,\Omega_c\sqrt{n},\label{eq:condelimi}\\ &\sqrt{\Delta'\,\,\tilde{\omega}_z}\gg\Omega_l,\Omega_c\sqrt{n},\label{eq:condelimii}\\ &\tilde{\omega}_z\gg g_n\sqrt{m}.\label{eq:condelimiii} \end{align} \end{subequations} Here, $\Delta'=\omega_X-\omega_l+\tilde{\omega}_z/2$ is the detuning, $n$ is the number of cavity photons, and $m$ the number of nuclear excitations. Note that typically $\tilde\omega_z<\Delta'$ such that condition (\ref{eq:condelimi}) becomes redundant. In addition to (\ref{eq:condelimi})-(\ref{eq:condelimiii}), we choose the adjustable parameters such that all first order and second order processes described by $H$ are off-resonant, but the (third order) process in which a photon is scattered from the laser into the cavity while a nuclear spin is flipped down (and its converse) is resonant. This leads to the desired effective interaction. The idea of adiabatic elimination is to perturbatively approximate a given Hamiltonian by removing a subspace from the description that is populated only with a very low probability due to chosen initial conditions and detunings or fast decay processes. If initially unpopulated states (in our case the trion state $\ket{X}$ and the electronic spin-up state $\ket{\uparrow}$) are only weakly coupled to the initially occupied states, they remain essentially unpopulated during the time evolution of the system and can be eliminated from the description. The higher order transitions via the eliminated levels appear as additional energy shifts and couplings in the effective Hamiltonian on the lower-dimensional subspace. 
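A minimal toy computation (ours, not part of the derivation) illustrates the principle on the simplest example: a level $\ket{g}$ coupled with strength $\Omega$ to a level $\ket{e}$ detuned by $\Delta\gg\Omega$ stays essentially in $\ket{g}$ and merely acquires the second-order Stark shift $-\Omega^2/(4\Delta)$, which is exactly what the effective Hamiltonian predicts after eliminating $\ket{e}$.
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

# Off-resonant two-level model: |g> weakly coupled to a detuned level |e>.
Omega, Delta = 0.05, 1.0
H = np.array([[0.0, Omega / 2.0],
              [Omega / 2.0, Delta]])
shift = -Omega**2 / (4.0 * Delta)   # prediction of adiabatic elimination

t = 200.0
psi = expm(-1j * H * t) @ np.array([1.0, 0.0])
print("population of |e>       :", abs(psi[1])**2)  # ~(Omega/(2 Delta))^2
print("accumulated phase of |g>:", np.angle(psi[0]))
print("predicted phase         :", -shift * t % (2*np.pi))
\end{verbatim}
The exact phase agrees with the predicted one up to corrections of order $(\Omega/\Delta)^2$, and the upper-level population remains at the few $10^{-4}$ level, as expected from the elimination argument.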
The starting point is the Hamiltonian $H=H_{\textrm{opt}}+H_{\textrm{hf}}$ given by Eqs.~(\ref{eq:optical}) and (\ref{eqn:hfneu}). In order to get a time-independent Hamiltonian, we go to a frame rotating with $U^{\dagger}=\exp{[-i\omega_lt(a^{\dagger}a+ \proj{X})]}$: \begin{align}\label{rotatingframeham} H'=&\frac{\Omega_c}{2}(a^{\dagger}\ketbra{\downarrow}{X} +\text{h.c.})+\frac{\Omega_l}{2}(\ketbra{\uparrow}{X} +\text{h.c.})+\delta a^{\dagger}a+\tilde{\omega}_zS^z\notag\\& +\frac{A}{2}(A^+S^-+S^+A^-)+AS^z\delta A^z+\Delta\proj{X} , \end{align} with detunings $\Delta=\omega_{X}-\omega_l$ and $\delta=\omega_c-\omega_l$. Choosing the cavity and laser frequencies, $\omega_c$ and $\omega_l$, far detuned from the exciton transition and the splitting of the electronic states $\tilde{\omega}_z$ much larger than the hyperfine coupling $g_n$, such that conditions (\ref{eq:condelimi})-(\ref{eq:condelimiii}) are fulfilled, we can adiabatically eliminate the states $\ket{X}$, $\ket{\uparrow}$. A detailed derivation of the adiabatic elimination can be found in Appendix \ref{app:adiabatic}. It yields a Hamiltonian that describes an effective coupling between light and nuclear spins \begin{eqnarray} \label{eq:Heff1a} H_{el} =&\frac{\Omega_c\Omega_lA}{8\Delta'\tilde{\omega}_z}(aA^+ +\mr{h.c.})+\omega_1 a^{\dagger}a\notag\\&-\frac{A}{2} \delta A^z-\frac{A^2}{4\tilde\omega_z}A^+A^-+T_{nl}, \end{eqnarray} where the energy of the photons is $\omega_1=\delta-\frac{\Omega_c^2}{4{\Delta'}}$ and the energy of the nuclear spin excitations is $\sim -\frac{A}{2N}-\frac{A^2}{4N\tilde{\omega}_z}$. By $T_{nl}$ we denote the nonlinear terms $T_{nl}=\frac{ A^3}{8\tilde{\omega}_z^2} A^+\delta A^zA^-+\frac{A^2}{4\tilde{\omega}_z^2}\delta a^{\dagger}aA^+A^-+\frac{\Omega_c^2\delta}{4{\Delta'}^2}a^{\dagger}a^{\dagger}aa$, which are small ($\|T_{nl}\|\ll\frac{\Omega_c\Omega_lA}{8\Delta'\tilde{\omega}_z} $) in the situation we consider ($\delta\ll\Omega_c, g_n/\tilde{\omega}_z\sim\Omega_l/\Delta'\ll1$) and neglected in the following. In the bosonic description of the nuclear spins that we introduced in Eq.~(\ref{eq:H3}) the Hamiltonian given by Eq.~(\ref{eq:Heff1a}) then reads \begin{eqnarray} \label{eq:Heff1b} H_{bs} = g (ab^{\dagger} +\mr{h.c.})+\omega_1 a^{\dagger}a+\omega_2 b^{\dagger}b, \end{eqnarray} with coupling strength $g$ given by \begin{equation}\label{eq:gideal} g=\frac{\Omega_c\Omega_lg_n}{8\Delta'\tilde{\omega}_z}. \end{equation} The energy of the nuclear spin excitations can now be written as $\omega_2=-\frac{A}{2N}-\frac{g_n^2}{4\tilde{\omega}_z}$. For resonant exchange of excitations between the two systems, we choose $\omega_1=\omega_2$. Then $H_\mr{bs}$ describes a beamsplitter-like coupling of the modes $a$ and $b$. Processes in which absorption (or emission) of a cavity photon is accompanied by a nuclear spin flip are resonant, and we have thus derived the desired effective interaction between light and nuclear spins. Since $\sqrt{\Omega_c\Omega_l/(\Delta'\tilde\omega_z)}\ll1$, the effective coupling $g$ is typically $2-3$ orders of magnitude smaller than the hyperfine coupling $g_n$.
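For orientation, a one-line estimate with purely illustrative numbers (the parameter ratios of the simulation described next, taking $\Delta'\approx\Delta$) confirms this scale:
\begin{verbatim}
# g/g_n = Omega_c*Omega_l/(8*Delta'*omega_z) from Eq. (eq:gideal), assuming
# Omega_l = Omega_c and Omega_l^2/(Delta*omega_z) = 1/100 (illustrative):
ratio = (1.0 / 100.0) / 8.0
print("g / g_n =", ratio)   # 1.25e-3, i.e. ~3 orders of magnitude below g_n
\end{verbatim}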
To illustrate the validity of the adiabatic elimination and the approximations leading to Eq.~(\ref{eq:Heff1b}), we have simulated the evolution of the two-photon Fock state $\psi_{20}$ (the first subscript denotes the number of photons, the second the number of nuclear spin excitations) under the full Hamiltonian $H'$ given by Eq.~(\ref{rotatingframeham}) and compared it to the evolution under the Hamiltonian $H_{bs}$ given by Eq.~(\ref{eq:Heff1b}). We assume full nuclear spin-down polarization and the validity of the bosonic description. In the simulation, we choose $\Omega_l=\Omega_c$, $\Omega_l/\Delta=1/10$, $\Omega_l^2/(\Delta\tilde{\omega}_z)=1/100$ and $g_n/\tilde{\omega}_z=1/50$, such that the conditions given by Eqs.~(\ref{eq:condelimi})-(\ref{eq:condelimiii}) are fulfilled. Fig.~\ref{fig:elimination} shows that $H'$ is well approximated by $H_{bs}$, and that the nonlinear terms $T_{nl}$ can be neglected. Almost perfect Rabi oscillations between the two-photon Fock state $\psi_{20}$ and the state with two nuclear spin excitations $\psi_{02}$ can be seen in Fig.~\ref{fig:elimination}. For $\psi_{01}$, the adiabatic elimination is an even better approximation to the full Hamiltonian, since the nonlinear terms $T_{nl}$ and the conditions (\ref{eq:condelimi})-(\ref{eq:condelimiii}) depend on the excitation number. \begin{figure}[ht] \centering \includegraphics[scale=0.65]{plotelimination12NEU.eps} \caption{Evolution of the two-photon Fock state $\psi_{20}$ under the full Hamiltonian $H'$ (solid lines) and Hamiltonian $H_{bs}$ ($\times$, dashed and dotted lines), where the trion and the electronic spin-up state have been eliminated.} \label{fig:elimination} \end{figure} In the process leading to the beamsplitter coupling, a photon is scattered from the cavity into the laser mode while a nuclear spin excitation is created (and vice versa). If we interchange the role of laser and cavity field (i.e., the laser drives the $\ket{\hspace{-1pt}\downarrow}\leftrightarrow\ket{X}$ transition and the cavity couples to $\ket{\hspace{-1pt}\uparrow}$) then creation of a nuclear spin excitation is accompanied by scattering of a laser photon \emph{into} the cavity, i.e. the effective coupling becomes $a^\dag b^\dag+ab$. Tuning the energies such that $\omega_1=-\omega_2$, the driving laser now facilitates the \emph{joint} creation (or annihilation) of a spin excitation and a cavity photon, realizing a two-mode squeezing effective Hamiltonian \begin{equation} \label{eq:Hsq} H_\mr{sq} = g (a^\dagger b^\dagger +a b) + \omega_1 a^\dagger a + \omega_2 b^\dagger b. \end{equation} Here, the energy of the photons is $\omega_1=\delta\left(1+\frac{\Omega_c^2}{4{\Delta'}^2}\right)$, the energy of the nuclear spin excitations is $\omega_2=-\frac{A}{2N}-\frac{g_n^2}{4\tilde{\omega}_z}$, and the nonlinear terms are now given by $T_{nl}=\frac{ g_n^2}{4\tilde{\omega}_z^2}\frac{A}{2N} b^{\dagger}b^{\dagger}bb+\frac{g_n^2}{4\tilde{\omega}_z^2}\delta a^{\dagger}ab^{\dagger}b$. As before, they are much smaller than $g$ and can be neglected for low excitation number. To be able to freely switch between $H_\mr{bs}$ and $H_\mr{sq}$ simply by turning on and off the appropriate lasers, both the ``driven'' and the empty mode should be supported by the cavity. \section{Quantum Interface}\label{sec:interface} Now the obvious route to a quantum interface is via the Hamiltonian $H_\mr{bs}$: acting for a time $t=\pi/g$ it maps $a\to ib$ and $b\to ia$, thus realizing (up to a phase) a swap gate between cavity and nuclear spins.
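This swap, and the ideal limit of the Rabi oscillations of Fig.~\ref{fig:elimination}, can be reproduced with a minimal two-mode sketch (ours, keeping only $H_\mr{bs}$ at resonance $\omega_1=\omega_2$ and dropping the constant terms); note that with the normalization $H_\mr{bs}=g(ab^\dagger+a^\dagger b)$ used in the code, the transfer $\psi_{20}\to\psi_{02}$ completes at $gt=\pi/2$.
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

# Cavity mode a and nuclear (bosonic) mode b, truncated Fock spaces.
n_max, g = 4, 1.0
d = n_max + 1
ann = np.diag(np.sqrt(np.arange(1, d)), 1)   # single-mode annihilation
a = np.kron(ann, np.eye(d))                  # cavity
b = np.kron(np.eye(d), ann)                  # nuclear mode
H = g * (a @ b.conj().T + a.conj().T @ b)    # beamsplitter coupling

idx = lambda n, m: n * d + m                 # index of Fock state |n,m>
psi0 = np.zeros(d * d, dtype=complex)
psi0[idx(2, 0)] = 1.0                        # two photons, no spin flips

for gt in (0.0, np.pi / 4.0, np.pi / 2.0):
    psi = expm(-1j * H * gt) @ psi0
    print(gt, abs(psi[idx(2, 0)])**2, abs(psi[idx(0, 2)])**2)
\end{verbatim}
At $gt=\pi/2$ the populations read $P_{20}=0$ and $P_{02}=1$, i.e. the full transfer underlying the read-out.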
This and related ideas are explored in \cite{SCG09}. There are two problems with this approach: Compared to the effective coupling, present-day cavities are ``bad'', with cavity lifetime $\tau_\mr{cav}\ll 1/g$, i.e., the cavity field will decay before its state can be mapped to the nuclei. Moreover, it is notoriously difficult to couple quantum information into high-Q cavities, despite proposals \cite{CZKM97} that address this issue. Both problems can be circumvented for our system by two key ideas: (i) to include in the description the field modes into which the cavity decays and (ii) to realize write-in via quantum teleportation. Moreover, read-out can be realized with similar techniques. In the following, we assume that all the light leaving the cavity can be collected and accessed optically. The combination of strong coupling and high collection efficiency has not yet been demonstrated for solid-state cavities, although there is remarkable progress towards that goal \cite{TEFV09}. Let us first consider the more complicated part, write-in. In a first step, the squeezing Hamiltonian $H_\mr{sq}$ (assisted by cavity decay) generates a strongly entangled two-mode squeezed state (TMSS) between the nuclear spins and the traveling-wave \emph{output field} of the cavity. Then quantum teleportation \cite{BrKi98} is used to deterministically write the state of another traveling-wave light field onto the nuclear mode. Similarly, $H_\mr{bs}$ can be used for read-out, by writing the state of the nuclei to the output field. Let us now consider $H_\mr{sq}$, quantitatively derive the entangled state, and discuss the quality of the interface it provides. The Langevin equations for the cavity and nuclear operators are (for $t\geq0$) \begin{equation}\label{eq:Langevin} \begin{split} \dot{a}(t) &= -igb(t)^\dag -\frac{\gamma}{2}a(t)-\sqrt{\gamma}c_\mr{in}(t), \\ \dot{b}(t) &= -iga(t)^\dag, \end{split} \end{equation} where we have specialized to the case $\omega_1=-\omega_2$, transformed to an interaction picture with $H_0=\omega_1(a^\dag a-b^\dag b)$, and performed the rotating-wave and Markov approximations in the description of the cavity decay \cite{GZ00}. Here, $c_\mr{in}$ describes the vacuum noise coupled into the cavity and satisfies $[c_\mr{in}(t),c_\mr{in}^\dag(t')]=\delta(t-t')$. Integrating Eqs.~(\ref{eq:Langevin}), we get \begin{equation}\label{eq:sol1} \begin{split} a(t) &= \alpha_{1}^-(t)a+\alpha_2(t)b^\dag + \sqrt{\gamma}\int_0^t\!\!\alpha_{1}^-(t-\tau)c_\mr{in}(\tau)d\tau, \\ b(t) &= \alpha_{2}(t)a^\dag+\alpha_{1}^+(t)b+ \sqrt{\gamma}\int_0^t\!\!\alpha_{2}(t-\tau)c_\mr{in}^\dag(\tau) d\tau,\\ \end{split} \end{equation} where $\alpha_{1}^{\pm}(t)=e^{-\gamma t/4}\left[ \cosh(\nu t)\pm \gamma/(4\nu)\sinh(\nu t) \right]$, $\alpha_2(t)=-i(g/\nu) e^{-\gamma t/4}\sinh(\nu t)$ and $\nu=\sqrt{(\gamma/4)^2+g^2}$; and $a,b\equiv a(0),b(0)$ in this equation. It may be remarked here that the analogous equations with $H_\mr{bs}$ instead of $H_\mr{sq}$ lead to almost identical solutions: now $a(t)$ is coupled to $b(t)$ instead of $b^\dag(t)$ and the only other change to \Eqref{eq:sol1} is to replace $\nu$ by $\tilde\nu=\sqrt{(\gamma/4)^2-g^2}$. While \Eqref{eq:sol1} describes a non-unitary time-evolution of the open cavity-nuclei system, the overall dynamics of system plus surrounding free field is unitary. It is also Gaussian, since all involved Hamiltonians are quadratic. Since all initial states are Gaussian as well, the joint state of cavity, nuclei, and output field is a pure Gaussian state at any time.
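As a consistency check on \Eqref{eq:sol1} (a small sketch added here, not part of the original text), unitarity of the overall dynamics demands that the canonical commutator be preserved, $[a(t),a(t)^\dag]=1$, which translates into the identity $|\alpha_1^-(t)|^2-|\alpha_2(t)|^2+\gamma\int_0^t|\alpha_1^-(\tau)|^2\,d\tau=1$; the explicit coefficients indeed satisfy it:
\begin{verbatim}
# Sketch: check that the coefficients of eq. (eq:sol1) preserve
# [a(t), a(t)^+] = 1 (illustrative values with gamma > g).
import numpy as np
from scipy.integrate import quad

g, gamma = 1.0, 8.0
nu = np.sqrt((gamma / 4)**2 + g**2)

def alpha1m(t):   # alpha_1^-(t)
    return np.exp(-gamma * t / 4) * (np.cosh(nu * t)
                                     - gamma / (4 * nu) * np.sinh(nu * t))

def alpha2(t):    # |alpha_2(t)|, the phase -i dropped
    return (g / nu) * np.exp(-gamma * t / 4) * np.sinh(nu * t)

for t in [0.1, 0.5, 1.0, 2.0]:
    noise, _ = quad(lambda s: alpha1m(s)**2, 0, t)
    print(f"t={t:3.1f}  [a,a^+] = "
          f"{alpha1m(t)**2 - alpha2(t)**2 + gamma * noise:.6f}")  # -> 1
\end{verbatim}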
This simplifies the analysis of the dynamics and, in particular, the entanglement properties significantly: For pure states, the entanglement of one subsystem (e.g., the nuclei) with the rest is given by the entropy of the reduced state of the subsystem. Gaussian states are fully characterized by the first and second moments of the field operators $R_1 =(a+a^\dag)/\sqrt{2}$ and $R_2 =-i(a-a^\dag)/\sqrt{2}$ via the covariance matrix (CM) $\Gamma_{kl} = \langle\{R_k,R_l\}\rangle-2\langle R_k\rangle\langle R_l\rangle$ (where $\{,\}$ denotes the anticommutator). The CM of the reduced state of a subsystem [e.g., $\Gamma_\mr{nuc}(t)$ for the CM of the nuclei at time $t$] is given by the sub-matrix of $\Gamma$ that refers to covariances of that subsystem's operators only. For a single mode, the entropy of the reduced state can be obtained from the determinant of the reduced CM: with $x(t)\equiv\big(1+\sqrt{\det\Gamma_\mr{nuc}(t)}\,\big)/2$ we get a simple expression for the entropy (i.e., the entanglement): \begin{equation} \label{eq:EoE} E(t)=x(t)\log_2x(t)-[x(t)-1]\log_2[x(t)-1]. \end{equation} Since the state at hand (including the output field) is pure and Gaussian, it is fully determined by $x(t)$ up to local Gaussian unitaries \cite{GECP03}: it is locally equivalent to a TMSS $\ket{\psi(r)}=(\cosh r)^{-1}\sum_n(\tanh r)^n\ket{nn}$ with CM (in $2\times2$ block matrix form) \[ \Gamma_\mr{TMSS} = \left(\begin{array}{cc}\cosh(2r)\id_2&\sinh(2r)\sigma_z\\ \sinh(2r)\sigma_z&\cosh(2r)\id_2 \end{array}\right). \] The squeezing parameter $r$ is determined by $x(t)=\cosh^2 r$. From \Eqref{eq:sol1} we find that $\Gamma_\mr{nuc}(t)=\cosh[2r(t)]\id_2$ for all $t\geq0$, where $\cosh r(t)$ is given by \begin{eqnarray}\label{eq:coshr} \cosh r = e^{-\frac{\gamma t}{4}}\left( \frac{\gamma}{2\nu}\sinh(2\nu t)+\frac{g^2+\frac{\gamma^2}{8}}{2\nu^2}\cosh(2\nu t)+\frac{g^2}{2\nu^2} \right)^{1/2} \end{eqnarray} and quantifies how strongly the nuclei are entangled with cavity and output field. After turning off the coupling $g$ at time $t_\mr{off}$, the nuclei are stationary while the cavity decays to the vacuum. Therefore, the final entanglement of nuclei and output field at time $t-t_\mr{off}\gg1/\gamma$ is given by \Eqref{eq:EoE} with $x=\cosh^2 r(t_\mr{off})$. Note that for $\gamma\gg g,1/t$ and keeping only the leading terms in Eq.~(\ref{eq:coshr}), $\cosh[2r(t)]$ simplifies to $3[1-8(g/\gamma)^2]e^{\frac{4g^2}{\gamma}t}$, i.e., $2r(t)$ grows linearly with time at rate $\frac{4g^2}{\gamma}$. In order to perform the teleportation, a Bell measurement has to be performed on the output mode of the cavity and the signal state to be teleported. This is achieved by sending the two states through a 50:50 beam splitter and measuring the output quadratures \cite{BrKi98}. Hence the output mode of the cavity, $B_0$, needs to be known to properly match it with the signal mode at the beam splitter. It can be expressed as a superposition of the bath operators $c(x,t)$ as $B_0(t)=\int_\RR z^0(x,t)c(x,t) dx$. By definition, the mode $B_0$ contains all the photons emitted from the cavity, hence all other modes $B_{k\not=0}$ (from some complete orthonormal set of modes containing $B_0$) are in the vacuum state. This implies $\langle B_k^\dag(t) B_l(t)\rangle\propto\delta_{k0}\delta_{l0}$, from which the mode function $z^0$ can be determined as \begin{eqnarray}\label{eq:outputmode} z^0(x,t) &=& \alpha_2(t-x)/\sqrt{\int_{\RR}|\alpha_2(t-x)|^2dx}.
\end{eqnarray} The procedure for write-in then is: let $H_\mr{sq}$ act for a time $t_1$ to create the TMSS $\psi(r(t_1))$ of the nuclei entangled with cavity and output field. To obtain a state in which the nuclei are only entangled with the output field, we switch the driving laser off $(g=0)$ and let the cavity decay for a time $t_2\gg \tau_\mr{cav}$, obtaining an (almost) pure TMSS of the nuclei and the output mode, which is used for quantum teleportation. Teleportation maps the state faithfully up to a random displacement $d$, which depends on the measurement result. This can be undone with the help of $H_\mr{bs}$ \cite{SCG09} to complete the write-in. The read-out step follows identical lines, except that $H_\mr{sq}$ is replaced by $H_\mr{bs}$ and no teleportation is necessary, since the state of the nuclei is directly mapped to the output mode of the cavity; for more details see \cite{SCG09}. As mentioned, we assume that all light that leaves the cavity can be collected and further processed. Losses could be modeled by mixing the outgoing light with yet another vacuum mode and tracing over the latter. Considering a fully decayed cavity, the reduced state of nuclei and output mode is then mixed, but still entangled (unless the losses are $f=100\%$). Whether or not the state still allows for better-than-classical teleportation depends on $f$ and $r$. E.g., for $r=1$, even at losses of $40\%$ we have $F_\mr{tel}>0.7$ (and $>0.5$ even at $75\%$ loss). Note, however, that our read-out scheme is much less tolerant of losses. The fidelity with which a quantum state can be teleported onto the nuclei using the protocol of \cite{BrKi98} is a monotonic function of the two-mode squeezing parameter $r(t_\mr{off})$. A typical benchmark \cite{HWPC05} is the average fidelity $F$ with which an arbitrary coherent state can be mapped. For $F\geq2/3$ the quantum channel given by teleportation has a positive quantum capacity. If a TMSS is used for teleportation, $F$ has a simple dependence on the squeezing parameter \cite{Fiu02} and is given by $F(r) = 1/(1+e^{-2r})$. Thus, if our system parameters $g,\gamma$ and the interaction time $t=t_\mr{off}$ lead to a squeezing parameter $r(t_\mr{off})$, we have an interface that provides a write-in fidelity $F(r(t_\mr{off}))$, cf.\ \Figref{fig:finalent}. The fidelity for other subsets of states (including, e.g., finite-dimensional subspaces) can be computed from the coherent-state fidelity \cite{HPC06}. \begin{figure}[ht] \centering \includegraphics[scale=0.65]{plotsfidelity2.eps} \caption{Average fidelity for the mapping of coherent states to the nuclei via teleportation (after complete decay of the cavity) plotted as a function of the interaction time $t_\mr{off}$ for different values of $g/\gamma=1,10,100$ (solid, dash-dotted, dashed). All fidelities converge to $1$ as $gt\to\infty$. } \label{fig:finalent} \end{figure} Already for $r(t_\mr{off})\sim1$, fidelities above $0.8$ are obtained. As seen from \Figref{fig:finalent}, this is achieved for $gt_\mr{off}\lesssim5$ even for strong decay. After switching off the coupling we have to wait for the cavity to decay. Since typically $\gamma\gg g$, this does not noticeably prolong the protocol. \section{Implementation}\label{sec.impl} Quantum dots generally have a richer level structure than the $\Lambda$ scheme depicted in \Figref{fig:1}. This and the applicable selection rules imply that $H_\mr{opt}$ is not exactly realized. In this section we take this into account and discuss a setting that allows one to realize the desired coupling.
We now consider the two spin states $\ket{\Downarrow}, \ket{\Uparrow}$ of the trion in addition to the two electronic spin states. We focus on a setup where these states are Zeeman split by an external magnetic field in growth/$z$-direction (Faraday geometry). The electronic state $\ket{\uparrow}$ is coupled to $\ket{\Uparrow}$ (with angular momentum $+3/2$) by $\sigma^+$ circularly polarized light (and $\ket{\downarrow}$ to $\ket{\Downarrow}$ with $\sigma^-$-polarized light). We can stimulate these transitions by a $\sigma^+$-polarized classical laser field and a $\sigma^-$-polarized cavity field, respectively, but this will not lead to a $\Lambda$ scheme, cf.\ Fig.~\ref{fig:mixtrion}a. The cleanest way to obtain the desired coupling is to mix the trion states with a resonant microwave field. The electronic eigenstates are unchanged (their splitting being far detuned from the microwave frequency) and are now both coupled to the new trion eigenstates $\ket{-}=1/\sqrt{2}(\ket{\Uparrow}-\ket{\Downarrow})$ and $\ket{+}=1/\sqrt{2}(\ket{\Uparrow}+\ket{\Downarrow})$, forming a double $\Lambda$ system, see Fig.~\ref{fig:mixtrion}b. \begin{figure}[ht] \centering \includegraphics[scale=0.65]{systempaperkurzmixtrion2.eps} \caption{Level scheme of the QD. (a) Electronic and trion states split in an external magnetic field in growth direction. They are coupled by a $\sigma^+$-polarized laser and a $\sigma^-$-polarized cavity field with frequencies $\omega_l$, $\omega_c$, respectively. (b) In addition to the setting in (a), a microwave field resonant with the splitting of the trion states in the magnetic field ($\omega_{\Uparrow}-\omega_{\Downarrow}=\omega_{mw}$) mixes the trion states. Laser and cavity couple both electronic states to the trion states $\ket{+}$ and $\ket{-}$.} \label{fig:mixtrion} \end{figure} There are other ways to couple both ground states to the same excited state, e.g., taking advantage of weakened selection rules (due to heavy-hole/light-hole mixing or an in-plane magnetic field) or using linearly polarized light (also in an in-plane magnetic field, i.e., Voigt geometry). They avoid the need for an additional microwave field at the expense of additional couplings (which have to be kept off-resonant) and are explored further in \cite{SCG09}. The Hamiltonian of the system is now given by \begin{align}\label{eq:opticalim} H=&\frac{\Omega_c}{2}\,a^{\dagger}\,\ketbra{\downarrow}{\Downarrow} +\frac{\Omega_l}{2}\,e^{i\omega_l t}\ketbra{\uparrow}{\Uparrow}+\Omega_{mw}\,e^{i\omega_{mw} t}\ketbra{\Downarrow}{\Uparrow}+\textrm{h.c.}\notag\\ &+\omega_c\,a^{\dagger}a+\omega_{\Uparrow}\proj{\Uparrow}+\omega_{\Downarrow}\proj{\Downarrow}+ \tilde{\omega}_z S^z +H_{\text{hf}}, \end{align} where $\omega_{\Uparrow},\omega_{\Downarrow}=\omega_X\pm\omega_{zh}/2$ include the hole Zeeman splitting $\omega_{zh}=\omega_{mw}$ and $H_{\text{hf}}$ is given by Eq.~(\ref{eqn:hfneu}). In a frame rotating with \[U^{\dagger}=\exp[-i(\omega_{mw}+\omega_l)t(\proj{\Uparrow}+a^{\dagger}a) - i\omega_l t\proj{\Downarrow}],\] the Hamiltonian reads \begin{align}\label{eq:opticalim2} H=&\frac{\Omega_c}{2\sqrt{2}}\,(a^{\dagger}\,\ketbra{\downarrow}{+}-a^{\dagger}\,\ketbra{\downarrow}{-}) + \frac{\Omega_l}{2\sqrt{2}}(\ketbra{\uparrow}{+}+\ketbra{\uparrow}{-})\\\notag &+\delta' a^{\dagger}a+\Delta_+\proj{+}+\Delta_-\proj{-}+ \tilde{\omega}_z S^z +H_{\text{hf}}, \end{align} where $\delta'=\omega_c-\omega_l-\omega_{mw}$ and $\Delta_{\pm}=\omega_{\Downarrow}-\omega_l\pm\Omega_{mw}$.
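The coefficients and signs in Eq.~(\ref{eq:opticalim2}) follow directly from expressing $\ket{\Downarrow}$ and $\ket{\Uparrow}$ in the mixed basis $\ket{\pm}$; a two-line numerical check (an added sketch, stating nothing beyond the basis change above):
\begin{verbatim}
# Sketch: overlaps of the bare trion states with |+-> = (|Up> +- |Dn>)/sqrt(2),
# reproducing the +-Omega_c/(2 sqrt 2) and Omega_l/(2 sqrt 2) structure.
import numpy as np

Up, Dn = np.array([1.0, 0.0]), np.array([0.0, 1.0])
plus, minus = (Up + Dn) / np.sqrt(2), (Up - Dn) / np.sqrt(2)

print(plus @ Dn, minus @ Dn)   # cavity term:  0.707, -0.707
print(plus @ Up, minus @ Up)   # laser  term:  0.707,  0.707
\end{verbatim}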
We adiabatically eliminate $\ket{\pm}$ and $\ket{\uparrow}$ as explained in Sec.~\ref{sec:adiabatic} and Appendix \ref{app:adiabatic}. This yields \begin{eqnarray} \label{eq:Heff1} H_{el} = g' (aA^+ +\mr{h.c.}) + \omega_1' a^{\dagger}a - \frac{A}{2}\delta A^z - \frac{A^2}{4\tilde\omega_z}A^+A^- + T_{nl}', \end{eqnarray} which is of exactly the same form as the Hamiltonian of our toy model given by Eq.~(\ref{eq:Heff1a}), and differs only by the replacements $\Delta'^{-1}\longrightarrow\frac{1}{2}\left(\Delta_+'^{-1}-\Delta_-'^{-1}\right)$ in the coupling, $\Delta'^{-1}\longrightarrow\frac{1}{2}\left(\Delta_+'^{-1}+\Delta_-'^{-1}\right)$ in the nuclear energy and $\Delta'^{-2}\longrightarrow\frac{1}{2}\left(\Delta_+'^{-2}+\Delta_-'^{-2}\right)$ in the nonlinear terms. As before, the nonlinear terms $T_{nl}'$ are small and are neglected in the following. Using the bosonic description, we then obtain again a beamsplitter Hamiltonian, Eq.~(\ref{eq:Heff1b}), where the coupling is now given by \begin{equation}\label{eq:gimplement} g'=\frac{\Omega_c\Omega_l g_n}{16\tilde{\omega}_z}\left(\frac{1}{\Delta_+'}-\frac{1}{\Delta_-'}\right), \end{equation} with $\Delta'_{\pm}=\Delta_{\pm}+\frac{\tilde{\omega}_z}{2}$. Compared to Eq.~(\ref{eq:gideal}), the effective coupling $g'$ is reduced by a factor $\Delta'({\Delta_+'}^{-1}-{\Delta_-'}^{-1})$, i.e., $\approx2\Omega_{mw}/\Delta'$ for $\Omega_{mw}\ll\Delta'$. To illustrate that $H_{el}$ in the bosonic description (which we denote by $H_{bs}$) provides a good approximation to $H$ and allows one to implement a good quantum interface, we consider a maximally entangled state $\sum_k \ket{k}_R\ket{k}_c$ of the cavity and some reference system $R$ and then use the interface to map the state of the cavity to the nuclei. If a maximally entangled state of $R$ and the nuclei is obtained, this shows that the interface is perfect for the whole subspace considered. The fidelity of the state $\id_R\otimes U(t) \sum_{k=1}^2 \ket{k}_R\ket{k}_c\ket{0}_n$ with the maximally entangled state $\sum_k\ket{k}_R\ket{0}_c\ket{k}_n$ fully quantifies the quality of the interface. In Fig.~\ref{fig:ploteliminationsuperpo} we plot this fidelity for the evolutions $U(t)$ generated by the two Hamiltonians $H$ and $H_{el}$ of Eqs.~(\ref{eq:opticalim2}) and (\ref{eq:Heff1}), respectively, to show that a high-fidelity mapping is possible with the chosen parameters and that the simple Hamiltonian $H_{el}$ describes the relevant dynamics well. Since $U(\pi/(2g))\,a\,U(\pi/(2g))^\dag = ib$, some care must be taken concerning the phases of the number-state basis vectors in the nuclear spin mode ($\ket{k}_c\mapsto(i)^k\ket{k}_n$) and the different phases at $t=3\pi/(2g)$. For the numerical simulation, we chose the parameters as follows: the number of nuclei $N=10^4$, the hyperfine coupling constant $A=100\,\mu$eV, the laser and cavity Rabi frequencies $\Omega_c=\Omega_l=6\,\mu$eV, the detuning of the trion $\omega_{X}-\omega_l=700\,\mu$eV, the microwave Rabi frequency $\Omega_{mw}=50\,\mu$eV and the effective Zeeman splitting $\tilde{\omega}_z=50\,\mu$eV. This corresponds to $\sim4\,$T using an electron g-factor of $0.48$ (external and Overhauser fields are counter-aligned) and the corresponding hole Zeeman splitting $\omega_{mw}\sim 700\,\mu$eV. With these parameters, a value of $g\sim5\cdot10^{-5}\,\mu$eV is obtained, leading to times of $\sim10$ microseconds for an interface operation.
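For orientation, these numbers can be reproduced from Eq.~(\ref{eq:gimplement}) with a short script (an added back-of-the-envelope sketch; it assumes the collective coupling $g_n\approx A/\sqrt{N}$ for a fully polarized ensemble, consistent with the identification $g_n^2=A^2/N$ implicit in $\omega_2$ above, but still an assumption about the normalization that can shift the result by a factor of order unity):
\begin{verbatim}
# Sketch: evaluate eq. (eq:gimplement) with the parameters quoted in the text.
# ASSUMPTION: g_n ~ A/sqrt(N) (normalization of the collective bosonic mode).
import numpy as np

hbar = 6.582e-10                 # mu eV * s
N, A = 1e4, 100.0                # nuclei; hyperfine constant [mu eV]
Om_c = Om_l = 6.0                # Rabi frequencies [mu eV]
Delta, w_mw = 700.0, 700.0       # trion detuning; hole Zeeman = mw freq [mu eV]
Om_mw, wz = 50.0, 50.0           # microwave Rabi; electron Zeeman [mu eV]

g_n = A / np.sqrt(N)                         # ~1 mu eV (assumption)
Dp = Delta - w_mw / 2 + Om_mw + wz / 2       # Delta'_+
Dm = Delta - w_mw / 2 - Om_mw + wz / 2       # Delta'_-
g = Om_c * Om_l * g_n / (16 * wz) * (1 / Dp - 1 / Dm)

print(f"|g'| ~ {abs(g):.1e} mu eV")                       # a few 1e-5 mu eV
print(f"pi/(2g) ~ {np.pi * hbar / (2 * abs(g)) * 1e6:.0f} mu s")
\end{verbatim}
This reproduces the quoted order of magnitude for $g'$ and an interface time of tens of microseconds.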
\begin{figure}[ht] \centering \includegraphics[scale=0.65]{plotelimination12impl2.eps} \caption{Performance of the quantum interface for the maximally entangled input state $\psi_{in}\propto\sum_{k=1}^2 \ket{k}_R\ket{k}_c$ (subscript $c$ indicates the cavity). The red solid curve shows the fidelity $F_{bs}$ of $\psi_{in}$ evolved under $H_{bs}$ with the ideal target state $\ket{\psi_{map}}\propto\sum_{k=1}^2 (-1)^{lk}(i)^k\ket{k}_R\ket{k}_{n}$ (subscript $n$ indicates the nuclei) for $gt\in[l\pi,(l+1)\pi]$, where $l$ takes into account the phases acquired during mapping, see text. The blue solid curve shows the fidelity $\tilde{F}_{bs}$ with $\ket{\tilde{\psi}_{map}}\propto\sum_{k=1}^2 (-1)^{lk}\ket{k}_R\ket{k}_c$ for $gt\in[\frac{2l+1}{2}\pi,\frac{2l+3}{2}\pi]$. Dashed curves depict the same fidelities for evolution under $H'$ (denoted by $F_{H'}/\tilde{F}_{H'}$). (Parameters chosen as in the text.)} \label{fig:ploteliminationsuperpo} \end{figure} Throughout the discussion we have neglected the internal nuclear dynamics and corrections to the bosonic description. Nuclear dynamics is caused by direct dipole-dipole interaction and electron-mediated interaction \cite{SKL03,YLS06,WiDa06}. In \cite{SCG09} we consider these processes in detail and show that they are negligible: the coupling of the bosonic mode $b$ to bath modes $b_k$ is smaller by a factor of $\sim10^{-2}$ than the coupling $g$ in $H_\mr{bs}$ given by Eq.~(\ref{eq:gimplement}).\\ The bosonic description of the nuclear spin system can be introduced in a formally exact way \cite{Christ2008}. However, to obtain the simple Jaynes-Cummings-like Hamiltonian Eq.~(\ref{eq:Heff1b}) instead of Eq.~(\ref{eq:Heff1a}) we have made several approximations. As discussed in more detail in Appendix \ref{app:BosonicDesc}, these can lead to two types of errors: (i) an inhomogeneous broadening of $\omega_2$ and (ii) leakage from the mode $b$ due to inhomogeneity. High polarization reduces both effects. The broadening of $\omega_2$ can be further reduced by an accurate determination of the Overhauser shift $A^z$. Reduced Overhauser variance has already been seen experimentally \cite{Grei07, XU+08, LHZ+Im09}. Leakage is suppressed by the energy difference between excitations in the mode $b$ and the other modes not directly coupled to the electron \cite{KST+Fl09} (cf. also the Appendix). Finally, sufficiently small electron and cavity decoherence must be ensured. In particular, we assume the strong coupling limit of cavity-QED and neglect spontaneous emission for the whole duration of our protocol, which requires that $(\Omega_l/\Delta_\pm)^2\,\gamma_\mr{spont}/g \ll 1$, where $\gamma_\mr{spont}$ comprises spontaneous emission of the quantum dot into non-cavity modes. With the parameters chosen above, this requires a radiative lifetime $\gamma_\mr{spont}^{-1}\gg1\,$ns. Electron spin relaxation is sufficiently slow in QDs at large Zeeman splitting ($\gtrsim 1\,$ms) compared to our interaction. The effect of electron spin dephasing processes is suppressed by the elimination of the electron: they lead to an inhomogeneous broadening of $g$ and $\omega_i$, which is small as long as the energy scale of the dephasing is small compared to the detuning $\tilde\omega_z$. \section{Conclusion} We have shown how to realize a quantum interface between the polarized nuclear spin ensemble in a singly charged quantum dot and a traveling optical field by engineering beamsplitter and two-mode-squeezing Hamiltonians coupling the collective nuclear spin excitation and the mode of the open cavity.
This indicates how to optically measure and coherently manipulate the nuclear spin state, and it opens a path to including nuclear spin memories in quantum information and communication applications. Moreover, together with a photodetector for the output mode of the cavity, the quantum dot--cavity system provides a means to monitor nuclear spin dynamics on a microsecond time scale and would allow a precise study of the effect of internal nuclear spin dynamics and of the corrections to the bosonic description used here. \begin{acknowledgments} We acknowledge support by the DFG within SFB 631 and the NIM Cluster of Excellence. \end{acknowledgments}
\section{Introduction} It is known that a function $u$ being harmonic in a domain $D\subset \mathbb{R}^n$ can be defined or characterized by $\Delta u=0$ in $D$ in the distributional sense; that is, $u\in W^{1,2}_{\rm loc}(D):=\left\{v\in L^2_{\rm loc} (D) \mid \nabla v \in L^2_{\rm loc} (D)\right\}$ and $$ \int_{D} \nabla u (x) \cdot \nabla v(x)\, dx =0 \qquad \hbox{for every } v\in C^\infty_c(D). $$ This is equivalent to the following averaging property along a Brownian motion $X$: for every relatively compact subset $U$ of $D$, $$ u(X_{\tau_U})\in L^1({\mbox{\bf P}}_x) \qquad \hbox{and} \qquad u(x) = {\mbox{\bf E}}_x \left[ u(X_{\tau_U})\right] \quad \hbox{for every } x\in U. $$ Here $\tau_U:=\inf\left\{t\geq 0: X_t \notin U\right\}$. Recently there has been interest (see, e.g., \cite{BKK}), arising from several areas of mathematics, in knowing whether the above two notions of harmonicity remain equivalent in a more general context, such as for diffusions on fractals (see \cite{BBKT}) and for discontinuous processes including symmetric L\'evy processes. For instance, due to their importance in theory and in applications, there has recently been intense interest in studying discontinuous processes and non-local (or integro-differential) operators, by both analytical and probabilistic approaches. See, e.g., \cite{CK, CK2} and the references therein. So it is important to identify the connection between the analytic and probabilistic notions of harmonic functions. \medskip In this paper, we address the question of the equivalence of the analytic and probabilistic notions of harmonicity in the context of symmetric Hunt processes on locally compact separable metric spaces. Let $X$ be an $m$-symmetric Hunt process on a locally compact separable metric space $E$ whose associated Dirichlet form $({\mathcal E}, {\mathcal F})$ is regular on $L^2(E; m)$. Let $D$ be an open subset of $E$ and let $\tau_D$ be the first exit time from $D$ by $X$. Motivated by the example at the beginning of this section, loosely speaking (see the next section for precise statements), there are two ways to define a function $u$ being harmonic in $D$ with respect to $X$: (a) (probabilistically) $t\mapsto u(X_{t\wedge \tau_D})$ is a ${\mbox{\bf P}}_x$-uniformly integrable martingale for quasi-every $x\in D$; (b) (analytically) ${\mathcal E} (u, g)=0$ for every $g\in {\mathcal F}\cap C_c(D)$. We will show in Theorem \ref{T:7} below that these two definitions are equivalent. Note that even in the Brownian motion case, a function $u$ that is harmonic in $D$ is typically not in the domain ${\mathcal F}$ of the Dirichlet form. Denote by ${\mathcal F}^D_{\rm loc}$ the family of functions $u$ on $E$ such that for every relatively compact open subset $D_1$ of $D$, there is a function $f\in {\mathcal F}$ so that $u=f$ $m$-a.e. on $D_1$. To show that these two definitions are equivalent, the crux of the difficulty is to \begin{description} \item{(i)} appropriately extend the definition of ${\mathcal E}(u, v)$ to functions $u$ in $ {\mathcal F}^D_{\rm loc}$ that satisfy some minimal integrability condition when $X$ is discontinuous, so that ${\mathcal E}(u, v)$ is well defined for every $v\in {\mathcal F}\cap C_c(D)$; \item{(ii)} show that if $u$ is harmonic in $D$ in the probabilistic sense, then $u\in {\mathcal F}^D_{\rm loc}$ and ${\mathcal E}(u, v)=0$ for every $v\in {\mathcal F}\cap C_c(D)$. \end{description} If one assumes a priori that $u\in {\mathcal F}$, then the equivalence of (a) and (b) is easy to establish.
See Remarks \ref{R:5}(i) and \ref{R:2.7} below. In the next section, we give precise definitions, statements of the main results and their proofs. Three examples are given to illustrate the main results of this paper. Extensions to general symmetric right processes on Lusin spaces, including infinite-dimensional spaces, are mentioned at the end of this paper. We use ``:='' to denote a definition. For two real numbers $a$ and $b$, $a\wedge b :=\min\{a, b\}$. \section{Main results} Let $X=(\Omega,{\mathcal F}_{\infty},{\mathcal F}_t, X_t,\zeta,{\mbox{\bf P}}_x, x\in E)$ be an $m$-symmetric Hunt process on a locally compact separable metric space $E$, where $m$ is a positive Radon measure on $E$ with full topological support. A cemetery state $\partial$ is added to $E$ to form $E_\partial:=E\cup\{\partial\}$ as its one-point compactification, and $\Omega$ is the totality of right-continuous, left-limited sample paths from $[0,\infty)$ to $E_\partial$ that hold the value $\partial$ once attaining it. For any $\omega\in\Omega$, we set $X_t(\omega):=\omega(t)$. Let $\zeta(\omega):=\inf\{t\geq0\,\mid\, X_t(\omega)=\partial\}$ be the lifetime of $X$. As usual, ${\mathcal F}_{\infty}$ and ${\mathcal F}_t$ are the minimal augmented $\sigma$-algebras obtained from ${\mathcal F}_{\infty}^0:=\sigma\{X_s\,\mid\, 0\leq s<\infty\}$ and ${\mathcal F}_t^0:=\sigma\{X_s\,\mid\, 0\leq s\leq t\}$ under $\{{\mbox{\bf P}}_x: x\in E\}$. For a Borel subset $B$ of $E$, $\tau_B:=\inf\{t>0 \mid X_t\notin B\}$ (the {\it exit time} of $B$) and $\sigma_B:=\inf\{t\geq 0 \mid X_t\in B\}$ (the {\it entrance time} of $B$) are $({\mathcal F}_t)$-stopping times. The transition semigroup $\{P_t: t\ge 0\}$ of $X$ is defined by $$ P_tf(x):={\mbox{\bf E}}_x[f(X_t)]={\mbox{\bf E}}_x[f(X_t): t< \zeta],\qquad t\ge 0. $$ Each $P_t$ may be viewed as an operator on $L^2(E; m)$, and taken as a whole these operators form a strongly continuous semigroup of self-adjoint contractions. The Dirichlet form associated with $X$ is the bilinear form \begin{equation}\label{e:1.1} {\cal E}(u, v):=\lim_{t\downarrow 0}t^{-1}(u-P_tu, v)_m \end{equation} defined on the space \begin{equation}\label{e:1.2} {\cal F}:=\left\{u\in L^2(E; m)\,\Big| \,\sup_{t>0}\,\,t^{-1}(u-P_tu, u)_m<\infty \right\}. \end{equation} Here we use the notation $(f,g)_m:=\int_E f(x)g(x)\, m(dx)$. We assume that $({\mathcal E}, {\mathcal F})$ is a regular Dirichlet form on $L^2(E; m)$; that is, $C_c(E)\cap {\mathcal F}$ is dense both in $(C_c(E), \| \cdot \|_\infty)$ and in $({\mathcal F}, {\mathcal E}_1)$. Here $C_c(E)$ is the space of continuous functions with compact support in $E$ and ${\mathcal E}_1(u, u):={\mathcal E} (u, u)+(u, u)_m$. However, to ensure a wide scope of applicability, we do {\it not} assume that the process $X$ (or equivalently, its associated Dirichlet form $({\mathcal E}, {\mathcal F})$) is $m$-irreducible. We refer readers to \cite{CF} and \cite{FOT} for the following known facts. The extended Dirichlet space ${\mathcal F}_e$ is the space of all functions $f$ on $E$ such that there is an ${\mathcal E}$-Cauchy sequence $\{f_n, n\geq 1\}\subset {\mathcal F}$ with $f_n$ converging to $f$ $m$-a.e. on $E$. For such an $f\in {\mathcal F}_e$, ${\mathcal E}(f, f):=\lim_{n\to \infty} {\mathcal E} (f_n, f_n)$. Every $f\in {\mathcal F}_e$ admits a quasi-continuous version (cf. \cite[Theorem 2.1.7]{FOT}).
Throughout this paper, we always assume that every function in ${\mathcal F}_e$ is represented by its quasi-continuous version, which is unique up to a set of zero capacity (that is, quasi-everywhere, or q.e. in abbreviation). We adopt the convention that any function $f$ defined on $E$ is extended to $E_\partial$ by taking $f(\partial )=0$ and that $X_\infty (\omega):=\partial$ for every $\omega \in \Omega$. It is known that ${\mathcal F}_e\cap L^2(E; m) ={\mathcal F}$. The extended Dirichlet form $({\mathcal E}, {\mathcal F}_e)$ admits the following Beurling-Deny decomposition (cf. \cite[Theorem 4.3.3]{CF} or \cite[Theorem 5.3.1]{FOT}): $$ {\mathcal E} (u, u)={\mathcal E}^{(c)}(u, u) + \frac12 \int_{E\times E} (u(x)-u(y))^2 J(dx, dy) + \int_E u(x)^2 \kappa (dx), $$ where ${\mathcal E}^{(c)}$ is the strongly local part of $({\mathcal E}, {\mathcal F})$, $J$ the jumping measure and $\kappa$ the killing measure of $({\mathcal E}, {\mathcal F})$ (or, of $X$). For $u, v\in {\mathcal F}_e$, ${\mathcal E}^{(c)}(u, v)$ can also be expressed as $\frac12 \mu^c_{\<u, v\>}(E)$, where the mutual energy measure $\mu^c_{\<u, v\>}$ is the signed Revuz measure associated with $\frac12 \< M^{u, c}, M^{v, c}\>$. Here for $u\in {\mathcal F}_e$, $M^{u, c}$ denotes the continuous martingale part of the square integrable martingale additive functional $M^u$ of $X$ in Fukushima's decomposition (cf. \cite[Theorem 5.2.2]{FOT}) of $$u(X_t)-u(X_0)=M^u_t+N^u_t, \qquad t\geq 0, $$ where $N^u$ is a continuous additive functional of $X$ having zero energy. When $u=v$, it is customary to write $\mu^c_{\<u, u\>}$ as $\mu^c_{\<u\>}$. The measure $\mu^c_{\<u, v\>}$ enjoys the strong local property in the sense that if $u\in {\mathcal F}_e$ is constant on a nearly Borel quasi-open set $D$, then $\mu^c_{\<u, v \>}(D)=0$ for every $v\in {\mathcal F}_e$ (see \cite[Proposition 4.3.1]{CF}). For $u\in {\mathcal F}$, let $\mu_{\<u\>}$ be the Revuz measure of $\<M^u\>$. Then it holds that $$ {\mathcal E}(u, u)=\frac12 \mu_{\<u\>}(E)+\frac 12 \int_E u(x)^2 \kappa (dx). $$ For an open subset $D$ of $E$, we use $X^D$ to denote the subprocess of $X$ killed upon leaving $D$. The Dirichlet form of $X^D$ on $L^2(D; m)$ is $({\mathcal E}, {\mathcal F}^D)$, where ${\mathcal F}^D:=\{u\in {\mathcal F} \mid u=0 \hbox{ q.e. on } D^c\}$. It is known (cf. \cite[Theorem 3.3.9]{CF} or \cite[Theorem 4.4.3]{FOT}) that $({\mathcal E}, {\mathcal F}^D)$ is a regular Dirichlet form on $L^2(D; m)$. Let ${\mathcal F}^D_e:=\{u\in {\mathcal F}_e \mid u=0 \hbox{ q.e. on } D^c\}$. Then ${\mathcal F}^D_e$ is the extended Dirichlet space of $({\mathcal E}, {\mathcal F}^D)$ (see Theorem 3.4.9 of \cite{CF}). A function $f$ is said to be locally in ${\mathcal F}^D$, denoted as $f\in {\mathcal F}^D_{{\rm loc}}$, if for every relatively compact subset $U$ of $D$, there is a function $g\in {\mathcal F}^D$ such that $f=g $ $m$-a.e. on $U$. Every $f\in {\mathcal F}^D_{\rm loc}$ admits an $m$-version that is quasi-continuous on $D$. Throughout this paper, we always assume that every function in ${\mathcal F}^D_{\rm loc}$, when restricted to $D$, is represented by its quasi-continuous version. By the strong local property of $\mu^c_{\<u, v\>}$ for $u, v \in {\mathcal F}$, $\mu^c_{\<u, v\>}$ is well defined on $D$ for every $u, v\in {\mathcal F}^D_{\rm loc}$. We use $L^\infty_{\rm loc} (D; m)$ to denote the $m$-equivalence classes of locally bounded functions on $D$. Let $(N(x, dy), H)$ be a L\'evy system of $X$ (cf. \cite{BJ} or \cite{FOT}).
Then $$J(dx, dy)=N(x, dy) \mu_H(dx) \qquad \hbox{and} \qquad \kappa (dx):=N(x, \partial ) \mu_H(dx), $$ where $\mu_H$ is the Revuz measure of the positive continuous additive functional $H$ of $X$. \begin{defn}\label{D:1} \rm Let $D$ be an open subset of $E$. We say a function $u$ is {\it harmonic} in $D$ (with respect to the process $X$) if for every relatively compact open subset $U$ of $D$, $t\mapsto u(X_{t\wedge \tau_U})$ is a uniformly integrable ${\mbox{\bf P}}_x$-martingale for q.e. $x\in U$. \end{defn} \medskip To derive an analytic characterization of harmonic functions in $D$ in terms of an extension of the quadratic form $({\mathcal E}, {\mathcal F})$, we need some preparation. Let $r_t$ denote the time-reversal operator defined on the path space $\Omega$ of $X$ as follows: for $\omega\in\{t<\zeta\}$, $$ r_t(\omega)(s)= \begin{cases} \omega((t-s){-}) & \hbox{if } 0\le s< t,\\ \omega(0) & \hbox{if } s\ge t. \end{cases} $$ (It should be borne in mind that the restriction of the measure ${\mbox{\bf P}}_m$ to ${\mathcal F}_t$ is invariant under $r_t$ on $\Omega \cap \{ \zeta >t\}$.) \begin{lemma}\label{L:2} If $u\in {\mathcal F}_e$ has ${\mathcal E}(u, u)=0$, then $${\mbox{\bf P}}_x \left( u(X_t)=u(X_0) \hbox{ for every } t\geq 0\right)=1 \qquad \hbox{for q.e. } x\in E. $$ In other words, for q.e. $x\in E$, $E_x:=\{y\in E: u(y)=u(x)\}$ is an invariant set with respect to the process $X$ in the sense that ${\mbox{\bf P}}_x(X [0, \infty)\subset E_x)=1$. This in particular implies that, if, in addition, ${\mbox{\bf P}}_x(\zeta<\infty)>0$ for q.e. $x\in E$, then $u=0$ q.e. on $E$. \end{lemma} \noindent{\bf Proof.} It is known (see, e.g., \cite[Theorem 6.6.2]{CF}) that the following Lyons--Zheng forward-backward martingale decomposition holds for $u\in {\mathcal F}_e$: $$ u(X_t)-u(X_0)= \frac12 M^u_t -\frac12 M^u_t \circ r_t \quad {\mbox{\bf P}}_m \hbox{-a.e. on } \{t<\zeta \}. $$ As $\mu_{\<u\>}(E)\leq 2 {\mathcal E}(u, u)=0$, we have $M^u=0$ and so $u(X_t)=u(X_0)$ ${\mbox{\bf P}}_m$-a.s. on $\{t<\zeta \}$ for every $t>0$. This implies via Fukushima's decomposition that $N^u=0$ on $[0, \zeta)$ and hence on $[0, \infty)$ ${\mbox{\bf P}}_m$-a.s. Consequently, $ {\mbox{\bf P}}_x \left( u(X_t)-u(X_0)=M^u_t+N^u_t=0 \hbox{ for every } t\geq 0\right)=1$ for q.e. $x\in E$. This proves the lemma. {\hfill $\Box$ \bigskip} \medskip Since $({\mathcal E}, {\mathcal F})$ is a regular Dirichlet form on $L^2(E; m)$, for any relatively compact open sets $ U, V$ with $\overline U\subset V$, there is $\phi \in {\mathcal F}\cap C_c(E)$ so that $\phi =1$ on $U$ and $\phi =0$ on $V^c$. Consequently, \begin{equation}\label{e:J1} J(U, V^c) = \int_{U\times V^c} (\phi (x)-\phi (y))^2 J(dx, dy) \leq 2 {\mathcal E} (\phi, \phi)<\infty. \end{equation} For an open set $D\subset E$, consider the following two conditions for a function $u$ on $E$. For any relatively compact open sets $U, V$ with $\overline U \subset V \subset \overline V \subset D$, \begin{equation}\label{e:cond1} \int_{U\times (E\setminus V)} |u(y)| J(dx, dy) <\infty \end{equation} and \begin{equation}\label{e:cond2} {\mbox{\bf 1}}_U (x) {\mbox{\bf E}}_x \left[ \big((1-\phi_V ) |u|\big)(X_{\tau_U})\right] \in {\mathcal F}^U_e, \end{equation} where $\phi_V\in C_c(D)\cap {\mathcal F}$ with $0\leq \phi_V\leq 1$ and $\phi_V=1$ on $V$. Note that both conditions \eqref{e:cond1} and \eqref{e:cond2} are automatically satisfied when $X$ is a diffusion, since in this case the jumping measure $J$ vanishes and $X_{\tau_U}\in \partial U $ on $\{\tau_U<\zeta\}$.
In view of \eqref{e:J1}, every bounded function $u$ satisfies condition \eqref{e:cond1}. In fact, by the following lemma, every bounded function $u$ also satisfies condition \eqref{e:cond2}. \begin{lemma}\label{L:2.3} Suppose that $u$ is a function on $E$ satisfying condition \eqref{e:cond1} and that for any relatively compact open sets $U, V$ with $\overline U \subset V \subset \overline V \subset D$, \begin{equation}\label{e:cond3} \sup_{x\in U} {\mbox{\bf E}}_x \left[ \big({\mbox{\bf 1}}_{V^c} |u|\big) (X_{\tau_U})\right]<\infty. \end{equation} Then \eqref{e:cond2} holds for $u$. \end{lemma} In many concrete cases, such as in Examples \ref{E:8}-\ref{E:10} below, one can show that condition \eqref{e:cond1} implies condition \eqref{e:cond3}. To prove the above lemma, we need the following result. Observe that the process $X$ is not assumed to be transient. \begin{lemma}\label{L:2.4} Suppose that $\nu$ is a smooth measure on $E$, whose corresponding positive continuous additive functional (PCAF) of $X$ is denoted as $A^\nu$. Define $G\nu (x):= {\mbox{\bf E}}_x[A^\nu_\zeta]$. If $\int_E G\nu (x) \nu (dx)<\infty$, then $G\nu \in {\mathcal F}_e$. Moreover, \begin{equation}\label{e:1} {\mathcal E}(G \nu, u)=\int_E u(x) \nu(dx) \qquad \hbox{for every } u\in {\mathcal F}_e. \end{equation} \end{lemma} \noindent{\bf Proof.} First assume that $m(E)<\infty$. It is easy to check directly that $\{x\in E: {\mbox{\bf E}}_x [ A^\nu_\zeta]>j\}$ is finely open for every integer $j\geq 1$. So $K_j:=\{G\nu \leq j\}$ is finely closed. Since $G\nu <\infty$ $\nu$-a.e. on $E$, we have $\nu (E\setminus \cup_{j=1}^\infty K_j)=0$. Define $\nu_j:={\mbox{\bf 1}}_{K_j} \nu$. Clearly for $x\in K_j$, $G\nu_j(x) \leq G\nu (x)\leq j$, while for $x\in K_j^c$, $$G\nu_j (x)={\mbox{\bf E}}_x \left[ \int_0^\zeta {\mbox{\bf 1}}_{K_j}(X_s) dA^\nu_s\right]= {\mbox{\bf E}}_x \left[ G\nu_j (X_{\sigma_{K_j}})\right]\leq j. $$ So $f_j:=G\nu_j\leq j$ on $E$ and hence is in $L^2(E; m)$. Since by \cite[Theorem 4.1.1]{CF} or \cite[Theorem 5.1.3]{FOT} \begin{equation}\label{e:2} \lim_{t\to 0} \frac1t (f_j-P_tf_j, \, f_j)_m = \lim_{t\to 0} \frac 1t {\mbox{\bf E}}_{f_j\cdot m} \left[ A^{\nu_j}_t \right] = \int_E f_j (x) \nu_j(dx) \leq \int_E G\nu (x) \nu (dx)<\infty, \end{equation} we have $f_j\in {\mathcal F}$ with ${\mathcal E}(f_j, f_j)\leq \int_E G\nu (x) \nu (dx)$. The same calculation shows that for $i>j$, $f_i-f_j= {\mbox{\bf E}}_x \left[ A_\zeta^{{\mbox{\bf 1}}_{K_i\setminus K_j}\cdot \nu}\right]$ and $$ {\mathcal E} (f_i-f_j, f_i-f_j) = \int_{K_i\setminus K_j} (f_i-f_j) (x) \nu(dx) \leq \int_{K_i\setminus K_j} G\nu(x) \nu(dx), $$ which tends to zero as $i, j\to \infty$; that is, $\{f_j, j\geq 1\}$ is an ${\mathcal E}$-Cauchy sequence in ${\mathcal F}$. Set $f:=G\nu$. As $\lim_{j\to\infty} f_j=f$ on $E$, we conclude that $f\in {\mathcal F}_e$. We deduce from \eqref{e:2} that \begin{equation}\label{e:3} {\mathcal E}(f, f)=\lim_{j\to \infty} {\mathcal E} (f_j, f_j)=\int_E G\nu (x) \nu(dx). \end{equation} Moreover, for $u\in {\mathcal F}_b^+$, by \cite[Theorem 4.1.1]{CF} (or \cite[Theorem 5.1.3]{FOT}) and the dominated convergence theorem, we have $$ {\mathcal E}(G\nu, u)=\lim_{j\to \infty} {\mathcal E} (f_j, u) = \lim_{j\to \infty} \lim_{t\to 0} \frac1t (f_j-P_t f_j, u)_m = \lim_{j\to \infty} \int_E u(x) {\mbox{\bf 1}}_{K_j}(x) \nu (dx)= \int_E u(x) \nu (dx). $$ Since the linear span of ${\mathcal F}^+_b$ is ${\mathcal E}$-dense in ${\mathcal F}_e$, we have established \eqref{e:1}.
For a general $\sigma$-finite measure $m$, take a strictly positive $m$-integrable Borel measurable function $g$ on $E$ and define $\mu:=g\cdot m$. Then $\mu$ is a finite measure on $E$. Let $Y$ be the time-change of $X$ via the measure $\mu$; that is, $Y_t=X_{\tau_t}$, where $\tau_t=\inf\{s>0: \int_0^s g(X_r)\, dr >t\}$. The time-changed process $Y$ is $\mu$-symmetric. Let $({\mathcal E}^Y, {\mathcal F}^Y)$ be the Dirichlet form of $Y$ on $L^2(E; \mu)$. Then it is known that ${\mathcal F}^Y_e={\mathcal F}_e$ and ${\mathcal E}^Y={\mathcal E}$ on ${\mathcal F}_e$ (see (5.2.17) of \cite{CF}). The measure $\nu$ is also a smooth measure with respect to the process $Y$. It is easy to verify that the PCAF $A^{Y, \nu}$ of $Y$ corresponding to $\nu$ is related to the corresponding PCAF $A^\nu$ of $X$ by $$ A^{Y, \nu}_t =A^\nu_{\tau_t} \qquad \hbox{for } t\geq 0 . $$ In particular, we have $G^Y\nu =G\nu$ on $E$. As we have just proved that the lemma holds for $Y$ (whose symmetrizing measure $\mu$ is finite), we conclude that the lemma also holds for $X$. {\hfill $\Box$ \bigskip} \medskip \noindent{\bf Proof of Lemma \ref{L:2.3}.} For relatively compact open sets $U$, $V$ with $\overline U\subset V \subset \overline V \subset D$ and $\phi_V\in {\mathcal F}\cap C_c(D)$ with $0\leq \phi_V\leq 1$ and $\phi_V=1$ on $V$, let $f(x):={\mbox{\bf 1}}_U (x) {\mbox{\bf E}}_x \left[ \big((1-\phi_V ) |u|\big)(X_{\tau_U})\right]$, which is bounded by condition \eqref{e:cond3}. Note that $1-\phi_V=0$ on $V$. Using the L\'evy system of $X$, we have $$ f(x)= {\mbox{\bf E}}_x \left[ \int_0^{\tau_U} \left( \int_{E\setminus V} (1-\phi_V(y)) |u|(y)\, N(X_s, dy)\right) dH_s \right] \qquad \hbox{for } x\in E. $$ Note that the Revuz measure of the PCAF $t\mapsto \int_0^{t\wedge \tau_U} \left( \int_{E\setminus V} (1-\phi_V(y)) |u|(y)\, N(X_s, dy)\right) dH_s$ of $X^U$ is $\mu(dx):= \left(\int_{E\setminus V} (1-\phi_V(y)) |u|(y)\, N(x, dy)\right) \mu_H(dx)$ and so $f=G_U \mu$. Since by condition \eqref{e:cond1}, $$ \mu (U)= \int_U \left(\int_{E\setminus V} (1-\phi_V (y))|u(y)| N(x, dy)\right) \mu_H(dx) \leq \int_U \left(\int_{E\setminus V} |u(y)| N(x, dy)\right) \mu_H(dx) <\infty, $$ we have $ \int_U G_U \mu (x) \mu (dx) \leq \|f \|_\infty \, \mu (U)<\infty$. Applying Lemma \ref{L:2.4} to $X^U$ yields that $f\in {\mathcal F}^U_e$. {\hfill $\Box$ \bigskip} \begin{lemma}\label{L:2.5} Let $D$ be an open subset of $E$. Every $u\in {\mathcal F}_e$ that is locally bounded on $D$ satisfies conditions \eqref{e:cond1} and \eqref{e:cond2}. \end{lemma} \noindent{\bf Proof.} Let $u\in {\mathcal F}_e$ be locally bounded on $D$. For any relatively compact open sets $U, V$ with $\overline U \subset V \subset \overline V \subset D$, take $\phi \in {\mathcal F}\cap C_c(D)$ such that $\phi =1$ on $U$ and $\phi =0$ on $V^c$. Then $u\phi \in {\mathcal F}_e$ and \begin{eqnarray*} \int_{U\times (E\setminus V)} u(y)^2 J(dx, dy) &= & \int_{U\times (E\setminus V)} \left(((1-\phi)u)(x)-((1-\phi) u)(y)\right)^2 J(dx, dy)\\ &\leq& 2{\mathcal E} (u-u\phi, u-u\phi)<\infty. \end{eqnarray*} This together with \eqref{e:J1} implies that $$ \int_{U\times (E\setminus V)} |u(y)| J(dx, dy) \leq \frac12 \int_{U\times (E\setminus V)} \left(1+ u(y)^2\right) J(dx, dy) <\infty. $$ Let $\phi_V\in {\mathcal F}\cap C_c(D)$ be such that $0\leq \phi_V\leq 1$ with $\phi_V=1$ on $V$. Note that $|u|\in {\mathcal F}_e$ is locally bounded on $D$ and so $(1-\phi_V)|u|= |u|-\phi_V |u|\in {\mathcal F}_e$.
Thus it follows from \cite[Theorem 3.4.8]{CF} or \cite[Theorem 4.6.5]{FOT} that $$ {\mbox{\bf 1}}_U(x){\mbox{\bf E}}_x \left[ \big((1-\phi_V) |u|\big) (X_{\tau_U}) \right]= {\mbox{\bf E}}_x \left[ \big((1-\phi_V ) |u|\big) (X_{\tau_U})\right] - (1-\phi_V) |u| \in {\mathcal F}^U_e. $$ {\hfill $\Box$ \bigskip} \begin{lemma}\label{L:3} Let $D$ be a relatively compact open set of $E$. Suppose $u$ is a function in ${\mathcal F}^D_{\rm loc}$ that is locally bounded on $D$ and satisfies condition \eqref{e:cond1}. Then for every $v\in C_c(D)\cap {\mathcal F}$, the expression $$ \frac12 \mu^c_{\<u, v\>}(D) + \frac12 \int_{E\times E} (u(x)-u(y))(v(x)-v(y)) J(dx, dy) + \int_D u(x) v(x) \kappa (dx) $$ is well defined and finite; it will still be denoted as ${\mathcal E}(u, v)$. \end{lemma} \noindent{\bf Proof.} Clearly the first and the third terms are well defined and finite. To see that the second term is also well defined, let $U$ be a relatively compact open subset of $D$ such that ${\rm supp} [v]\subset U $. Since $u\in {\mathcal F}^D_{\rm loc}$, there is $f\in {\mathcal F}$ so that $u=f$ $m$-a.e. and hence q.e. on $U$. Under condition \eqref{e:cond1}, \begin{eqnarray*} && \int_{E\times E} |(u(x)-u(y))(v(x)-v(y))| J(dx, dy) \\ &\leq & \int_{U\times U} |(u(x)-u(y))(v(x)-v(y))| J(dx, dy) + 2 \int_{U\times (E\setminus U)} |u(x)v(x)| J(dx, dy) \\ && + 2 \int_U |v(x)| \int_{E\setminus U} |u(y)| J(dx, dy) \\ &\leq & \int_{U\times U} |(f (x) -f(y))(v(x)-v(y))| J(dx, dy) + 2 \| uv\|_\infty J( {\rm supp}[v], U^c) \\ && + 2 \|v\|_\infty \int_{{\rm supp}[v]\times (E\setminus U)} |u(y)| J(dx, dy) \\ &<& \infty. \end{eqnarray*} In the last inequality we used \eqref{e:J1} and the fact that $f, v\in {\mathcal F}$. This proves the lemma. {\hfill $\Box$ \bigskip} \begin{thm}\label{T:4} Let $D$ be an open subset of $E$. Suppose that $u\in {\mathcal F}^D_{{\rm loc}} $ is locally bounded on $D$, satisfies conditions \eqref{e:cond1}-\eqref{e:cond2}, and \begin{equation}\label{e:2.3} {\mathcal E}(u, v)=0 \qquad \hbox{for every } v \in C_c(D)\cap {\mathcal F} . \end{equation} Then $u$ is harmonic in $D$. If $U$ is a relatively compact open subset of $D$ so that ${\mbox{\bf P}}_x(\tau_U<\infty)>0$ for q.e. $x\in U$, then $u(x)={\mbox{\bf E}}_x \left[ u(X_{\tau_U})\right]$ for q.e. $x\in U$. \end{thm} \noindent{\bf Proof.} Take $\phi \in C_c(D)\cap {\mathcal F} $ such that $0\leq \phi \leq 1$ and $\phi =1$ in an open neighborhood $V$ of $\overline U$. Then $\phi u\in {\mathcal F}^D$. So by \cite[Theorem 3.4.8]{CF} or \cite[Theorem 4.6.5]{FOT}, $h_1(x):={\mbox{\bf E}}_x \left[ (\phi u)(X_{\tau_U})\right] \in {\mathcal F}_e$ and $\phi u-h_1\in {\mathcal F}_e^U$. Moreover, \begin{equation}\label{eqn:2} {\mathcal E} (h_1, v)=0 \qquad \hbox{for every } v\in {\mathcal F}_e^U. \end{equation} Let $h_2(x):={\mbox{\bf E}}_x\left[ ((1-\phi)u) (X_{\tau_U})\right]$, which is well defined by condition \eqref{e:cond2}. Note that by the L\'evy system of $X$, $$ f(x):= {\mbox{\bf 1}}_U (x) {\mbox{\bf E}}_x\left[ \big((1-\phi)|u| \big) (X_{\tau_U})\right] = {\mbox{\bf 1}}_U (x)\, {\mbox{\bf E}}_x \left[ \int_0^{\tau_U}\left( \int_{E\setminus V} \big((1-\phi)|u| \big) (z) N(X_s, dz) \right)dH_s\right]. $$ Define $\mu (dx):= {\mbox{\bf 1}}_U(x) \left( \int_{E\setminus V} \big((1-\phi)|u| \big) (z) N(x, dz) \right) \mu_H(dx)$, which is a smooth measure of $X^U$.
In the following, for a smooth measure $\nu$ of $X^U$, we will use $G_U\nu$ to denote ${\mbox{\bf E}}_x [ A^\nu_{\tau_U}]$, where $A^\nu$ is the PCAF of $X^U$ with Revuz measure $\nu$. With this notation, $f=G_U \mu $. We claim that ${\mbox{\bf 1}}_U h_2 \in {\mathcal F}^U_e$ and that for every $v\in {\mathcal F}^U_e$, \begin{equation}\label{e:2.12} {\mathcal E} ({\mbox{\bf 1}}_U h_2, \, v) = \int_E v(x) {\mbox{\bf 1}}_U(x) \left( \int_{E\setminus V} \big((1-\phi) u \big) (z) N(x, dz) \right) \mu_H(dx). \end{equation} Define \begin{eqnarray*} \mu_1 (dx) &:=& {\mbox{\bf 1}}_U(x) \left( \int_{E\setminus V} \big((1-\phi) u^+ \big) (z) N(x, dz) \right) \mu_H(dx) , \\ \mu_2 (dx) &:=& {\mbox{\bf 1}}_U(x) \left( \int_{E\setminus V} \big((1-\phi) u^- \big) (z) N(x, dz) \right) \mu_H(dx). \end{eqnarray*} Observe that $$G_U \mu_1(x)= {\mbox{\bf E}}_x\left[ ((1-\phi)u^+) (X_{\tau_U})\right] \quad \hbox{ and } \quad G_U \mu_2(x)= {\mbox{\bf E}}_x\left[ ((1-\phi)u^-) (X_{\tau_U})\right] \qquad \hbox{for } x\in U. $$ Clearly $G_U \mu_1 \leq G_U \mu$. For $j\geq 1$, let $F_j:=\{x\in U: G_U \mu_1 (x) \leq j\}$, which is a finely closed subset of $U$. Define $\nu_j:={\mbox{\bf 1}}_{F_j} \mu_1$. Then for $x\in F_j$, $G_U \nu_j (x) \leq G_U \mu_1 (x) \leq j$, while for $x\in U\setminus F_j$, $$ G_U \nu_j(x) = {\mbox{\bf E}}_x \left[ G_U \nu_j( X_{\sigma_{F_j}}) \right] \leq j. $$ In other words, we have $G_U \nu_j \leq j\wedge G_U \mu_1 \leq j \wedge f$. As both $G_U \nu_j$ and $j\wedge f$ are excessive functions of $X^U$ and $m(U)<\infty$, we have by \cite[Theorem 1.1.5 and Lemma 1.2.3]{CF} that $\{G_U \nu_j, \ j\wedge G_U \mu\}\subset {\mathcal F}^U$ and $$ {\mathcal E} (G_U \nu_j, \, G_U \nu_j) \leq {\mathcal E} (j\wedge f, \, j\wedge f) \leq {\mathcal E} (f, \, f ) <\infty. $$ Moreover, for each $j\geq 1$, we have by \cite[Theorem 4.1.1]{CF} or \cite[Theorem 5.1.3]{FOT} that \begin{eqnarray*} {\mathcal E} (G_U \nu_j, \, G_U \nu_j) &=& \lim_{t\to 0} \frac1t \int_E \left( G_U\nu_j (x) -P^U_t G_U \nu_j(x)\right) G_U \nu_j(x)\, m(dx) \\ &=& \lim_{t\to 0} \frac1t \int_E {\mbox{\bf E}}_x \left[ A^{\nu_j}_{t\wedge \tau_U} \right] G_U \nu_j(x)\, m(dx) \\ &=& \int_U G_U \nu_j(x) \, {\mbox{\bf 1}}_{F_j}(x) \mu_1 (dx), \end{eqnarray*} which increases to $\int_U G_U \mu_1 (x) \mu_1 (dx)$. Consequently, $\int_U G_U \mu_1 (x) \mu_1 (dx)\leq {\mathcal E} (f, f)<\infty$. So we have by Lemma \ref{L:2.4} applied to $X^U$ that $G_U \mu_1 \in {\mathcal F}^U_e$ with ${\mathcal E} (G_U \mu_1, v)=\int_U v(x) \mu_1(dx)$ for every $v\in {\mathcal F}^U_e$. Similarly we have $G_U \mu_2\in {\mathcal F}^U_e$ with ${\mathcal E} (G_U \mu_2, v)=\int_U v(x) \mu_2(dx)$ for every $v\in {\mathcal F}^U_e$. It follows that ${\mbox{\bf 1}}_U h_2= G_U \mu_1 -G_U\mu_2\in {\mathcal F}^U_e$ and the claim \eqref{e:2.12} is established. As $h_2={\mbox{\bf 1}}_U h_2 +(1-\phi)u$ and $(1-\phi)u$ satisfies condition \eqref{e:cond1}, we have by Lemma \ref{L:3} and \eqref{e:2.12} that for every $v\in C_c(U)\cap {\mathcal F}$, \begin{eqnarray} {\mathcal E} (h_2, v)&=& {\mathcal E} (1_U h_2, v)+{\mathcal E} ((1-\phi)u, v) \nonumber \\ &=& \int_{E\times E} v(x) (1-\phi (y)) u(y) N(x, dy) \mu_H (dx) - \int_{E\times E} v(x) (1-\phi (y)) u(y) N(x, dy) \mu_H (dx) \nonumber \\ &=& 0 . \label{eqn:3} \end{eqnarray} This, combined with \eqref{eqn:2} and condition \eqref{e:2.3}, proves that \begin{equation}\label{eqn:4} {\mathcal E}(u-h_1-h_2, v)=0 \qquad \hbox{for every } v\in C_c(U)\cap {\mathcal F}.
\end{equation} Since $u-(h_1+h_2)=(\phi u -h_1)- {\mbox{\bf 1}}_U h_2 \in {\mathcal F}_e^U$ and $C_c(U)\cap {\mathcal F}$ is ${\mathcal E}$-dense in ${\mathcal F}^U_e$, the above display holds for every $v\in {\mathcal F}^U_e$. In particular, we have \begin{equation}\label{e:2.8} {\mathcal E} (u-h_1-h_2, \, u-h_1-h_2)=0. \end{equation} By Lemma \ref{L:2} applied to $({\mathcal E}, {\mathcal F}^U)$, $t\mapsto (u-h_1-h_2)(X_{t\wedge \tau_U})$ is ${\mbox{\bf P}}_x$-a.s. constant for q.e. $x\in U$. As $$h_1(x)+h_2(x)={\mbox{\bf E}}_x \left[ u(X_{\tau_U})\right] \qquad \hbox{for } x\in U, $$ the above implies that $t\mapsto u(X_{t\wedge \tau_U})$ is a uniformly integrable ${\mbox{\bf P}}_x$-martingale for q.e. $x\in U$. If ${\mbox{\bf P}}_x(\tau_U <\infty )>0$ for q.e. $x\in U$, applying Lemma \ref{L:2} to the Dirichlet form $({\mathcal E} , {\mathcal F}^U)$, we have $u-h_1-h_2=0$ q.e. on $U$ and so $u(x)={\mbox{\bf E}}_x \left[ u(X_{\tau_U})\right]$ for q.e. $x\in U$. This completes the proof of the theorem. {\hfill $\Box$ \bigskip} \begin{remark}\label{R:5} \rm \begin{description} \item{(i)} The principal difficulty in the above proof is establishing \eqref{eqn:4} and that $u-(h_1+h_2)\in {\mathcal F}^U_e$ for general $u\in {\mathcal F}^D_{\rm loc}$ satisfying conditions \eqref{e:cond1} and \eqref{e:cond2}. If $u$ is assumed a priori to be in ${\mathcal F}_e$, these facts and therefore the theorem itself are much easier to establish. Note that when $u\in {\mathcal F}_e$, it follows immediately from \cite[Theorem 3.4.8]{CF} or \cite[Theorem 4.6.5]{FOT} that $h_1+h_2= {\mbox{\bf E}}_x [u(X_{\tau_U})]\in {\mathcal F}_e$ enjoys property \eqref{eqn:4} and $u-(h_1+h_2)\in {\mathcal F}^U_e$. Therefore \eqref{e:2.8} holds and consequently $u$ is harmonic in $D$. \item{(ii)} If we assume that the process $X$ (or equivalently $({\mathcal E}, {\mathcal F})$) is $m$-irreducible and that $U^c$ is not $m$-polar, then ${\mbox{\bf P}}_x (\tau_U<\infty )>0$ for q.e. $x\in U$ (cf. \cite[Theorem 3.5.6]{CF} or \cite{FOT}). \end{description} \end{remark} \begin{thm}\label{T:6} Suppose that $D$ is an open subset of $E$ with $m(D)<\infty$ and that $u$ is a function on $E$ satisfying condition \eqref{e:cond1} such that $u\in L^\infty (D; m)$ and $\{u(X_{t\wedge \tau_D}), t\geq 0\}$ is a uniformly integrable ${\mbox{\bf P}}_x$-martingale for q.e. $x\in E$. Then \begin{equation}\label{e:1.5} u\in {\mathcal F}^D_{{\rm loc}} \qquad \hbox{and} \qquad {\mathcal E}(u, v)=0 \quad \hbox{for every } v \in C_c(D)\cap {\mathcal F} . \end{equation} \end{thm} \noindent{\bf Proof.} As $\{u(X_{t\wedge \tau_D}), t\geq 0\}$ is a uniformly integrable ${\mbox{\bf P}}_x$-martingale for q.e. $x\in E$, $ u(X_{t\wedge \tau_D})$ converges in $L^1({\mbox{\bf P}}_x)$ as well as ${\mbox{\bf P}}_x$-a.s. to some random variable $\xi$. By considering $\xi^+$, $\xi^-$ and $u_+(x):={\mbox{\bf E}}_x [\xi^+]$, $u_-(x):={\mbox{\bf E}}_x [\xi^-]$ separately, we may and do assume without loss of generality that $u\geq 0$. Note that $\xi {\mbox{\bf 1}}_{\{\tau_D<\infty\}} = u(X_{\tau_D})$. Define $u_1(x):={\mbox{\bf E}}_x \left[ u(X_{\tau_D})\right]$ and $u_2(x):={\mbox{\bf E}}_x [ \xi {\mbox{\bf 1}}_{\{\tau_D = \infty\}}] =u(x)-u_1(x)$. Let $\{P^D_t, t\geq 0\}$ denote the transition semigroup of the subprocess $X^D$. Then for q.e. $x\in D$ and every $t>0$, by the Markov property of $X^D$, $$ P^D_t u_2(x)={\mbox{\bf E}}_x \left[ u_2(X_t), t<\tau_D\right] = {\mbox{\bf E}}_x \left[ \xi {\mbox{\bf 1}}_{\{\tau_D=\infty\}} \circ \theta_t, t<\tau_D\right] =u_2(x) .
$$ Since $u_2 \in L^2(D; m)$, by \eqref{e:1.1}-\eqref{e:1.2}, \begin{equation}\label{e:u2} u_2\in {\mathcal F}^D \quad \hbox{with} \quad {\mathcal E} (u_2, u_2)=0. \end{equation} On the other hand, $$ P^D_t u(x)={\mbox{\bf E}}_x \left[ u(X_t), t<\tau_D\right] = {\mbox{\bf E}}_x \left[ u(X_{\tau_D}), t<\tau_D\right] \leq u(x) . $$ Let $\{D_n, n\geq 1\}$ be an increasing sequence of relatively compact open subsets of $D$ with $\cup_{n\geq 1} D_n=D$ and define $$ \sigma_n:=\inf \left\{t\geq 0: X^D_t\in D_n\right\}. $$ Let $e_n(x)={\mbox{\bf E}}_x \left[ e^{-\sigma_n}\right]$, $x\in D$, be the 1-equilibrium potential of $D_n$ with respect to the subprocess $X^D$. Clearly $e_n \in {\mathcal F}^D$ is 1-excessive with respect to the process $X^D$ and $e_n=1$ q.e. on $D_n$. Let $a:=\| {\mbox{\bf 1}}_D u\|_\infty$. Then for every $t>0$, $$ e^{-t} P^D_t( (a e_n)\wedge u)(x) \leq ((a e_n) \wedge u) (x) \qquad \hbox{for q.e. } x\in D. $$ By \cite[Lemma 1.2.3]{CF} or \cite[Lemma 8.7]{S}, we have $(ae_n)\wedge u\in {\mathcal F}^D$ for every $n\geq 1$. Since $(a e_n)\wedge u=u$ $m$-a.e. on $D_n$, we have $u\in {\mathcal F}^D_{{\rm loc}} $. Let $U$ be a relatively compact open subset of $D$. Let $\phi \in C_c(D)\cap {\mathcal F}$ be such that $0\leq \phi \leq 1$ and $\phi =1$ in an open neighborhood $V$ of $\overline U$. Define for $x\in E$, $$ h_1(x):= {\mbox{\bf E}}_x \left[ (\phi u)(X_{\tau_U})\right] \quad \hbox{and} \quad h_2(x):= {\mbox{\bf E}}_x \left[ ((1-\phi) u)(X_{\tau_U})\right]. $$ Then $u_1=h_1+h_2$ on $E$. Since $\phi u\in {\mathcal F}$, we know as in (\ref{eqn:2}) that $h_1\in {\mathcal F}_e$ and $$ {\mathcal E} (h_1, v)=0 \qquad \hbox{for every } v\in {\mathcal F}_e^U. $$ By the same argument as that for (\ref{eqn:3}), we have $$ {\mathcal E} (h_2, v)=0 \qquad \hbox{for every } v\in {\mathcal F}_e^U. $$ These together with \eqref{e:u2} in particular imply that $$ {\mathcal E} (u, v)={\mathcal E}(h_1+h_2+u_2, v)=0 \qquad \hbox{for every } v\in C_c(U)\cap {\mathcal F} . $$ Since $U$ is an arbitrary relatively compact open subset of $D$, we have $$ {\mathcal E} (u, v)= 0 \qquad \hbox{for every } v\in C_c(D)\cap {\mathcal F} . $$ This completes the proof. {\hfill $\Box$ \bigskip} \begin{remark}\label{R:2.7} \rm As mentioned in the Introduction, the principal difficulty in the proof of the above theorem is establishing that a function $u$ harmonic in $D$ is in ${\mathcal F}^D_{\rm loc}$ with ${\mathcal E}(u, v)=0$ for every $v\in {\mathcal F} \cap C_c(D)$. If a priori $u$ is assumed to be in ${\mathcal F}_e$, then Theorem \ref{T:6} is easy to establish. In this case, it follows from \cite[Theorem 3.4.8]{CF} or \cite[Theorem 4.6.5]{FOT} that $u_1=h_1+h_2 = {\mbox{\bf E}}_x [ u(X_{\tau_U})]\in {\mathcal F}_e$ and that ${\mathcal E}(u_1, v)=0$ for every $v\in {\mathcal F}^U_e$. This together with \eqref{e:u2} immediately implies that $u$ enjoys \eqref{e:1.5}. (See also Proposition 2.5 of \cite{BBKT} for this simple case, but under the additional assumption that $1\in {\mathcal F}$ with ${\mathcal E}(1, 1)=0$.) \end{remark} \medskip Combining Theorems \ref{T:4} and \ref{T:6}, we have the following. \begin{thm}\label{T:7} Let $D$ be an open subset of $E$. Suppose that $u$ is a function on $E$ that is locally bounded on $D$ and satisfies conditions \eqref{e:cond1} and \eqref{e:cond2}. Then \begin{description} \item{\rm (i)} $u$ is harmonic in $D$ if and only if condition \eqref{e:1.5} holds. \item{\rm (ii)} Assume that for every relatively compact open subset $U$ of $D$, ${\mbox{\bf P}}_x (\tau_U < \infty)>0$ for q.e.
$x\in U$. {\rm (}By Remark \ref{R:5}(ii), this condition is satisfied if $({\mathcal E}, {\mathcal F})$ is $m$-irreducible.{\rm )} Then $u$ is harmonic in $D$ if and only if for every relatively compact open subset $U$ of $D$, $u(X_{\tau_U})\in L^1 ({\mbox{\bf P}}_x)$ and $u(x)= {\mbox{\bf E}}_x \left[ u(X_{\tau_U})\right]$ for q.e. $x\in U$.
\end{description}
\end{thm}

\medskip

\begin{example}\label{E:8} \rm (Stable-like process on $\mathbb{R}^d$) Consider the following Dirichlet form $({\mathcal E}, {\mathcal F})$ on $L^2(\mathbb{R}^d, dx)$, where
\begin{eqnarray*}
{\mathcal F} &=& W^{\alpha/2, 2}(\mathbb{R}^d):=\left\{ u\in L^2(\mathbb{R}^d; dx): \ \int_{\mathbb{R}^d\times \mathbb{R}^d} (u(x)-u(y))^2 \frac{1}{|x-y|^{d+\alpha}} \, dxdy<\infty \right\} , \\
{\mathcal E}(u, v)&=& \frac12 \int_{\mathbb{R}^d\times \mathbb{R}^d} (u(x)-u(y))(v(x)-v(y)) \frac{c(x, y)}{|x-y|^{d+\alpha}}\, dx dy \qquad \hbox{for } u, v\in {\mathcal F}.
\end{eqnarray*}
Here $d\geq 1$, $\alpha \in (0, 2)$ and $c(x, y)$ is a symmetric function in $(x, y)$ that is bounded between two positive constants. In the literature, $W^{\alpha/2, 2}(\mathbb{R}^d)$ is called the Sobolev space on $\mathbb{R}^d$ of fractional order $(\alpha/2, 2)$. For an open set $D\subset \mathbb{R}^d$, $W^{\alpha/2, 2}(D)$ is defined similarly but with $D$ in place of $\mathbb{R}^d$. It is easy to check that $({\mathcal E}, {\mathcal F})$ is a regular Dirichlet form on $L^2(\mathbb{R}^d; dx)$ and its associated symmetric Hunt process $X$ is called a symmetric $\alpha$-stable-like process on $\mathbb{R}^d$; such processes are studied in \cite{CK}. The process $X$ has a strictly positive jointly continuous transition density function $p(t, x, y)$ and hence is irreducible. Moreover, there is a constant $c>0$ such that
\begin{equation}\label{e:2.18}
p(t, x, y) \leq c \, t^{-d/\alpha} \qquad \hbox{for } t>0 \hbox{ and } x, y \in \mathbb{R}^d
\end{equation}
and consequently by \cite[Theorem 1]{Ch},
\begin{equation}\label{e:2.19}
\sup_{x\in U} {\mbox{\bf E}}_x [ \tau_U] <\infty
\end{equation}
for any open set $U$ having finite Lebesgue measure. When $c(x, y)$ is constant, the process $X$ is nothing but the rotationally symmetric $\alpha$-stable process on $\mathbb{R}^d$. In this example, the jumping measure is
$$J(dx, dy)= \frac{c(x, y)}{|x-y|^{d+\alpha}} \, dx dy. $$
Hence for any non-empty open set $D\subset \mathbb{R}^d$, condition \eqref{e:cond1} is satisfied if and only if $(1\wedge |x|^{-d-\alpha} ) u(x) \in L^1 (\mathbb{R}^d)$. Moreover, for such a function $u$ and relatively compact open sets $U, V$ with $\overline U\subset V \subset \overline V \subset D$, by the L\'evy system of $X$,
\begin{eqnarray}\label{e:2.20}
\sup_{x\in U} {\mbox{\bf E}}_x \left[ ({\mbox{\bf 1}}_{V^c} |u|) (X_{\tau_U})\right] &=& \sup_{x\in U} {\mbox{\bf E}}_x \left[ \int_0^{\tau_U} \left( \int_{V^c} \frac{ c(X_s, y) \, |u(y)| }{|X_s-y|^{d+\alpha}} dy \right) ds \right] \nonumber \\
&\leq & \left( c\, \int_{\mathbb{R}^d} (1\wedge |y|^{-d-\alpha} ) |u(y)|dy \right) \sup_{x\in U} {\mbox{\bf E}}_x [ \tau_U] < \infty .
\end{eqnarray}
In other words, for this example, condition \eqref{e:cond3} and hence \eqref{e:cond2} is a consequence of \eqref{e:cond1}. So Theorem \ref{T:7} says that for an open set $D$ and a function $u$ on $\mathbb{R}^d$ that is locally bounded on $D$ with $(1\wedge |x|^{-d-\alpha} ) u(x) \in L^1 (\mathbb{R}^d)$, the following are equivalent.
\begin{description}
\item{(i)} $u$ is harmonic in $D$;
\item{(ii)} For every relatively compact open subset $U$ of $D$, $u(X_{\tau_U})\in L^1 ({\mbox{\bf P}}_x)$ and $u(x)= {\mbox{\bf E}}_x \left[ u(X_{\tau_U})\right]$ for q.e. $x\in U$;
\item{(iii)} $u\in {\mathcal F}^D_{\rm loc} =W^{\alpha/2, 2}_{\rm loc} (D)$ and
$$ \int_{\mathbb{R}^d\times \mathbb{R}^d} (u(x)-u(y))(v(x)-v(y)) \frac{c(x, y)}{|x-y|^{d+\alpha}} \, dxdy =0 \qquad \hbox{for every } v\in C_c(D)\cap W^{\alpha/2, 2}(\mathbb{R}^d). $$
\end{description}
{\hfill $\Box$ \bigskip}
\end{example}

\begin{example}\label{E:9} \rm (Diffusion process on a locally compact separable metric space)\ Let $({\mathcal E}, {\mathcal F})$ be a local regular Dirichlet form on $L^2(E; m)$, where $E$ is a locally compact separable metric space, and let $X$ be its associated Hunt process. In this case, $X$ has continuous sample paths and so the jumping measure $J$ is null (cf. \cite{FOT}). Hence conditions \eqref{e:cond1} and \eqref{e:cond2} are automatically satisfied. Let $D$ be an open subset of $E$ and $u$ be a function on $E$ that is locally bounded in $D$. Then by Theorem \ref{T:7}, $u$ is harmonic in $D$ if and only if condition \eqref{e:1.5} holds.

Now consider the following special case: $E=\mathbb{R}^d$ with $d\geq 1$, $m(dx)$ is the Lebesgue measure $dx$ on $\mathbb{R}^d$, ${\mathcal F}=W^{1,2}(\mathbb{R}^d):=\left\{u\in L^2(\mathbb{R}^d; dx) \mid \nabla u \in L^2(\mathbb{R}^d; dx) \right\}$ and
$$ {\mathcal E} (u, v) = \frac12 \sum_{i,j=1}^d \int_{\mathbb{R}^d} a_{ij}(x) \frac{\partial u(x)}{\partial x_i} \frac{\partial v (x)}{\partial x_j} \, dx \qquad \hbox{for } u, v \in W^{1,2}(\mathbb{R}^d), $$
where $(a_{ij}(x))_{1\leq i, j\leq d}$ is a $d\times d$-matrix valued measurable function on $\mathbb{R}^d$ that is uniformly elliptic and bounded. In the literature, $W^{1, 2}(\mathbb{R}^d)$ is the Sobolev space on $\mathbb{R}^d$ of order $(1, 2)$. For an open set $D\subset \mathbb{R}^d$, $W^{1, 2}(D)$ is defined similarly but with $D$ in place of $\mathbb{R}^d$. Then $({\mathcal E}, {\mathcal F})$ is a regular local Dirichlet form on $L^2(\mathbb{R}^d; dx)$ and its associated Hunt process $X$ is a conservative diffusion on $\mathbb{R}^d$ having a jointly continuous transition density function. Let $D$ be an open set in $\mathbb{R}^d$. Then by Theorem \ref{T:7}, the following are equivalent for a function $u$ on $\mathbb{R}^d$ that is locally bounded on $D$.
\begin{description}
\item{(i)} $u$ is harmonic in $D$;
\item{(ii)} For every relatively compact open subset $U$ of $D$, $u(X_{\tau_U})\in L^1({\mbox{\bf P}}_x)$ and $u(x)={\mbox{\bf E}}_x \left[ u(X_{\tau_U}) \right]$ for q.e. $x\in U$;
\item{(iii)} $u\in W^{1,2}_{\rm loc} (D)$ and $\displaystyle \sum_{i,j=1}^d \int_{\mathbb{R}^d} a_{ij}(x) \frac{\partial u(x)}{\partial x_i} \frac{\partial v (x)}{\partial x_j} \, dx =0$ \ for every $ v \in C_c(D)\cap W^{1,2}(\mathbb{R}^d)$.
\end{description}
In fact, in this case, it can be shown that every (locally bounded) harmonic function has a continuous version.
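In the special case $a_{ij}=\delta_{ij}$, where $X$ is standard Brownian motion, the mean value property in (ii) can be checked numerically for a classical harmonic function. The following minimal sketch (an added illustration; the harmonic polynomial $u(x_1,x_2)=x_1^2-x_2^2$, the unit disk $U$ and the starting point are our own choices) estimates ${\mbox{\bf E}}_x\left[u(X_{\tau_U})\right]$ by Euler simulation of paths up to the exit time $\tau_U$ and compares it with $u(x)$.
\begin{verbatim}
# Minimal numerical sketch: for standard 2d Brownian motion (the case
# a_{ij} = delta_{ij}) and the harmonic polynomial u(x1,x2) = x1^2 - x2^2,
# estimate E_x[u(X_{tau_U})] for U the unit disk and compare with u(x).
# Agreement holds up to Monte Carlo and discretization error.
import numpy as np

rng = np.random.default_rng(0)

def u(p):
    return p[0]**2 - p[1]**2

def exit_value(x0, dt=1e-3):
    # run one Brownian path from x0 until it leaves the unit disk
    p = np.array(x0, dtype=float)
    while p @ p < 1.0:
        p += np.sqrt(dt) * rng.standard_normal(2)
    return u(p)

x0 = (0.3, 0.2)
est = np.mean([exit_value(x0) for _ in range(2000)])
print(u(np.array(x0)), est)   # both are close to 0.05
\end{verbatim}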
{\hfill $\Box$ \bigskip}
\end{example}

\begin{example}\label{E:10} \rm (Diffusions with jumps on $\mathbb{R}^d$) Consider the following Dirichlet form $({\mathcal E}, {\mathcal F})$, where ${\mathcal F}= W^{1,2}(\mathbb{R}^d)$ and
\begin{eqnarray*}
{\mathcal E}(u, v)&=& \frac12 \sum_{i,j=1}^d \int_{\mathbb{R}^d} a_{ij}(x) \frac{\partial u(x)}{\partial x_i} \frac{\partial v (x)}{\partial x_j} \, dx \\&& + \frac12 \int_{\mathbb{R}^d\times \mathbb{R}^d} (u(x)-u(y))(v(x)-v(y)) \frac{c(x, y)}{|x-y|^{d+\alpha}}\, dx dy \qquad \hbox{for } u, v\in W^{1, 2}(\mathbb{R}^d).
\end{eqnarray*}
Here $d\geq 1$, $(a_{ij}(x))_{1\leq i, j\leq d}$ is a $d\times d$-matrix valued measurable function on $\mathbb{R}^d$ that is uniformly elliptic and bounded, $\alpha \in (0, 2)$ and $c(x, y)$ is a symmetric function in $(x, y)$ that is bounded between two positive constants. It is easy to check that $({\mathcal E}, {\mathcal F})$ is a regular Dirichlet form on $L^2(\mathbb{R}^d; dx)$. Its associated symmetric Hunt process $X$ has both a diffusion and a jumping component. Such a process has recently been studied in \cite{CK2}. It is shown there that the process $X$ has a strictly positive jointly continuous transition density function $p(t, x, y)$ and hence is irreducible. Moreover, a sharp two-sided estimate is obtained in \cite{CK2} for $p(t, x, y)$. In particular, there is a constant $c>0$ such that
$$ p(t, x, y) \leq c \left( t^{-d/\alpha} \wedge t^{-d/2}\right) \qquad \hbox{for } t>0 \hbox{ and } x, y\in \mathbb{R}^d. $$
Note that when $(a_{ij})_{1\leq i, j\leq d}$ is the identity matrix and $c(x, y)$ is constant, the process $X$ is nothing but the symmetric L\'evy process that is the independent sum of a Brownian motion and a rotationally symmetric $\alpha$-stable process on $\mathbb{R}^d$. In this example, the jumping measure is
$$J(dx, dy)= \frac{c(x, y)}{|x-y|^{d+\alpha}} \, dx dy. $$
Hence for any non-empty open set $D\subset \mathbb{R}^d$, condition \eqref{e:cond1} is satisfied if and only if $(1\wedge |x|^{-d-\alpha} ) u(x) \in L^1 (\mathbb{R}^d)$. By the same reasoning as that for \eqref{e:2.20}, we see that for this example, condition \eqref{e:cond3} and hence \eqref{e:cond2} is implied by condition \eqref{e:cond1}. So Theorem \ref{T:7} says that for an open set $D$ and a function $u$ on $\mathbb{R}^d$ that is locally bounded on $D$ with $(1\wedge |x|^{-d-\alpha} ) u(x) \in L^1 (\mathbb{R}^d)$, the following are equivalent.
\begin{description}
\item{(i)} $u$ is harmonic in $D$ with respect to $X$;
\item{(ii)} For every relatively compact open subset $U$ of $D$, $u(X_{\tau_U})\in L^1 ({\mbox{\bf P}}_x)$ and $u(x)= {\mbox{\bf E}}_x \left[ u(X_{\tau_U})\right]$ for q.e. $x\in U$;
\item{(iii)} $u\in W^{1,2}_{\rm loc} (D)$ and for every $ v\in C_c(D)\cap W^{1,2}(\mathbb{R}^d)$,
$$ \sum_{i,j=1}^d \int_{\mathbb{R}^d} a_{ij}(x) \frac{\partial u(x)}{\partial x_i} \frac{\partial v (x)}{\partial x_j} \, dx+ \int_{\mathbb{R}^d\times \mathbb{R}^d} (u(x)-u(y))(v(x)-v(y)) \frac{c(x, y)}{|x-y|^{d+\alpha}} \, dxdy =0 . $$
\end{description}
{\hfill $\Box$ \bigskip}
\end{example}

\begin{remark}\label{R:11} \rm It is possible to extend the results of this paper to a general $m$-symmetric right process $X$ on a Lusin space $E$, where $m$ is a positive $\sigma$-finite measure with full topological support on $E$. In this case, the Dirichlet form $({\mathcal E}, {\mathcal F})$ of $X$ is a quasi-regular Dirichlet form on $L^2(E; m)$.
By \cite{CMR}, $({\mathcal E}, {\mathcal F})$ is quasi-homeomorphic to a regular Dirichlet form on a locally compact separable metric space. So the results of this paper can be extended to the quasi-regular Dirichlet form setting by using this quasi-homeomorphism. However, since the notion of open set is not invariant under quasi-homeomorphism, some modifications are needed. We need to replace the open set $D$ in Definition \ref{D:1} by a quasi-open set $D$. Similar modifications are needed for conditions \eqref{e:cond1} and \eqref{e:cond2} as well. We say a function $u$ is harmonic in a quasi-open set $D\subset E$ if for every quasi-open subset $U\subset D$ with $\overline U \cap F_k \subset D$ for every $k\geq 1$, where $\{F_k, k\geq 1\}$ is an ${\mathcal E}$-nest consisting of compact sets, $t\mapsto u(X_{t\wedge \tau_{U\cap F_k}})$ is a uniformly integrable ${\mbox{\bf P}}_x$-martingale for q.e. $x\in U\cap F_k$ and for every $k\geq 1$. The local Dirichlet space ${\mathcal F}_{\rm loc}^D$ needs to be replaced by
\begin{eqnarray*}
\stackrel{ \circ } {{{\mathcal F}}_{{\rm loc}}^D} &=& \Big\{u: \hbox{ there is an increasing sequence of quasi-open sets } \{D_n\} \hbox{ with } \bigcup^\infty_{n=1} D_n = D \hbox{ q.e.} \\
&&\hskip 0.4truein \hbox{and a sequence } \{u_n\} \subset {\mathcal F}^D \hbox{ such that } u = u_n\ m \hbox{-a.e. on } \ D_n \Big\}.
\end{eqnarray*}
Condition \eqref{e:1.5} should be replaced by
\begin{equation}\label{e:1.7}
u\in \stackrel{ \circ } {{{\mathcal F}}_{{\rm loc}}^D} \qquad \hbox{and} \qquad {\mathcal E} (u, v)=0 \quad \hbox{for every } v\in {\mathcal F} \hbox{ with } {\mathcal E} \hbox{-supp[$v$]} \subset D.
\end{equation}
Here ${\mathcal E}$-supp[$v$] is the smallest quasi-closed set such that $v$ vanishes $m$-a.e. on its complement. We leave the details to interested readers. {\hfill $\Box$ \bigskip}
\end{remark}

\noindent {\bf Acknowledgement.} The author thanks Rich Bass and Takashi Kumagai for helpful discussions. He also thanks Rongchan Zhu for helpful comments.
\section{Introduction} \label{introduction}

Let $X$ be any of the three constant-curvature spaces $\en$, $\sn$ or $\hn$, and let $G$ be a discrete subgroup of isometries of $X$. By a geometric manifold we mean a manifold of the form $M=X/G$. Many examples of geometric manifolds are given through side-pairings of a polyhedron $P\subset X$, this being a convenient and topologically revealing way of describing a manifold. On the other hand, general manifolds are often given using a handle decomposition, which lends itself to manipulation and simplification through handle moves. In this paper we give a method that converts a polyhedron-side-pairing representation of a manifold into a handle decomposition of the manifold. The method associates every cycle of $k$-faces in the polyhedron to an $(n-k)$-handle in the handle decomposition. While the method works in any dimension, it is most interesting to us in dimensions $n=3,4$, where we give two applications.

In Section~\ref{conv3} we motivate and illustrate the method by describing it in dimension~3, where it is easily understood. Section~\ref{ident3} provides an application of the method to hyperbolic 3-manifolds. Many examples of finite-volume hyperbolic manifolds $M$ are known to be complements of links in the 3-sphere. However, proving that a particular manifold is a complement of a particular link is often demanding and pushes the limits of intuition. Furthermore, proofs that the author has seen usually require that the link be known before one executes the proof. (The only procedure the author is aware of that does not require this is described in Francis' book \cite{Francis}; however, that procedure is significantly restricted by the type of side-pairings it works for.) We use the method of Section~\ref{conv3} to obtain a handle decomposition of a given hyperbolic manifold. Using handle moves one can easily show that the manifold is a complement of a link in the 3-sphere, while the handle moves produce the diagram of the link as the computation progresses. This procedure has worked in a straightforward way on all the standard examples (complements of the figure-8 knot, the Whitehead link and the Borromean rings) and some less standard ones, like those in \cite{Wielenberg}.

In Section~\ref{convgen} we justify the conversion method for all dimensions. Section~\ref{diagram4} details how to get handle decomposition diagrams in dimension~4. Section~\ref{ident4} gives an application of the conversion method in dimension~4. J.~Ratcliffe, S.~Tschantz and the author have found a dozen examples (see \cite{Ivansic3, Ivansic4}) of noncompact hyperbolic 4-manifolds that are complements of varying numbers of tori and Klein bottles in a topological 4-sphere $N$. We work out the handle decomposition of one such $N$ in order to show that it is diffeomorphic to the standard differentiable 4-sphere, which the original proof was not equipped to do. As a matter of fact, the author's motivation for developing the conversion method was the problem of whether the topological 4-spheres found in \cite{Ivansic3, Ivansic4} were diffeomorphic to the standard 4-sphere. The dimension-3 application from Section~\ref{ident3} was found afterwards.
\begin{figure}
\begin{center}
\resizebox{2.5in}{!}{\includegraphics{faceneighborhoods.eps}}
\caption{Cube with side-pairing, neighborhoods of faces}
\label{faceneighborhoods}
\end{center}
\end{figure}
\vfill

\section{Conversion in dimension 3} \label{conv3}

Let $P$ be a polyhedron in $X=\hiii$, $\eiii$ or $\siii$ with a side-pairing defined on it that gives a geometric manifold $M$. In Fig.~\ref{faceneighborhoods} a cube is drawn as an example: its top and bottom and front and back sides are paired by a translation, while the left and right sides are paired by a translation followed by a $180^\circ$ rotation around the translation vector.

Select neighborhoods (for example, $\epsilon$-neighborhoods) around vertices and edges as in Fig.~\ref{faceneighborhoods}. The neighborhoods should match via the side-pairing. Let $V_1,\dots, V_m$ be neighborhoods of a cycle of vertices $\{v_1,\dots,v_m\}$ (a cycle of faces comprises all the faces of $P$ that are identified by the side-pairing). Then $V_1\cup\dots\cup V_m$ assembles into a ball $V$ in $M$. In our example, all the vertices are in the same cycle, and $V_i$ is an eighth of a ball. Eight such pieces, of course, assemble into a ball.

Removing neighborhoods of all vertices from $P$ removes parts of the neighborhoods of the edges. Let $E_1,\dots,E_n$ be the truncated neighborhoods of a cycle of edges $e_1,\dots,e_n$. Then $E_1\cup\dots\cup E_n$ assembles into a solid cylinder around a truncated edge, which can also be viewed as a 3-ball $E$ in $M$.

\begin{figure}
\begin{center}
\resizebox{3.5in}{!}{\includegraphics{truncatedcube.eps}}
\caption{Handles as assemblies of face neighborhoods}
\label{truncatedcube}
\end{center}
\end{figure}

Let $H_1$ be the solid obtained by removing neighborhoods of vertices and truncated neighborhoods of edges from $P$. On the surface of $H_1$ it is the truncated sides that get identified, representing pairwise-identified disjoint disks, so $H_1$ projects to a handlebody $H$ in $M$ under the quotient map $P\to M$. The feet of the 1-handles of $H$ are the truncated sides on $H_1$ (see \cite{Gompf-Stipsicz} for basics of handles and handle decompositions). Now, the ball $E=D^2\times D^1$ from above is attached to $H$ along $\partial D^2\times D^1$, making it a 2-handle of $M$. In our example, there are three cycles of edges, and the visible portions of attaching circles $n_1$, $n_2$ and $n_3$ of the corresponding 2-handles are shown in Fig.~\ref{truncatedcube}. Of course, the ball $V$ from above may be viewed as $V=D^3\times D^0$, and it attaches to the 0-, 1- and 2-handles along $\partial D^3\times D^0$, making it a 3-handle. If $P$ is a polyhedron in $\hiii$ with some ideal vertices, the procedure works the same way, except that instead of removing a neighborhood of the vertex we remove a horoball centered at the ideal vertex.

Therefore, to get a handle decomposition diagram (pairs of disks in $\rii$ representing feet of 1-handles, curves outside of the disks representing attaching circles of 2-handles), do the following:

\begin{figure}
\begin{center}
\resizebox{3in}{!}{\includegraphics{convertedcube.eps}}
\caption{Handle decomposition for the side-pairing from Fig.~1}
\label{convertedcube}
\end{center}
\end{figure}

\begin{itemize}
\item[---] Project the surface of the polyhedron $P$ to $\rii\cup\infty$ and draw its decomposition into sides. (If the polyhedron has ideal vertices, one may draw them as empty circles.)
\item[---] Draw a disk inside every side that represents one of the feet of a 1-handle (paired sides correspond to feet of 1-handles). One of the disks may be the outside of the diagram, since a sphere (the surface of $P$) was projected to $\rii$.
\item[---] If two sides are adjacent along an edge $e$, draw an arc crossing $e$ once between the disks corresponding to the sides. The union of arcs crossing edges that are in the same cycle comprises the attaching circle for a 2-handle.
\item[---] Attention needs to be paid to how disks (feet of 1-handles) are identified, as the transformation that identifies them depends on the transformation that identifies the corresponding sides of $P$. (We do not assume that the feet of 1-handles are identified by a reflection in the bisector of the centers, as is common in handle-decomposition diagrams.)
\item[---] It is not necessary to keep track of 3-handles, since there is only one way to attach them. Furthermore, if the polyhedron is hyperbolic and has only ideal vertices, there are no 3-handles. However, if some of the vertices are real and some ideal, it may be useful to note where on the diagram the 3-handles attach. If necessary, one might put a full circle in $\rii$ wherever there was a real vertex to indicate that the boundary of a 3-ball is attached to that section of $\rii$, and put an empty circle wherever there was an ideal vertex to signify that this part of $\rii$ becomes a part of the boundary of the manifold.
\end{itemize}

Fig.~\ref{convertedcube} illustrates the process above for the cube example at the beginning of the section. The letters inside the disks suggest the map that pairs the two disks: for example, $A$ and $A'$ are paired by a reflection in their bisector, while $B$ and $B'$ are paired by a reflection in the bisector, followed by a rotation by $180^\circ$.

\section{Identifying hyperbolic 3-manifolds as link complements in the 3-sphere} \label{ident3}

In this section, we apply the conversion method of \S~\ref{conv3} to illustrate a procedure that attempts to show that a finite-volume noncompact hyperbolic manifold is the complement of a link in the 3-sphere. If the procedure is carried out successfully, it also produces the link diagram. We will use Wielenberg's example 4 from \cite{Wielenberg}. In that paper, the following algebraic theorem of Riley's \cite{Riley} is used to determine that certain hyperbolic manifolds $M$ are complements of links $S^3-L$: if $\pi_1 M$ is anti-isomorphic to $\pi_1 (S^3-L)$, then $M\cong S^3-L$. In order to verify the anti-isomorphism, however, the link has to be known in advance to get the presentation of $\pi_1 (S^3 - L)$. Our procedure produces the link diagram as it is carried out.

\begin{figure}
\begin{center}
\includegraphics{wielenbergpolyhedron.eps}
\caption{Wielenberg's side-pairing on a hyperbolic polyhedron}
\label{wielenbergpolyhedron}
\end{center}
\end{figure}

\begin{figure}
\begin{center}
\includegraphics{torusas2handle.eps}
\caption{Attaching a solid torus along the boundary is like adding a 2-handle and a 3-handle}
\label{torusas2handle}
\end{center}
\end{figure}

The hyperbolic manifold comes from pairing the sides of the polyhedron $P$ pictured in the upper half-space model in Fig.~\ref{wielenbergpolyhedron}. The vertical sides $C$, $C'$ and $D$, $D'$ are paired by translations. Sides $A$ and $A'$ are paired by a reflection in the vertical plane passing through the point where $A$ and $A'$ touch.
Side $B$ is sent to $B'$ by a reflection in the vertical plane that slices $B$ and $B'$ in half, followed by a translation that slides $B$ to $B'$.

A finite-volume orientable noncompact hyperbolic 3-manifold $M$ is diffeomorphic to the interior of a compact 3-manifold $\mbar$, whose boundary components are all tori. If solid tori are glued onto the boundary components and the result is a 3-sphere, then $M$ is diffeomorphic to $S^3-L$, where $L$ is the collection of the center circles of the solid tori we added. As Fig.~\ref{torusas2handle} suggests, gluing a solid torus to a component $T^2$ of $\bd\mbar$ is the same as attaching a 2-handle and a 3-handle to $\mbar$. The attaching circle of the 2-handle can be any nontrivial simple closed curve on $T^2$.

The components of $\bd\mbar$ are assembled from polygons, called vertex links, that are intersections of small enough horospheres centered at ideal vertices with the polyhedron $P$. In our example, the vertex links are $45^\circ$-$45^\circ$-$90^\circ$ triangles and squares. The three cycles of ideal vertices, $E_1$, $E_2$ and $E_3$, are indicated in Fig.~\ref{wielenbergpolyhedron}, and in Fig.~\ref{wielenbergvertexlinks} the vertex links from each cycle are drawn together and it is shown how they assemble into parallelograms that give rise to toral boundary components of $\mbar$.

\begin{figure}
\begin{center}
\resizebox{4in}{!}{\includegraphics{wielenbergvertexlinks.eps}}
\caption{Finding suitable meridians in boundary components}
\label{wielenbergvertexlinks}
\end{center}
\end{figure}

For every boundary component $T^2$ we now choose two curves representing generators of $\pi_1 T^2$. One will serve as the attaching circle of the 2-handle, making it a meridian of the attached solid torus. The other automatically becomes a longitude of the solid torus, thus isotopic to its center circle. In Fig.~\ref{wielenbergvertexlinks}, the attaching circle is the thinner arc $m_i$ and the longitude is the thicker arc $l_i$, $i=1,2,3$. When choosing the attaching circle, choose a curve in $T^2$ that is as short as possible (in its Euclidean metric). If the length of the attaching circle is more than $2\pi$, the $2\pi$-theorem on hyperbolic Dehn surgery (\cite{Bleiler-Hodgson}) asserts that we will get a hyperbolic manifold. Thus, if $\bd\mbar$ has only one component, we will have failed to produce $S^3$. If $\bd\mbar$ has several components, it is possible that some combination of long and short attaching circles still produces $S^3$, but the chances are probably better the more short attaching circles are chosen.

Let $M_W$ now denote the manifold resulting from the side-pairing on the polyhedron above. Since there are three cycles of ideal vertices, $\bd\mbar_W$ will have three components. Step~0 of Fig.~\ref{handlecancellation1} shows the handle decomposition of $\mbar_W$, obtained using the conversion method from~\S\ref{conv3}. The feet $A$, $A'$, $C$, $C'$ and $D$, $D'$ of 1-handles are all identified by a reflection in the perpendicular bisector of the line connecting their centers. The feet $B$ and $B'$ are identified by a reflection in the line joining their centers, followed by a translation that moves $B$ to $B'$, so that the arrows drawn inside match up. Attaching circles coming from cycles of edges are labeled I, II and III. The attaching circles that we chose in Fig.~\ref{wielenbergvertexlinks} are also drawn in and labeled $m_1$, $m_2$, and $m_3$. Their corresponding longitudes $l_1$, $l_2$ and $l_3$ are drawn as thick curves.
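For completeness, here is the handle count behind the solid-torus gluing of Fig.~\ref{torusas2handle} (a brief justification added here, using the standard upside-down convention of \cite{Gompf-Stipsicz}): the solid torus has a handle decomposition $S^1\times D^2=h^0\cup h^1$ with one 0-handle and one 1-handle, and when a piece is glued to $\mbar$ along its entire boundary, each of its $k$-handles is attached upside down, as a $(3-k)$-handle. Hence
\begin{displaymath}
h^0\cup h^1 \;\longrightarrow\; h^{3-0}\cup h^{3-1},
\end{displaymath}
that is, each glued solid torus contributes exactly one 2-handle and one 3-handle.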
\begin{figure}
\begin{center}
\includegraphics{linkstohandle.eps}
\caption{Converting one diagram to another}
\label{linkstohandle}
\end{center}
\end{figure}

Fig.~\ref{linkstohandle} shows how to make the easy correspondence, needed to draw in the longitudes and meridians, between a triangle appearing in Fig.~\ref{wielenbergvertexlinks} and the corresponding section of the boundary of the handlebody in step~0 of Fig.~\ref{handlecancellation1}. The handle decomposition of $\mbar_W$ does not have any 3-handles, since $P$ did not have any real vertices. However, closing off $\bd\mbar_W$ with three solid tori adds three 3-handles.

\begin{figure}
\begin{center}
\resizebox{\textwidth}{!}{\includegraphics{handlecancellation1.eps}}
\caption{Handle moves, steps 0--3}
\label{handlecancellation1}
\end{center}
\end{figure}

\begin{figure}
\begin{center}
\resizebox{\textwidth}{!}{\includegraphics{handlecancellation2.eps}}
\caption{Handle moves, steps 4--7}
\label{handlecancellation2}
\end{center}
\end{figure}

Thus, step~0 of Fig.~\ref{handlecancellation1} shows the handle decomposition of a closed manifold that we hope is $S^3$. In the diagrams in Figures~\ref{handlecancellation1} and \ref{handlecancellation2} we perform handle moves in order to simplify the handle decomposition (see \cite{Gompf-Stipsicz} for basics on handle moves). Keep in mind that the curves labeled $l_1$, $l_2$ and $l_3$ are not attaching circles, but merely curves drawn on the surface of the handlebody whose position we keep track of. In particular, attaching circles may freely be isotoped over these curves and may cross them. It is easy to see that a crossing by an attaching circle will become an undercrossing if the corresponding 2-handle cancels a 1-handle that carries one of the longitudes.

{\it Step 0.} Attaching circles $m_2$ and $m_3$ go across 1-handles $AA'$ and $CC'$ only once, respectively, so their corresponding 2-handles cancel the 1-handles $AA'$ and $CC'$. Step~1 shows the handle decomposition after this cancellation.

{\it Step 1.} Attaching circles II and III, which loop from feet $B'$ and $B$, can be slid over the 1-handle $BB'$ and then off feet $B$ and $B'$, respectively. Moreover, the looping part of attaching circle I, at near right, may be isotoped to foot $D'$ and then across and off handle $DD'$, after which I is a simple closed curve bounding a disk (on the outside) that may be pushed away from the diagram. A 2-handle whose attaching circle bounds a disk disjoint from the rest of the diagram simply encloses a 3-handle if the manifold is compact, as in our case. The 2- and 3-handles then cancel. Step~2 shows the handle decomposition after the isotopies and cancellation.

{\it Step 2.} We now notice that the vertical portion of attaching circle II can be isotoped outside of the diagram to the right and ``wrapped'' across $\infty$ to its new position shown in Step~3. Attaching circle $m_1$ crosses 1-handle $BB'$ only once, causing cancellation of the 2-handle corresponding to $m_1$ and the 1-handle $BB'$.

{\it Step 3.} Before we carry out further cancellation, we simplify the picture a bit. We isotope attaching circle III around $D'$ so it attaches at the bottom. Notice that this includes a slide of the part of III that runs across the 1-handle $DD'$; thus the place where III attaches to $D$ moves as well. Also, we isotope the loop of $l_2$ at the bottom of the diagram toward the top, and we straighten out the kink in $l_3$.

{\it Step 4.} We isotope at top middle to remove the self-crossing of $l_3$.
Attaching circles II and III run parallel, that is, they bound an annulus. This means a 3-handle is located between them, which cancels one of the 2-handles, say III. After erasing III we note that II cancels the 1-handle $DD'$.

{\it Step 5.} The rest is isotopy of the link components $l_i$, $i=1,2,3$. The loop of $l_3$ at center right is isotoped up and to the left, and so is the section of $l_2$ close to it. The kinks on the left are straightened out, as is the bottom part of $l_3$ and the center of $l_1$.

{\it Step 6.} The bottom part of $l_3$ is lifted and flipped to the top, $l_1$ is straightened out and $l_2$ is isotoped a little.

{\it Step 7.} After isotoping $l_2$ and rotating the diagram by $180^\circ$, one gets the mirror image of Fig.~7 from Wielenberg's paper~\cite{Wielenberg}.

\section{Converting a side-pairing to a handle decomposition in dimension $n$} \label{convgen}

In this section we generalize the conversion method from~\S\ref{conv3} to any dimension. Let $X=\en$, $\sn$ or $\hn$, and let $P$ be a finite-volume, finite-sided polyhedron in $X$, as defined, for example, in \cite{Ratcliffe}. Assume, furthermore, that every $k$-face of $P$ is diffeomorphic to either $D^k$ or $D^k-\{\text{finitely many points on $\bd D^k$}\}$. The former condition is there to disallow polyhedra such as a lens in $S^3$, whose 1-face is a circle; the latter allows hyperbolic polyhedra with ideal vertices. Let there be given a side-pairing on the sides of $P$, again in the sense of \cite{Ratcliffe}, so that the space of identified points $M=P/ \sim$ is a complete manifold with a geometry based on $X$. If $M$ is a noncompact hyperbolic manifold, we will obtain the handle decomposition of $\mbar$, the compact manifold with boundary whose interior is $M$. If $M$ is closed, we set $\mbar=M$.

\begin{theorem} \label{gendecomp} Let $M$ be a manifold obtained through a side-pairing defined on a polyhedron $P$ in $X=\en$, $\sn$ or $\hn$. Suppose that every $k$-face of $P$ is diffeomorphic to either $D^k$ or $D^k-\{\text{finitely many points on $\bd D^k$}\}$. Then the decomposition of $P$ into $k$-faces, $0\le k \le n$, induces a handle decomposition of the manifold $\mbar$, where every cycle of $k$-faces corresponds to an $(n-k)$-handle.
\end{theorem}

{\it Proof.} If $P$ is a hyperbolic polyhedron with ideal vertices, the completeness of $M$ implies the existence of a finite collection of disjoint open horoballs $\{B_s, s\in S\}$ that are centered at ideal vertices of $P$ and are mapped to each other under side-pairings of $P$. Furthermore, each $B_s$ can be chosen so that it intersects only sides of $P$ that are incident with the ideal vertex where $B_s$ is centered. Set $U_{-1}=\cup_{s\in S} B_s$ if $P$ is hyperbolic with ideal vertices; otherwise set $U_{-1}=\emptyset$.

For every $k=0,\dots,n$, we inductively define real numbers $\epsilon_k$ and ``orthogonal neighborhoods'' $NE^k$ of truncated $k$-faces $TE^k$. Let $E^k_s$, $s\in S_k$, be the collection of $k$-faces of $P$, $k=0,\dots,n-1$. (Note that there is only one $n$-face, namely $P$.) If $P$ has real vertices, there is an $\epsilon$ so that $p: X\to M$ is injective on $\epsilon$-balls around the real vertices. Set $\epsilon_0$ to be the smaller of $\epsilon$ and $\frac{1}{3}\min \{ d(E^0_s, E^0_t) | s\ne t, s,t\in S_0\}$; otherwise (if all vertices are ideal) set $\epsilon_0=1$. Let $TE^0_s=E^0_s$, let $NE^0_s$ denote the closed $\epsilon_0$-neighborhood in $X$ of a 0-face $E^0_s$, and let $U_0=\cup_{s\in S_0} \intr NE^0_s$.
Clearly $NE^0_s$ and $NE^0_t$ are disjoint when $s\ne t$, and $p$ restricted to any of those neighborhoods is a diffeomorphism.

Now assume $\epsilon_k$, $U_k$, $NE^k_s$ and $TE^k_s$ have been defined for some $0\le k\le n-2$ and every $k$-face $E^k_s$, $s\in S_k$, of $P$, and that the restriction of $p$ to every $NE^k_s$ is a diffeomorphism. Let $TE^{k+1}_s=E^{k+1}_s - \cup_{-1 \le i \le k} U_i $, $s\in S_{k+1}$. Because $M$ is a manifold, $p$ is injective on the interior of every face. Due to compactness of every $TE^{k+1}_s\subset \intr E^{k+1}_s$, we can find an $\epsilon$ so that $p$ is a diffeomorphism on an $\epsilon$-neighborhood of $TE^{k+1}_s$ for every $s\in S_{k+1}$. Set $\epsilon_{k+1}$ to be the smallest of $\epsilon$, $\frac{1}{2}\epsilon_k$ and $\frac{1}{3}\min \{ d(TE^{k+1}_s, TE^{k+1}_t) | s\ne t, s,t\in S_{k+1}\}$, and let $NE^{k+1}_s$ be the closed $\epsilon_{k+1}$-neighborhood of $TE^{k+1}_s$ in $X$ with $\cup_{i=-1}^k U_i $ excluded. Let $U_{k+1}=\cup_{s\in S_{k+1}} \intr NE^{k+1}_s$. If $k=n-1$, let $TE^n=P-\cup_{i=-1}^{n-1} U_i $ and $NE^n=TE^n$.

From the assumption that every $E^k_s$ is diffeomorphic to $D^k$ or $D^k-\{\text{finite set}\}$, it follows that $TE^k_s$ is diffeomorphic to $D^k$ for every $s\in S_k$. The set $NE^k_s$ is then diffeomorphic to $D^k\times D^{n-k}$, where $TE^k_s=D^k\times 0$. Note that $x\times D^{n-k}$ is essentially in the orthogonal direction to $E^k_s$, except close to $\bd TE^k_s$, where some bending has to occur to accommodate $NE^i_t$, where $E^i_t$ is a face of $E^k_s$. Furthermore, for every face $E^i_t$ of $E^k_s$, $i\le k$, note that $NE^i_t$ intersects $NE^k_s=D^k\times D^{n-k}$ only along $\bd D^k\times D^{n-k}$. If $P$ has ideal vertices, $\bd\mbar$ is assembled from links of ideal vertices $\bd B_s \cap P$. Clearly $\bd B_s \cap P\subset \bd D^k\times D^{n-k}$. Thus, any element of $p(\intr D^k \times\bd D^{n-k})$ is in $\intr \mbar$ and therefore must be in some other $p(NE^j_u)$. Our observation above then shows that $j>k$.

Let us treat $p(NE^n)$ as a $0$-handle in $\mbar$. Consider an $NE^k_s$, $s\in S_k$, $k\le n$. By our construction, $p$ restricts to a bijection on $NE^k_s$, hence $p(NE^k_s)$ is an $n$-ball inside of $\mbar$. This gives a decomposition of $\mbar$ into a collection of $n$-balls with disjoint interiors. If $E^k_s$ and $E^k_t$ are in the same cycle, then $p(NE^k_s)=p(NE^k_t)$, hence every $n$-ball corresponds to a cycle of $k$-faces for some $k\le n$. Define $M_k=\cup_{n-k \le i \le n} \cup_{s\in S_i} p(NE^i_s)$, meant to be the union of $i$-handles, $0\le i \le k$. As above, $NE^k_s=D^k\times D^{n-k}$ and $p(\intr D^k\times \bd D^{n-k})$ is contained in $\cup_{k < i} \cup_{t\in S_i} p(NE^i_t)=M_{n-k-1}$. Since $M_{n-k-1}$ is closed, $p(D^k\times \bd D^{n-k})\subset M_{n-k-1}$. Therefore, $p(NE^k_s)$ attaches as an $(n-k)$-handle to $M_{n-k-1}$, giving us a handle decomposition of $\mbar$. $\qed$

In the above handle decomposition of $\mbar$, we note that the attaching sphere $0\times \bd D^{n-k}$ of the $(n-k)$-handle $NE^k_s$ is the boundary of a neighborhood in $X$ of a point $x\in E^k_s$. Naturally, this being a neighborhood in $X$ means that a part of it is outside of $P$ and it intersects several translates $gP$ of $P$. But then $g^{-1}((0\times\bd D^{n-k}) \cap gP)$ is visible in $P$.

\section{Drawing handle decomposition diagrams in dimension 4} \label{diagram4}

In this section we apply the conversion method described in the previous section to dimension~4.
Notation is as in the previous section and, as an illustrative example, we use for $P$ the 4-cube whose sides are paired by translations, yielding the 4-torus $M=T^4$. We want to draw in $\bd D^4=S^3=\riii \cup \infty$ the attaching spheres of the $k$-handles $D^{4-k}\times D^k$.

The 0-handle is $NE^4$, that is, $P$ without neighborhoods of all the $k$-faces. Clearly $\bd NE^4 =S^3=\bd P$, realized by a diffeomorphism $h:\bd NE^4\to \bd P$, a restriction of a diffeomorphism $h:NE^4\to P$, which may be imagined as a radial projection from a point in the interior of $P$. Under~$h$, $(\bd NE^{4-k}_s)\cap P$ is sent to $TE^{4-k}_s\times B^{k-1}$, a copy of $TE^{4-k}_s$ ``thickened up'' in $\bd P$. Note that the thickening of $TE^3_s$ in $\bd P$ is still $TE^3_s$. Now, a piece of the attaching sphere $P\cap (0\times \bd D^k)$ is sent under $h$ to $x \times B^{k-1}$, where $x\in TE^{4-k}_s$.

Let the subdivision of $\bd P$ into $k$-faces ($k\le 3$) be drawn in $\riii\cup\infty$. As an example, take the standard ``cube-within-a-cube'' picture of the boundary of the 4-cube. A piece of the attaching sphere for a 1-handle $p(NE^3_s)$ is $P\cap (0\times \bd D^1)$, which is sent to a point in $TE^3_s$, chosen, for example, in its interior. The two points in the attaching sphere of a 1-handle are in paired 3-faces. The attaching region $D^3\times \bd D^1$ is the union of paired truncated 3-faces $TE^3_s$ and $TE^3_t$: schematically, we draw 3-balls inside $TE^3_s$ and $TE^3_t$.

The piece inside $P$ of an attaching circle for a 2-handle $p(NE^2_s)$ is $P\cap (0\times \bd D^2)$, an arc that crosses the 2-face corresponding to the 2-handle and joins the 3-faces whose intersection is the 2-face. Under $h$, this arc maps to the segment $x\times B^1$, $x\in TE^2_s$, visible on the left of Fig.~\ref{dim4conv}.

\begin{figure}
\resizebox{\textwidth}{!}{\includegraphics{dim4conv.eps}}
\caption{Arriving at a handle decomposition for the 4-torus}
\label{dim4conv}
\end{figure}

The attaching sphere for a 3-handle is a 2-sphere, whose intersection $P\cap (0\times \bd D^3)$ with $P$ is a 2-ball. Under $h$, the 2-ball maps to the 2-ball $z\times B^2$, $z\in TE^1_s$, shown in Fig.~\ref{dim4conv}. When $M$ is closed, it is only important how the 1- and 2-handles attach (see \cite{Gompf-Stipsicz}), so the attaching spheres of 3- and 4-handles do not matter. However, it is useful to keep in mind pieces of the attaching spheres of the 3-handles, as they help with the framing of the attaching map of the 2-handles.

In order to specify, up to isotopy, the attaching map $\phi:D^2\times \bd D^2\to \bd Y$ of a 2-handle $D^2\times D^2$, it is enough to specify the images of two parallel circles $\phi(x\times \bd D^2)$ and $\phi(y\times \bd D^2)$. As we can see on the left side of Fig.~\ref{dim4conv}, if $E^1_t$ is incident with $E^2_s$, then the intersection of $z\times B^2$, $z\in TE^1_t$, with $TE^2_s\times B^1$ is an arc $y\times B^1$, $y\in TE^2_s$. We can then choose $y\times \bd D^2$ to be the circle parallel to $x\times \bd D^2$, chosen before. (In Fig.~\ref{dim4conv}, $u\times B^1$ and $v\times B^1$ are pieces of another such pair of parallel circles.) Schematically, the portion $z\times B^2$ of the attaching sphere of the 3-handle $p(NE^1_t)$ is represented as a triangle transverse to $E^1_t$, bounded by the portions $x_i \times B^1$ of attaching circles of the three 2-handles that correspond to the three 2-faces, the pairwise intersections of the three 3-faces whose intersection is the 1-face $E^1_t$.
One can speak of a ``cycle'' of triangles, the collection of triangles corresponding to 1-faces that are all in one cycle. Clearly, a cycle of triangles represents all the pieces of the attaching sphere of the 3-handle corresponding to the cycle of 1-faces. Thus, pieces of a parallel circle may be chosen to lie in one of the triangles. Since a 2-face is incident with several 1-faces, pieces of the attaching circle of a 2-handle will be in the boundary of several triangles. We choose one to contain a piece of the parallel circle; once this is done, the remaining pieces of the parallel circle must be chosen in triangles that are in the same cycle as the one we have chosen.

We summarize how to get a picture of a handle decomposition of a 4-manifold that is the result of pairing sides of a polyhedron $P$.

\begin{itemize}
\item[---] Draw in $\riii$ the decomposition of $\bd P=\riii\cup \infty$ into $k$-faces. Inside every 3-face, draw a 3-ball. Feet of a 1-handle are the two balls inside paired 3-faces.
\item[---] We do not assume that the feet of 1-handles are identified by a reflection in the bisector of the centers, as is common. Rather, the identifying map is determined by the map that pairs the corresponding 3-faces.
\item[---] If two 3-faces are adjacent along a 2-face $E^2$, draw an arc between the balls inside the 3-faces that crosses $E^2$ exactly once. The arcs that cross 2-faces that are in the same cycle comprise the attaching circle for a 2-handle.
\item[---] Whenever three 3-faces intersect in a 1-face we see a ``triangle'' whose ``vertices'' and edges are the already drawn 3-balls and arcs, respectively. We fill in this triangle (usually only mentally) with a surface that is transverse to the 1-face. Parallel attaching circles can be chosen to lie in these surfaces.
\item[---] Once we choose a triangle to contain a piece of the parallel circle, the remaining pieces must be chosen in triangles that are in the same cycle of triangles.
\end{itemize}

The procedure above yields the familiar handle-decomposition diagram for $T^4$ from the right side of Fig.~\ref{dim4conv} (see also \cite{Gompf-Stipsicz}, Fig.~4.42). Parallel attaching circles for three 2-handles are the arcs marked I, II and III.

\section{A hyperbolic manifold as a complement of 5 tori in the standard differentiable $S^4$} \label{ident4}

In \cite{Ivansic3}, the author showed that the double cover of $M_{1011}$, example no.~1011 from Ratcliffe and Tschantz's \cite{Ratcliffe-Tschantz} collection of noncompact finite-volume hyperbolic 4-manifolds, is a complement of 5 tori in the topological 4-sphere. The proof used Freedman's theory, which only provides a homeomorphism to the 4-sphere. In this section we prove that the 4-sphere $N$ from \cite{Ivansic3} is, in fact, diffeomorphic to the standard differentiable 4-sphere. We use the method of this paper to obtain a handle decomposition of the manifold $N$ and then handle moves to simplify the decomposition down to the decomposition of the standard differentiable 4-sphere.

The 24-sided polyhedron $Q$ that gives rise to Ratcliffe and Tschantz's manifolds is described in their paper \cite{Ratcliffe-Tschantz} and in \cite{Ivansic3} and \cite{Ivansic4}, where more details on its combinatorial structure can be found. Here we just recall that its sides (in the ball model of $\hiv$) are spheres of radius 1 centered at points two of whose coordinates are $\pm 1$ and the other two are zero. We label the spheres and the sides by $S_{****}$, like in \cite{Ivansic4}.
For example, $S_{0+0-}$ is the sphere centered at $(0,1,0,-1)$. Each octahedral 3-face of $Q$ has eight 2-faces, so drawing a decomposition of $\bd Q$ would be quite involved. We will therefore jump to the handle-decomposition picture right away, by finding attaching spheres of the 1-, 2- and 3-handles on $\bd Q$, and projecting them to $S^3$ radially from the origin of $B^4$. $S^3$ is then sent to $\riii\cup \infty$ via the M\"obius transformation $g:(\riv\cup\infty)\to(\riv\cup\infty)$ that provides the standard isometry between the ball and upper-half-space models of hyperbolic space. This map is the composite of the reflection in the sphere with center $(0,0,0,1)$ and radius $\sqrt 2$, followed by a reflection in the hyperplane $x_4=0$. Its restriction to $S^3$ is given by $x\mapsto e_4 + \frac{2}{|x-e_4|^2}(x-e_4)$. (This is actually the formula for just the first reflection, since the reflection in $x_4=0$ has no effect on $\riii\cup\infty$, the image of $S^3$.) Note that $g$ leaves $S^2\subset \riii\times 0$ fixed.

As attaching spheres of 1-handles we choose the points on the sides of $Q$ closest to the origin. ``Shortest'' arcs connecting those points along the sides are chosen to be the pieces of attaching circles of 2-handles. Pieces of attaching spheres of 3-handles are the piecewise-spherical ``triangles'' bounded by the arcs, stretched across the sides. More precisely, let $r$ be the position vector of the center of a sphere $S$ that determines a side of $Q$. The intersection of $S$ and the line spanned by $r$ is a point $c$ in $S$. Let $c'$ be the intersection of $S'$ and the line spanned by $r'$, where $S'$ is the side paired with $S$ under the side-pairing on $Q$. Then we choose $c$ and $c'$ to be the points of the attaching sphere of the 1-handle corresponding to the paired sides $S$ and $S'$. If $S_1$ and $S_2$ are intersecting sides, let $e$ be the arc that is the intersection of $\bd Q$ and the (linear) angle spanned by the position vectors $r_1$ and $r_2$. This arc is a portion of the attaching circle of the 2-handle corresponding to the 2-face $S_1\cap S_2$. Finally, if a side $S_3$ intersects $S_1$ and $S_2$, consider the intersection $f$ of $\bd Q$ with the ``positive cone'' spanned by $r_1$, $r_2$ and $r_3$. This ``triangle'' is a portion of the attaching sphere of the 3-handle corresponding to the 1-face $S_1\cap S_2 \cap S_3$.

It is not clear that the overall arrangement of the spheres is such that $c$, $c'$, $e$ and $f$ are actually on $\bd Q$. (For example, $c$ may be inside some other sphere $S_0$, which would put it outside of $Q$.) Next, we justify that these choices are indeed on $\bd Q$.

Consider the sides $S_{++00}$, $S_{+0+0}$ and $S_{0++0}$. The three sides intersect pairwise and the intersection of all three of them is a 1-face $E^1_s$. Let $L$ be the (linear) hyperplane spanned by $r_{++00}$, $r_{+0+0}$ and $r_{0++0}$; this is the hyperplane $x_4=0$. It is clear that the only spheres that intersect $L$ in more than one point are those with a 0 in the fourth position of their labels. All other spheres intersect $L$ in exactly one point, which is one of $\pm e_i$, $i=1,\dots,4$, that is, an ideal vertex of $Q$. Now, the intersection of the 12 sides $S_{***0}$ with $L$ is a 3-dimensional version of $Q$, which was described and pictured in \cite{Ratcliffe-Tschantz}, Figure 5. From this picture we see that attaching spheres chosen in the way described above are on $\bd Q$.
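The claim for the points $c$ can also be cross-checked numerically. The following short script (a sketch added for illustration; it verifies only the 1-handle points $c$, not the arcs $e$ or the triangles $f$) confirms that, for each of the 24 sides, the point of the corresponding sphere closest to the origin lies outside every other unit ball, hence on $\bd Q$.
\begin{verbatim}
# For each of the 24 sides of Q (unit spheres centered at points with two
# coordinates equal to +-1 and the other two equal to 0), let c be the point
# of the sphere on the line through its center r that is closest to the
# origin, i.e. c = r(1 - 1/|r|) with |r| = sqrt(2).  Check that c is not
# inside any other unit ball, so that c lies on the boundary of Q.
import itertools
import numpy as np

centers = []
for i, j in itertools.combinations(range(4), 2):
    for si, sj in itertools.product((1.0, -1.0), repeat=2):
        v = np.zeros(4)
        v[i], v[j] = si, sj
        centers.append(v)
assert len(centers) == 24

for r in centers:
    c = r * (1.0 - 1.0/np.sqrt(2.0))
    for r2 in centers:
        if not np.allclose(r2, r):
            assert np.linalg.norm(c - r2) > 1.0
print("all 24 points c lie on bd Q")
\end{verbatim}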
\begin{figure}
\resizebox{\textwidth}{!}{\includegraphics{polytohandle.eps}}
\caption{Finding attaching spheres for $M_{1011}$}
\label{polytohandle}
\end{figure}

A general 1-face is an intersection of 3 sides whose labels pairwise share exactly one position with the same symbol. This position could be different for each pair of sides, like in the example above, or it could be the same for all three pairs, like for the sides $S_{++00}$, $S_{+0+0}$ and $S_{+00+}$. It is clear that any 1-face can be moved by a linear isometry of $Q$ to one of these two prototypical 1-faces (permute the coordinates and reflect in coordinate hyperplanes). Furthermore, there is a linear isometry of $Q$ that sends $S_{++00}$, $S_{+0+0}$ and $S_{0++0}$ to $S_{++00}$, $S_{+00+}$ and $S_{+0+0}$, respectively; its matrix is
\begin{displaymath}
\frac{1}{2} \left[ \begin{array}{rrrr} 1 & 1 & 1 & -1\\ 1 & 1 & -1 & 1\\ -1 & 1 & 1 &1\\ 1 & -1 & 1 & 1 \end{array} \right].
\end{displaymath}
This shows that the situation illustrated by the sides $S_{++00}$, $S_{+0+0}$ and $S_{0++0}$ is generic, so all choices for attaching spheres made in the way described above are valid.

We now have to see where the attaching spheres are sent under the composite $gp $, where $p:\bd Q\to S^3$ is the radial projection. The intersection of each position vector $r_{****}$ with $S^3$ is $\frac{1}{\sqrt 2}r_{****}$. The points of the form $\frac{1}{\sqrt 2}r_{***0}$ are on $S^2\subset S^3$, which is fixed by $g$. Furthermore, an easy computation shows that $g(\frac{1}{\sqrt 2}r_{***+})=(\sqrt 2+1)(*,*,*,0)$ and $g(\frac{1}{\sqrt 2}r_{***-})=(\sqrt 2-1)(*,*,*,0)$.

As above, let $c_i$ be the point of intersection of a sphere (side) $S_i$ with the line spanned by the position vector $r_i$ of the center of $S_i$, $i=1,2,3$. If sides $S_1$ and $S_2$ intersect, consider the intersection $C$ of $S^3$ and the linear plane spanned by $r_1$ and $r_2$ (part of $C$ is the radial projection of a piece of the attaching circle). This is a circle, so $g(C)$ is a circle, since $g$ is a M\"obius transformation. Since $C$ also contains $-r_1$ and $-r_2$, the circle $g(C)$ will contain the four points $gp(\pm c_1)$ and $gp(\pm c_2)$. Once we have the four points drawn, the circle $g(C)\subset\riii$ will be easy to identify. The arc of the circle between $gp(c_1)$ and $gp(c_2)$ is a part of the attaching circle for the 2-handle corresponding to the 2-face $S_1\cap S_2$. Now, if sides $S_1$, $S_2$ and $S_3$ all intersect in a 1-face $E^1$, part of the attaching sphere corresponding to $E^1$ is the ``triangle'' $f$ that is the intersection of the positive cone generated by $r_1$, $r_2$ and $r_3$ with $\bd Q$. Radial projection to $S^3$ followed by $g$ maps the triangle to a spherical triangle bounded by arcs of the circles $g(C)$ just described.

The top left of Fig.~\ref{polytohandle} shows the points $d_{****}=gp(c_{****})$ and the circles $g(C)$ for the sides $S_{**00}$, $S_{*0*0}$ and $S_{0**0}$. The bottom left of Fig.~\ref{polytohandle} does the same for the sides $S_{**00}$, $S_{0*0*}$ and $S_{*00*}$. The complete picture for $Q$ is obtained by rotating the bottom left by $\pi/2$ around the $x_1$-axis, then around the $x_3$-axis, and taking the union of the resulting three figures with the top left of Fig.~\ref{polytohandle}. To make the pictures easier to draw, we isotope the positions of the $gp(c)$'s a little and replace the curved arcs $g(C)$ mostly by straight lines, as seen on the right side of Fig.~\ref{polytohandle}.
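Both of the linear-algebra claims above are mechanical to verify. The short script below (a check added for illustration, not part of the original argument) confirms that the displayed matrix is orthogonal and permutes the three sphere centers as stated, and that $g$ sends $\frac{1}{\sqrt 2}r_{***+}$ and $\frac{1}{\sqrt 2}r_{***-}$ to $(\sqrt 2+1)(*,*,*,0)$ and $(\sqrt 2-1)(*,*,*,0)$, respectively.
\begin{verbatim}
# Verify (1) the displayed matrix A is orthogonal and sends the centers
# r_{++00}, r_{+0+0}, r_{0++0} to r_{++00}, r_{+00+}, r_{+0+0}, and
# (2) on S^3, g(x) = e4 + 2(x - e4)/|x - e4|^2 maps r_{+00+}/sqrt(2) to
# (sqrt(2)+1)e1 and r_{+00-}/sqrt(2) to (sqrt(2)-1)e1.
import numpy as np

A = 0.5 * np.array([[ 1.0,  1.0,  1.0, -1.0],
                    [ 1.0,  1.0, -1.0,  1.0],
                    [-1.0,  1.0,  1.0,  1.0],
                    [ 1.0, -1.0,  1.0,  1.0]])
assert np.allclose(A.T @ A, np.eye(4))                  # orthogonality
assert np.allclose(A @ [1, 1, 0, 0], [1, 1, 0, 0])      # fixes r_{++00}
assert np.allclose(A @ [1, 0, 1, 0], [1, 0, 0, 1])      # r_{+0+0} -> r_{+00+}
assert np.allclose(A @ [0, 1, 1, 0], [1, 0, 1, 0])      # r_{0++0} -> r_{+0+0}

e4 = np.array([0.0, 0.0, 0.0, 1.0])
def g(x):
    d = np.asarray(x, dtype=float) - e4
    return e4 + 2.0 * d / (d @ d)

s2 = np.sqrt(2.0)
assert np.allclose(g(np.array([1.0, 0, 0,  1.0]) / s2),
                   (s2 + 1.0) * np.array([1.0, 0, 0, 0]))
assert np.allclose(g(np.array([1.0, 0, 0, -1.0]) / s2),
                   (s2 - 1.0) * np.array([1.0, 0, 0, 0]))
print("matrix and Moebius computations check out")
\end{verbatim}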
We note that pieces of the attaching circles all lie in one of the coordinate planes or on~$S^2$. (In the straight-edge version of the diagram we imagine this $S^2$ as the surface consisting of 6 rectangles and 8 triangles, spanned by the points $g(c)$.) We assume, as discussed in \S\ref{diagram4}, that pieces of parallel circles always lie in the triangles in the diagram. Note that each triangle can be taken to lie in one of the coordinate planes or on $S^2$.

\begin{figure}
\resizebox{\textwidth}{!}{\includegraphics{handle1011.eps}}
\caption{Handle decomposition of $\tilde M_{1011}$}
\label{handle1011}
\end{figure}

The handle decomposition of $\mbar_{1011}$ is the right half of Fig.~\ref{handle1011}, where the outside- and inside-most attaching circles are not shown to maintain clarity of the picture. If every triangle in the picture is filled in, we note that the diagram will contain eight ``octahedra'', each of which corresponds to the link of one of the ideal vertices $v_{*000}$, $v_{0*00}$, $v_{00*0}$ and $v_{000*}$, which is a cube (three pairs of opposing sides/feet of 1-handles). To better see the octahedra, we have separated them in Fig.~\ref{octahedra}. Note that six are shown; the two missing ones, corresponding to $v_{000+}$ and $v_{000-}$, are the same ones that are missing from Fig.~\ref{handle1011}. As a matter of fact, the space between the described octahedra forms sixteen more octahedra, corresponding to the ideal vertices of the form $v_{****}$ (see one in Fig.~\ref{meridian5}).

A side-pairing $f:S\to S'$ of any of Ratcliffe and Tschantz's examples is always of the form $ru$, where $u$ is a composite of reflections in the coordinate planes, and $r$ is a reflection in $S'$. The restriction of $f$ to $\bd Q$ is all that matters to us, so $u$ explains how feet of 1-handles are identified. Note that conjugating the reflections in the hyperplanes $x_1=0$, $x_2=0$ and $x_3=0$ by $g$ gives the same reflections, while conjugating the reflection in $x_4=0$ gives the reflection in the unit sphere. Using the convention from \cite{Ivansic4} we name the side-pairings for $M_{1011}$ by the letters $a,b,\dots,k, l$ as follows. (The composite of reflections that pair the sides is under the arrow.)
\begin{displaymath}
\begin{array}{llll}
S_{++00} \arrtop{a}{-+++} S_{-+00}\hskip10pt & S_{+-00} \arrtop{b}{-+++} S_{--00}\hskip10pt & S_{+0+0} \arrtop{c}{++-+} S_{+0-0}\hskip10pt & S_{-0+0} \arrtop{d}{++-+} S_{-0-0} \\
S_{0++0} \arrtop{e}{----} S_{0--0}\hskip10pt & S_{0+-0} \arrtop{f}{----} S_{0-+0}\hskip10pt & S_{+00+} \arrtop{g}{----} S_{-00-}\hskip10pt & S_{+00-} \arrtop{h}{----} S_{-00+} \\
S_{0+0+} \arrtop{i}{+-++} S_{0-0+}\hskip10pt & S_{0+0-} \arrtop{j}{+-++} S_{0-0-} \hskip10pt & S_{00++} \arrtop{k}{+++-} S_{00+-}\hskip10pt & S_{00-+} \arrtop{l}{+++-} S_{00--}.
\end{array}
\end{displaymath}
Furthermore, for simplicity of notation, if a letter $s$ pairs two sides, we relabel the originating side by $S$ and $s(S)$ by $S'$. Thus $d$, whose $u$-part is the reflection in the plane $x_3=0$, sends side $D=S_{-0+0}$ to side $D'=S_{-0-0}$.

Let $G_{1011}\subset \Isom(\hiv)$ be the fundamental group of $M_{1011}$ and let $H_{1011}$ be the subgroup of orientation-preserving isometries in $G_{1011}$. Of course, $G_{1011}$ is generated by $a,b,\dots,l$. We are really interested in the orientable double cover $\tilde M_{1011}$ of $M_{1011}$, whose fundamental polyhedron consists of two copies of $Q$ with suitably paired sides.
It is easy to see (and is explained in \S3 of \cite{Ivansic3}) that the fundamental polyhedron for $H_{1011}$ is $Q\cup hQ$, where $h$ is one of the generators of $G_{1011}$ listed above, which, being orientation-reversing, is also a coset representative for the nontrivial right coset of $H_{1011}$ in $G_{1011}$. The discussion in \cite{Ivansic3} also shows that the sides of $Q\cup hQ$ are paired according to the following rule. Let $S$, $S'$ be sides of $Q$ paired by the transformation $s\in G_{1011}$. If $s$ is orientation-reversing, then side $S$ is paired to $hS'$ via $hs$ and side $hS$ is paired to $S'$ via $sh^{-1}$. If $s$ is orientation-preserving, then $S$ is paired to $S'$ via $s$ and $hS$ is paired to $hS'$ via $hsh^{-1}$.

\begin{figure}
\begin{center}
\resizebox{1.5in}{!}{\includegraphics{octahedra.eps}}
\caption{Octahedra appearing in Fig.~\ref{handle1011}, separated}
\label{octahedra}
\end{center}
\end{figure}

We may view the handle decomposition of $\tilde M_{1011}$ as having two 0-handles (corresponding to $Q$ and $hQ$) and a 1-handle joining them (coming from the paired sides $H'$ and $hH$). Since $Q$ and $hQ$ lie on opposite sides of the hyperplane $H'$, it is clear that the handle decomposition of $\tilde M_{1011}$ can be drawn by drawing two handle-decompositions of $\mbar_{1011}$ side-by-side while identifying $H'$ and $hH$. The same effect is achieved by drawing them side-by-side and introducing a 2-handle canceling the 1-handle coming from pairing $H'$ to $hH$. The handle decomposition for $\tilde M_{1011}$ is the entire diagram in Fig.~\ref{handle1011}. The part coming from $Q$ we take to be centered at 0, the part coming from $hQ$ we center at $(-6,0,0)$, the two portions being symmetric in the plane $x_1=-3$. To get proper labeling on the feet of 1-handles of the $hQ$-part, we recall that they are the result of applying $h=ru_{----}$ to $Q$; thus we need to reflect the picture on the right in the planes $x_1=0$, $x_2=0$, $x_3=0$ and the unit sphere centered at 0, and then apply the reflection $r$ in the plane $x_1=-3$. Putting together all the facts from above, the feet of the 1-handles in the decomposition in Fig.~\ref{handle1011} have the following identification pattern:
\begin{gather*}
A, B, C, D, I, J, K, L, hA, hB, hC, hD, hI, hJ, hK, hL \mapsto\\
A', B', C', D', I', J', K', L', hA', hB', hC', hD', hI', hJ', hK', hL'\\
\text{via reflection in the bisector of the feet}\\
E, F, G, H, E', F', G', H' \mapsto hE', hF', hG', hH', hE, hF, hG, hH\\
\text{via reflection in $x_1=-3$.}
\end{gather*}

The manifold $\tilde M_{1011}$ has 5 three-torus boundary components, each of them an $S^1$-fiber bundle over $T^2$. Closing off the boundary components involves filling in each fiber with a disc, resulting in a closed manifold $N$. Equivalently, this can be achieved by attaching a $T^2\times D^2$ to each component of $\bd \tilde M_{1011}$. A handle decomposition for $T^2\times D^2$ derived from the simplest handle decomposition for $T^2$ has one 0-handle, two 1-handles and one 2-handle. Attaching it to $\tilde M_{1011}$ results in adding one 2-handle, two 3-handles and one 4-handle to the decomposition, since the handles in the decomposition of $T^2\times D^2$ must be viewed in an upside-down way (see \cite{Gompf-Stipsicz}) as it is attached to $\tilde M_{1011}$ along the boundary. The attaching circle of the 2-handle is any fiber in the bundle.
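To make this count explicit (a brief bookkeeping note added here, using the same upside-down convention): a $k$-handle of $T^2\times D^2$, attached from the boundary side, becomes a $(4-k)$-handle of the resulting closed manifold, so each boundary component contributes
\begin{displaymath}
(\,\hbox{one } h^0,\ \hbox{two } h^1,\ \hbox{one } h^2\,) \;\longrightarrow\; (\,\hbox{one } h^4,\ \hbox{two } h^3,\ \hbox{one } h^2\,),
\end{displaymath}
and the five components together account for five 2-handles (the $m_i$ below), ten 3-handles and five 4-handles.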
As selected and illustrated in \cite{Ivansic3}, in four of the boundary components the fibers are represented by a straight-line segment joining opposing sides of the cube that is the vertex link; therefore, the attaching circle of the 2-handle is a line segment joining two opposite feet of 1-handles in the octahedron that corresponds to the vertex link. Since the parallel circle is another fiber, we may assume it is simply a parallel line segment. We now simplify the handle decomposition of $N$ using handle moves. We repeatedly make use of the following proposition: \begin{proposition} \label{cancel3handle} (\cite{Gompf-Stipsicz}, Proposition 5.1.9, modified) If the handle decomposition of a closed manifold contains an attaching circle of a 2-handle that can be isotoped so that it bounds a disc disjoint from the rest of the diagram, and the disc contains its parallel circle, then the 2-handle cancels a 3-handle from the decomposition (and we may erase the 2-handle from the diagram). \end{proposition} In order to better see what goes on in the complicated diagram we consider sections with the coordinate planes $x_1x_2$ and $x_1x_3$, the $x_2x_3$-plane and its parallel plane $x_1=-6$ (together, the ``$x_2x_3$-planes''), and the two spheres $S^2$, displayed in this order in Figures~\ref{step0}--\ref{step6}. Since pieces of the parallel circles lie on one of those surfaces, if an isotopy of the diagram stays parallel to the surface, those pieces remain in the surface, so they are easy to track. On occasion, pieces of parallel circles are not on the default surfaces --- it is either obvious where they are, or it is noted. \begin{figure} \begin{center} \includegraphics{octahedroncancellation.eps} \caption{Octahedron corresponding to $v_{0+00}$ after $AA'$ is canceled by a 2-handle.} \label{octahedroncancellation} \end{center} \end{figure} Handle moves are tracked in Figures~\ref{step0}--\ref{step6} by drawing what happens in the sections with the mentioned planes. The topmost box of the explanation describes which 1- and 2-handles have canceled. The middle four boxes describe subsequent isotopies, each of the four pertaining to the part of the diagram in the same relative position as the box. The bottom box describes cancellations of 2- and 3-handles owing to Proposition~\ref{cancel3handle}. Note that 1-handles are designated by labels on the corresponding paired sides. One might wonder whether an isotopy or a handle cancellation in one of the planes interferes with the situation in another; a little 3-dimensional insight shows that there is no problem. A picture such as Fig.~\ref{octahedroncancellation}, which is typical, may help the reader see what happens after a 1-handle cancels with a 2-handle. This picture shows the octahedron corresponding to $v_{0+00}$ after the 1-handle $AA'$ was canceled by a 2-handle. The initial handle decomposition also includes thirty-four 3-handles and five 4-handles. Twenty-four of the 3-handles come from cycles of 1-faces; the remaining ten 3-handles and the five 4-handles come from the handle decompositions of the attached $T^2\times D^2$'s. As explained in \cite{Gompf-Stipsicz}, \S4.4, because $N$ is a closed manifold, 3- and 4-handles can attach essentially in only one way, so there is no need to keep track of them. \begin{figure} \begin{center} \includegraphics{meridian5.eps} \caption{Position of $m_5$ in octahedron corresponding to $v_{++++}$} \label{meridian5} \end{center} \end{figure} After Fig.~\ref{step6}, the 3-dimensional diagram is simple enough that we can draw it in one picture.
In the steps thus far, we have not pictured the additional 2-handle coming from closing off the fifth boundary component. The choice $e^{-1}g$, made in \cite{Ivansic3}, is represented by a union of line segments: one joining the opposite sides $S_{+00+}$ and $S_{0++0}$ of the cube that is the vertex link at $v_{++++}$, and one joining the opposite sides $S_{0--0}$ and $S_{-00-}$ of the cube that is the vertex link at $v_{----}$. This corresponds to two segments in our diagram, one joining $E$ and $G$ and one joining $hE'$ and $hG'$. Fig.~\ref{meridian5} shows the first one in the ``octahedron'' corresponding to $v_{++++}$: we can see that none of the moves performed so far on the diagram affects it (in particular, it lies outside of the two $S^2$'s), so it can be drawn in the same position in the overall picture. After the final 2-handles and 3-handles cancel in step~12 of Fig.~\ref{step7}, we arrive at an empty diagram (one 0-handle and some 3- and 4-handles). Since this is the handle decomposition of the standard differentiable 4-sphere, we conclude that $N$ is diffeomorphic to it. Incidentally, by keeping track of the 3- and 4-handles throughout the computation we see that the final handle decomposition has four 3-handles and five 4-handles. However, since the boundary of their union is $S^3$, the union must be $D^4$ (otherwise the boundary would be a connected sum of $S^1\times S^2$'s). \pagebreak \begin{landscape} \begin{figure} \resizebox{8in}{!}{\includegraphics{step0.eps}} \caption{Step 0, initial handle decomposition} \label{step0} \end{figure} This is the initial setup. Altogether, there are twenty-four 1-handles, fifty-four 2-handles, and also thirty-four 3-handles and five 4-handles, whose attaching spheres we do not need to keep track of. Each of the 2-handles $m_1$, $m_2$, $m_3$ and $m_4$ (coming from attaching $T^2\times D^2$ to $\tilde M_{1011}$) passes exactly once over the 1-handles $AA'$, $JJ'$, $KK'$ and $CC'$, respectively, so those 2-handles cancel the 1-handles.
\vfill \pagebreak \begin{figure} \resizebox{8in}{!}{\includegraphics{step1.eps}} \caption{Step 1} \label{step1} \begin{tabular}{|l||l|} \hline \multicolumn{2}{|l|} {2-handles $m$, $m_1$, $m_2$ and $m_3$ cancel $H'hH$, $AA'$, $JJ'$ and $KK'$, respectively} \\ \hhline{:=t::=:} \parbox[t]{4in}{$n_1$ slides over the 1-handle $hGG'$ and off the foot $hG$ into the position indicated by the dotted line\\ $n_2$ isotopes into dotted position\\ $n_4$ slides over $II'$ and off foot $I'$ into dotted position\rule[-6pt]{0pt}{0pt}} & \parbox[t]{4in}{$n_6$ slides over $CC'$ and off foot $C'$ into dotted position\\ $n_5$ slides over $DD'$ and off foot $D'$ into dotted position} \\ \hhline{:=::=:} \parbox[t]{4in}{$n_7$ slides over $F'hF$ and off foot $hF$ into dotted position\\ $n_8$ slides over $EhE'$ and off foot $hE'$ into dotted position\\ $n$ slides over $LL'$ and off foot $L$ into dotted position\rule[-6pt]{0pt}{0pt}} & \parbox[t]{4in}{$n_9$ slides over $EhE'$ and off foot $hE'$ into dotted position\\ $n_{10}$ slides over $FhF'$ and off foot $hF'$ into dotted position} \\ \hhline{|-||-|} \end{tabular} \end{figure} \pagebreak \begin{figure} \resizebox{8in}{!}{\includegraphics{step2.eps}} \caption{Step 2} \label{step2} \begin{tabular}{|l||l|} \hline \multicolumn{2}{|l|}{$m_4$, $n_1$, $n_2$, $n_6$, $n_8$ and $n_9$ cancel $CC'$, $hIhI'$, $BB'$, $LL'$, $hLhL'$ and $hBhB'$, respectively} \\ \hhline{:=t::=:} \parbox[t]{4in}{$n_{11}$ slides over $hJhJ'$ and off $hJ$ into dotted position\\ $n_{12}$ isotopes into dotted position} & \parbox[t]{4in}{$n_{13}$ slides over $HhH'$ and off $hH'$ into dotted position\\ $n_{14}$ slides over $GhG'$ and off $hG'$ into dotted position\\ $n_{15}$ slides over $hDhD'$ and off $hD$ into dotted position\\ $n_{16}$ slides over $hChC'$ and off $hC'$ into dotted position\rule[-6pt]{0pt}{0pt}} \\ \hhline{:=::=:} \parbox[t]{4in}{$n_{19}$ slides over $FhF'$ and off $hF'$ into dotted position\\ $n_{20}$ slides over $E'hE$ and off $hE$ into dotted position} & \parbox[t]{4in}{$n_{17}$ isotopes into dotted position\\ $n_{21}$ slides over $F'hF$ and off $hF$ into dotted position\\ $n_{22}$ slides over $E'hE$ and off $hE$ into dotted position\rule[-6pt]{0pt}{0pt}} \\ \hhline{:=b::=:} \multicolumn{2}{|l|}{ Each of the 2-handles $n_3$, $n_4$, $n_5$, $n_7$ and $n_{10}$ cancels a 3-handle owing to Proposition~\ref{cancel3handle}} \\ \hline \end{tabular} \end{figure} \pagebreak \begin{figure} \resizebox{8in}{!}{\includegraphics{step3.eps}} \caption{Step 3} \label{step3} \begin{tabular}{|l||l|} \hline \multicolumn{2}{|l|} {$n_{13}$, $n_{17}$, $n_{19}$ cancel $hDhD'$, $hKhK'$, $DD'$, respectively} \\ \hhline{:=t::=:} \parbox[t]{4in}{isotopy simplifies picture} & \parbox[t]{4in}{$n_{18}$ isotopes into dotted position\\ $n_{24}$ slides over $G'hG$ and off $hG$ into dotted position\rule[-6pt]{0pt}{0pt}} \\ \hhline{:=::=:} \parbox[t]{4in}{isotopy simplifies picture} & \parbox[t]{4in}{$n_{23}$ isotopes\rule[-6pt]{0pt}{0pt}} \\ \hhline{:=b::=:} \multicolumn{2}{|l|} {$n_{14}$, $n_{15}$, $n_{16}$, $n_{20}$ and $n_{23}$ each cancel a 3-handle} \\ \hline \end{tabular} \end{figure} \pagebreak \begin{figure} \resizebox{8in}{!}{\includegraphics{step4.eps}} \caption{Step 4} \label{step4} \begin{tabular}{|l||l|} \hline \multicolumn{2}{|l|} {$n_{11}$, $n_{18}$ cancel $hAhA'$, $hChC'$, respectively} \\ \hhline{:=t::=:} \parbox[t]{4in}{\ } & \parbox[t]{4in}{isotopy simplifies picture\\ $n_{33}$ isotopes to dotted position\rule[-6pt]{0pt}{0pt}} \\ \hhline{:=::=:} \parbox[t]{4in}{} & \parbox[t]{4in}{$n_{25}$ isotopes\\ $n_{26}$
isotopes\rule[-6pt]{0pt}{0pt}} \\ \hhline{:=b::=:} \multicolumn{2}{|l|} {$n_{12}$, $n_{24}$, $n_{21}$, $n_{22}$, $n_{25}$ and $n_{26}$ each cancel a 3-handle} \\ \hline \end{tabular} \end{figure} \pagebreak \begin{figure} \resizebox{8in}{!}{\includegraphics{step5.eps}} \caption{Step 5} \label{step5} \begin{tabular}{|l||l|} \hline \multicolumn{2}{|l|} {$n_{33}$, $n_{27}$, $n_{28}$ cancel $G'hG$, $hJhJ'$, $II'$, respectively} \\ \hhline{:=t::=:} \parbox[t]{7in}{$n_{31}$ rises in part above the $x_1x_2$-plane, spanning a surface like an arched roof that contains its parallel circle\\ $n_{32}$ rises in part above the $x_1x_2$-plane, slides over $GhG'$ and off $G$ into dotted position\\ $n_{35}$ isotopes into dotted position\rule[-6pt]{0pt}{0pt}} & \parbox[t]{1in}{} \\ \hhline{:=::=:} \parbox[t]{7in}{both branches of $n_{39}$ are isotoped by rotating by $180^\circ$ around the axis joining their endpoints --- the dots representing where $n_{37}$ and $n_{38}$ cross these planes show they do not interfere with the isotopy\rule[-6pt]{0pt}{0pt}} & \parbox[t]{1in}{} \\ \hhline{:=b::=:} \multicolumn{2}{|l|} {\parbox[t]{8in}{$n_{31}$, $n_{32}$ and $n_{34}$ each cancel a 3-handle; $n_{29}$ and $n_{30}$ can be separated from the rest of the diagram by pulling them in the $x_1$-direction, and then each cancels a 3-handle}} \\ \hline \end{tabular} \end{figure} \pagebreak \begin{figure} \resizebox{8in}{!}{\includegraphics{step6.eps}} \caption{Step 6} \label{step6} \begin{tabular}{|l||l|} \hline \multicolumn{2}{|l|} {$n_{35}$ cancels $HhH'$} \\ \hhline{:=t::=:} \parbox[t]{4in}{$n_{37}$ and $n_{38}$ are isotoped up and down, respectively\rule[-6pt]{0pt}{0pt}} & \parbox[t]{4in}{} \\ \hhline{:=::=:} \parbox[t]{4in}{} & \parbox[t]{4in}{$n_{40}$, $n_{41}$, $n_{42}$ and $n_{43}$ are isotoped along the $S^2$'s so they lie, along with their parallel circles, in planes parallel to the $x_1x_3$-plane --- note that $n_{37}$ and $n_{38}$ do not interfere with this in their new position\rule[-6pt]{0pt}{0pt}} \\ \hhline{:=b::=:} \multicolumn{2}{|l|} {$n_{36}$ cancels a 3-handle} \\ \hline \end{tabular} \end{figure} \pagebreak \begin{figure} \resizebox{8in}{!}{\includegraphics{step7.eps}} \caption{Steps 7--12} \label{step7} \begin{tabular}{|l||l|} \hhline{|-||-|} \parbox[t]{4.1in}{$n_{41}$ isotopes to dotted positions\\ the parallel curve of $m_5$ can be chosen right above $m_5$} & \parbox[t]{4.1in}{$n_{41}$ cancels $F'hF$, $n_{44}$ cancels $GhG'$\\ $n_{43}$ isotopes to dotted positions\\ $n_{40}$ slides over $E'hE$ and off $hE$, then cancels a 3-handle\\ $n_{45}$ cancels a 3-handle\rule[-6pt]{0pt}{0pt}} \\ \hhline{|-||-|} \end{tabular} \begin{tabular}{|l||l||l|} \hhline{|-||-||-|} \parbox[t]{2in}{$m_5$ isotopes\\ $n_{43}$ cancels $FhF'$\\ $n_{42}$ slides over $EhE'$ and off~$E$, then cancels a 3-handle} & \parbox[t]{2.5in}{$m_5$ cancels $EhE'$\\ $n_{37}$ and $n_{38}$ can be separated from\\ the diagram and cancel 3-handles} & \parbox[t]{3.5in}{$n_{39}$, $n_{46}$, $n_{47}$ and $n_{48}$ can be isotoped so they all lie in a plane, along with their parallel curves\\ $n_{46}$ cancels $E'hE$\\ $n_{39}$, $n_{47}$ and $n_{48}$ cancel 3-handles\rule[-6pt]{0pt}{0pt}} \\ \hhline{|-||-||-|} \end{tabular} \end{figure} \end{landscape} \pagebreak
\section{Introduction} During the late Quaternary period, after the last glacial maximum (LGM), from 18 kyr ago until the present, global warming was responsible for the melting of the glaciers, leading to a fast increase in the sea level. In approximately 13 kyr, the sea level rose by about 120 meters, reaching its present level. However, the sea level did not go up in a continuous fashion; rather, it evolved in a pulsatile way, leaving behind a signature of what actually happened on the Continental Shelf, i.e.\ the seafloor. Continental shelves are located at the boundary with the land, so they are shaped by both marine and terrestrial processes. Sea-level oscillations incessantly transform terrestrial areas into marine environments and vice-versa, thus increasing the landscape complexity \cite{lambeck:2001}. The presence of regions with abnormal slope as well as the presence of terraces on a Continental Shelf are indicators of sea level positions after the Last Glacial Maximum (LGM), when large ice sheets covered high latitudes of Europe and North America, and sea levels stood about 120--130m lower than today \cite{heinrich:1988}. Geomorphic processes responsible for the formation of these terraces and discontinuities in the bottom topography of the sea are linked to the coastal dynamics during eustatic processes associated with either erosional or depositional forcing (wave-cut and wave-built terraces, respectively \cite{goslar:2000}). The irregular distribution of such terraces and shoreface sediments is mainly controlled by the relationship between shelf paleo-physiography and changes in the sea level and sediment supply, which reflect both global and local processes. Several works have dealt with mapping and modeling the distribution of shelf terraces in order to understand the environmental consequences of climate change and sea level variations after the LGM \cite{adams:1999,andrews:1998,broecker:1998}. In this period of time the sea-level transgression was punctuated by at least six relatively short flooding events that collectively accounted for more than 90m of the 120m rise. Most (but not all) of the floodings appear to correspond to paleoclimatic events recorded in Greenland and Antarctic ice-cores, indicative of the close coupling between rapid climate change, glacial melt, and corresponding sea-level rise \cite{taylor:1997}. In this work, we analyze data from the Southeastern Brazilian Continental Shelf (SBCS), located in a typical sandy passive margin with a predominance of palimpsest sediments. The mean length is approximately 250km and the shelfbreak is located at 150m depth. It is a portion of a greater geomorphologic region of the southeastern Brazilian coast called the S\~ao Paulo Bight, an arc-shaped part of the southeastern Brazilian margin. The geology and topography of the submerged area are very peculiar, shaped by the Mesozoic/Cenozoic tectonic processes that generated the mountainous landscapes known as ``Serra do Mar''. These landscapes (with mean altitudes of 800m) have a complex pattern that characterizes the coastal morphology and leads to several scarps intercalated with small coastal plains and pocket beaches. This particular characteristic determines the development of several small fluvial basins and the absence of major rivers, resulting in a low sediment input, which tends to preserve topographic signatures of the sea-level variations.
For the purpose of the present study, we select three parallel profiles acquired from echo-sounding surveys, since the same sequence of terraces was found in all the considered profiles. These profiles \cite{furtado:1992,conti:2001,correa:1996} are transversal to the coastline and the isobath trend, and they extend from a 20m to a 120m depth. The importance of understanding the formation of these ridges is that it can tell us about the coastal morphodynamic conditions, inner shelf processes and the characteristics of the periods when the sea level stood still (paleoshores). In particular, the widths of the terraces are related to the time the sea level remained ``stabilized''. All this information is vital for a better understanding of the late Quaternary climate change dynamics. We find relations between the widths of the terraces that follow a self-affine pattern. These relations are given by a mathematical model, which describes an order of appearance for the terraces. Our results suggest that this geomorphological structure of the terraces can be described by a devil's staircase \cite{mandelbrot:1977}, a staircase with infinitely many steps in between any two steps. This property gives the name ``devil'' to the staircase, since an idealized being would take an infinite time to go from one step to another. So, the seafloor morphology is self-affine (a fractal structure), as reported in Refs. \cite{herzfeld,goff}, but according to our findings it has a special kind of self-affine structure, the devil's staircase structure. A devil's staircase, as well as other self-affine structures, is the response of an oscillatory system when excited by some external force. The presence of a step means that, while some internal parameter is varied, the system preserves some averaged regular behavior, a consequence of the stable frequency-locking regime between a natural frequency of the system and the frequency of the excitation. This staircase, as well as other self-affine structures, is characterized by the presence of steps whose widths are directly related to the rational ratio between the natural frequency of the system and the frequency of the excitation. In a similar fashion, we associate the widths of the terraces with rational numbers that represent two hypothetical frequencies of oscillation which are assumed to exist in the system that creates the structure of the SBCS, here regarded as the sea level dynamics (SLD), also known as the sea level variations. Then, once these rational numbers are found, we show that the relative distances between triples of terraces (associated with some hypothetical frequencies) follow scalings similar to those found in the relative distances between triples of plateaus (associated with these same frequencies) observed in the devil's staircase. The true structure of the seafloor, apart from the dynamics that originated it, is also a very relevant issue, especially for practical applications. For example, one can measure the seafloor at one resolution and then reconstruct the rest based on some modeling \cite{mareschal}. As we show in this work (Sec. \ref{model}), a devil's staircase structure fits the experimental data remarkably well. Our paper is organized as follows. In Sec. \ref{data}, we describe the data to be analyzed. In Sec. \ref{devil}, we describe which kind of dynamical systems can create a devil's staircase and how one can detect its presence in experimental data based on only a few observations. In Sec.
\ref{devil_in_data}, we show the evidence that led us to characterize the SBCS as a devil's staircase, and in Sec. \ref{model} we show how to construct seafloor profiles based on the devil's staircase geometry. Finally, in Sec. \ref{conclusao}, we present our conclusions, also discussing possible scenarios for the future of the sea level dynamics in the perspective of our findings. \section{Data}\label{data} \begin{figure} \centerline{\hbox{\psfig{file=meco_fig21.ps,height=9.0cm,width=9cm}}} \caption{[Color online] Profiles (depth versus distance to the coast) of the Southeastern Brazilian Continental Shelf. The arrows indicate the terraces considered in our analyses. The profile shown with a thick black line is the profile chosen for our derivations, reproduced also in (B). The other two profiles had the original positions of their two axes shifted by a constant value, so that the terraces observed in the chosen profile can also be identified in these other two. Note that a translation of the profiles by a constant value has no effect on any of the scalings observed. The reason for this mismatch between the profiles is the local geometry of the coast at the time the sea reached that level.} \label{meco_fig1} \end{figure} The data consist of the three profiles given in Fig. \ref{meco_fig1}(A-B). The profile considered for our analyses is shown in Fig. \ref{meco_fig1}(B), where we show the Continental Shelf of the State of S{\~a}o Paulo, in a transversal cut in the direction: inner shelf (``coast'') $\rightarrow$ shelfbreak (``open sea''). The horizontal axis represents the distance to the coast and the vertical axis the sea level (depth), $d$. We are interested in the terrace widths and their respective depths. The profiles shown in Fig. \ref{meco_fig1} are the result of a smoothing (filtering) process applied to the original data collected by sonar \cite{note3}. The smoothing process is needed to eliminate from the measured data the influence of the oscillations of the ship where the sonar is located, and of local oscillations of the sea floor probably due to the stream flows. Smaller topographic terraces could be smoothed out or masked by several processes, such as coastal erosional dynamics during sea-level rise, Holocene sediment cover, and erosional processes associated with the modern hydrodynamic pattern (geostrophic currents). For that reason we only consider the largest ones, such as the one shown in Fig. \ref{meco_fig2} (located at $d=-30.01$m, with a width of $l=6.06$km). As one can see, the edges of the terraces are not as sharp as one would expect from a staircase plateau. Again, this is due to the action of the sea waves and stream flows throughout time. To reconstruct what we believe to be the original terrace, we consider that its depth is given by the depth of the middle point, and its width is given by the minimal distance between two parallel lines placed along the scarps of the terrace edges. Using this procedure, we construct Table \ref{table1} with the largest and most relevant terraces found. We identify a terrace by introducing a subscript $n$ in $l$ and $d$, according to the chronological order of appearance: the more recent the terrace (closer to the coast, less deep), the smaller the index $n$. We consider the most recent data to have zero distance from the coast, but in fact these data are positioned about 15km away from the shore, where the bottom of the sea is not affected by the turbulent zone caused by the breaking of the waves. The profile of Fig.
\ref{meco_fig1}(B) was chosen among the three profiles because in it we could most clearly identify the largest number of relevant terraces \cite{note3}. \begin{figure} \centerline{\hbox{\psfig{file=meco_fig2.ps,width=9.0cm}}} \caption{Reconstruction of the terraces. The width of a terrace is given by the minimal distance between the two parallel dashed lines. } \label{meco_fig2} \end{figure} \begin{center} \begin{table} \begin{tabular}{c|c|c} $n$ & $d_n (m)$ & $l_n (km)$ \\ \hline 1 & -30.01 & 6.06 $\pm$ 0.05 \\ 2 & -41.86 & 1.59 $\pm$ 0.05 \\ 3 & -54.01 & 2.93 $\pm$ 0.05 \\ 4 & -61.14 & 1.73 $\pm$ 0.05 \\ 5 & -66.69 & 2.21 $\pm$ 0.05 \\ 6 & -74.33 & 0.80 $\pm$ 0.1 \\ 7 & -79.75 & 0.80 $\pm$ 0.1 \\ 8 & -85.30 & 0.80 $\pm$ 0.1 \\ \hline \end{tabular} \caption{Terrace widths and depths. While the depths present no significant deviation, the deviation in the widths becomes larger for deeper terraces. The deviation in the widths is estimated by calculating the widths for many possible placements of the two parallel lines used to define them.} \label{table1} \end{table} \end{center} \section{The Devil's Staircase}\label{devil} Frequency-locking is a resonant response occurring in systems of coupled oscillators or oscillators coupled to external forces. The first relevant report of this phenomenon was given by Christiaan Huygens in the 17th century. He observed that two clocks hanging back to back on a wall, set initially with slightly different frequencies, would have their oscillations coupled by the energy transferred through the wall, and would eventually have their frequencies synchronized. Usually, we expect that a harmonic, $P w_1$, of one oscillatory system locks with a harmonic, $Q w_2$, of the other oscillatory system, leading to a locked system working at the rational ratio $P/Q$ \cite{jensen:1984}. To understand the dynamics responsible for the onset of a frequency-locked oscillation, that is, the reasons for which a system either locks or unlocks, we present the simplest model one can come up with to describe a more general oscillator. This model is described by an angle $\theta$, which is changed (after one period) to the angle $f(\theta)$. So, $f(\theta)=\theta + \Omega$. In order to introduce an external force in the oscillator, also modeling possible physical interactions with other oscillators, a resonant term, $g$, is added to this model, resulting in the following map \begin{equation} f(\theta)=\theta+\Omega-g(\theta,K) \mbox{\ \ \ } (mod \mbox{\ } 1), \label{circle_map} \end{equation} \noindent where \begin{equation} g(\theta,K) = \frac{K}{2\pi}\sin{2 \pi \theta}. \label{g_theta} \end{equation} Despite the simplicity of this map, the same cannot be said of its dynamics \cite{argyris:1994}. Arnold (see Ref. \cite{arnold:1965}) studied this map in detail, aiming to understand how an oscillatory system settles into a stable periodic state when perturbed by an external perturbation. For $K=0$, Eq. (\ref{circle_map}) represents a pure rotation, which is topologically equivalent to a twice continuously differentiable, orientation-preserving mapping of the circle onto itself [Theorem of Denjoy, see Ref. \cite{arnold:1988}]. Therefore, the simple Eq. (\ref{circle_map}) can be considered as a model for many types of oscillatory systems. In fact, Eq. (\ref{circle_map}) represents a more complicated system, a three-dimensional torus with frequencies $w_1$ and $w_2$, when viewed through a Poincar{\'e} map.
Thus, $\Omega$ in Eq. (\ref{circle_map}) represents the ratio $w_1/w_2$. When $w_1/w_2=p/q$ (with $p \leq q$) is rational, this map has a periodic motion and its trajectory, i.e.\ the value of $\theta$, assumes the same value after $q$ iterations. For $K=0$, the so-called winding number $W$ is exactly equal to $\Omega$, i.e.\ $W$=$p/q$. For $K \neq 0$ (the nonlinear case) $W$ is defined by {\small \begin{equation} W(\Omega,K)= \lim_{n \to \infty} \frac{h(\theta_0,K)+h(\theta_1,K)+\ldots+h(\theta_{n-1},K)}{n}, \label{winding_number} \end{equation}} \noindent where \begin{equation} h(\theta,K)=\Omega-g(\theta,K). \label{h} \end{equation} For $K<1$, Eq. (\ref{circle_map}) is monotonic and invertible. For $K=1$, it develops a cubic inflection point at $\theta=0$. The map is still invertible but the inverse has a singularity. For $K>1$ the map is non-invertible. \begin{figure} \centerline{\hbox{\psfig{file=meco_fig6.ps,width=8cm}}} \caption{A complete devil's staircase, obtained from Eq. (\ref{circle_map}), for $K$=1.} \label{meco_fig6} \end{figure} Arnold wanted to understand how periodic oscillations appear as one increases $K$ from zero to positive values. He observed that a quasi-periodic oscillation, for an irrational $\Omega$ and $K=0$, would turn into a periodic oscillation as one varies $K$ from zero to positive values. He demonstrated that a periodic oscillation has probability zero of being found for $K=0$ (the set of rational numbers is countable while the set of irrational numbers is uncountable) and positive probability of being found for $K>0$. He also observed that, for a fixed positive value of $K$, the winding number $W$ [Eq. (\ref{winding_number})] is a continuous but not differentiable function of $\Omega$, as one can see in Fig. \ref{meco_fig6}, forming a stair-like structure. If $W(\Omega,K)$ is rational, it can be represented by the ratio of two integer numbers as $W=\frac{P}{Q}$. At this point, the frequency $\Omega$ and the frequency of the function $g$ are locked, producing the phenomenon of frequency-locking, in which $W(K,\Omega)$ does not change its value within an interval $\Delta \Omega$ (a plateau) of values of $\Omega$. In fact, the smaller the denominator $Q$, the larger the interval $\Delta \Omega$. As one changes $\Omega$, plateaus for rational $W$ appear following a natural order described by the Farey mediant. Given two plateaus that represent a $\frac{P_1}{Q_1}$ and a $\frac{P_3}{Q_3}$ winding number, with plateau widths $\Delta \Omega_1$ and $\Delta \Omega_3$, respectively, there exists another plateau positioned at a winding number $W$ within the interval $[\frac{P_1}{Q_1},\frac{P_3}{Q_3}]$ given by \begin{equation} \frac{P_2}{Q_2}=\frac{P_1+P_3}{Q_1+Q_3}. \label{farey_mediant} \end{equation} \noindent The Farey mediant gives the rational with the smallest integer denominator within the interval $[\frac{P_1}{Q_1},\frac{P_3}{Q_3}]$. Therefore, the plateau $\Delta \Omega (\frac{P_2}{Q_2})$ is smaller than $\Delta \Omega (\frac{P_1}{Q_1})$ and $\Delta \Omega (\frac{P_3}{Q_3})$, but is bigger than any other possible plateau in between. Organizing the rationals according to the Farey mediant creates a hierarchy of rationals, called the Farey Tree. The plateaus $\Delta \Omega_1$ and $\Delta \Omega_3$ are regarded as the parents, and $\Delta \Omega_2$ as the daughter plateau. The interesting case of Eq. (\ref{circle_map}) for our purposes is exactly $K$=1.
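A minimal numerical sketch may help fix ideas. The short Python code below (our illustration, not part of the original analysis) estimates $W(\Omega,K)$ by iterating Eq. (\ref{circle_map}) and averaging the increments of the lift, and implements the Farey mediant of Eq. (\ref{farey_mediant}); at $K=1$, nearby values of $\Omega$ around $1/2$ return the same locked value of $W$, a plateau of the staircase of Fig. \ref{meco_fig6}.

\begin{verbatim}
# Minimal sketch (our illustration): the winding number of the circle
# map, Eq. (circle_map), estimated as the average increment of the lift,
# and the Farey mediant, Eq. (farey_mediant).
import math

def winding_number(omega, K, n_iter=4000, n_trans=500):
    theta = 0.0
    for _ in range(n_trans):           # discard a transient
        theta = (theta + omega
                 - (K / (2 * math.pi)) * math.sin(2 * math.pi * theta)) % 1.0
    total = 0.0
    for _ in range(n_iter):            # average the lift increment
        step = omega - (K / (2 * math.pi)) * math.sin(2 * math.pi * theta)
        total += step
        theta = (theta + step) % 1.0   # the (mod 1) of Eq. (circle_map)
    return total / n_iter

def farey_mediant(p1, q1, p3, q3):
    """Daughter plateau between the parents p1/q1 and p3/q3."""
    return p1 + p3, q1 + q3

# At K = 1 these values of Omega all fall on the same 1/2 plateau:
print([round(winding_number(om, 1.0), 3) for om in (0.48, 0.50, 0.52)])
# The daughter of the plateaus 1/8 and 1/9 is 2/17 (used in Section 4):
print(farey_mediant(1, 8, 1, 9))
\end{verbatim}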
For that case, one can find periodic orbits with any possible rational winding number, as one varies $\Omega$. This results in a zero (probability) measure of finding quasi-periodic oscillations in Eq. (\ref{circle_map}) for a randomly chosen value of $\Omega$. Also, for $K>1$, due to the overlap of resonances (periodic oscillations), chaos is possible. The devil's staircase can be fully characterized by the relations between the plateau widths and the relations between the gaps between them. While the plateau widths are linked to the probability of finding periodic oscillations, the widths of the gaps between plateaus are linked to the probability of finding quasi-periodic oscillations in Eq. (\ref{circle_map}). There are many scaling laws relating the plateau widths \cite{jensen:1983,jensen:1984,cvitanovic:1985}. There are local scalings, which relate the widths of plateaus that appear close to a specific winding number, for example the famous golden mean $W_G=\frac{\sqrt{5}-1}{2}$. However, we will focus our attention on the global scalings, which can be experimentally observed, and only for the case $K$=1. For this case, we are interested in two scalings: the one that relates the plateau widths to the respective winding numbers of the form $\frac{1}{Q}$ (the largest plateaus), and the one that describes the structure of the set complementary to the plateaus $\Delta \Omega$, i.e.\ the structure of the gaps between plateaus. The structure of the plateaus is a Cantor set, as is the structure of the complementary set. Therefore, a characterization of these sets can be done in terms of the fractal dimension $D_0$ \cite{farmer:1983} of the complementary set. The first scaling is \cite{jensen:1984} \begin{equation} \Delta \Omega(\frac{1}{Q}) \propto \frac{1}{Q^\gamma} \mbox{\ \ } (\gamma>3). \label{scaling_1} \end{equation} \noindent The second scaling relates the widths of the complementary set as one goes to smaller and smaller scales. These widths are related by a power-scaling law whose coefficient $D_0$ is the fractal dimension of the complementary set. For $K=1$, the fractal dimension of the complementary set is $D_0 \cong 0.87$. This is a universal scaling. Since the complementary set of the plateaus represents the irrational rotations, the smaller its fractal dimension, the smaller the probability of finding quasi-periodic oscillations. For experimental data, the determination of $D_0$ is difficult because the dimension measures a microscopic quantity of the plateau widths, and in experimental data one can only observe the largest plateaus. Fortunately, an approximation $D^{\prime}$ of $D_0$ can be obtained from the largest plateaus by using the idea proposed in \cite{hentschel:1983}, \begin{equation} \left(\frac{S^{\prime}}{S}\right)^{D^{\prime}} + \left(\frac{S^{\prime\prime}}{S} \right)^{D^{\prime}} = 1, \label{scaling_2} \end{equation} \noindent where $S^{\prime}$, $S$, and $S^{\prime\prime}$ are represented in Fig. \ref{meco_fig6_1}. \begin{figure} \centerline{\hbox{\psfig{file=meco_fig6_1.ps,width=8.0cm}}} \caption{Representation of the gaps $S^{\prime\prime}$, $S^{\prime}$, and $S$, used to estimate the fractal dimension $D_0$ of the complementary set, using Eq. (\ref{scaling_2}).} \label{meco_fig6_1} \end{figure} In case $K \cong 1$ ($K<1$), we do not have a complete devil's staircase. In other words, winding numbers with denominators larger than a given ${\widetilde{Q}}$ are cut off from the Farey Tree.
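Since the left hand side of Eq. (\ref{scaling_2}) decreases monotonically in $D^{\prime}$, the equation can be solved by simple bisection. The sketch below (our illustration; the gap lengths are made-up placeholders, not the values measured from Fig. \ref{meco_fig6_1}) shows the procedure.

\begin{verbatim}
# Minimal sketch: solve (S1/S)**D + (S2/S)**D = 1 for D by bisection.
def fractal_dim(S, S1, S2, tol=1e-10):
    lo, hi = 0.0, 2.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        # the left hand side decreases as D grows (0 < S1/S, S2/S < 1)
        if (S1 / S) ** mid + (S2 / S) ** mid > 1.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Placeholder gaps: two sub-gaps of 0.45 inside a parent gap of size 1
# already give a value close to the universal D0 ~ 0.87:
print(round(fractal_dim(1.0, 0.45, 0.45), 3))   # -> 0.868
\end{verbatim}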
Using this information we can estimate the value of $K$ through the largest denominator observed \cite{jensen:1984} \begin{equation} {\widetilde{Q}} \geq \frac{1}{1-K}. \label{cutoff} \end{equation} Finally, we would like to stress that while in Eq. (\ref{circle_map}) the plateaus of the devil's staircase are positioned at winding numbers defined by Eq. (\ref{winding_number}), in nature devil's staircases have plateaus positioned at some accessible measurement. In the driven Rayleigh-B{\'e}nard experiment \cite{stavans:1985,jensen:1985}, convection rolls appear in a small brick-shaped cell filled with mercury, for a critical temperature difference between the upper and lower plates. As one perturbs the cell by a constant external magnetic field parallel to the axes of the rolls and by the introduction of an AC electrical current sheet pulsating with a frequency $f_{e}$ and amplitude $B$, a devil's staircase is found in the variable $f_i$, the main frequency of the power spectrum of the fluid velocity. As one varies the external frequency $f_{e}$, stable oscillations take place at a frequency ratio $f_i/f_e$, for a given value of $f_e$. In analogy with the devil's staircase of Eq. (\ref{circle_map}), $f_e$ should be thought of as playing the same role as $\Omega$ in Eq. (\ref{circle_map}), and the ratio $f_i/f_e$ as playing the same role as the winding number $W$. A devil's staircase can also be observed (see Ref. \cite{baptista:2004}) in the amount of information $H$ (topological entropy) that an unstable chaotic set has in terms of an interval of size $\epsilon$ used to create the set. To generate the unstable chaotic set, we eliminate all possible trajectories of a stable chaotic set that visit this interval $\epsilon$. In analogy with the devil's staircase of the circle map, $\epsilon$ should be related to $\Delta \Omega$, and $H$ to $W$. The first proof of a complete devil's staircase in a physical model was given in Ref. \cite{bak:1982}, in the one-dimensional Ising model with convex long-range antiferromagnetic interactions. In Ref. \cite{jin:1994}, it was found that a model for El Ni{\~n}o, a phenomenon that is the result of a tropical ocean-atmosphere interaction nonlinearly coupled with the Earth's annual cycle, could undergo a transition to chaos through a series of frequency-locked steps. The overlapping of these resonances, which are the steps of the devil's staircase, leads to the chaotic behavior. \section{A devil's staircase in the Southeastern Brazilian Continental Shelf}\label{devil_in_data} The main premise guiding the application of the devil's staircase model to the Shelf is that the quantities related to the widths and depths of the terraces obey the same rules found in a complete devil's staircase for the frequency-locked intervals $\Delta \Omega$ and their rational winding numbers, $W$. Thus, we assume that the terrace widths $l_n$ play the same role as the frequency-locked intervals $\Delta \Omega$, and the terrace depths play the same role as the rational winding number $W$. In order to interpret the Shelf as a devil's staircase, we have to show that the terraces appear in positions which respect the Farey mediant, the rule that describes the winding number ``positions'' of the many plateaus. For that, we verify whether the positions of the terraces at $d_n$ can be associated with hypothetical frequency ratios, denoted by $w_n=\frac{p_n}{q_n}$, which respect the Farey mediant.
In doing so, we require that the metric of three adjacent terraces respects the Farey mediant. In addition, we also assume that the larger terraces are the parents in the Farey Tree, while the smaller terrace between two larger ones is the daughter. Thus, for each triple of terraces, we require that \begin{equation} \frac{d_{n+2}+d_{n}}{d_{n+1}}= \frac{w_{n+2}+w_{n}}{w_{n+1}}. \label{metric_equivalence} \end{equation} \noindent One could have considered other ways to relate the depths and the frequency ratios. The one chosen in Eq. (\ref{metric_equivalence}) is used in order to account for the fact that while $d_n$ is negative, $w_n$ is not. From Eq. (\ref{metric_equivalence}) it becomes clear that, for the further analysis, the depth of a particular terrace does not play as important a role as the ratio between the depths of a triple of terraces containing this particular terrace. These ratios may eliminate the influence of the local morphology and of the local sea level dynamics on the formation of the Shelf. Therefore, the proposed quantity might be suitable for an integrated analysis of different Shelves all over the world, especially the ones affected by local geomorphological characteristics. From the Farey mediant, we have a way to obtain the frequency ratios associated with each terrace, \begin{equation} w_{n+1}=\frac{p_n + p_{n+2}}{q_n + q_{n+2}}. \label{farey_rule} \end{equation} \noindent Therefore, combining Eq. (\ref{metric_equivalence}) and Eq. (\ref{farey_rule}), we obtain \begin{equation} \frac{p_n + p_{n+2}}{q_n + q_{n+2}}=\frac{\frac{p_{n+2}}{q_{n+2}}+ \frac{p_{n}}{q_{n}}}{E}, \label{regra_terrace_1} \end{equation} \noindent which results in \begin{equation} (p_n+p_{n+2})(E-1)q_{n}q_{n+2}=p_{n}q_{n+2}^2 + q_n^2p_{n+2}, \label{regra_terrace_2} \end{equation} \noindent where $E$ is defined by \begin{equation} \frac{d_{n+2}+d_{n}}{d_{n+1}}=E. \label{regra_dados} \end{equation} \begin{center} \begin{table} \begin{tabular}{c|c|c} $n$ & $p_n$ & $q_n$ \\ \hline 1 & 1 & 8 \\ 2 & 2 & 17 \\ 3 & 1 & 9 \\ 4 & - & - \\ 5 & - & - \\ 6 & 1 & 17 \\ 7 & 2 & 35 \\ 8 & 1 & 18 \\ \hline \end{tabular} \caption{Integers associated with the $n$ considered terraces, with $n=1,\ldots,8$.} \label{table2} \end{table} \end{center} We do not expect Eq. (\ref{regra_terrace_2}) to be satisfied exactly. We only require that the difference between the left and right hand sides of this equation, regarded as $\delta \epsilon$, is the lowest possible among all possible values of $p_m$ and $q_m$ (with $m=n,n+2$), for a given $E$, with the restriction that the considered largest terraces are related to the largest plateaus of Eq. (\ref{circle_map}), and thus $p_{m+2}$=$p_{m}$ and $q_{m+2}$=$q_{m}+1$, and $\delta \epsilon \ll 1$. Doing so, we find the rationals associated with the terraces, which are shown in Table \ref{table2}. The minimal value of $\delta \epsilon$, denoted by $\min{[\delta \epsilon]}$, is $\min{[\delta \epsilon(d_1,d_2,d_3)]}$=0.032002, with $p_1/q_1$=$1/8$ for terrace 1, and $p_3/q_3$=$1/9$ for terrace 3. We also find that $\min{[\delta \epsilon(d_6,d_7,d_8)]}$=0.002344, with $p_6/q_6$=$1/17$ for terrace 6, and $p_8/q_8$=$1/18$ for terrace 8. These minimal values can be seen in Figs. \ref{meco_fig20}(A-B), where we show the values of $\delta \epsilon(d_1,d_2,d_3)$ [in (A)] and the values of $\delta \epsilon(d_6,d_7,d_8)$ [in (B)] for different values of $p$ and $q$. Using larger values of $p$ only increases the value of $\delta \epsilon$. A brute-force version of this search is sketched below.
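The sketch below (our illustration, not the authors' code) scans $p$ and $q$ under the restriction $p_{n+2}=p_n$, $q_{n+2}=q_n+1$, with $E$ computed from the depths of Table \ref{table1}; its minima recover the parents $1/8$ and $1/9$ for the triple $(d_1,d_2,d_3)$, and $1/17$ and $1/18$ for $(d_6,d_7,d_8)$.

\begin{verbatim}
# Minimal sketch: brute-force minimization of delta-epsilon, the
# mismatch between the two sides of Eq. (regra_terrace_2), under the
# restriction p3 = p1, q3 = q1 + 1.
def delta_eps(E, p1, q1, p3, q3):
    lhs = (p1 + p3) * (E - 1.0) * q1 * q3
    rhs = p1 * q3 ** 2 + q1 ** 2 * p3
    return abs(lhs - rhs)

def best_parents(d_n, d_mid, d_far, pmax=50, qmax=400):
    E = (d_n + d_far) / d_mid          # Eq. (regra_dados)
    return min((delta_eps(E, p, q, p, q + 1), p, q)
               for p in range(1, pmax + 1)
               for q in range(1, qmax + 1))

# Depths (in meters) of terraces 1,2,3 and 6,7,8 from Table 1:
print(best_parents(-30.01, -41.86, -54.01))  # min at p=1, q=8: parents 1/8, 1/9
print(best_parents(-74.33, -79.75, -85.30))  # min at p=1, q=17: parents 1/17, 1/18
\end{verbatim}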
We have not identified rationals that can be associated with terraces $4$ and $5$: for $p$ and $q$ within $p=[1,50]$ and $q=[1,400]$, we find that $\delta \epsilon > 1$. We have assumed that they could be either a daughter or a parent. From now on, when convenient, we will drop the index $n$ and represent each terrace by the associated frequency ratio. So, terrace 1, for $n$=1, is represented as the terrace with $w=1/8$. \begin{figure} \centerline{\hbox{\psfig{file=meco_fig20.ps,width=7.0cm}}} \caption{[Color online] (A) Values of $\delta \epsilon(d_1,d_2,d_3)$ and (B) $\delta \epsilon(d_6,d_7,d_8)$, for different values of $p$ and $q$. $\delta \epsilon$ is the difference between the left and the right hand sides of Eq. (\ref{regra_terrace_2}).} \label{meco_fig20} \end{figure} Table \ref{table2} can be represented in the form of a Farey Tree, as shown in Fig. \ref{meco_fig3}. The rationals of the form $1/q$ belong to the most stable branch of the Farey Tree, which means that the observed terraces should have the largest widths. We believe that the other, less important branches of the complete devil's staircase present in the data were smoothed out by the action of the waves and the stream flows throughout time, and at the present time cannot be observed. Notice that as time goes by, the frequency ratios increase in absolute value, which means that if this tendency is preserved in the future, we should expect to see larger terraces. \begin{figure} \centerline{\hbox{\psfig{file=meco_fig3.ps,width=7.0cm}}} \caption{Farey Tree representing the frequency ratios associated with the major terraces.} \label{meco_fig3} \end{figure} \begin{figure} \centerline{\hbox{\psfig{file=meco_fig5.ps,width=7.0cm}}} \caption{Scaling between the $1/q$-terrace widths and the value of $q$.} \label{meco_fig5} \end{figure} In the following, we try to recover in the experimental profile the universal scaling laws of Eqs. (\ref{scaling_1}) and (\ref{scaling_2}). Regarding Eq. (\ref{scaling_1}), we find that $l$ scales as $q^{-3.60}$, as shown in Fig. \ref{meco_fig5}, which is the expected global universal scaling for a complete devil's staircase. Regarding Eq. (\ref{scaling_2}), and calculating $S^{\prime}$, $S^{\prime\prime}$, and $S$ using the triple of terraces with widths $l(w=1/8)$, $l(w=2/17)$, and $l(w=1/9)$, as represented in Fig. \ref{meco_fig6_1}, we find $D^{\prime}$=0.89. Using the triple of terraces ($n$=3, $n$=4, $n$=5), we find $D^{\prime}$=0.87. Both results are very close to the universal fractal dimension $D_0 \cong 0.87$ found for a complete devil's staircase. \section{Fitting the SBCS}\label{model} \begin{figure} \centerline{\hbox{\psfig{file=meco_fig7.ps,width=8.0cm}}} \caption{Magnifications of the small box of Fig. \ref{meco_fig6}, showing the plateaus of the devil's staircase of Eq. (\ref{circle_map}) that appear for the same frequency ratios associated with the triple of terraces $w$=(1/8,2/17,1/9), in (A), and $w$=(1/17,2/35,1/18), in (B).} \label{meco_fig7} \end{figure} Motivated by our previous results, we fit the observed Shelf as a complete devil's staircase, using Eq. (\ref{circle_map}). Notice that the only requirement for Eq. (\ref{circle_map}) to generate a complete devil's staircase is that the function $g$ has a cubic inflection point at the critical parameter $K=1$. Whether Eq. (\ref{circle_map}) is indeed an optimal model for the Shelf is beyond the scope of the present study.
We chose this map only because it is a well-known system and it captures most of the relevant characteristics a dynamical system needs in order to create a devil's staircase. We model the SBCS as a complete devil's staircase, but we rescale the winding number $W$ into the observed terrace depth. So, we transform the complete devil's staircase of Fig. \ref{meco_fig7} as well as possible into the profile of Fig. \ref{meco_fig1}(B), by rescaling the vertical axis of the staircase in Fig. \ref{meco_fig7}. We do that by first obtaining the function $F$ (see Fig. \ref{meco_fig4}) whose application to the terrace depth $d(w)$ gives the frequency ratio $w_{n}=\frac{p_{n}}{q_{n}}$ associated with the terrace. For the triple of terraces $w$=(1/8,2/17,1/9), we obtain \begin{equation} F(d[m])=0.14219+0.00057853d[m], \label{function_F1} \end{equation} and for the triple of terraces $w$=(1/17,2/35,1/18), we obtain \begin{equation} F(d[m])=0.080941+0.00029786d[m]. \label{function_F2} \end{equation} \noindent Therefore, we assume that, locally, the frequency ratios are linearly related to the depths of the terraces. Then, we rescale the vertical axis of the staircases in Figs. \ref{meco_fig7}(A-B) and calculate an equivalent depth, $d$, for the winding number $W$ by using \begin{equation} d = F^{-1}(W). \end{equation} \begin{figure} \centerline{\hbox{\psfig{file=meco_fig4.ps,width=8.0cm}}} \caption{The function $F$, a linear relation between the frequency ratios associated with the terraces and their depths, for the triples of terraces $w$=(1/8,2/17,1/9) and $w$=(1/17,2/35,1/18).} \label{meco_fig4} \end{figure} \begin{figure} \centerline{\hbox{\psfig{file=meco_fig77.ps,width=8.0cm}}} \caption{[Color online] (A) Rescaling of Fig. \ref{meco_fig7}(A), in black, showing that the devil's staircase fits the terraces with $w$=(1/8,2/17,1/9) of the profile of Fig. \ref{meco_fig1}(B), in gray. (B) Rescaling of Fig. \ref{meco_fig7}(B), in black, showing that the devil's staircase fits the terraces with $w$=(1/17,2/35,1/18) of the profile of Fig. \ref{meco_fig1}(B), in gray.} \label{meco_fig8} \end{figure} We also allow tiny adjustments of the axes for a best fit. The result is shown in Fig. \ref{meco_fig8}(A) for the triple of terraces $w$=(1/8,2/17,1/9) and in Fig. \ref{meco_fig8}(B) for the triple of terraces $w$=(1/17,2/35,1/18). We see that locally, for a short time interval, we can have a good agreement of the terrace widths and positions with the rescaled devil's staircase. However, globally, the fitting in (A) does not do well, as is to be expected, since the function $F$ is only locally well defined and it changes depending on the depths of the terraces. Notice however that this short time interval is not so short, since the time interval corresponding to a triple of terraces is of the order of a few hundred years. The assumption that $K=1$ is also supported by Eq. (\ref{cutoff}). Using this equation, we can obtain an estimate of the maximum value of $K$ from a terrace with a frequency ratio that has the largest denominator. In our case, we observed $w=2/35$. Using ${\widetilde{Q}}$=35 in Eq. (\ref{cutoff}), we obtain $K\leq 0.97$. In Fig. \ref{meco_fig8}(A), we see a 1/7 plateau positioned at the zero sea level. That is the current level. Thus, the model predicts that nowadays we should have a large terrace, which might imply an average stabilization of the sea level for a large period of time.
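As a simple sanity check on this prediction, the local rescaling can be evaluated directly. The sketch below (our illustration) inverts Eq. (\ref{function_F1}): the plateau $W=1/8$ is sent close to the depth of terrace 1, and the $1/7$ plateau is sent close to $d=0$, the present sea level.

\begin{verbatim}
# Minimal sketch: the local linear map F of Eq. (function_F1)
# (depths d in meters) and its inverse d = F^{-1}(W).
A, B = 0.14219, 0.00057853   # coefficients for the triple (1/8, 2/17, 1/9)

def F(d):
    return A + B * d

def F_inv(W):
    return (W - A) / B

print(round(F_inv(1.0 / 8.0), 1))   # ~ -29.7 m, close to d_1 = -30.01 m
print(round(F_inv(1.0 / 7.0), 1))   # ~ +1.2 m, i.e. near the present level
\end{verbatim}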
However, this prediction might not correspond to reality if the sea dynamics responsible for the creation of the observed Continental Shelf has suffered structural modifications. \section{Conclusions}\label{conclusao} We have shown experimental evidence that the Southeastern Brazilian Continental Shelf (SBCS) has a structure similar to a devil's staircase. That means that the terraces found at the bottom of the sea are not randomly distributed, but occur following a dynamical rule. This finding led us to model the SBCS as a complete devil's staircase, in which, between two real terraces, we suppose an infinite number of virtual (smaller) ones. We do not find these latter ones, either because they have been washed out by the stream flow or simply because the time period in which the sea level dynamics (SLD) stayed locked was not sufficient to create a terrace. By our hypothesis, the SLD creates a terrace if it is a dynamics in which two relevant frequencies are locked at a rational ratio. This special phase-locked dynamics possesses a critical characteristic: large changes in some parameter responsible for a relevant natural frequency of the SLD might not destroy the phase-locked regime, which might imply that the averaged sea level would remain still. On the other hand, small changes in the parameter associated with an external forcing of the SLD could be catastrophic, inducing a chaotic SLD, which would mean a turbulent averaged sea level rise/regression. In order to interpret the Shelf as a devil's staircase, we have shown that the terraces appear in an organized way according to the Farey mediant, the rule that describes the way plateaus appear in the devil's staircase. That allows us to ``name'' each terrace depth, $d_n$, by a rational number, $w_n$, regarded as a hypothetical frequency ratio. Arguably, these ratios represent the ratios between real frequencies present in the SLD. It is beyond the scope of the present work to verify this hypothesis; however, one way to check whether the hypothetical frequency ratios are more than just a mathematical artifact would be to check whether the SLD has, nowadays, two relevant frequencies at a ratio 1/7, as predicted. The newly proposed approach to characterize the SBCS relies mainly on the ratios between terrace widths and between terrace depths. While single terrace widths and depths are strongly influenced by local properties of the coastal morphology and the local sea level variations, the ratios between terrace widths and depths should be a strong indication of the global sea level variations. Therefore, the newly proposed approach has a general character and seems to be appropriate as a tool of analysis for other Continental Shelves around the world. Recalling that the local morphology of the studied area, the ``Serra do Mar'', does not have a strong impact on the formation of the Shelf, and assuming that the local SLD is not directly involved in the formation of the large terraces considered in our analyses, our results should reflect mainly the action of the global SLD. If the characteristics observed locally in the S\~ao Paulo Bight indeed reflect the effect of the global SLD, then the global SLD might be a critical system. Hopefully, the environmental changes caused by modern man have not yet made any significant change in a relevant parameter of this global system.
{\bf Acknowledgements:} MSB was partially supported by the ``Funda\c c\~ao para a Ci\^encia e Tecnologia'' (FCT) and the Max-Planck Institute f\"ur die Physik komplexer Systeme.
\section{Introduction} In \cite{nakayashiki} the integral formulae of the quantum Knizhnik--Zamolodchikov (qKZ) equations \cite{frenkel2} for the tensor product of spin $1/2$ representations of $U_q({sl_2})$ arising from $q$-Wakimoto modules have been studied. The formulae are identif\/ied with Tarasov--Varchenko's formulae. The aim of this paper is to generalize the results to the case of the tensor product of representations with arbitrary spins. It is known that certain matrix elements of intertwining operators between $q$-Wakimoto modu\-les satisfy the qKZ equation~\cite{frenkel2,matsuo}. Thus it is interesting to compute those matrix elements explicitly. In \cite{jimbo} two kinds of intertwining operators were introduced, type I and type II. They were def\/ined according to the position of the evaluation representations. In the application to the study of solvable lattice models the two types of operators have their own roles: type I and type II operators correspond to states and particles respectively. The properties of their traces exhibit very dif\/ferent structures. However, as far as the matrix elements are concerned, they are not expected to be very dif\/ferent \cite{jimbo}. In \cite{nakayashiki} a computation of matrix elements has been carried out in the case of the type I ope\-ra\-tor and the tensor product of the 2-dimensional vector representation of $U_q(sl_2)$, generalizing the result of~\cite{matsuo}. In this paper we compute matrix elements for the composition of the type I intertwining operators \cite{jimbo} associated to f\/inite dimensional irreducible representations of $U_q(sl_2)$. We perform certain multidimensional integrals and sums explicitly. It is shown that the formulae thus obtained coincide with those of Matsuo \cite{M0} and of Tarasov and Varchenko \cite{tarasov2} without the term corresponding to the deformed cycles. To obtain actual matrix elements of intertwining operators it is necessary to specify certain contours of integration associated to screening operators. We do not consider this problem in this paper. To f\/ind integration contours describing each composition of intertwining operators is an important open problem. We also remark that the formulae for type II intertwining operators are not obtained in this paper. The computation for them looks quite dif\/ferent from that for the type I case, contrary to the expectation. It is interesting to f\/ind a way to obtain a similar result for matrix elements in the case of type II operators. The paper is organized in the following manner. The construction of the solutions of the qKZ equations due to Tarasov and Varchenko is reviewed in Section~\ref{section2}. In Section~\ref{section3} a free f\/ield construction of intertwining operators is reviewed. The formulae for the matrix elements of some operators are calculated in Section~\ref{section4}. The main theorem of this paper is stated in this section. In Section~\ref{section5} the proof of the main theorem is given. The evaluation representation of $U_q(\widehat{sl_2})$ is explicitly described in Appendix~\ref{appendixA}. Appendix~\ref{appendixB} gives the explicit form of the $R$-matrix in special cases. The explicit forms of the operators which appear in Section~\ref{section3} are given in Appendix~\ref{appendixC}. Appendix~\ref{appendixD} contains the list of OPE's which is necessary to derive the integral formulae.
\section[Tarasov--Varchenko's formulae]{Tarasov--Varchenko's formulae}\label{section2} We review Tarasov--Varchenko's formula for solutions of the qKZ equations. In this paper we assume that $q$ is a complex number such that $|q|<1$. We mainly follow the notation of~\cite{tarasov2}. For a nonnegative integer $l$ let $V^{(l)}=\bigoplus_{i=0}^l {\mathbb{C}}v_i^{(l)}$ be the $(l+1)$-dimensional irreducible $U_q(sl_2)$-module and $V^{(l)}_z=V^{(l)}\otimes {\mathbb{C}}[z, z^{-1}]$ the evaluation representation of $U_q(\widehat{sl_2})$ on $V^{(l)}$. The action of $U_q(\widehat{sl_2})$ on $V^{(l)}_z$ is given in Appendix~\ref{appendixA}. Let $l_1$ and $l_2$ be nonnegative integers and $R_{l_1, l_2}(z)\in {\rm End}(V^{(l_1)}\otimes V^{(l_2)})$ the trigonometric quantum $R$-matrix uniquely determined by the following conditions: \begin{gather*} (i)~ \ \ P R_{l_1, l_2}(z) \ \text{commutes with} \ U_q(\widehat{sl_2}), \\ (ii) \ \ P R_{l_1, l_2}(z) \big(v_0^{(l_1)}\otimes v_0^{(l_2)}\big)=v_0^{(l_2)}\otimes v_0^{(l_1)}, \end{gather*} where $P : V^{(l_1)}\otimes V^{(l_2)}\to V^{(l_2)}\otimes V^{(l_1)}$ is a linear map given by \begin{gather*} P(v \otimes w)=w \otimes v. \end{gather*} The explicit form of the $R$-matrix is given in Appendix~\ref{appendixB} in the case $l_1=1$ or $l_2=1$. We set \begin{gather*} \widehat{R}_{l_i, l_j}(z)=\rho_{l_i, l_j}(z)\widetilde{R}_{l_i, l_j}(z), \qquad \widetilde{R}_{l_i, l_j}(z)=(C_{l_i}\otimes C_{l_j}){R}_{l_i, l_j}(z)(C_{l_i}\otimes C_{l_j}), \\ \rho_{l_i, l_j}(z)=q^{\frac{l_i l_j}{2}}\frac{(q^{l_i+l_j+2}z^{-1}; q^4)_{\infty}(q^{-l_i-l_j+2}z^{-1}; q^4)_{\infty}} {(q^{-l_i+l_j+2}z^{-1}; q^4)_{\infty}(q^{l_i-l_j+2}z^{-1}; q^4)_{\infty}}, \\ C_{l} v_{\epsilon}^{(l)}=v_{l-\epsilon}^{(l)}\qquad \big(v_\epsilon^{(l)}\in V^{(l)}\big), \end{gather*} where for a complex number $a$ with $|a|<1$ \begin{gather*} (z; a)_{\infty}=\prod_{i=0}^{\infty}\big(1-a^i z\big). \end{gather*} Let $k$ be a complex number. We set \begin{gather*} p=q^{2(k+2)}. \end{gather*} We assume that $p$ satisf\/ies $|p|<1$. Let $T_j$ denote the $p$-shift operator of $z_j$, \begin{gather*} T_j f(z_1, \dots, z_n)=f(z_1, \dots, p z_j, \dots z_n). \end{gather*} Let $l_1, \dots, l_n$ and $N$ be nonnegative integers. The qKZ equation for a $V^{(l_1)}\otimes \cdots \otimes V^{(l_n)}$-valued function $\Psi(z_1, \dots, z_n)$ is \begin{gather} T_j\Psi=\widehat{R}_{j, j-1}(p z_j/z_{j-1})\cdots \widehat{R}_{j, 1}(p z_j/z_{1})\kappa^{\frac{h_j}{2}}\widehat{R}_{j, n}(z_j/z_{n})\cdots \widehat{R}_{j, j+1}(z_j/z_{j+1})\Psi, \label{qKZ} \end{gather} where $\kappa$ is a complex parameter, $\widehat{R}_{i, j}(z)$ signif\/ies that $\widehat{R}_{l_i, l_j}(z)$ acts on the $i$-th and $j$-th components of the tensor product and $\kappa^{\frac{h_j}{2}}$ acts on the $j$-th component as \begin{gather*} \kappa^{\frac{h_j}{2}}v_{m}^{(l_j)}=\kappa^{\frac{l_j-2m}{2}} v_{m}^{(l_j)}. \end{gather*} We set \begin{gather*} (z)_{\infty}=(z; p)_{\infty}, \qquad \theta(z)=(z)_{\infty}\big(pz^{-1}\big)_{\infty}(p)_{\infty}. \end{gather*} Consider a sequence $(\nu)=(\nu_1, \dots, \nu_n)$ satisfying $0 \le \nu_i \le l_i$ for all $i$ and $N=\sum\limits_{i=1}^n \nu_i$. Let $r=\sharp\{i\,|\,\nu_i\ne 0\}$, $\{i\,|\,\nu_i\ne 0\}=\{k(1)<\dots<k(r)\}$ and $n_i=\nu_{k(i)}$.
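The inf\/inite products introduced above are straightforward to evaluate numerically by truncation. The following short Python sketch (our illustration, not part of the construction) computes truncated versions of $(z;a)_{\infty}$, $\theta(z)$ and the scalar factor $\rho_{l_i,l_j}(z)$, assuming $|q|<1$ and $|p|<1$ so that the truncation error is small.

\begin{verbatim}
# Minimal numerical sketch (truncated products): (z;a)_infty, theta(z)
# and the scalar factor rho_{l_i,l_j}(z) defined above.
def qpoch(z, a, nterms=200):
    # truncated (z; a)_infty = prod_{i>=0} (1 - a**i * z), valid for |a|<1
    prod = 1.0
    for i in range(nterms):
        prod *= 1.0 - (a ** i) * z
    return prod

def theta(z, p, nterms=200):
    # theta(z) = (z)_infty (p z^{-1})_infty (p)_infty, (z)_infty = (z; p)_infty
    return qpoch(z, p, nterms) * qpoch(p / z, p, nterms) * qpoch(p, p, nterms)

def rho(li, lj, z, q, nterms=200):
    q4 = q ** 4
    num = qpoch(q ** (li + lj + 2) / z, q4, nterms) * \
          qpoch(q ** (-li - lj + 2) / z, q4, nterms)
    den = qpoch(q ** (-li + lj + 2) / z, q4, nterms) * \
          qpoch(q ** (li - lj + 2) / z, q4, nterms)
    return q ** (li * lj / 2.0) * num / den

print(rho(1, 1, 2.0, 0.3))   # a sample value of rho_{1,1}(z) at z = 2, q = 0.3
\end{verbatim}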
We set
\begin{gather*}
w_{(\nu)}(t, z)=\prod_{a<b}\frac{t_a-t_b}{q^{-2}t_a-t_b} \sum_{\substack{\Gamma_1\sqcup\dots \sqcup \Gamma_r=\{1, \dots, N\} \\ |\Gamma_s|=n_s \ (s=1, \dots, r)}} \left(\prod_{\substack{1\le i<j\le r \\ a\in \Gamma_i, b\in \Gamma_j}} \frac{q^{-2}t_{a}-t_{b}}{t_{a}-t_{b}}\right) \\
\phantom{w_{(\nu)}(t, z)=}{} \times \prod_{s=1}^{r}\prod_{b\in \Gamma_s}\Bigg( \frac{t_{b}}{t_{b}-q^{-l_{k(s)}}z_{k(s)}}\prod_{j<k(s)}\frac{q^{-l_{j}}t_{b}-z_j}{t_{b}-q^{-l_{j}}z_j} \Bigg).
\end{gather*}
The elliptic hypergeometric space ${\cal{F}}_{\rm ell}$ is the space of functions $W(t, z)=W(t_1, \dots, t_N, z_1, \dots, z_n)$ of the form
\begin{gather*}
W=Y(z)\Theta(t, z)\frac{1}{\prod\limits_{j=1}^n\prod\limits_{a=1}^N \theta(q^{l_j} t_a/z_j)} \prod_{1\le a<b \le N}\frac{\theta(t_a/t_b)}{\theta(q^{-2}t_a/t_b)}
\end{gather*}
satisfying the following conditions:
\begin{enumerate}\itemsep=0pt
\item[$(i)$] $Y(z)$ is meromorphic on $({\mathbb{C}}^{\ast})^n$ in $z_1, \dots, z_n$, where ${\mathbb{C}}^{\ast}={\mathbb{C}}\setminus \{0\}$;
\item[$(ii)$] $\Theta(t, z)$ is holomorphic on $({\mathbb{C}}^{\ast})^{n+N}$ in $t_1, \dots, t_N, z_1, \dots, z_n$ and symmetric in $t_1, \dots, t_N$;
\item[$(iii)$] $T_a^tW/W=\kappa q^{-2N+4a-2} \prod\limits_{i=1}^n q^{l_i}$, $T_j^zW/W=q^{-l_j N}$, where $T_a^tW=W(t_1, \dots, p t_a, \dots, t_N, z)$ and $T_j^zW=W(t, z_1, \dots, p z_j, \dots, z_n)$.
\end{enumerate}
Define the phase function $\Phi(t, z)$ by
\begin{gather*}
\Phi(t, z)=\left(\prod_{a=1}^{N}\prod_{i=1}^{n}\frac{(q^{l_i} t_a /z_i)_{\infty}}{(q^{-l_i} t_a /z_i)_{\infty}}\right) \left(\prod_{a<b}\frac{(q^{-2}t_a/t_b)_{\infty}}{(q^{2}t_a/t_b)_{\infty}}\right).
\end{gather*}
For $W\in {\cal{F}}_{\rm ell}$ let
\begin{gather}
I(w_{(\epsilon)}, W)=\int_{\widetilde{\mathbb{T}}^N}\prod_{a=1}^N\frac{d t_a}{t_a}\Phi(t, z) w_{(\epsilon)}(t, z) W(t, z), \label{TV-1}
\end{gather}
where $\widetilde{\mathbb{T}}^N$ is a suitable deformation of the torus
\begin{gather*}
\mathbb{T}^N=\{(t_1, \dots, t_N)\,|\, |t_i|=1, 1\le i \le N\},
\end{gather*}
specified as follows. The integrand has simple poles at
\begin{gather*}
t_a/z_j=\big(p^s q^{-l_j}\big)^{\pm 1},\qquad s\ge 0, \quad 1\le a \le N, \quad 1\le j \le n,
\\
t_a/t_b=\big(p^s q^2\big)^{\pm 1},\qquad s\ge 0, \quad 1\le a< b\le N.
\end{gather*}
The contour of integration in $t_a$ is a simple closed curve which rounds the origin in the counterclockwise direction and separates the following two sets:
\begin{gather*}
\big\{p^s q^{-l_j}z_j, p^s q^2 t_b\,|\,s\ge 0, \ 1\le j \le n, \ a<b\big\},
\\
\big\{p^{-s} q^{l_j}z_j, p^{-s} q^{-2} t_b\,|\,s\ge 0, \ 1\le j\le n, \ a<b\big\}.
\end{gather*}
Let $L$ be a complex number and
\begin{gather*}
\kappa=q^{-2\big(L+\sum\limits_{i=1}^n\frac{l_i}{2}-N+1\big)}.
\end{gather*}
Then
\begin{gather}
\Psi_W=\Bigg(\prod_{i=1}^n z_i^{a_i}\Bigg) \Bigg(\prod_{i<j}\xi_{l_i, l_j}(z_i/z_j)\Bigg) \sum_{(\epsilon)}I(w_{(-\epsilon)}, W) v^{(l_1)}_{\epsilon_1}\otimes \dots \otimes v^{(l_n)}_{\epsilon_n} \label{TV-2}
\end{gather}
is a solution of the qKZ equation (\ref{qKZ}) for any $W \in {\cal{F}}_{\rm ell}$, where $(-\epsilon)=(l_1-\epsilon_1, \dots, l_n-\epsilon_n)$ and
\begin{gather*}
a_i={\frac{l_i}{2(k+2)} }\Bigg(L+\sum_{j=1}^n{l_j}-\frac{l_i}{2}-N+1\Bigg),
\\
\xi_{l_i, l_j}(z) =\frac{\left(p q^{l_i+l_j+2}z^{-1} ; q^4 , p \right)_{\infty} \left(p q^{-l_i-l_j+2}z^{-1} ; q^4 , p \right)_{\infty}} {\left(p q^{l_i-l_j+2}z^{-1} ; q^4 , p \right)_{\infty} \left(p q^{-l_i+l_j+2}z^{-1} ; q^4 , p \right)_{\infty}},
\\
(z; p, q)_{\infty}=\prod_{i=0}^{\infty}\prod_{j=0}^{\infty}(1-p^i q^j z).
\end{gather*}

\section[Free field realizations]{Free field realizations}\label{section3}

We briefly review the free field construction of representations of $U_q(\widehat{sl_2})$ at level $k$ \cite{abada,matsuo,shiraishi} and of the intertwining operators \cite{BW,kato,konno}. We mainly follow the notation of \cite{kato}. We set
\begin{gather*}
[n]=\frac{q^n-q^{-n}}{q-q^{-1}}.
\end{gather*}
Let $k$ be a complex number and let $\{a_n, b_n, c_n, \tilde{a}_0, \tilde{b}_0, \tilde{c}_0, Q_a, Q_b, Q_c\,|\,n\in {\mathbb{Z}}\setminus\{0\}\}$ satisfy
\begin{gather*}
[a_n, a_m]=\delta_{m+n, 0}\frac{[(k+2)n][2n]}{n},\qquad [\tilde{a}_0, Q_a]=2(k+2),
\\
[b_n, b_m]=\delta_{m+n, 0}\frac{-[2n]^2}{n},\qquad [\tilde{b}_0, Q_b]=-4,
\\
[c_n, c_m]=\delta_{m+n, 0}\frac{[2n]^2}{n},\qquad [\tilde{c}_0, Q_c]=4.
\end{gather*}
All other pairs of elements are assumed to commute. Set
\begin{gather*}
N_{\pm}={\mathbb{C}}[a_n, b_n, c_n \,|\, \pm n > 0].
\end{gather*}
Let $r$ be a complex number and $s$ an integer. The Fock module $F_{r, s}$ is defined to be the free $N_-$-module of rank one generated by the vector $|r, s\rangle$ satisfying
\begin{gather*}
N_{+}|r, s\rangle=0, \qquad \tilde{a}_0|r,s\rangle=r|r, s\rangle, \qquad \tilde{b}_0|r,s\rangle=-2s|r, s\rangle, \qquad \tilde{c}_0|r,s\rangle=-2s|r, s\rangle.
\end{gather*}
We set
\begin{gather*}
F_r=\bigoplus_{s\in {\mathbb{Z}}}F_{r, s}.
\end{gather*}
The right Fock modules $F_{r, s}^{\dagger}$ and $F_{r}^{\dagger}$ are defined similarly using the vector $\langle r, s|$ satisfying the conditions
\begin{gather*}
\langle r,s|N_{-}=0, \qquad \langle r,s|\tilde{a}_0=r\langle r,s|, \qquad \langle r,s|\tilde{b}_0=-2s\langle r,s|, \qquad \langle r,s|\tilde{c}_0=-2s\langle r,s|.
\end{gather*}
Notice that $F_r$ and $F_{r}^{\dagger}$ carry left and right $U_q(\widehat{sl_2})$-module structures, respectively~\cite{matsuo, shiraishi}. Let
\begin{gather*}
|L\rangle=|L, 0\rangle\in F_{L, 0},\qquad \langle L|=\langle L, 0|\in F_{L, 0}^{\dagger}.
\end{gather*}
They become left and right highest weight vectors of $U_q(\widehat{sl_2})$ with the weight $L\Lambda_1+(k-L)\Lambda_0$, respectively, where $\Lambda_0$ and $\Lambda_1$ are the fundamental weights of $\widehat{sl_2}$.

We consider operators
\begin{gather*}
\phi_m^{(l)}(z): \ F_{r, s}\to F_{r+l, s+l-m}, \qquad J^-(u): \ F_{r, s}\to F_{r, s+1}, \qquad S(t): \ F_{r, s}\to F_{r-2, s-1},
\end{gather*}
the explicit forms of which are given in Appendix~\ref{appendixC}. We set
\begin{gather*}
\phi_l^{(l)}(z)=\phi_l(z)
\end{gather*}
for simplicity. The operator $\phi_m^{(l)}(z)$ is used to construct the vertex operator for $U_q(\widehat{sl_2})$:
\begin{gather*}
\phi^{(l)}(z): \ W_r\to W_{r+l}\otimes V^{(l)}_z, \qquad \phi^{(l)}(z)=\sum_{m=0}^l \phi^{(l)}_m(z)\otimes v_m^{(l)},
\end{gather*}
where $W_r$ is a certain submodule of $F_r$ called the $q$-Wakimoto module \cite{matsuo}. The operator $J^-(u)$ is a generating function of a part of the generators of the Drinfeld realization of $U_q(\widehat{sl_2})$ at level $k$. The operator $S(t)$ commutes with $U_q(\widehat{sl_2})$ modulo total differences. Here ``modulo total differences'' means modulo functions of the form
\begin{gather*}
{}_{k+2}\partial_z f(z)=\frac{f(q^{k+2}z)-f(q^{-(k+2)}z)}{(q-q^{-1})z}.
\end{gather*}
Consider
\begin{gather*}
F(t,z)=\langle L+\sum_{i=1}^n l_i -2N|\phi^{(l_1)}(z_1)\cdots\phi^{(l_n)}(z_n) S(t_N)\cdots S(t_1)|L\rangle,
\end{gather*}
which is a function taking values in $V^{(l_1)}\otimes\dots\otimes V^{(l_n)}$.
Let
\begin{gather*}
\triangle_j=\frac{j(j+2)}{4(k+2)}.
\end{gather*}
Set
\begin{gather*}
\widehat{F}=\left(\prod_{i=1}^n z_i^{\triangle_{L+\sum\limits_{j=i}^{n}l_j-2N}-\triangle_{L+\sum\limits_{j=i+1}^{n}l_j-2N}}\right)F =\left(\prod_{i=1}^n z_i^{\frac{l_i}{2(k+2)} \big(L+\sum\limits_{i<j}l_j-2N+\frac{l_i+2}{2}\big)}\right)F.
\end{gather*}
Then the function $\widehat{F}(t,z)$ satisfies the qKZ equation (\ref{qKZ}) with $\kappa=q^{-2\big(L+\frac{\sum\limits_{i=1}^n l_i}{2}-N+1\big)}$ modulo total differences \cite{matsuo}.

\section{Integral formulae}\label{section4}

Define the components of $F(t,z)$ by
\begin{gather*}
F(t,z)=\sum_{\substack{\nu_i\in\{0, \dots, l_i\} \\ 1\le i \le n}} F^{(\nu)}(t,z) v_{\nu_1}^{(l_1)}\otimes \dots \otimes v_{\nu_n}^{(l_n)},
\end{gather*}
where $(\nu)=(\nu_1, \dots, \nu_n)$. By the conditions on weights, $F^{(\nu)}(t,z)=0$ unless
\begin{gather*}
\sum_{i=1}^n(l_i-\nu_i)=N
\end{gather*}
is satisfied. We assume this condition once and for all. Let
\begin{gather*}
\sharp\{i\, |\, \nu_i\ne l_i\}=r,\qquad \{i\, |\, \nu_i\ne l_i\}=\{k(1)<\cdots<k(r)\},\\
n_i=l_{k(i)}-\nu_{k(i)} \qquad (1\le i \le r).
\end{gather*}
The main result of this paper is
\begin{theorem}\label{main} We have
\begin{gather*}
F^{(\nu)}(t, z) = A^{(\nu)}(t, z)\left(\prod_{i=1}^n z_i^{\frac{l_i}{2(k+2)} \big(L-2N+\sum\limits_{i<j}l_j\big)}\right) \Bigg(\prod_{i<j}\xi_{l_i, l_j}(z_i/z_j)\Bigg) \Phi(t, z)w_{(-\nu)}(t, z),
\end{gather*}
where $(-\nu)=(l_{1}-\nu_1, \dots, l_{n}-\nu_n)$, $n_i=l_{k(i)}-\nu_{k(i)}$ and
\begin{gather*}
A^{(\nu)}(t, z) = q^{-NL}q^{\frac{3N(N-1)}{2}-\big(\sum\limits_{i=1}^n l_i\big)N} q^{\frac{1}{2(k+2)}\big(k{\sum\limits_{i<j}l_i l_j}+k(L-2N)\sum\limits_{i=1}^nl_i+4LN-4N(N-1)\big)} \\
\phantom{A^{(\nu)}(t, z) =}{} \times \left(\frac{1}{q-q^{-1}}\right)^{N} \left\{\prod_{s=1}^{r}q^{\big(\sum\limits_{t=s+1}^r n_t\big) n_s-l_{k(s)}n_s} \right\} \left\{\prod_{s=1}^r \prod_{i=0}^{n_s-1}\big(1-q^{2(l_{k(s)}-i)}\big)\right\} \\
\phantom{A^{(\nu)}(t, z) =}{}\times \left(\prod_{a=1}^N t_a^{\frac{2}{k+2}(a-1)-\frac{1}{k+2}L-1}\right).
\end{gather*}
\end{theorem}

The formula for $F^{(\nu)}(t, z)$ has the form of (\ref{TV-1}), (\ref{TV-2}). More precisely, in Tarasov--Varchenko's formula (\ref{TV-1}), (\ref{TV-2}), $W$ can be written as
\begin{gather*}
W= \left(\prod_{i=1}^{n}z_i^{\frac{l_i}{2(k+2)}\big(L-3N-\sum\limits_{j< i}l_j+\sum_{i<j}l_j\big)}\right) \left(\prod_{a=1}^{N}t_a \right) A^{(\nu)}(t, z) W'
\end{gather*}
for a suitable $W'$. This $W'$ specifies an intertwiner. In this paper we do not consider the problem of specifying $W'$.

To prove Theorem~\ref{main} let us begin by writing down the formula obtained from the free field description of the operators $\phi_l(z)$, $J^{-}(u)$, $S(t)$ given in Appendix~\ref{appendixC}. Let $(\epsilon)=(\epsilon_1, \dots, \epsilon_N)$ and $(\mu)=(\mu_{1, 1}, \dots, \mu_{1, n_1}, \dots, \mu_{r, n_r})$ with $\epsilon_i, \mu_{i_1, i_2}\in\{+, -\}$.
Then $F^{(\nu)}(t, z)$ can be written as
\begin{gather*}
F^{(\nu)}(t, z) = (-1)^N \big(q-q^{-1}\big)^{-2N} \prod_{i=1}^{r}\frac{1}{[n_i]!} \prod_{a=1}^N t_a^{-1} \\
\phantom{F^{(\nu)}(t, z) =}{} \times \sum_{\epsilon_i, \mu_{i_1, i_2}=\pm}\prod_{i=1}^N \epsilon_i \oint\left(\prod_{\genfrac{}{}{0pt}{}{1\le i_1 \le r}{1\le i_2\le n_{i_1}}}\mu_{i_1, i_2} \frac{d u_{i_1, i_2}}{2 \pi i u_{i_1, i_2}}\right)F_{(\epsilon)(\mu)}^{(\nu)}(t, z| u),
\end{gather*}
where
\begin{gather*}
F_{(\epsilon)(\mu)}^{(\nu)}(t, z| u) = \Big\langle L+\sum_{i=1}^{n}l_i-2N|\phi_{l_1}(z_1)\cdots \phi_{l_{k(1)-1}}(z_{k(1)-1}) \\
\qquad{}\times[\dots[\phi_{l_{k(1)}}(z_{k(1)}), J^{-}_{\mu_{1, 1}}(u_{1, 1})]_{q^{l_{k(1)}}}, J^{-}_{\mu_{1, 2}}(u_{1, 2})]_{q^{l_{k(1)}-2}}\dots , J^{-}_{\mu_{1, n_1}}(u_{1, n_1})]_{q^{l_{k(1)}-2(n_1-1)}} \dots \\
\qquad{}\times[\dots[\phi_{l_{k(r)}}(z_{k(r)}), J^{-}_{\mu_{r, 1}}(u_{r, 1})]_{q^{l_{k(r)}}}, J^{-}_{\mu_{r, 2}}(u_{r, 2})]_{q^{l_{k(r)}-2}}\dots , J^{-}_{\mu_{r, n_r}}(u_{r, n_r})]_{q^{l_{k(r)}-2(n_r-1)}} \\
\qquad{}\times \phi_{l_{k(r)+1}}(z_{k(r)+1})\dots \phi_{l_n}(z_n) S_{\epsilon_N}(t_N)\dots S_{\epsilon_1}(t_1) |L\Big\rangle,
\end{gather*}
and the integral symbol on the right-hand side signifies taking the coefficient of $\Bigg(\prod\limits_{\substack{1\le i \le r \\ 1\le j \le n_i}}u_{i, j}\Bigg)^{-1}$ in the integrand. For the notation $[x, y]_q$ see Appendix~\ref{appendixC}.

Let $(m)=(m_1, \dots, m_r)$, $0\le m_i \le n_i$. Then
\begin{gather*}
\oint \left(\prod_{\substack{1\le i_1 \le r \\ 1\le i_2 \le n_{i_1}}}\mu_{i_1, i_2}\frac{d u_{i_1, i_2}}{2\pi i u_{i_1, i_2}}\right) F_{(\epsilon)(\mu)}^{(\nu)}(t, z| u) \\
\qquad{}= \sum_{\genfrac{}{}{0pt}{}{0\le m_i\le n_i}{1\le i \le r}} (-1)^{\sum\limits_{i=1}^{r} m_i} \left(\prod_{i=1}^{r}q^{m_i l_{k(i)}}q^{-m_i(n_i-1)}\qbi{n_i}{m_i}\right) \\
\qquad{}\times \int_{C^N} \left(\prod_{\substack{1\le i_1 \le r \\ 1\le i_2\le n_{i_1}}}\mu_{i_1, i_2}\frac{d u_{i_1, i_2}}{2 \pi i u_{i_1, i_2}}\right) F_{(\epsilon)(\mu)(m)}^{(\nu)}(t, z| u),
\end{gather*}
where
\begin{gather*}
F_{(\epsilon)(\mu)(m)}^{(\nu)}(t, z| u) =\Big\langle L+\sum_{i=1}^{n}l_i-2N|\phi_{l_1}(z_1) \cdots \phi_{l_{k(1)-1}}(z_{k(1)-1}) \\
\qquad{}\times \big(J^{-}_{\mu_{1, 1}}(u_{1, 1}) \cdots J^{-}_{\mu_{1, m_1}}(u_{1, m_1}) \phi_{l_{k(1)}}(z_{k(1)}) J^{-}_{\mu_{1, m_1+1}}(u_{1, m_1+1}) \cdots J^{-}_{\mu_{1, n_1}}(u_{1, n_1})\big)\cdots \\
\qquad{}\times \big(J^{-}_{\mu_{r, 1}}(u_{r, 1}) \cdots J^{-}_{\mu_{r, m_r}}(u_{r, m_r}) \phi_{l_{k(r)}}(z_{k(r)}) J^{-}_{\mu_{r, m_r+1}}(u_{r, m_r+1}) \cdots J^{-}_{\mu_{r, n_r}}(u_{r, n_r})\big) \\
\qquad{}\times \phi_{l_{k(r)+1}}(z_{k(r)+1}) \cdots \phi_{l_{n}}(z_{n}) S_{\epsilon_N}(t_N) \cdots S_{\epsilon_1}(t_1)|L\Big\rangle,
\end{gather*}
and $C^N$ is a suitable deformation of the torus ${\mathbb{T}}^N$ specified as follows. We introduce the lexicographical order
\begin{gather*}
(i_1, i_2)<(j_1, j_2) \ \ \Leftrightarrow \ \ i_1<j_1 \quad \text{or} \quad i_1=j_1 \quad \text{and} \quad i_2<j_2.
\end{gather*}
For a given $(m)=(m_1, \dots, m_r)$, $0\le m_i \le n_i$, we define
\begin{gather*}
j<(i_1, i_2) \ \ \Leftrightarrow \ \ j<k(i_1) \quad \text{or} \quad j=k(i_1) \quad \text{and} \quad m_{i_1}<i_2,
\\
j>(i_1, i_2) \ \ \Leftrightarrow \ \ j>k(i_1) \quad \text{or} \quad j=k(i_1) \quad \text{and}\quad m_{i_1}\ge i_2.
\end{gather*}
The contour for the integration variable $u_{i_1, i_2}$ is a simple closed curve which rounds the origin in the counterclockwise direction such that $q^{l_j+k+2}z_j$ $((i_1, i_2)<j)$, $q^{-2}u_{j_1, j_2}$ $((i_1, i_2)<(j_1, j_2))$ and $q^{-\mu_{i_1, i_2}(k+2)}t_a$ $(1\le a \le N)$ are inside, while $q^{-l_j+k+2}z_j$ $((i_1, i_2)>j)$ and $q^2 u_{j_1, j_2}$ $((j_1, j_2)<(i_1, i_2))$ are outside. We denote this contour by $C_{(i_1, i_2)}$. Then
\begin{gather*}
F_{(\epsilon)(\mu)(m)}^{(\nu)}(t, z| u)=f^{(\nu)}(t, z)\Phi(t, z)G_{(\epsilon)(\mu)(m)}^{(\nu)}(t, z| u),
\end{gather*}
where
\begin{gather*}
f^{(\nu)}(t, z) = \left\{\prod_{i<j}(q^k z_i)^{\frac{l_i l_j}{2(k+2)}} \xi_{l_i, l_j}(z_i/z_j)\right\} \left\{\prod_{i=1}^{n}(q^k z_i)^{-\frac{N l_i}{k+2}}\right\} \\
\phantom{f^{(\nu)}(t, z)=}{} \times \left\{\prod_{i=1}^{n}(q^k z_i)^{\frac{L l_i}{2(k+2)}}\right\} \left\{\prod_{i=1}^{N}(q^{-2}t_i)^{-\frac{L}{k+2}}\right\} \left\{\prod_{a<b} (q^{-2} t_b)^{\frac{2}{k+2}}\right\},
\\
G_{(\epsilon)(\mu)(m)}^{(\nu)}(t, z| u) = \widehat{G}_{(\epsilon)(\mu)(m)}^{(\nu)}(t, z| u) \left(\prod_{a<b} \frac{q^{\epsilon_b} t_b-q^{\epsilon_a} t_a}{t_b-q^{-2} t_a}\right),
\\
\widehat{G}_{(\epsilon)(\mu)(m)}^{(\nu)}(t, z| u) = \left(\prod_{(i_1, i_2)}q^{L\mu_{i_1, i_2}}\right) \left(\prod_{(i_1, i_2)>j}\frac{z_j-q^{\mu_{i_1, i_2} l_j-k-2}u_{i_1, i_2}}{z_j-q^{l_j-k-2}u_{i_1, i_2}}\right) \\
\phantom{\widehat{G}_{(\epsilon)(\mu)(m)}^{(\nu)}(t, z| u) =}{} \times \left(\prod_{(i_1, i_2)<j}q^{\mu_{i_1, i_2} l_j} \frac{u_{i_1, i_2}-q^{-\mu_{i_1, i_2} l_j+k+2}z_j}{u_{i_1, i_2}-q^{l_j+k+2}z_j}\right) \\
\phantom{\widehat{G}_{(\epsilon)(\mu)(m)}^{(\nu)}(t, z| u) =}{} \times \left(\prod_{\genfrac{}{}{0pt}{}{(i_1, i_2)}{1\le b \le N}}q^{-\mu_{i_1, i_2}} \frac{u_{i_1, i_2}-q^{-\mu_{i_1, i_2}(k+1)-\epsilon_b} t_b}{u_{i_1, i_2}-q^{-\mu_{i_1, i_2}(k+2)} t_b}\right) \\
\phantom{\widehat{G}_{(\epsilon)(\mu)(m)}^{(\nu)}(t, z| u) =}{} \times \left(\prod_{(i_1, i_2)<(j_1, j_2)} \frac{q^{-\mu_{i_1, i_2}} u_{i_1, i_2}-q^{-\mu_{j_1, j_2}}u_{j_1, j_2}} {u_{i_1, i_2}-q^{-2}u_{j_1, j_2}}\right).
\end{gather*}
For each $i$, let $A_{\mu, i}^{\pm}=\{(i, j)\,|\,\mu_{i, j}=\pm \}$. The number of elements of $A_{\mu, i}^{\pm}$ is denoted by $a_i^{\pm}$, and we write $A_{\mu, i}^{\pm}=\{{{\ell}}^{\pm}_{i, 1}, \dots, {{\ell}}^{\pm}_{i, a_i^{\pm}}\}$. We set $a_i^{-}=a_i$, $A_{\mu, i}^{-}=A_{\mu, i}$, $A_{\mu}=\cup_{i=1}^{r}A_{\mu, i}$ and
\begin{gather*}
\widehat{J}_{(\epsilon)(\mu)}^{(\nu)} = \sum_{\substack{0\le m_i\le n_i \\ 1\le i \le r}}(-1)^{\sum\limits_{i=1}^{r}m_i} \left\{\prod_{i=1}^{r}q^{m_i l_{k(i)}}q^{-m_i(n_i-1)}\qbi{n_i}{m_i}\right\} \\
\phantom{\widehat{J}_{(\epsilon)(\mu)}^{(\nu)} =}{} \times\int_{C^N}\left(\prod_{(i_1,i_2)}\mu_{i_1, i_2}\frac{d u_{i_1, i_2}}{2\pi i u_{i_1, i_2}}\right) \widehat{G}_{(\epsilon) (\mu)(m)}^{(\nu)}.
\end{gather*}
See the beginning of the next section for the notation of the $q$-binomial coefficient $\qbi{n_i}{m_i}$. For a given $(a)=(a_1, \dots, a_r)$, $0\le a_i \le n_i$, we define $\widehat{J}_{(\epsilon)(a)}^{(\nu)}$ and ${J}_{(a)}^{(\nu)}$ as follows:
\begin{gather*}
\widehat{J}_{(\epsilon)(a)}^{(\nu)} =\sum_{\substack{|A_{\mu, i}|=a_i \\ 1\le i \le r}}\widehat{J}_{(\epsilon)(\mu)}^{(\nu)},
\\
{J}_{(a)}^{(\nu)}=\sum_{\epsilon_1, \dots, \epsilon_N=\pm}\left(\prod_{j=1}^N \epsilon_j\right) \left(\prod_{1\le a <b \le N}\frac{q^{\epsilon_b}t_b-q^{\epsilon_a}t_a}{t_b-q^{-2}t_a}\right) \widehat{J}_{(\epsilon)(a)}^{(\nu)}.
\end{gather*}
Using $J^{(\nu)}_{(a)}$, $F^{(\nu)}(t, z)$ can be written as
\begin{gather*}
F^{(\nu)}(t, z)=(-1)^N\big(q-q^{-1}\big)^{-2N}\left(\prod_{i=1}^r\frac{1}{[n_i]!}\right)\left(\prod_{b=1}^N t_b^{-1}\right) f^{(\nu)}(t, z)\Phi(t,z)\sum_{(a)}J^{(\nu)}_{(a)}.
\end{gather*}
Theorem~\ref{main} follows straightforwardly from the following proposition.
\begin{proposition}\label{mainprop} If $(a)\ne(n_1, n_2, \dots, n_r)$, then $J_{(a)}^{(\nu)}(t, z)=0$. For $(a)=(n_1, n_2, \dots, n_r)$ we have
\begin{gather*}
J_{(n_1, \dots, n_r)}^{(\nu)}(t, z) = (-1)^N\big(1-q^{-2}\big)^{N} q^{N(N-L)+\frac{N(N-1)}{2}-\big(\sum\limits_{i=1}^n l_i\big)N} \\
\phantom{J_{(n_1, \dots, n_r)}^{(\nu)}(t, z) = }{} \times \prod_{s=1}^{r}\left\{q^{\big(\sum\limits_{t=s+1}^r n_t\big) n_s -l_{k(s)}n_s}[n_s]! \prod_{i=0}^{n_s-1}\big(1-q^{2(l_{k(s)}-i)}\big)\right\}w_{(-\nu)}(t, z).
\end{gather*}
\end{proposition}
This proposition is proved by performing the integrals in the variables $u_{i, j}$ in the next section.

\section{Proof of Proposition \ref{mainprop}}\label{section5}

We set
\begin{gather*}
[n]! = \prod_{i=1}^{n}[i], \qquad \qbi{n}{m} = \frac{[n]!}{[n-m]![m]!},
\end{gather*}
for nonnegative integers $n$, $m$ $(n\ge m)$. To prove Proposition~\ref{mainprop}, we have to calculate $\widehat{J}^{(\nu)}_{(\epsilon)(a)}$. We need the following lemmas.
\begin{lemma}{\label{combi}} For $n\ge 1$ and $n\ge m \ge 0$, we have
\begin{gather*}
(i) \ \ \sum_{\substack{A\sqcup B=\{1, 2, \dots, n\}\\ |A|=m}} \left(\prod_{\substack{i<j \\ i \in A , j \in B}} q^2 \right) = q^{m(n-m)}\qbi{n}{m};
\\
(ii) \ \ \sum_{\substack{A\sqcup B=\{1, 2, \dots, n\}\\ |A|=m \\ \mu_i=1 \ (i\in A), \, \mu_i=-1 \ (i\in B)}} \left(\prod_{i<j} q^{\mu_i} \right) = q^{-\frac{n(n-1)}{2}+m(n-1)}\qbi{n}{m}.
\end{gather*}
\end{lemma}
\begin{proof} By the $q$-binomial theorem
\begin{gather*}
\prod_{i=1}^n\big(1+q^{-n-1+2i}x\big)=\sum_{i=0}^n\qbi{n}{i}x^i,
\end{gather*}
we have the equation
\begin{gather*}
\sum_{1\le i_1<\dots <i_m\le n}q^{2\sum\limits_{j=1}^{m}i_j}=q^{(n+1)m}\qbi{n}{m}.
\end{gather*}
The assertions $(i)$ and $(ii)$ easily follow from this equation.
\end{proof}
\begin{lemma}{\label{sym}} Let $n\ge 1$, $n\ge m\ge 0$ and $1\le i_1<\dots<i_m\le n$. Then we have
\begin{gather*}
\sum_{\sigma\in S_n}{\rm{sgn}}\,\sigma \,\,t_{\sigma(i_1)} t_{\sigma(i_2)} \cdots t_{\sigma(i_m)} \prod_{1\le a<b\le n}(t_{\sigma(b)}-q^{-2}t_{\sigma(a)}) \\
\qquad{} = q^{-m(n+1)-\frac{n(n-1)}{2}+2\sum\limits_{j=1}^{m}i_j}[m]! [n-m]!\,\,e_m(t_1, \dots, t_n) \prod_{1\le a<b\le n}(t_b-t_a),
\end{gather*}
where $e_m(t_1, \dots, t_n)$ is the $m$-th elementary symmetric polynomial.
\end{lemma}
\begin{proof} Set
\begin{gather*}
F(t)=\sum_{\sigma\in S_n}{\rm{sgn}}\,\sigma \,\,t_{\sigma(i_1)} t_{\sigma(i_2)} \cdots t_{\sigma(i_m)} \prod_{1\le a<b\le n}(t_{\sigma(b)}-q^{-2}t_{\sigma(a)}).
\end{gather*}
It is easy to see that $F(t)$ is an antisymmetric polynomial, so we can write
\[
F(t)=S(t)\prod_{1\le a<b\le n}(t_b-t_a),
\]
where $S(t)$ is a symmetric polynomial. Moreover $S(t)$ is a homogeneous polynomial of degree~$m$ with ${\rm{deg}}_{t_i} S(t)\le 1$ for all $i\in\{1, \dots , n\}$. Hence we have
\[
S(t)=c e_m(t)
\]
for some constant $c$. The number $(-1)^{\sum\limits_{j=1}^{m}i_j+\frac{n(n-1)}{2}-\frac{m(m+1)}{2}}c$ is equal to the coefficient of
\[
t_{i_1}^{n}t_{i_2}^{n-1}\cdots t_{i_m}^{n-m+1}t_1^{n-m-1}t_{2}^{n-m-2}\cdots t_{n-1}
\]
in $F(t)$.
We can show \begin{gather*} c=q^{-2nm+m(m-1)+2\sum\limits_{k=1}^{m} i_k} \left(q^{-m(m-1)}\sum_{\sigma\in S_{m}}q^{2\ell(\sigma)}\right) \left(q^{-(n-m)(n-m-1)}\sum_{\tau\in S_{n-m}}q^{2\ell(\tau)}\right), \end{gather*} where $\ell(\sigma)$ is the inversion number of $\sigma$. Using the fact $\sum\limits_{\sigma\in S_{m}}q^{2\ell(\sigma)}=q^{\frac{m(m-1)}{2}}[m]!$, we have the desired result. \end{proof} \begin{lemma}{\label{z}} For $1\le n \le l$, we have \begin{gather*} \sum_{s=0}^n(-1)^s q^{-s(n-1)} \qbi{n}{s} \sum_{\sigma \in S_n} \prod_{i=1}^{s}(z-q^{l}t_{\sigma(i)}) \prod_{i=s+1}^{n}\big(z-q^{-l}t_{\sigma(i)}\big) \left(\prod_{1\le a <b \le n}\!\!\frac{t_{\sigma(b)}-q^{-2}t_{\sigma(a)}}{t_{\sigma(b)}-t_{\sigma(a)}}\right) \\ \qquad{} = (-1)^n q^{-ln-\frac{n(n-1)}{2}} \left\{\prod_{i=0}^{n-1}\big(1-q^{2(l-i)}\big)\right\} [n]! t_1t_2\dots t_n. \end{gather*} \end{lemma} \begin{proof} We set \begin{gather*} L_{n, s} = \sum_{\sigma \in S_n} {\rm{sgn}}\,\sigma \prod_{i=1}^{s}(z-q^{l}t_{\sigma(i)}) \prod_{j=s+1}^{n}\big(z-q^{-l}t_{\sigma(j)}\big) \left(\prod_{i>j}\frac{t_{\sigma(i)}-q^{-2}t_{\sigma(j)}}{t_{i}-t_{j}}\right), \\ L_n= \sum_{s=0}^n(-1)^s q^{-s(n-1)} \qbi{n}{s} L_{n, s}. \end{gather*} Using Lemma \ref{sym}, \begin{gather*} L_{n, s} = \sum_{k=0}^n (-1)^k z^{n-k} e_k(t) q^{-k(n+1)-\frac{n(n-1)}{2}} [k]![n-k]! \!\left\{\sum_{t=0}^k q^{2lt-lk} \!\left(\!\sum_{\substack{1\le i_1< i_2<\cdots< i_t\le s \\ s< i_{t+1}<\cdots<i_{k}\le n}}\!\!q^{\sum\limits_{j=1}^k 2 i_j}\right)\! \right\} \\ \phantom{L_{n, s}}{} =\sum_{k=0}^n (-1)^k z^{n-k} e_k(t) q^{-lk-k(n+1)-\frac{n(n-1)}{2}} [k]![n-k]! \\ \phantom{L_{n, s}=}{}\times \left(\sum_{t=0}^k q^{2lt} q^{2s(k-t)+(s+1)t+(n-s+1)(k-t)}\qbi{s}{t}\qbi{n-s}{k-t} \right). \end{gather*} Then, \begin{gather*} L_n = \sum_{s=0}^n (-1)^s q^{-s(n-1)} \qbi{n}{s} \sum_{k=0}^n (-1)^k z^{n-k} e_k(t)q^{-lk-k(n+1)-\frac{n(n-1)}{2}} [k]![n-k]! \\ \phantom{L_n =}{} \times \left(\sum_{t=0}^k q^{2lt}q^{sk+k+n(k-t)} \qbi{s}{t}\qbi{n-s}{k-t} \right) \\ \phantom{L_n}{}= [n]!\sum_{k=0}^n (-1)^k z^{n-k} e_k(t)\, q^{-lk-k(n+1)-\frac{n(n-1)}{2}} \\ \phantom{L_n =}{}\times \sum_{t=0}^k q^{2lt}q^{(k-t)(n+1)+t} \qbi{k}{t} \sum_{s=t}^{n-k+t} (-1)^s q^{-s(n-k-1)} \qbi{n-k}{s-t} \\ \phantom{L_n}{}=[n]!\sum_{k=0}^n (-1)^k z^{n-k} e_k(t)\, q^{-lk-k(n+1)-\frac{n(n-1)}{2}}\\ \phantom{L_n =}{}\times \sum_{t=0}^k q^{2lt} q^{(k-t)(n+1)+t} \qbi{k}{t} (-1)^{t}q^{-t(n-k-1)}\delta_{n, k} \\ \phantom{L_n}{}= [n]!(-1)^n q^{-ln} q^{-\frac{n(n-1)}{2}} \sum_{t=0}^n (-1)^{t}q^{2lt} q^{-(n-1)t} \qbi{n}{t}\, e_n(t) \\ \phantom{L_n}{}=[n]!(-1)^n q^{-ln} q^{-\frac{n(n-1)}{2}} \left\{\prod_{i=0}^{n-1}(1-q^{2(l-i)})\right\} e_n(t). \end{gather*} Here we have used the $q$-binomial theorem. \end{proof} For a given sequence $(m_i)_{i=1}^{r}$ $(0\le m_i \le n_i)$, let $M_i=\{(i, j)\,|\,j\le m_i\}$. Set \begin{gather*} \widehat{I}_{(\mu)(\epsilon)(m)}^{(\nu)} =\int_{C^N}\left(\prod_{(i_1,i_2)}\frac{d u_{i_1, i_2}}{2\pi i u_{i_1, i_2}}\right) \widehat{G}_{(\mu)(\epsilon)(m)}^{(\nu)}. 
\end{gather*}
\begin{lemma} \label{i} We have
\begin{gather*}
\widehat{I}_{(\mu)(\epsilon)(m)}^{(\nu)} =q^{(L-N)\big\{\sum\limits_{s=1}^r(n_s-2a_s)\big\} } \left(\prod_{(i_1, i_2)<j} q^{\mu_{i_1, i_2} l_j}\right) \left(\prod_{(i_1, i_2)<(j_1, j_2)} q^{-\mu_{i_1, i_2}}\right) \\
\times \sum_{\substack{C_i\sqcup D_i=A_{\mu, i}\\ D_i'=D_i\cap M_i \\ 1\le i \le r}} \left(\prod_{b=1}^{N}q^{-1-\epsilon_b}\right)^{\sum\limits_{i=1}^{r}|C_i|}\left(\prod_{\substack{(i_1, i_2)<(j_1, j_2)\\ (i_1, i_2)\in C_1\cup\cdots \cup C_r \\ (j_1, j_2)\in D_1\cup\cdots \cup D_r}} q^2\right) \\
\times \sum_{\substack{1\le b_{i, j} \le N \\ 1\le i \le r \\ 1\le j \le |D_i|}} \prod_{i_1=1}^r \left\{ \prod_{i_2=1}^{|D_{i_1}|}\left((1-q^{-1-\epsilon_{b_{i_1, i_2}}}) \prod_{b\ne b_{i_1, i_2}}\frac{t_{b_{i_1, i_2}}-q^{-1-\epsilon_b}t_b}{t_{b_{i_1, i_2}}-t_b} \right. \right. \\
\times \left. \left. \prod_{(i_1, i_2)<(j_1, j_2)}\frac{t_{b_{i_1, i_2}}-t_{b_{j_1, j_2}}}{t_{b_{i_1, i_2}}-q^{-2}t_{b_{j_1, j_2}}} \prod_{j=1}^{k(i_1)-1}\frac{z_j-q^{-l_j}t_{b_{i_1, i_2}}}{z_j-q^{l_j}t_{b_{i_1, i_2}}}\right) \prod_{i_2=|D_{i_1}'|+1}^{|D_{i_1}|}\frac{z_{k(i_1)}-q^{-l_{k(i_1)}}t_{b_{i_1, i_2}}}{z_{k(i_1)}-q^{l_{k(i_1)}}t_{b_{i_1, i_2}}} \right\}.
\end{gather*}
\end{lemma}
\begin{proof} We integrate with respect to the variables $u_{i, j}$, $(i, j)\in A_{\mu}^+$, in the order $u_{{{\ell}}^+_{1, 1}}, \dots, u_{{{\ell}}^+_{1, a_1^+}},$ $u_{{{\ell}}^+_{2, 1}}, \dots, u_{{{\ell}}^+_{r, a_r^+}}$. With respect to $u_{{{\ell}}^+_{1, 1}}$ the only singularity outside $C_{{{\ell}}^+_{1, 1}}$ is~$\infty$, so the integral in $u_{{{\ell}}^+_{1, 1}}$ is calculated by taking the residue at $\infty$. After this integration the integrand, as a function of $u_{{{\ell}}^+_{1, 2}}$, has a similar structure, so the integral with respect to $u_{{{\ell}}^+_{1, 2}}$ is again calculated by taking the residue at $\infty$, and so~on. Finally we get
\begin{gather*}
\widehat{I}^{(\nu)}_{(\epsilon)(\mu)(m)} = (-1)^{\sum\limits_{i=1}^{r}a_i^{+}} {\mathop{\rm Res}_{{u_{{{\ell}}^{+}_{r, a_r^{+}}}=\infty}}}\cdots{\mathop{\rm Res}_{{u_{{{\ell}}^{+}_{r, 1}}=\infty}}}\cdots {\mathop{\rm Res}_{{u_{{{\ell}}^{+}_{1, a_1^{+}}}=\infty}}}\cdots{\mathop{\rm Res}_{{u_{{{\ell}}^{+}_{1, 1}}=\infty}}} \widehat{G}_{(\epsilon)(\mu)(m)}^{(\nu)}(t, z| u) \\
\phantom{\widehat{I}^{(\nu)}_{(\epsilon)(\mu)(m)} }{} = \left(\prod_{(i_1, i_2)}q^{(L-N)\mu_{i_1, i_2}}\right) \left(\prod_{(i_1, i_2)<j}q^{\mu_{i_1, i_2} l_j}\right) \left(\prod_{(i_1, i_2)<(j_1, j_2)}q^{-\mu_{i_1, i_2}}\right) \\
\phantom{\widehat{I}^{(\nu)}_{(\epsilon)(\mu)(m)} =}{}\times \int_{C^{N-\sum\limits_{i=1}^{r}a_i^+}} \left(\prod_{(i_1, i_2)\in A_{\mu}}\frac{d u_{i_1, i_2}}{2 \pi i u_{i_1, i_2}}\right) \left(\prod_{\substack{j<(i_1, i_2)\\ (i_1, i_2)\in A_{\mu}}}\frac{z_j-q^{-l_j-k-2}u_{i_1, i_2}}{z_j-q^{l_j-k-2}u_{i_1, i_2}}\right) \\
\phantom{\widehat{I}^{(\nu)}_{(\epsilon)(\mu)(m)} =}{}\times \left(\prod_{\genfrac{}{}{0pt}{}{(i_1, i_2)\in A_{\mu}}{1\le b \le N}} \frac{u_{i_1, i_2}-q^{k+1-\epsilon_b} t_b}{u_{i_1, i_2}-q^{k+2} t_b}\right) \left(\prod_{\substack{(i_1, i_2)<(j_1, j_2)\\ (i_1, i_2), (j_1, j_2)\in A_{\mu}}} \frac{u_{i_1, i_2}-u_{j_1, j_2}} {u_{i_1, i_2}-q^{-2}u_{j_1, j_2}}\right),
\end{gather*}
where $C^{N-\sum\limits_{i=1}^{r}a_i^+}$ is the resulting contour for $(u_{{{\ell}}_{1, 1}}, \dots, u_{{{\ell}}_{r, a_r}})$.
We set
\begin{gather*}
I_{(\epsilon)(\mu)(m)}^{(\nu) +}(t, z) = \left(\prod_{(i_1, i_2)\in A_{\mu}}\frac{1}{u_{i_1, i_2}}\right) \left(\prod_{\substack{j<(i_1, i_2)\\ (i_1, i_2)\in A_{\mu}}} \frac{z_j-q^{-l_j-k-2}u_{i_1, i_2}}{z_j-q^{l_j-k-2}u_{i_1, i_2}}\right) \\
\phantom{I_{(\epsilon)(\mu)(m)}^{(\nu) +}(t, z)}{} \times \left(\prod_{\substack{(i_1, i_2)\in A_{\mu} \\ 1\le b \le N}} \frac{u_{i_1, i_2}-q^{k+1-\epsilon_b} t_b}{u_{i_1, i_2}-q^{k+2} t_b}\right) \left(\prod_{\substack{(i_1, i_2)<(j_1, j_2) \\ (i_1, i_2), (j_1, j_2)\in A_{\mu}}} \frac{u_{i_1, i_2}-u_{j_1, j_2}} {u_{i_1, i_2}-q^{-2}u_{j_1, j_2}}\right).
\end{gather*}
Next we perform the integrations with respect to the remaining variables $u_{i, j}$, $(i, j)\in A_{\mu}$, in the order $u_{{{\ell}}_{r, a_r}}, \dots$, $u_{{{\ell}}_{r, 1}}, u_{{{\ell}}_{r-1, a_{r-1}}}, \dots, u_{{{\ell}}_{1, 1}}$. The poles of the integrand inside $C_{{{\ell}}_{r, a_r}}$ are $0$ and $q^{k+2}t_b$, $b=1,\dots, N$. Thus we have
\begin{gather*}
\int_{C_{{{\ell}}_{r, a_r}}}\frac{d u_{{{\ell}}_{r, a_r}}}{2 \pi i } I_{(\epsilon)(\mu)(m)}^{(\nu) +}(t, z) \\
= \left(\prod_{1\le b \le N}q^{-1-\epsilon_b}\right) \left(\prod_{\substack{(i_1, i_2)\in A_{\mu}\\ (i_1, i_2)\ne {{\ell}}_{r, a_r}}}\frac{1}{u_{i_1, i_2}}\right) \left(\prod_{\substack{j<(i_1, i_2)\\ (i_1, i_2)\in A_{\mu}-\{{{\ell}}_{r, a_r}\}}}\frac{z_j-q^{-l_j-k-2}u_{i_1, i_2}}{z_j-q^{l_j-k-2}u_{i_1, i_2}}\right) \\
\times \left(\prod_{\substack{(i_1, i_2)\in A_{\mu}-\{{{\ell}}_{r, a_r}\}\\ 1\le b \le N}} \frac{u_{i_1, i_2}-q^{k+1-\epsilon_b} t_b}{u_{i_1, i_2}-q^{k+2} t_b}\right) \left(\prod_{\substack{(i_1, i_2)<(j_1, j_2)<{{\ell}}_{r, a_r}\\ (i_1, i_2), (j_1, j_2)\in A_{\mu}}} \frac{u_{i_1, i_2}-u_{j_1, j_2}} {u_{i_1, i_2}-q^{-2}u_{j_1, j_2}}\right) \\
+ \sum_{1\le b_{{{\ell}}_{r, a_r}}\le N} (1-q^{-1-\epsilon_{b_{{{\ell}}_{r, a_r}}}}) \left(\prod_{j<{{\ell}}_{r, a_r}}\frac{z_j-q^{-l_j}t_{b_{{{\ell}}_{r, a_r}}}}{z_j-q^{l_j}t_{b_{{{\ell}}_{r, a_r}}}}\right) \left(\prod_{\substack{1\le b \le N\\ b\ne b_{{{\ell}}_{r, a_r}}}} \frac{t_{b_{{{\ell}}_{r, a_r}}}-q^{-1-\epsilon_b} t_b}{t_{b_{{{\ell}}_{r, a_r}}}-t_b}\right) \\
\times \left(\prod_{(i_1, i_2)<{{\ell}}_{r, a_r}}\!\! \frac{u_{i_1, i_2}-q^{k+2}t_{b_{{{\ell}}_{r, a_r}}}}{u_{i_1, i_2}-q^{k}t_{b_{{{\ell}}_{r, a_r}}}}\right) \left(\prod_{\substack{(i_1, i_2)\in A_{\mu}\\ (i_1, i_2)\ne {{\ell}}_{r, a_r}}}\frac{1}{u_{i_1, i_2}}\right)\! \left(\! \prod_{\substack{j<(i_1, i_2) \\ (i_1, i_2)\in A_{\mu}-\{{{\ell}}_{r, a_r}\}}}\!\!\!\!\! \frac{z_j-q^{-l_j-k-2}u_{i_1, i_2}}{z_j-q^{l_j-k-2}u_{i_1, i_2}}\!\right)\! \\
\times \left(\prod_{\substack{(i_1, i_2)\in A_{\mu}-\{{{\ell}}_{r, a_r}\}\\ 1\le b \le N}} \frac{u_{i_1, i_2}-q^{k+1-\epsilon_b} t_b}{u_{i_1, i_2}-q^{k+2} t_b}\right) \left(\prod_{\substack{(i_1, i_2)<(j_1, j_2)<{{\ell}}_{r, a_r} \\ (i_1, i_2), (j_1, j_2)\in A_{\mu}}} \frac{u_{i_1, i_2}-u_{j_1, j_2}} {u_{i_1, i_2}-q^{-2}u_{j_1, j_2}}\right).
\end{gather*}
The integrand in $u_{{{\ell}}_{r, a_r-1}}$ has poles at $0$ and $q^{k+2}t_b$ inside $C_{{{\ell}}_{r, a_r-1}}$, and so on.
Finally we get
\begin{gather*}
\widehat{I}^{(\nu)}_{(\epsilon)(\mu)(m)}=\left(\prod_{(i_1, i_2)} q^{(L-N)\mu_{i_1, i_2} }\right) \left(\prod_{(i_1, i_2)<j} q^{\mu_{i_1, i_2} l_j}\right) \left(\prod_{(i_1, i_2)<(j_1, j_2)} q^{-\mu_{i_1, i_2}}\right) \\
\phantom{\widehat{I}^{(\nu)}_{(\epsilon)(\mu)(m)}=}{} \times \sum_{\substack{w_{{{\ell}}_{i_1, i_2}}\in \{0\}\cup(T-W_{i_1, i_2})\\ (i_1, i_2)\in A_{\mu}}} \mathop{\rm Res}_{u_{{{\ell}}_{1, 1}}=w_{{{\ell}}_{1, 1}}}\cdots \mathop{\rm Res}_{u_{{{\ell}}_{r, a_r}}=w_{{{\ell}}_{r, a_r}}} I_{(\epsilon)(\mu)(m)}^{(\nu)+},
\end{gather*}
where $T=\{t_1, t_2, \dots, t_N\}$, ${W_{i_1, i_2}=\mathop{\cup}\limits_{{{\ell}}_{i_1, i_2}<{{\ell}}_{j_1, j_2}}\{w_{{{\ell}}_{j_1, j_2}}\}}$. Set $C_i=\{{{\ell}}_{i, j}\,|\, w_{{{\ell}}_{i, j}}=0\}$, $D_i=A_{\mu, i}-C_i$. Then we have the desired result.
\end{proof}
Now we can calculate $\widehat{J}_{(\epsilon)(a)}^{(\nu)}$.
\begin{proposition} We have
\begin{gather*}
\widehat{J}_{(\epsilon)(a)}^{(\nu)}= (-1)^{\sum\limits_{i=1}^r a_i} \left(q^{\sum\limits_{s=1}^r\big(\sum\limits_{t=k(s)+1}^{n}l_t\big)(n_s-2 a_s)} \right) \left(q^{(L-N)\big\{\sum\limits_{s=1}^{r}(n_s-2 a_s)\big\}}\right) \\
\phantom{\widehat{J}_{(\epsilon)(a)}^{(\nu)}=}{}\times \left(q^{-\sum\limits_{s=1}^{r}\sum\limits_{t=s+1}^r n_s(n_t-2 a_t)}\right) \sum_{\substack{1\le b_{i_1, i_2}\le N \\ 1\le i_1 \le r \\ 1\le i_2 \le a_{i_1}}} \left(\prod_{i_1<j_1}\frac{t_{b_{i_1, i_2}}-t_{b_{j_1, j_2}}}{t_{b_{i_1, i_2}}-q^{-2}t_{b_{j_1, j_2}}}\right) \\
\phantom{\widehat{J}_{(\epsilon)(a)}^{(\nu)}=}{}\times \prod_{i_1=1}^{r}\left\{\sum_{s_{i_1}=0}^{a_{i_1}} q^{a_{i_1}(n_{i_1}-s_{i_1}-1)-\frac{n_{i_1}(n_{i_1}-1)}{2}} \frac{[n_{i_1}]!}{[s_{i_1}]![a_{i_1}-s_{i_1}]!} \right. \\
\left.\phantom{\widehat{J}_{(\epsilon)(a)}^{(\nu)}=}{}\times \sum_{i=0}^{n_{i_1}-a_{i_1}} (-1)^{i+s_{i_1}}q^{i(2l_{k(i_1)}-n_{i_1}-a_{i_1}+1)+s_{i_1}} \frac{1}{[i]![n_{i_1}-a_{i_1}-i]!} \right. \\
\left.\phantom{\widehat{J}_{(\epsilon)(a)}^{(\nu)}=}{}\times \left\{\prod_{i_2=1}^{a_{i_1}}\left(\big(1-q^{-1-\epsilon_{b_{i_1, i_2}}}\big) \prod_{b\ne b_{i_1, i_2}}\frac{t_{b_{i_1, i_2}}-q^{-1-\epsilon_b}t_b}{t_{b_{i_1, i_2}}-t_b} \prod_{i_2<j_2}\frac{t_{b_{i_1, i_2}}-t_{b_{i_1, j_2}}}{t_{b_{i_1, i_2}}-q^{-2}t_{b_{i_1, j_2}}} \right. \right. \right. \\
\left. \left. \left. \phantom{\widehat{J}_{(\epsilon)(a)}^{(\nu)}=}{} \times \prod_{j=1}^{k(i_1)-1}\frac{z_j-q^{-l_j}t_{b_{i_1, i_2}}}{z_j-q^{l_j}t_{b_{i_1, i_2}}} \right) \prod_{i_2=s_{i_1}+1}^{a_{i_1}}\frac{z_{k(i_1)}-q^{-l_{k(i_1)}}t_{b_{i_1, i_2}}}{z_{k(i_1)}-q^{l_{k(i_1)}}t_{b_{i_1, i_2}}} \right\} \right\}.
\end{gather*} \end{proposition} \begin{proof} Using Lemma~\ref{i} we have \begin{gather} \widehat{J}_{(\epsilon)(a)}^{(\nu)} = (-1)^{\sum\limits_{i=1}^r a_i}\sum_{\substack{|A_{\mu, i}|=a_i \\ 1\le i \le r}} \sum_{\substack{0\le m_i\le n_i \\ 1\le i \le r}}(-1)^{\sum\limits_{i=1}^{r}m_i} \left\{\prod_{i=1}^{r}q^{m_i l_{k(i)}}q^{-m_i(n_i-1)}\qbi{n_i}{m_i}\right\} \nonumber \\ \phantom{\widehat{J}_{(\epsilon)(a)}^{(\nu)} =}{} \times \left(\prod_{i=1}^rq^{(L-N)(n_i-2a_i)}\right) \left(\prod_{(i_1, i_2)<j} q^{\mu_{i_1, i_2} l_j}\right) \left(\prod_{(i_1, i_2)<(j_1, j_2)} q^{-\mu_{i_1, i_2}}\right) \nonumber \\ \phantom{\widehat{J}_{(\epsilon)(a)}^{(\nu)} =}{} \times \sum_{\substack{C_i\sqcup D_i=A_{\mu, i} \\ D_i'=D_i\cap M_i \\ 1\le i \le r}} (\prod_{b=1}^{N}q^{-1-\epsilon_b})^{\sum\limits_{i=1}^{r}|C_i|}\left(\prod_{\substack{(i_1, i_2)<(j_1, j_2)\\ (i_1, i_2)\in C_1\cup\dots \cup C_r \\ (j_1, j_2)\in D_1\cup\cdots \cup D_r}} q^2\right) \nonumber \\ \phantom{\widehat{J}_{(\epsilon)(a)}^{(\nu)} =}{} \times \sum_{\substack{1\le b_{i, j} \le N \\ 1\le i \le r \\ 1\le j \le |D_i|}} \prod_{i_1=1}^r \left\{ \prod_{i_2=1}^{|D_{i_1}|}\left(\big(1-q^{-1-\epsilon_{b_{i_1, i_2}}}\big) \prod_{b\ne b_{i_1, i_2}}\frac{t_{b_{i_1, i_2}}-q^{-1-\epsilon_b}t_b}{t_{b_{i_1, i_2}}-t_b} \right. \right. \nonumber \\ \left. \phantom{\widehat{J}_{(\epsilon)(a)}^{(\nu)} =}{} \times \prod_{(i_1, i_2)<(j_1, j_2)}\frac{t_{b_{i_1, i_2}}-t_{b_{j_1, j_2}}}{t_{b_{i_1, i_2}}-q^{-2}t_{b_{j_1, j_2}}} \prod_{j=1}^{k(i_1)-1}\frac{z_j-q^{-l_j}t_{b_{i_1, i_2}}}{z_j-q^{l_j}t_{b_{i_1, i_2}}} \right) \nonumber \\ \left. \phantom{\widehat{J}_{(\epsilon)(a)}^{(\nu)} =}{}\times \prod_{i_2=|D_{i_1}'|+1}^{|D_{i_1}|}\frac{z_{k(i_1)}-q^{-l_{k(i_1)}}t_{b_{i_1, i_2}}}{z_{k(i_1)}-q^{l_{k(i_1)}}t_{b_{i_1, i_2}}} \right\}. \label{rep} \end{gather} Set $\lambda_i=|A_{\mu, i}\cap M_i|$, $\gamma_i=|D_i|$, $s_i=|D_i'|$, $1\le i \le r$. Then the right hand side of (\ref{rep}) is equal to \begin{gather*} \sum_{\substack{0\le m_i\le n_i \\ 1\le i \le r}}(-1)^{\sum\limits_{i=1}^{r} m_i} \left\{\prod_{i=1}^{r}q^{m_i l_{k(i)}}q^{-m_i(n_i-1)} \qbi{n_i}{m_i}\right\} \\ \qquad{}\times\sum_{\substack{0\le \gamma_j \le a_j \\ 1\le j \le r}}\sum_{\substack{0\le s_j\le\gamma_j \\ 1\le j \le r}} \sum_{\substack{0\le \lambda_j\le m_j \\ 1\le j \le r}} C_{(a)(\gamma)} \left\{q^{\sum\limits_{s=1}^{r}l_{k(s)}(m_s-2\lambda_s)}\right\} \\ \qquad{}\times \left\{\prod_{i_1=1}^r \left(\sum_{\substack{|A_{\mu, i_1}|=a_{i_1}\\ |A_{\mu, i_1}\cap M_{i_1}|=\lambda_{i_1}}}\prod_{\substack{i_2<j_2 \\ i_1=j_1}} q^{-\mu_{i_1, i_2}}\right)\right\}\\ \qquad{}\times \left(\prod_{i_1=1}^{r} q^{\lambda_{i_1}\gamma_{i_1}+a_{i_1}\gamma_{i_1}-a_{i_1}s_{i_1}-\gamma_{i_1}^2} \qbi{\lambda_{i_1}}{s_{i_1}} \qbi{a_{i_1}-\lambda_{i_1}}{\gamma_{i_1}-s_{i_1}} \right) \\ \qquad{}\times \sum_{\substack{1\le b_{i_1, i_2}\le N \\ 1\le i_1 \le r \\ 1\le i_2 \le \gamma_{i_1}}} \prod_{i_1=1}^{r}\left\{\left(\prod_{i_2=1}^{\gamma_{i_1}}(1-q^{-1-\epsilon_{b_{i_1, i_2}}}) \prod_{b\ne b_{i_1, i_2}}\frac{t_{b_{i_1, i_2}}-q^{-1-\epsilon_b}t_b}{t_{b_{i_1, i_2}}-t_b} \right. \right. \\ \left. \left. 
\qquad{}\times \prod_{i_2<j_2}\frac{t_{b_{i_1, i_2}}-t_{b_{i_1, j_2}}}{t_{b_{i_1, i_2}}-q^{-2}t_{b_{i_1, j_2}}} \prod_{j=1}^{k(i_1)-1}\frac{z_j-q^{-l_j}t_{b_{i_1, i_2}}}{z_j-q^{l_j}t_{b_{i_1, i_2}}} \right) \prod_{i_2=s_{i_1}+1}^{\gamma_{i_1}}\frac{z_{k(i_1)}-q^{-l_{k(i_1)}}t_{b_{i_1, i_2}}}{z_{k(i_1)}-q^{l_{k(i_1)}}t_{b_{i_1, i_2}}} \right\} \\
\qquad{}\times \left(\prod_{i_1<j_1}\frac{t_{b_{i_1, i_2}}-t_{b_{j_1, j_2}}}{t_{b_{i_1, i_2}}-q^{-2}t_{b_{j_1, j_2}}}\right),
\end{gather*}
where
\begin{gather*}
C_{(a) (\gamma)} = (-1)^{\sum\limits_{i=1}^r a_i} \left(q^{\sum\limits_{s=1}^r\big(\sum\limits_{t=k(s)+1}^{n}l_t\big)(n_s-2 a_s)} \right) \left(q^{(L-N)\big\{\sum\limits_{s=1}^{r}(n_s-2 a_s)\big\}}\right) \\
\phantom{C_{(a) (\gamma)} =}{} \times \left(q^{-\sum\limits_{s=1}^{r}(n_s-2 a_s)\big(\sum\limits_{t=s+1}^r n_t\big)}\right) \left(q^{2\sum\limits_{s=1}^{r}\sum\limits_{s<t}\gamma_t(a_s-\gamma_s)}\right) \left(\prod_{1\le b \le N}q^{-1-\epsilon_b}\right)^{\sum\limits_{s=1}^{r}(a_s-\gamma_s)}.
\end{gather*}
Here we have used Lemma \ref{combi} $(i)$. By $(ii)$ of Lemma \ref{combi} we have
\begin{gather*}
\widehat{J}_{(\epsilon)(a)}^{(\nu)} = \sum_{\substack{0\le \gamma_j \le a_j \\ 1 \le j \le r}} C_{(a) (\gamma)} \sum_{\substack{1\le b_{i_1, i_2}\le N \\ 1\le i_1 \le r \\ 1 \le i_2 \le \gamma_{i_1}}} \left(\prod_{i_1<j_1}\frac{t_{b_{i_1, i_2}}-t_{b_{j_1, j_2}}}{t_{b_{i_1, i_2}}-q^{-2}t_{b_{j_1, j_2}}}\right) \\
\phantom{\widehat{J}_{(\epsilon)(a)}^{(\nu)}=}{} \times \prod_{i_1=1}^{r}\left\{\sum_{s_{i_1}=0}^{\gamma_{i_1}}\sum_{\lambda_{i_1}=s_{i_1}}^{a_{i_1}-\gamma_{i_1}+s_{i_1}} \sum_{m_{i_1}=0}^{n_{i_1}}(-1)^{m_{i_1}} q^{-m_{i_1}(n_{i_1}-1)} q^{2l_{k(i_1)}(m_{i_1}-\lambda_{i_1})}\qbi{n_{i_1}}{m_{i_1}} \right. \\
\left. \phantom{\widehat{J}_{(\epsilon)(a)}^{(\nu)}=}{} \times \left(\sum_{\substack{|A_{\mu, i_1}|=a_{i_1} \\ |A_{\mu, i_1}\cap M_{i_1}|=\lambda_{i_1}}} \prod_{\substack{i_2<j_2 \\ i_1=j_1}}q^{-\mu_{i_1, i_2}}\right) \left( q^{\lambda_{i_1}\gamma_{i_1}+a_{i_1}\gamma_{i_1}-a_{i_1}s_{i_1}-\gamma_{i_1}^2} \qbi{\lambda_{i_1}}{s_{i_1}} \qbi{a_{i_1}-\lambda_{i_1}}{\gamma_{i_1}-s_{i_1}} \right) \right. \\
\left. \phantom{\widehat{J}_{(\epsilon)(a)}^{(\nu)}=}{}\times \left\{\left(\prod_{i_2=1}^{\gamma_{i_1}}(1-q^{-1-\epsilon_{b_{i_1, i_2}}}) \prod_{b\ne b_{i_1, i_2}}\frac{t_{b_{i_1, i_2}}-q^{-1-\epsilon_b}t_b}{t_{b_{i_1, i_2}}-t_b} \prod_{i_2<j_2}\frac{t_{b_{i_1, i_2}}-t_{b_{i_1, j_2}}}{t_{b_{i_1, i_2}}-q^{-2}t_{b_{i_1, j_2}}} \right. \right. \right. \\
\left. \left. \left. \phantom{\widehat{J}_{(\epsilon)(a)}^{(\nu)}=}{} \times \prod_{j=1}^{k(i_1)-1}\frac{z_j-q^{-l_j}t_{b_{i_1, i_2}}}{z_j-q^{l_j}t_{b_{i_1, i_2}}} \right) \prod_{i_2=s_{i_1}+1}^{\gamma_{i_1}}\frac{z_{k(i_1)}-q^{-l_{k(i_1)}}t_{b_{i_1, i_2}}}{z_{k(i_1)}-q^{l_{k(i_1)}}t_{b_{i_1, i_2}}} \right\} \right\} \\
\phantom{\widehat{J}_{(\epsilon)(a)}^{(\nu)}}{}= \sum_{\substack{0\le \gamma_j \le a_j \\ 1 \le j \le r}} C_{(a) (\gamma)} \sum_{\substack{1\le b_{i_1, i_2}\le N \\ 1\le i_1 \le r \\ 1 \le i_2 \le \gamma_{i_1}}} \left(\prod_{i_1<j_1}\frac{t_{b_{i_1, i_2}}-t_{b_{j_1, j_2}}}{t_{b_{i_1, i_2}}-q^{-2}t_{b_{j_1, j_2}}}\right) \\
\phantom{\widehat{J}_{(\epsilon)(a)}^{(\nu)}=}{}\times \prod_{i_1=1}^{r}\left\{\sum_{s_{i_1}=0}^{\gamma_{i_1}}\sum_{\lambda_{i_1}=s_{i_1}}^{a_{i_1}-\gamma_{i_1}+s_{i_1}} \sum_{m_{i_1}=0}^{n_{i_1}}(-1)^{m_{i_1}} q^{-m_{i_1}(n_{i_1}-1)} q^{2l_{k(i_1)}(m_{i_1}-\lambda_{i_1})} \qbi{n_{i_1}}{m_{i_1}} \right. \\
\left.
\phantom{\widehat{J}_{(\epsilon)(a)}^{(\nu)}=}{}\times \left( q^{n_{i_1}\lambda_{i_1}+a_{i_1}n_{i_1}-a_{i_1}m_{i_1}-a_{i_1}-\frac{n_{i_1}(n_{i_1}-1)}{2}} \qbi{m_{i_1}}{\lambda_{i_1}} \qbi{n_{i_1}-m_{i_1}}{a_{i_1}-\lambda_{i_1}} \right) \right. \\ \left. \phantom{\widehat{J}_{(\epsilon)(a)}^{(\nu)}=}{}\times \left( q^{\lambda_{i_1}\gamma_{i_1}+a_{i_1}\gamma_{i_1}-a_{i_1}s_{i_1}-\gamma_{i_1}^2} \qbi{\lambda_{i_1}}{s_{i_1}} \qbi{a_{i_1}-\lambda_{i_1}}{\gamma_{i_1}-s_{i_1}} \right) \right. \\ \left. \phantom{\widehat{J}_{(\epsilon)(a)}^{(\nu)}=}{}\times \left\{\prod_{i_2=1}^{\gamma_{i_1}}\left((1-q^{-1-\epsilon_{b_{i_1, i_2}}}) \prod_{b\ne b_{i_1, i_2}}\frac{t_{b_{i_1, i_2}}-q^{-1-\epsilon_b}t_b}{t_{b_{i_1, i_2}}-t_b} \prod_{i_2<j_2}\frac{t_{b_{i_1, i_2}}-t_{b_{i_1, j_2}}}{t_{b_{i_1, i_2}}-q^{-2}t_{b_{i_1, j_2}}} \right. \right. \right. \\ \left. \left. \left. \phantom{\widehat{J}_{(\epsilon)(a)}^{(\nu)}=}{}\times \prod_{j=1}^{k(i_1)-1}\frac{z_j-q^{-l_j}t_{b_{i_1, i_2}}}{z_j-q^{l_j}t_{b_{i_1, i_2}}} \right) \prod_{i_2=s_{i_1}+1}^{\gamma_{i_1}}\frac{z_{k(i_1)}-q^{-l_{k(i_1)}}t_{b_{i_1, i_2}}}{z_{k(i_1)}-q^{l_{k(i_1)}}t_{b_{i_1, i_2}}} \right\} \right\}. \end{gather*} It is easy to show \begin{gather*} \sum_{\lambda=s}^{a-\gamma+s} \sum_{m=0}^{n}(-1)^{m} q^{-m(n-1)} \qbi{n}{m} q^{2l(m-\lambda)} \left(q^{n \lambda +a n-a m-a-\frac{n(n-1)}{2}} \qbi{m}{\lambda} \qbi{n-m}{a-\lambda} \right) \\ \qquad{}\times \left( q^{\lambda \gamma+a \gamma-a s-\gamma^2} \qbi{\lambda}{s} \qbi{a-\lambda}{\gamma-s} \right) \\ \qquad{}= (-1)^s q^{a(n-s-1)+s} q^{-\frac{n(n-1)}{2}} \frac{[n]!}{[s]![a-s]!} \sum_{i=0}^{n-a} (-1)^{i}q^{i(2l-n-a+1)} \frac{1}{[i]![n-a-i]!}\delta_{a, \gamma}, \end{gather*} for $0\le s \le \gamma \le a\le n$. Hence \begin{gather*} \widehat{J}_{(\epsilon)(a)}^{(\nu)} =(-1)^{\sum\limits_{i=1}^r a_i} \left(q^{\sum\limits_{s=1}^r\big(\sum\limits_{t=k(s)+1}^{n}l_t\big)(n_s-2 a_s)} \right) \left(q^{(L-N)\big\{\sum\limits_{s=1}^{r}(n_s-2 a_s)\big\}}\right) \\ \phantom{\widehat{J}_{(\epsilon)(a)}^{(\nu)}=}{} \times \left(q^{-\sum\limits_{s=1}^{r}(n_s-2 a_s)\big(\sum\limits_{t=s+1}^r n_t\big)}\right) \sum_{\substack{1\le b_{i_1, i_2}\le N \\ 1\le i_1 \le r \\ 1\le i_2 \le a_{i_1}}} \left(\prod_{i_1<j_1}\frac{t_{b_{i_1, i_2}}-t_{b_{j_1, j_2}}}{t_{b_{i_1, i_2}}-q^{-2}t_{b_{j_1, j_2}}}\right) \\ \phantom{\widehat{J}_{(\epsilon)(a)}^{(\nu)}=}{}\times \prod_{i_1=1}^{r}\left\{\sum_{s_{i_1}=0}^{a_{i_1}} (-1)^{s_{i_1}} q^{a_{i_1}(n_{i_1}-s_{i_1}-1)+s_{i_1}} q^{-\frac{n_{i_1}(n_{i_1}-1)}{2}} \frac{[n_{i_1}]!}{[s_{i_1}]![a_{i_1}-s_{i_1}]!} \right. \\ \left.\phantom{\widehat{J}_{(\epsilon)(a)}^{(\nu)}=}{}\times \sum_{i=0}^{n_{i_1}-a_{i_1}} (-1)^{i}q^{i(2l_{k(i_1)}-n_{i_1}-a_{i_1}+1)} \frac{1}{[i]![n_{i_1}-a_{i_1}-i]!} \right. \\ \left. \phantom{\widehat{J}_{(\epsilon)(a)}^{(\nu)}=}{}\times \left\{\prod_{i_2=1}^{a_{i_1}}\left((1-q^{-1-\epsilon_{b_{i_1, i_2}}}) \prod_{b\ne b_{i_1, i_2}}\frac{t_{b_{i_1, i_2}}-q^{-1-\epsilon_b}t_b}{t_{b_{i_1, i_2}}-t_b} \prod_{i_2<j_2}\frac{t_{b_{i_1, i_2}}-t_{b_{i_1, j_2}}}{t_{b_{i_1, i_2}}-q^{-2}t_{b_{i_1, j_2}}} \right. \right. \right. \\ \left. \left. \left. 
\phantom{\widehat{J}_{(\epsilon)(a)}^{(\nu)}=}{}\times \prod_{j=1}^{k(i_1)-1}\frac{z_j-q^{-l_j}t_{b_{i_1, i_2}}}{z_j-q^{l_j}t_{b_{i_1, i_2}}} \right) \prod_{i_2=s_{i_1}+1}^{a_{i_1}}\frac{z_{k(i_1)}-q^{-l_{k(i_1)}}t_{b_{i_1, i_2}}}{z_{k(i_1)}-q^{l_{k(i_1)}}t_{b_{i_1, i_2}}} \right\} \right\}.\tag*{\qed}
\end{gather*}\renewcommand{\qed}{}
\end{proof}
\begin{lemma} If $a_i \ne n_i$ for some $i$, then
\begin{gather*}
\sum_{\substack{\epsilon_i=\pm \\ 1\le i \le N}}\left(\prod_{j=1}^N \epsilon_j \right) \left(\prod_{a<b}\frac{q^{\epsilon_b}t_b-q^{\epsilon_a}t_a}{t_b-q^{-2}t_a}\right)\widehat{J}_{(\epsilon)(a)}^{(\nu)} =0.
\end{gather*}
\end{lemma}
\begin{proof} It is enough to show the following equation: for $1\le b_{i_1, i_2}\le N$ $(1\le i_1 \le r$, $1\le i_2 \le a_{i_1})$ with $b_{i_1, i_2}\ne b_{j_1, j_2}$ $((i_1, i_2)\ne(j_1, j_2))$,
\begin{gather}
\sum_{\substack{\epsilon_i=\pm \\ 1\le i \le N}}\left(\prod_{i=1}^N \epsilon_{i}\right) \prod_{a<b}(q^{\epsilon_b}t_b-q^{\epsilon_a}t_a) \prod_{\substack{1\le i_1 \le r \\ 1\le i_2 \le a_{i_1}}} \left\{(1-q^{-1-\epsilon_{b_{i_1, i_2}}}) \prod_{b\ne b_{i_1, i_2}}(t_{b_{i_1, i_2}}-q^{-1-\epsilon_b}t_b) \right\} \nonumber \\
\qquad =(1-q^{-2})^{N} q^{\frac{N(N-1)}{2}}\left(\prod_{s=1}^{r}\delta_{a_s, n_s}\right) \left\{\prod_{a<b} (t_b-t_a)\right\} \left\{\prod_{b\ne b_{i_1, i_2}}(t_{b_{i_1, i_2}}-q^{-2}t_b)\right\}. \label{mod}
\end{gather}
For a set $\{b_{1, 1}, \dots, b_{r, a_r}\}=\{b_1, \dots, b_{\alpha}\}$, let $\{c_1, \dots, c_{N-\alpha}\}$ be defined by
\begin{gather*}
\{b_1, \dots, b_{\alpha}\}\sqcup \{c_1, \dots ,c_{N-\alpha}\}=\{1, \dots, N \},
\end{gather*}
where $\alpha=\sum\limits_{i=1}^r a_i$. Then the left hand side of (\ref{mod}) is equal to
\begin{gather*}
(1-q^{-2})^{\alpha}\left(\prod_{1\le i \le \alpha}\delta_{\epsilon_{b_i}, +} \right) \left\{\prod_{i<j}q (t_{b_j}- t_{b_i})\right\} \left\{\prod_{\substack{1\le i, j \le \alpha \\ i\ne j}} (t_{b_{i}}-q^{-2}t_{b_j})\right\} \\
\qquad{} \times \left\{\prod_{b_i<c_j}(-q)\right\} \left\{\prod_{c_i<b_j}q \right\} \sum_{\substack{\epsilon_{c_i}=\pm \\ 1\le i \le N-\alpha}}\left(\prod_{i=1}^{N-\alpha} \epsilon_{c_i}\right) \left\{\prod_{i<j}(q^{\epsilon_{c_j}}t_{c_j}-q^{\epsilon_{c_i}}t_{c_i})\right\} \\
\qquad{} \times \left\{\prod_{1\le i \le \alpha}\prod_{1\le j \le N-\alpha}(t_{b_i}-q^{\epsilon_{c_j}-1}t_{c_j})\right\} \left\{\prod_{1\le i \le \alpha}\prod_{1\le j \le N-\alpha}(t_{b_{i}}-q^{-1-\epsilon_{c_j}}t_{c_j})\right\} .
\end{gather*}
Using
\begin{gather*}
(t_{b_j}-q^{\epsilon_{c_i}-1}t_{c_i})\big(t_{b_{i}}-q^{-1-\epsilon_{c_j}}t_{c_j}\big) =(t_{b_j}-t_{c_i})\big(t_{b_{i}}-q^{-2}t_{c_j}\big),
\end{gather*}
we have
\begin{gather*}
\sum_{\epsilon}\left(\prod_{i=1}^N \epsilon_{i}\right)\! \left\{\prod_{a<b}\big(q^{\epsilon_b}t_b-q^{\epsilon_a}t_a\big)\right\}\! \left\{\prod_{\substack{1\le i_1 \le r \\ 1\le i_2 \le a_{i_1}}} \big(1-q^{-1-\epsilon_{b_{i_1, i_2}}}\big)\!\right\}\! \left\{\prod_{b\ne b_{i_1, i_2}}\big(t_{b_{i_1, i_2}}-q^{-1-\epsilon_b}t_b\big)\!\right\}\! \\
\qquad {} =(1-q^{-2})^{\alpha}\left(\prod_{1\le i \le \alpha}\delta_{\epsilon_{b_i}, +} \right) \!\left\{\prod_{i<j}q (t_{b_j}- t_{b_i})\!\right\}\! \left\{\prod_{\substack{1\le i, j \le \alpha \\ i\ne j}} (t_{b_{i}}-q^{-2}t_{b_j})\!\right\}\!
\\ \qquad{} \times \left\{\prod_{1\le i \le \alpha}\prod_{1\le j \le N-\alpha}(t_{b_i}-t_{c_j}) \big(t_{b_{i}}-q^{-2}t_{c_j}\big)\right\} \\
\qquad{} \times \left\{\prod_{b_i<c_j}(-q)\right\} \left\{\prod_{c_i<b_j}q \right\} \sum_{\genfrac{}{}{0pt}{}{\epsilon_{c_i}=\pm}{1\le i \le N-\alpha}}\left(\prod_{i=1}^{N-\alpha} \epsilon_{c_i}\right) \left\{\prod_{i<j}\big(q^{\epsilon_{c_j}}t_{c_j}-q^{\epsilon_{c_i}}t_{c_i}\big)\right\} .
\end{gather*}
Let $\alpha\ne N$ and ${\boldsymbol{a}}_i(\epsilon)={}^{t}(1, q^{\epsilon}t_i, (q^{\epsilon}t_i)^2, \dots, (q^{\epsilon}t_i)^{N-\alpha-1})$. Then
\begin{gather}
\sum_{\substack{\epsilon_{i}=\pm \\ 1\le i \le N-\alpha}}\left(\prod_{i=1}^{N-\alpha} \epsilon_{i}\right) \prod_{i<j}(q^{\epsilon_{j}}t_{j}-q^{\epsilon_{i}}t_{i})\nonumber\\
\qquad{} =\sum_{\substack{\epsilon_{i}=\pm\\ 1\le i \le N-\alpha}}\left(\prod_{i=1}^{N-\alpha} \epsilon_{i}\right) \det ({\boldsymbol{a}}_1(\epsilon_1), {\boldsymbol{a}}_2(\epsilon_2), \dots, {\boldsymbol{a}}_{N-\alpha}(\epsilon_{N-\alpha})). \label{0}
\end{gather}
Since
\begin{gather*}
\sum_{\epsilon_i=\pm}\epsilon_i{\boldsymbol{a}}_i(\epsilon)={}^{t}(0, (q-q^{-1})t_i, \dots, (q^{N-\alpha-1}-q^{-(N-\alpha-1)})t_i^{N-\alpha-1}),
\end{gather*}
the right hand side of (\ref{0}) is equal to $0$.
\end{proof}
If $a_i=n_i$ for all $i$, then
\begin{gather}
\sum_{\substack{\epsilon_i=\pm \\ 1\le i \le N}}\left(\prod_{i=1}^N \epsilon_i\right) \left(\prod_{1\le a< b\le N}\frac{q^{\epsilon_b} t_b-q^{\epsilon_a} t_a}{t_b-q^{-2} t_a}\right)\widehat{J}_{(\epsilon)(a)}^{(\nu)}\nonumber\\
\qquad{}= C_1\left(\prod_{a<b} \frac{t_b-t_a}{t_b-q^{-2}t_a}\right) \sum_{\substack{\Gamma_1\sqcup \dots \sqcup \Gamma_r=\{1, \dots, N\}\\ |\Gamma_s|=n_s\,\,(s=1, \dots, r)}} \sum_{\substack{b_{i_1, i_2}\in \Gamma_{i_1} \\ 1\le i_1 \le r \\ 1\le i_2 \le n_{i_1}}} \left(\prod_{i_1> j_1}\frac{t_{b_{i_1, i_2}}-q^{-2}t_{b_{j_1, j_2}}}{t_{b_{i_1, i_2}}-t_{b_{j_1, j_2}}}\right) \nonumber \\
\qquad{} \times \prod_{i_1=1}^{r}\left\{ \sum_{s_{i_1}=0}^{n_{i_1}} (-1)^{s_{i_1}} q^{-(n_{i_1}-1)s_{i_1}} \qbi{n_{i_1}}{s_{i_1}} \prod_{i_2=1}^{s_{i_1}}(z_{k(i_1)}-q^{l_{k(i_1)}}t_{b_{i_1, i_2}}) \right. \nonumber \\
\left.\qquad{} \times \prod_{i_2=s_{i_1}+1}^{n_{i_1}}(z_{k(i_1)}-q^{-l_{k(i_1)}}t_{b_{i_1, i_2}}) \prod_{i_2>j_2}\frac{t_{b_{i_1, i_2}}-q^{-2}t_{b_{i_1, j_2}}}{t_{b_{i_1, i_2}}-t_{b_{i_1, j_2}}} \right. \nonumber \\
\left.\qquad{} \times \prod_{i_2=1}^{n_{i_1}}\left( \frac{1}{z_{k(i_1)}-q^{l_{k(i_1)}}t_{b_{i_1, i_2}}} \prod_{j=1}^{k(i_1)-1}\frac{z_j-q^{-l_j}t_{b_{i_1, i_2}}}{z_j-q^{l_j}t_{b_{i_1, i_2}}} \right) \right\}, \label{a=n}
\end{gather}
where
\begin{gather*}
C_1=(-1)^N\big(1-q^{-2}\big)^{N} q^{N^2-LN} q^{\frac{N(N-1)}{2}} q^{\sum\limits_{i=1}^r \frac{n_i(n_i-1)}{2}}\\
\phantom{C_1=}{}\times \left\{q^{-\sum\limits_{s=1}^r\big(\sum\limits_{t=k(s)+1}^{n}l_t\big) n_s} \right\} \left\{q^{\sum\limits_{s=1}^{r}\big(\sum\limits_{t=s+1}^r n_t\big)n_s}\right\}.
\end{gather*}
By Lemma {\ref{z}} the right hand side of (\ref{a=n}) becomes
\begin{gather*}
C_1\prod_{s=1}^r\left\{(-1)^{n_s}[n_s]!
q^{-l_{k(s)}n_s} q^{-\frac{n_s(n_s-1)}{2}} \left\{\prod_{i=0}^{n_s-1}\big(1-q^{2(l_{k(s)}-i)}\big)\right\}\right\} \\ \qquad{} \times \left(\prod_{a<b} \frac{t_b-t_a}{t_b-q^{-2}t_a}\right) \sum_{\genfrac{}{}{0pt}{}{\Gamma_1\sqcup \dots \sqcup \Gamma_r=\{1, \dots, N\}}{|\Gamma_s|=n_s\,\,(s=1, \dots, r)}} \left(\prod_{\genfrac{}{}{0pt}{}{1\le i<j \le r}{a\in \Gamma_i, b\in \Gamma_j }}\frac{t_{b}-q^{-2}t_{a}}{t_{b}-t_{a}}\right) \\ \qquad{} \times \prod_{s=1}^{r} \prod_{b\in \Gamma_s} \left( \frac{t_b}{z_{k(s)}-q^{l_{k(s)}}t_{b}} \prod_{i=1}^{k(s)-1}\frac{z_i-q^{-l_i}t_{b}}{z_i-q^{l_i}t_{b}} \right) . \end{gather*} This completes the proof of Proposition $\ref{mainprop}$.\hfill \qed
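As an independent sanity check of the combinatorial input used above, Lemma~\ref{combi}$(i)$ can be verified by brute force for small $n$. The following minimal sketch (illustrative Python code, assuming a generic numerical value of $q$; the helper names are not taken from the references) compares the sum over subsets with the $q$-binomial closed form:
\begin{verbatim}
# Brute-force check of Lemma combi (i): the sum over decompositions
# A u B = {1,...,n} with |A| = m of q^(2 #{(i,j): i<j, i in A, j in B})
# equals q^(m(n-m)) [n m]_q, where [n] = (q^n - q^-n)/(q - q^-1).
from itertools import combinations

def qint(n, q):
    return (q ** n - q ** (-n)) / (q - q ** (-1))

def qfact(n, q):
    result = 1.0
    for i in range(1, n + 1):
        result *= qint(i, q)
    return result

def qbinom(n, m, q):
    return qfact(n, q) / (qfact(n - m, q) * qfact(m, q))

q = 0.7
for n in range(1, 7):
    for m in range(n + 1):
        lhs = 0.0
        for A in combinations(range(1, n + 1), m):
            B = [j for j in range(1, n + 1) if j not in A]
            lhs += q ** (2 * sum(1 for i in A for j in B if i < j))
        assert abs(lhs - q ** (m * (n - m)) * qbinom(n, m, q)) < 1e-9
print("Lemma (i) verified numerically for all n <= 6")
\end{verbatim}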
\section{Introduction}
\noindent {\bf\em I Introduction}: Clarifying the formation process of mini black holes (BHs) in higher-dimensional spacetimes has become an important issue since the possibility of BH formation in accelerators was pointed out. If our space is the 3-brane in large \cite{ADD98} or warped \cite{RS99} extra dimensions, the Planck energy could be of $O({\rm TeV})$, which may be accessible with planned particle accelerators. In the presence of the extra dimensions, a BH of very small mass may be produced in the accelerators and its evidence may be detected. The possible phenomenology of a BH produced in accelerators was first discussed in \cite{BHUA} (see \cite{reviews} for reviews).

In a high-energy particle collision of sufficiently small impact parameter in a higher-dimensional spacetime, the two particles will merge to form a distorted BH, which then settles down to a quasistationary state after the emission of gravitational waves. The quasistationary BH will soon evaporate by the Hawking radiation, implying that quantum gravity effects will be important. The evaporation and the quantum gravity effects \cite{greybody} have been studied to yield a plausible scenario (cf. \cite{related} for related issues). By contrast, the BH formation and the subsequent evolution driven by gravitational radiation have not yet been analyzed in detail (but see \cite{HEADON}). These phases are well described in the context of general relativity \cite{GR04}, but because of their highly nonlinear nature, any approximation breaks down. Thus, numerical-relativity simulation is the unique approach for studying them.

In this paper, we present a new numerical-relativity study of the high-velocity collision of two BHs in four dimensions. This is the first step toward understanding the high-velocity collision of two BHs in higher-dimensional spacetimes. We perform simulations for two equal-mass BHs with no spin. A new approach is adopted for preparing the initial condition (see Sec.~II). The velocity of each BH is chosen in the range 0.6--$0.9c$ (where $c$ is the speed of light), and the results are extrapolated to infer the outcome for $v \rightarrow c$. We determine the largest value of the impact parameter for which a new BH is formed, and we approximately determine the final mass and spin of that BH. In the following, we use geometrical units in which $c=G=1$.

\noindent {\bf\em II Initial condition}: There are several methods for preparing the initial condition for a high-velocity collision of two BHs. A popular method is the moving-puncture approach, which has been adopted in a recent work on the head-on collision of two high-velocity BHs \cite{HEADON}. In the simple moving-puncture approach, in which the three-dimensional spatial hypersurface is assumed to be initially conformally flat, the BHs are not in a stationary state in their own comoving frame, and hence a large amount of spurious gravitational waves is included, as pointed out in \cite{HEADON}. To avoid this unsuitable property, in this paper we use a {\em different} approach from the simple moving-puncture one: we superimpose two boosted BHs, as described below.

The line element of a nonrotating BH in the isotropic coordinates is written as
\begin{eqnarray}
ds^2=-\alpha_0^2 dt_0^2 + \psi_0^4 (dx_0^2 + dy_0^2 + dz_0^2),
\end{eqnarray}
where
\begin{eqnarray}
\alpha_0=\Bigl(1-{m_0 \over 2r_0}\Bigr)\psi_0^{-1},~~\psi_0 =1+{m_0 \over 2r_0}.
\end{eqnarray}
Here $r_0=\sqrt{x_0^2 + y_0^2 + z_0^2}$, and $m_0$ is the BH mass in the rest frame of the BH.
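For reference, these metric functions are easy to evaluate; the following minimal sketch (illustrative Python code, not the implementation used in this paper) computes $\alpha_0$ and $\psi_0$ and shows that the lapse vanishes at the horizon $r_0=m_0/2$:
\begin{verbatim}
# Lapse alpha_0 and conformal factor psi_0 of a nonrotating BH in
# isotropic coordinates (geometrical units, c = G = 1).
def psi0(r0, m0=1.0):
    return 1.0 + m0 / (2.0 * r0)

def alpha0(r0, m0=1.0):
    return (1.0 - m0 / (2.0 * r0)) / psi0(r0, m0)

for r0 in (0.5, 1.0, 2.0, 10.0):          # r0 in units of m0
    # alpha0 = 0 at r0 = 0.5, i.e., at the horizon r0 = m0/2
    print(r0, alpha0(r0), psi0(r0))
\end{verbatim}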
By boosting the BH in the $x$-axis direction with speed $v$, the line element becomes
\begin{eqnarray}
&&ds^2=-\Gamma^2 (\alpha_0^2-\psi_0^4v^2) dt^2 + 2 \Gamma^2 v (\alpha_0^2 - \psi_0^4) dt dx \nonumber \\
&&~~~~~~~~~~+ \psi_0^4 (B_0^2 dx^2 + dy^2 + dz^2), \label{lorentz0}
\end{eqnarray}
where the new coordinates $x^{\mu}$ are related to the original ones by the Lorentz transformation $t = \Gamma (t_0 + v x_0)$, $x = \Gamma (x_0 + v t_0)$, $y=y_0$, and $z=z_0$. Here $\Gamma=1/\sqrt{1-v^2}$ is the Lorentz factor, and
\begin{eqnarray}
B_0^2=\Gamma^2 (1-v^2 \alpha_0^2 \psi_0^{-4}).
\end{eqnarray}
Note that at $t=0$, $r_0=\sqrt{\Gamma^2 x^2 + y^2 + z^2}$. Because the exact solution for a boosted BH is described by Eq.~(\ref{lorentz0}), in the following we consider initial data for which the spatial metric has the form
\begin{eqnarray}
dl^2 = \psi^4 (B^2 dx^2 + dy^2 +dz^2). \label{line}
\end{eqnarray}
Before proceeding, we summarize the lapse function ($\alpha$), the nonzero component of the shift vector ($\beta^i$), and the nonzero components of the extrinsic curvature ($K_{ij}$) of the boosted BH at $t=0$:
\begin{eqnarray}
&& \alpha=\alpha_0 B_0^{-1},~~ \beta^x={\alpha_0^2 -\psi_0^4 \over \psi_0^4 -\alpha_0^2 v^2}v,\\
&& K_{xx}={\Gamma^2 B_0 x v \over r_0} \biggl[2 \alpha_0'-{\alpha_0 \over 2}[\ln (\psi_0^4-\alpha_0^2 v^2)]' \biggr] ,\label{kxx}\\
&& K_{yy}=K_{zz}={2 \Gamma^2 x v \alpha_0 \psi_0' \over \psi_0 B_0 r_0}, \\
&& K_{xy}={B_0 v y \over r_0}\biggl[\alpha_0' -{\alpha_0 \over 2}[\ln (\psi_0^4-\alpha_0^2 v^2)]' \biggr],\\
&& K_{xz}={B_0 v z \over r_0}\biggl[\alpha_0' -{\alpha_0 \over 2}[\ln (\psi_0^4-\alpha_0^2 v^2)]' \biggr].\label{kxz}
\end{eqnarray}
The dash (${}'$) denotes the derivative with respect to $r_0$ (e.g., $\alpha_0'=d\alpha_0/dr_0$), and $K_{ij}$ is derived from
\begin{eqnarray}
K_{ij}={1 \over 2 \alpha}\Bigl(D_i \beta_j + D_j \beta_i -\partial_t \gamma_{ij} \Bigr),
\end{eqnarray}
where $D_i$ denotes the covariant derivative with respect to the three-metric $\gamma_{ij}$, and we use the relation $\partial_t \gamma_{ij}=-\Gamma v \partial_{x_0} \gamma_{ij}$.

Now, we describe the initial data for two BHs. Although in this paper we adopt an initial condition which {\em approximately} satisfies the constraint equations of general relativity, we first summarize a general framework. We write the conformal factor as
\begin{eqnarray}
&&\psi = \psi_{\rm main} + \phi, \\
&&\psi_{\rm main} \equiv 1 + {m_1 \over 2r_1} + {m_2 \over 2r_2},
\end{eqnarray}
where $m_a$ denotes the mass parameter of each BH, $r_a = \sqrt{\Gamma^2 (x-x_a)^2 + (y-y_a)^2 + z^2}$, and $(x_a, y_a, 0)$ denotes the location of each BH at $t=0$. Namely, we express each BH in a modified moving-puncture framework, in which $r_a \not=\sqrt{(x-x_a)^2 + (y-y_a)^2 + z^2}$. Here $\phi$ is a correction term which should be determined by solving the Hamiltonian constraint.

This paper focuses on the equal-mass case, in which $m_1=m_2=m_0$, $x_2=-x_1=x_0~(\geq 0)$, and $y_2=-y_1=b/2~(>0)$. The two BHs are assumed to have the same absolute velocity but to move in opposite directions; i.e., $v_1=-v_2=v~(>0)$. Here, $b$ denotes the impact parameter. The total mass energy $M_0$ and angular momentum $J$ of the system are
\begin{eqnarray}
M_0 = 2m_0 \Gamma~~{\rm and}~~J= m_0 \Gamma v b,
\end{eqnarray}
and the nondimensional spin parameter of the system is
\begin{eqnarray}
{J \over M_0^2} ={b v \over 4m_0 \Gamma}.
\label{spin}
\end{eqnarray}
It is natural to expect that a new BH is formed after the collision whenever $J/M_0^2 <1$, i.e., $b < 4m_0\Gamma/v$.

Taking into account that the line element of a boosted BH is written as Eq.~(\ref{lorentz0}), we write $B^2$ as
\begin{eqnarray}
B^2=\Gamma^2 \biggl[1-v^2 \Big(1 - {m_1 \over 2r_1} - {m_2 \over 2r_2}\Big)^2 \psi_{\rm main}^{-6}\biggr].
\end{eqnarray}
For the extrinsic curvature, we basically superimpose two parts as
\begin{eqnarray}
K_{ij} = K_{1ij}+K_{2ij}+\delta K_{ij},
\end{eqnarray}
where $K_{aij}~(a=1, 2)$ are defined from Eqs.~(\ref{kxx})--(\ref{kxz}) by replacing $r_0$ with $r_a~(a=1, 2)$, and $\delta K_{ij}$ is a correction term which should be determined by solving the momentum constraint.
\begin{figure}[t]
\vspace{-12mm}
\epsfxsize=3.in
\leavevmode
\epsffile{fig1a.ps} \\
\vspace{-8mm}
\epsfxsize=3.in
\leavevmode
\epsffile{fig1b.ps}
\vspace{-12mm}
\caption{(a) Violation of the Hamiltonian constraint along the $x$ axis for $x_0=160m_0$, $b=0$, and $v=0.9$. The violation is defined by $|H|/(|H_1|+|H_2|+|H_3|)$, where $H_1=\tilde D_i \tilde D^i \psi$, $H_2=-\tilde R \psi/8$, $H_3=[K_{ij}K^{ij}-(K_k^{~k})^2]\psi^5/8$, and $H=H_1+H_2+H_3$ should be zero if the Hamiltonian constraint is satisfied. Here $\tilde D_i$ and $\tilde R$ are the covariant derivative and the Ricci scalar with respect to the conformal three-metric $\tilde \gamma_{ij}=\psi^{-4}\gamma_{ij}$. (b) The same as (a) but for numerical results near one of the BHs just before collision for $x_0=160m_0$, $bv/(m_0\Gamma)=5.088$, and $v=0.8$ in simulations with different grid resolutions, $h/m_0=0.0625$ (dotted curve; at $t=256m_0$), 0.050 (dashed curve; at $t=256m_0$), and 0.042 (solid curve; at $t=258m_0$). The BH is located at $3 \alt x/m_0 \alt 4$. Note that convergence is not seen because the degree of the constraint violation appears to be determined primarily by the initial condition.
\label{FIG0}}
\end{figure}

In this paper, we adopt the approximate initial condition $\phi=0$ and $\delta K_{ij}=0$. The adopted initial data do not satisfy the constraint equations of general relativity for finite values of $x_a$ and $y_a$ or for a nonzero value of $v$. However, in the case that $R_a \equiv (x_a^2 + y_a^2)^{1/2} /m_0 \gg 1$, the violation of the constraints is tiny because its magnitude is proportional to $m_0/R_a$. In the present work, we choose $R_a/m_0 \agt 100$ (typically 160) for the initial condition. In this case, the violation of the constraints is $\sim m_0/R_a=O(0.01)$ in most of the region (see Fig.~\ref{FIG0}), and thus the initial condition approximately satisfies the constraints. The exception occurs around the punctures, where the violation is large, but this worst region is hidden inside the apparent horizon and hence does not play a harmful role. The constraint violation does not disappear during the evolution, but its magnitude remains at roughly the same small level, at least before the collision of the two BHs (see Fig.~\ref{FIG0}(b)). In this method, the BHs are approximately in a stationary state in their own comoving frame, and hence a large amount of spurious gravitational waves is not included, in contrast to the initial data prepared by the simple moving-puncture approach. With this initial data, the apparent horizons are located approximately at $r_a=m_0/2$ at $t=0$.
The area of each horizon is $\approx 16\pi m_0^2$ within an error of $O(10^{-3})$ for $x_a \agt 100m_0$ (thus the bare mass of each BH would be $\approx m_0$, and hence the fraction of kinetic energy in the total mass of the system is $1-\Gamma^{-1}$). Furthermore, the area of the apparent horizon remains approximately $16\pi m_0^2$ during the evolution before collision (cf. Fig.~\ref{FIG1.5}). Therefore, it is reasonable to expect that the numerical solution obtained by the simulation provides an approximate solution within an error of $\sim 1\%$. In future work, we will perform simulations using improved initial data which satisfy the constraints, obtained by computing the correction terms $\phi$ and $\delta K_{ij}$, to strictly validate the present strategy. \begin{figure}[t] \epsfxsize=3.in \leavevmode \epsffile{fig2.ps} \vspace{-5mm} \caption{Area of the apparent horizon as a function of time before collision for $v=0.9$ and $bv/(m_0\Gamma)=4.708$ with different grid resolutions, $h/m_0=0.0625$ (dotted curve), 0.050 (dashed curve), and 0.042 (solid curve). \label{FIG1.5}} \end{figure} \noindent {\bf\em III Numerical results}: For the numerical simulations, we use the {\small SACRA} code recently developed by our group \cite{SACRA}. In {\small SACRA}, the Einstein equations are solved in a modified version of the BSSN (Baumgarte-Shapiro-Shibata-Nakamura) formalism \cite{BSSN} with a fourth-order finite differencing scheme in space and time and with an adaptive mesh refinement algorithm (at refinement boundaries, a second-order interpolation scheme is partly adopted, and hence the convergence may reduce to second order). The moving-puncture approach is adopted to follow the moving BHs \cite{BHBH}. Gravitational waves are computed by extracting the outgoing part of the Newman-Penrose quantity (the so-called $\Psi_4$). Properties of the BHs such as mass and spin are determined by analyzing the area and circumferential radii of the apparent horizons. The details of our schemes, formulation, gauge conditions, and methods for the analysis are described in \cite{SACRA}. This reference also shows that {\small SACRA} can successfully simulate the merger of two equal-mass BHs. Because an adaptive mesh refinement algorithm is implemented, the moving BHs can be computed accurately by preparing high-resolution domains appropriately around the BHs; in the present work, we prepare 10 refinement levels. Indeed, we performed test simulations in which a single BH is boosted with $v=0.8$ and 0.9, and found that our code can follow such a high-velocity BH for more than $500 m_0$; e.g., we checked that the area of the apparent horizon converges at approximately fourth order with improving grid resolution, and that the error is within $\sim 0.1\%$ and 1\% for $v=0.8$ and 0.9, respectively, for a grid resolution of $h=m_0/20$ (e.g., Fig.~\ref{FIG1.5}). \begin{figure}[t] \vspace{-8mm} \epsfxsize=3.5in \leavevmode \epsffile{fig3.ps} \vspace{-15mm} \caption{Trajectories of the relative position of the moving punctures for $v=0.9$ and $b/m_0=6.0$, 6.2, and 6.4 (solid, dashed, and dotted curves). ($bv/(m_0\Gamma)=4.708$, 4.865, and 5.021.) \label{FIG1}} \end{figure} Numerical simulations are performed for $v=0.6$, 0.7, 0.8, and 0.9 and $x_0/m_0=160$, changing the impact parameter $b$. As expected from Eq.~(\ref{spin}), the two BHs should merge after the collision for $b < 4 m_0\Gamma/v$, and hence the critical value of the impact parameter for BH formation (hereafter $b_{\rm crit}$) should be close to $4 m_0\Gamma/v$.
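The naive threshold and the spin parameter of Eq.~(\ref{spin}) can be tabulated with a few lines of code. A minimal sketch (it simply evaluates the formulas quoted above):
\begin{verbatim}
import numpy as np

def spin_parameter(b, v, m0=1.0):
    """Initial nondimensional spin: J/M0^2 = b v / (4 m0 Gamma)."""
    gamma = 1.0 / np.sqrt(1.0 - v**2)
    return b * v / (4.0 * m0 * gamma)

for v in (0.6, 0.7, 0.8, 0.9):
    gamma = 1.0 / np.sqrt(1.0 - v**2)
    b_naive = 4.0 * gamma / v   # J/M0^2 = 1 at t=0, in units of m0
    print(f"v={v}: b(J/M0^2=1) = {b_naive:5.2f} m0, "
          f"J/M0^2 at b=5 m0 Gamma/v: {spin_parameter(5.0*gamma/v, v):.2f}")
\end{verbatim}
For every $v$, an impact parameter of $b=5m_0\Gamma/v$ corresponds to an initial $J/M_0^2=1.25$, the value discussed in Sec.~IV.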
For this reason, the value of $b$ is chosen in the range $4 \alt bv/ (m_0\Gamma) \alt 5.5$, with the basic step size of $b$ being $0.1m_0$ (near $b=b_{\rm crit}$, a step size of $0.05m_0$ is partly used). Numerical results depend only weakly on the initial separation for given values of $v$ and $b$, and for a given grid resolution. Indeed, we performed detailed test simulations for $v=0.8$ and 0.9, and $x_0/m_0=80$, 128, and 160. If $x_0/m_0$ is changed from 160 to 128, the value of $b_{\rm crit}$, which is one of the most important outputs of this work, decreases only by $\alt 0.05m_0$, and from $x_0/m_0=160$ to 80, by $\alt 0.1m_0$. Recall that a larger initial separation results in a smaller initial constraint violation. Thus, the weak dependence of the numerical results on $x_0$ indicates that the initial constraint violation only weakly affects the numerical results. The numerical simulations are performed for grid resolutions of $h/m_0=0.075$, 0.0625, 0.050, and 0.042. The outer boundaries along each axis are located at $L \approx 770 m_0$ for all the grid resolutions. The value of $L/c$ is longer than the duration of the simulation, and hence spurious effects from the outer boundaries are excluded. We find that the value of $b_{\rm crit}$ depends only weakly on the grid resolution (it increases only by $\sim 0.1m_0$ ($\sim 1.5\%$) if we change $h/m_0$ from 0.0625 to 0.042). The area of the BH formed after the merger, and the total energy and angular momentum dissipated by gravitational waves ($\Delta E$ and $\Delta J$), depend more strongly on the grid resolution. However, our results indicate a convergence slightly better than second order with improving grid resolution as long as $h/m_0 \leq 0.0625$ and $0.6 \leq v \leq 0.9$, although the convergence becomes slow for $v \agt 0.9$. This is natural because the coordinate radius of the apparent horizon in the $x$-axis direction for each BH is proportional to $\Gamma^{-1}$. Obviously, a better grid resolution is necessary for ultra-high-velocity collisions ($v \rightarrow 1$), but this is beyond the scope of this paper. \begin{figure}[t] \epsfxsize=3.in \leavevmode \epsffile{fig4.ps} \vspace{-2mm} \caption{Summary of the final outcomes of the collision of two BHs in the parameter space of $(v, b)$. The circles denote the cases in which a BH is formed after the collision, whereas the crosses denote those in which no BH is formed. The solid curve denotes $bv/(m_0\Gamma)=4.0$, and the dashed curves denote 4.9, 5.0, and 5.1. \label{FIG2}} \end{figure} Figure~\ref{FIG1} plots the trajectories of the moving punctures for $v=0.9$, $x_0=160m_0$, and $bv/(m_0 \Gamma)=4.708$, 4.865, and 5.021. Here, we plot the relative position $[(x_2-x_1)/2m_0, (y_2-y_1)/2m_0]$ on the orbital plane. For the first two impact parameters, a BH is formed after the collision, whereas for the last one, each BH escapes from the center after scattering. For small values of $b$, the two BHs form a bound orbit when the orbital separation becomes small enough. For a sufficiently small value of $b$, such as $bv/(m_0 \Gamma)\alt 4.7$, the two BHs merge within one orbit. For a value of $b$ close to $b_{\rm crit}$, the two BHs rotate around each other for more than one orbit before they merge, as shown in Fig.~\ref{FIG1}. For $b> b_{\rm crit}$, the two BHs rotate around each other at small separation. However, they do not constitute a bound orbit because of the large centrifugal force, and eventually each BH escapes from the center.
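Returning to the convergence checks above: the order of convergence can be estimated from a quantity measured at three resolutions by assuming a single error term, $f(h)=f_*+Ch^p$. A minimal sketch (the $f$ values below are made-up stand-ins, not our numerical results):
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

def convergence_order(h1, h2, h3, f1, f2, f3):
    """Solve (f1-f2)/(f2-f3) = (h1^p - h2^p)/(h2^p - h3^p) for p,
    assuming f(h) = f* + C h^p."""
    ratio = (f1 - f2) / (f2 - f3)
    g = lambda p: (h1**p - h2**p) / (h2**p - h3**p) - ratio
    return brentq(g, 0.5, 8.0)

h = (0.0625, 0.050, 0.042)      # resolutions used in the text
f = (0.2400, 0.2480, 0.2516)    # illustrative measured values
print(f"estimated convergence order p = {convergence_order(*h, *f):.2f}")
\end{verbatim}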
To summarize the outcomes of the collisions, we generate Fig.~\ref{FIG2}, which shows the parameter space of $(v, b)$. We plot circles for the cases in which the two BHs merge after the collision, whereas crosses are plotted when the two BHs do not merge. The solid curve denotes $b = 4m_0\Gamma/v$, for which the nondimensional spin parameter of the system is unity at $t=0$. The three dashed curves denote $bv/(m_0\Gamma)=4.9$, 5.0, and 5.1. Figure~\ref{FIG2} clarifies that a BH is formed after the collision for $b\alt b_{\rm crit} \approx (5.0 \pm 0.1) m_0 \Gamma/v$ for the chosen velocities, $0.6 \leq v \leq 0.9$. Extrapolating this result to $v \rightarrow 1$ under the assumption that the discovered relation holds even in this limit, the maximum impact parameter is determined to be $(2.50\pm 0.05) M_0$. \begin{figure}[t] \epsfxsize=3.in \leavevmode \epsffile{fig5.ps} \vspace{-5mm} \caption{Outgoing part of the Newman-Penrose quantity for $v=0.8$ and $bv/(m_0\Gamma)=5.088$. The solid and dotted curves denote the $l=m=2$ and $l=m=4$ modes for the best-resolved run, and the dashed curve denotes the $l=m=2$ mode for the second-best run. $t_{\rm ret}=0$ approximately corresponds to the time of the onset of the merger. $D$ denotes the distance between the source and the observer. \label{FIG3}} \end{figure} Figure~\ref{FIG3} plots the outgoing part of the Newman-Penrose quantity for $v=0.8$ and $bv/(m_0\Gamma)=5.088$. The $l=m=2$ and $l=m=4$ modes are plotted for the best-resolved run with $h=0.042m_0$. For $l=m=2$, we also show the result for $h=0.05m_0$, which agrees with the best-resolved result within a small error. Note that the waveforms are qualitatively similar irrespective of the value of $v$ as long as $b \approx b_{\rm crit}$, although the maximum amplitude steeply increases as $b$ approaches $b_{\rm crit}$. Figure~\ref{FIG3} shows that gravitational waves are efficiently emitted after the onset of the collision: When the two BHs approach each other, the amplitude of gravitational waves gradually increases. When the separation of the two BHs becomes sufficiently small, they constitute a bound orbit and quasiperiodic gravitational waves are emitted. After a substantial fraction of the gravitational waves is emitted, the two BHs merge into a new BH, and then ring-down gravitational waves associated with the fundamental quasinormal modes are emitted for $t_{\rm ret} \agt 40M_0$. In this case, the formed BH is rapidly rotating with a spin parameter of $\approx 0.73 \pm 0.02$, and hence the damping time scale is longer than that for nonrotating BHs \cite{L85}. Because the orbital velocity is very large, higher-multipole components of the gravitational waves are also significantly enhanced (cf. the waveform for $l=m=4$). The total energy and angular momentum dissipated by gravitational waves are $\approx 25 \pm 5\%$ and $\approx 65 \pm 5\%$ of the initial energy and angular momentum, respectively, for $b \alt b_{\rm crit}$ and for $v=0.9$. The total emitted gravitational radiation for $b \sim b_{\rm crit}$ slightly decreases with decreasing $v$, but still $\Delta E/M_0 \agt 20\%$ and $\Delta J/J \agt 60\%$ for $0.6 \leq v \leq 0.8$. The $l=|m|=4$ modes contribute to $\Delta E$ and $\Delta J$ by $\approx 10$--15\% and by $\approx 15$--20\%, respectively, for $b \sim b_{\rm crit}$. It should be noted that in the limit $b \rightarrow b_{\rm crit}$, the total amount of gravitational radiation may be slightly larger than that presented here, because the lifetime of the formed binary orbit could be longer.
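The quoted spin of the formed BH follows from simple bookkeeping of the radiated energy and angular momentum. A minimal sketch with the numbers given above for $v=0.9$ and $b \alt b_{\rm crit}$:
\begin{verbatim}
def final_spin(j_over_m0sq, dE_frac, dJ_frac):
    """Remnant spin estimate: a_f ~ (J - dJ) / (M0 - dE)^2."""
    return j_over_m0sq * (1.0 - dJ_frac) / (1.0 - dE_frac)**2

# J/M0^2 ~ 1.25 near b_crit; ~25% of E and ~65% of J radiated.
print(final_spin(1.25, 0.25, 0.65))  # ~0.78, consistent with 0.8 +/- 0.1
\end{verbatim}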
We note that the results for the total amount of gravitational radiation are consistent, within an acceptable error, with the mass and spin of the finally formed BH for the best-resolved runs: The mass and angular momentum of the BHs estimated from the apparent horizon are always smaller than those expected from gravitational radiation (i.e., $M_0-\Delta E$ and $J-\Delta J$) by $\alt 0.05M_0$ and $\alt 0.1J$, respectively, for the best-resolved run. The error is larger for larger values of $v$. The reason for this error is that energy and angular momentum are dissipated spuriously by numerical effects associated with the finite grid resolution. However, our results show convergence slightly better than second order with improving grid resolution. \noindent {\bf\em IV Discussion and summary}: We find that the largest value of the impact parameter for BH formation after the collision is $b_{\rm crit}\approx (2.50 \pm 0.05) M_0/v$. For such a value of the impact parameter, the initial value of the spin parameter of the system is \begin{eqnarray} {J \over M_0^2} =1.25 \pm 0.03. \end{eqnarray} For BH formation, the spin parameter of the formed BH should be smaller than unity if cosmic censorship holds \cite{Wald}. This implies that a large fraction of the angular momentum is dissipated by gravitational radiation during the collision. We estimate that the total amounts of angular momentum and energy dissipated by gravitational radiation are $\Delta J=(0.65 \pm 0.05)J$ and $\Delta E=(0.25 \pm 0.05)M_0$, respectively. The expected spin parameter of the formed BH is \begin{eqnarray} \approx {J-\Delta J \over (M_0-\Delta E)^2} \approx (0.6\pm 0.1) {J \over M_0^2}. \end{eqnarray} Thus, even if the spin parameter of the system is initially 1.25, the resulting value at BH formation is smaller than unity. As this discussion clarifies, gravitational radiation increases the critical impact parameter by $\sim 25\%$ and, as a result, the cross section by $\sim 50\%$. It is also worth noting that the formed BH does not have an extremely large spin $\sim 1$, but approximately $0.8 \pm 0.1$, even for $b \alt b_{\rm crit}$. In this work, we adopt initial data which satisfy the constraint equations only approximately. Although the violation is tiny (cf. Fig.~\ref{FIG0}), this produces a small error in the estimation of the critical impact parameter and of $\Delta E$ and $\Delta J$ of the gravitational waves. To determine these quantities strictly, it is necessary to perform simulations using improved initial conditions. This work is a first step toward a detailed understanding of the high-velocity collision of two BHs, including in higher-dimensional spacetimes. We plan to develop a numerical code for higher-dimensional spacetimes. \noindent {\bf\em Acknowledgments}: We thank T. Shiromizu and K. Maeda for helpful discussions and comments. This work was in part supported by Monbukagakusho Grant No. 19540263.
\subsection[Introduction and overview]{Introduction and Overview} High-energy gamma rays can be observed from the ground by detecting the secondary particles of the atmospheric cascades initiated by the interaction of a gamma ray with the atmosphere. Imaging atmospheric Cherenkov telescopes (IACTs) detect the broadband Cherenkov light ($\lambda > 300$ nm) produced by the electrons and positrons of the cascade, which reaches the ground level without significant attenuation. The technique utilizes large mirrors to focus the Cherenkov photons onto a finely pixelated camera operating with an exposure of a few nanoseconds, and it provides a low energy threshold and excellent calorimetric capabilities. IACTs can only operate during clear moonless and, more recently, partially moonlit nights. Alternatively, extensive air shower (EAS) arrays, which directly detect the particles of the atmospheric cascade (electrons, photons, muons, etc.), can be operated continuously, but they require the considerably higher gamma-ray energies necessary for the air showers to reach the ground level. The field of TeV gamma-ray astronomy was born in the years 1986 to 1988 with the first indisputable detection of a cosmic source of TeV gamma rays, the Crab Nebula, with the Whipple $10$~m IACT \cite{1989ApJ...342..379W}. Modern IACT observatories such as VERITAS \cite{Week:02,Maie:07}, MAGIC \cite{2004NewAR..48..339L,Goeb:07}, and H.E.S.S. \cite{2004NewAR..48..331H,Horn:07} can detect point sources with a flux sensitivity of $1\%$ of the Crab Nebula, corresponding to a limiting $\nu $F$_{\nu }$-flux of $\sim 5\times 10^{-13}$ ergs cm$^{-2}$ s$^{-1}$ at 1 TeV. The improvement of sensitivity by two orders of magnitude during the last two decades has been made possible by critical advances in IACT technology and significantly increased funding for ground-based gamma-ray astronomy. The high point-source flux sensitivity of IACT observatories is a result of their large gamma-ray collecting area ($\sim 10^{5}$ m$^{2}$), relatively high angular resolution ($\sim 5$ arcminutes), wide energy coverage (from $<100$ GeV to $>10$ TeV), and unique means of rejecting the cosmic-ray background ($> 99.999\%$ at 1 TeV). The limitations of the IACT technique are the small duty cycle ($\sim 10\%$) and narrow field of view ($\sim 4\deg $; $3.8\times 10^{-3}$ sr for present-day IACTs). Large EAS arrays provide a complementary technology for observations of very high-energy gamma rays. Whereas their instantaneous sensitivity is currently a factor of $\sim 150$ lower than that of IACT observatories, their large field of view ($\sim 90\deg $; $1.8$ sr) and nearly $100\%$ duty cycle make these observatories particularly suited to conduct all-sky surveys and detect emission from extended astrophysical sources (larger than $\sim 1\deg $, e.g. the plane of the Galaxy). Milagro \cite{Smit:05}, the first ground-based gamma-ray observatory to utilize EAS technology to discover extended sources \cite{Abdo:07}, has surveyed $2\pi $~sr of the sky at $20$~TeV for point sources to a sensitivity of $3\times 10^{-12}$ ergs cm$^{-2}$ s$^{-1}$. Due to its wide field-of-view coverage of the sky and uninterrupted operation, the EAS technique also has the potential for detecting very high energy (VHE) transient phenomena. The current limitations of the EAS technique are a high energy threshold ($\sim 10$ TeV), low angular resolution ($\sim 30$ arcminutes), and a limited capability to reject the cosmic-ray background and measure energy.
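The quoted fields of view in steradians follow from the solid angle of a cone, $\Omega=2\pi(1-\cos\theta)$, with $\theta$ half the full field-of-view angle. A minimal sketch verifying the two numbers above:
\begin{verbatim}
import numpy as np

def cone_solid_angle(full_fov_deg):
    """Solid angle (sr) of a cone with full opening angle full_fov_deg."""
    theta = np.radians(full_fov_deg / 2.0)
    return 2.0 * np.pi * (1.0 - np.cos(theta))

print(f"IACT, 4 deg field of view: {cone_solid_angle(4.0):.2e} sr")  # ~3.8e-3
print(f"EAS, 90 deg field of view: {cone_solid_angle(90.0):.2f} sr") # ~1.8
\end{verbatim}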
The primary technical goal for the construction of the next generation of observatories is to achieve an improvement of sensitivity by a factor of $\alpha $ at a cost increase of less than a factor of $\alpha ^{2}$, the increase that would be required if the observatory were constructed by simply cloning present-day instrumentation~\footnote{A background-dominated regime of observatory operation is assumed.}. The history of ground-based gamma-ray astronomy over the last two decades has twice shown an improvement in the sensitivity of the observatories by a factor of ten, while the cost increased each time by only a factor of ten \cite{2007ebhe.conf..282W}. The construction of a large array of IACTs covering an area of $\sim 1$ km$^2$ will enable ground-based $\gamma$-ray astronomy to achieve another order-of-magnitude improvement in sensitivity. This next step will be facilitated by several technology improvements. First, large arrays of IACTs should have the capability to operate over a broad energy range with significantly improved angular resolution and background rejection as compared to the present-day small arrays of telescopes, such as VERITAS or H.E.S.S.. Second, the capability of using subarrays to fine-tune the energy range to smaller intervals will allow for a considerable reduction of the aperture of individual telescopes and of the overall cost of the array, while maintaining a collecting area at lower energies equal to that of a smaller array of very-large-aperture IACTs. Finally, the cost per telescope can be significantly reduced due to advancements in technology, particularly the development of low-cost electronics, novel telescope optics designs, replication methods for the fabrication of mirrors, and high-efficiency photo-detectors, and due to the distribution of the significant initial non-recurring costs over a larger number of telescopes. In the case of EAS arrays, a breakthrough, characterized by an improvement of sensitivity faster than the inverse square root of the array footprint area, is possible mainly due to two factors. First, a next-generation EAS array must be constructed at a high elevation ($>4000$ m) to increase the number of detected particles in a shower by being closer to the altitude at which the shower has the maximum number of particles. Thus, a lower energy threshold is possible and the energy resolution is improved. Second, the size of the EAS array needs to be increased in order to more fully contain the lateral distribution of the EAS. A larger array improves the angular resolution for gamma-ray showers and also dramatically improves the cosmic-ray background rejection. The lateral distribution of muons in a cosmic-ray shower is very broad, and the identification of muons outside the shower core is key to rejecting the cosmic-ray background. The science motivations for the next generation of ground-based gamma-ray observatories are outlined in this document. There are clear cost, reliability, maintenance, engineering, and management challenges associated with the construction and operation of a future ground-based astronomical facility of the order of \$100M. The detailed technical implementation of a future observatory will benefit from current and future R\&D efforts, which will provide a better understanding of the uncertainties in evaluating the cost impact of improved and novel photon-detector technologies, and from simulation design studies, currently incomplete, of the large parameter optimization space of the observatory.
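The $\alpha$-versus-$\alpha^2$ statement above follows from Poisson statistics in the background-dominated regime: significance scales as $S/\sqrt{B}\propto\sqrt{At}$, so the minimum detectable flux scales as $1/\sqrt{At}$, and cloning instruments buys sensitivity only as the square root of the cost. A minimal sketch of this scaling (all rates are arbitrary illustrative units):
\begin{verbatim}
import numpy as np

def min_flux(area, time, sig_rate=1.0, bkg_rate=100.0, n_sigma=5.0):
    """Minimum detectable flux in the background-dominated regime:
    n_sigma = F * sig_rate * area * time / sqrt(bkg_rate * area * time)."""
    return n_sigma * np.sqrt(bkg_rate * area * time) / (sig_rate * area * time)

f_1 = min_flux(area=1.0, time=1.0)
f_100 = min_flux(area=100.0, time=1.0)  # 100 clones: ~100x the cost
print(f"sensitivity gain from 100x cost by cloning: {f_1/f_100:.0f}x")  # 10x
\end{verbatim}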
In the remainder of this section, we outline a broadly defined technical roadmap for the design and construction of future instrumentation which could be realized within the next decade. We start with the status of the field, identify the key future observatory design decisions and technical drivers, describe the current state-of-the-art technologies, and finally outline a plan for defining the full technology approach. \subsection[Status of ground-based gamma-ray observatories]{Status of Ground-Based Gamma-Ray Observatories} \begin{figure*}[t] \begin{center} \includegraphics[angle=0,width=6.0in]{Figure1.ps} \caption{\label{fig:exp} The images show four major ground-based gamma-ray observatories currently in operation: VERITAS, MAGIC, H.E.S.S., and Milagro. A future ground-based gamma-ray project can build on the success of these instruments.} \end{center} \end{figure*} At present, there are four major IACT and three EAS observatories worldwide conducting routine astronomical observations, four of which are shown in Fig.~\ref{fig:exp}. The main parameters of these instruments are the following: \paragraph{VERITAS} is a four-telescope array of IACTs located at the Fred Lawrence Whipple Observatory in Southern Arizona (1268 m a.s.l.). Each telescope is a 12 m diameter Davies-Cotton (DC) reflector (f/1.0) with a high-resolution 3.5$\deg$ field-of-view camera assembled from 499 individual photomultiplier tubes (PMTs), each subtending an angle of 0.15$\deg$. The telescope spacing varies from 35~m to 109~m. VERITAS was commissioned for scientific operation in April 2007. \paragraph{The H.E.S.S.\ array} consists of four 13 m DC IACTs (f/1.2) in the Khomas Highlands of Namibia (1800 m a.s.l.). The 5$\deg$ field-of-view cameras of the telescopes contain 960 PMTs, each subtending a 0.16$\deg$ angle. The current telescopes are arranged on the corners of a square with 120 m sides. H.E.S.S.\ has been operational since December 2003. The collaboration is currently upgrading the experiment (H.E.S.S.\ -II) by adding a large (28 m diameter) central telescope to the array, which will lower the trigger threshold for a subset of the events to 20 GeV and will also improve the sensitivity of the array above 100 GeV. \paragraph{MAGIC} is a single 17 m diameter parabolic reflector (f/1.0) located on the Canary Island of La Palma (2200 m a.s.l.). It has been in operation since the end of 2003. The 3.5$\deg$ inhomogeneous camera of the telescope is made of 576 PMTs of two angular sizes: 0.1$\deg$ (396 pixels) and 0.2$\deg$ (180 pixels). The MAGIC observatory is currently being upgraded to MAGIC-II, with a second 17 m reflector being constructed 85 m from the first telescope. The addition of this second telescope will improve background rejection and increase energy resolution. \paragraph{CANGAROO-III} consists of an array of four 10 m IACTs (f/0.8) located in Woomera, South Australia (160 m a.s.l.) \cite{Mori:07}. The telescope camera is equipped with an array of 552 PMTs, each subtending an angle of 0.2$\deg$. The telescopes are arranged on the corners of a diamond with sides of 100 m. \paragraph{Milagro} is an EAS water Cherenkov detector located near Los Alamos, New Mexico (2650 m a.s.l.). Milagro consists of a central pond detector with an area of 60 $\times$ 80 m$^2$ at the surface and sloping sides that lead to a 30 $\times$ 50 m$^2$ bottom at a depth of 8 m. It is filled with 5 million gallons of purified water and is covered by a light-tight high-density polypropylene liner. The pond contains two layers of upward-pointing 8'' PMTs.
The central pond is surrounded by an array of water tanks. The pond detector has been operational since 2000; the array of water tanks was completed in 2004. \paragraph{The AS-$\gamma$ and ARGO arrays} are located at the YangBaJing high-altitude laboratory in Tibet, China. AS-$\gamma$, an array of plastic scintillator detectors, has been operational since the mid-1990s. ARGO consists of a large continuous array of Resistive Plate Counters (RPCs) and became operational in 2007 \cite{Zao:05}. \bigskip The current generation of ground-based instruments was joined in mid-2008 by the space-borne \textbf{Fermi Gamma-ray Space Telescope} (formerly GLAST). Fermi comprises two instruments, the Large Area Telescope (LAT) \cite{McEn:07} and the Fermi Gamma-ray Burst Monitor (GBM) \cite{Lich:07}. The LAT covers the gamma-ray energy band of 20 MeV -- 300 GeV, with some spectral overlap with IACTs. The present generation of IACTs matches the $\nu F_{\nu}$-sensitivity of Fermi. Next-generation ground-based observatories with one order of magnitude higher sensitivity and significantly improved angular resolution would be ideally suited to conduct detailed studies of the Fermi sources. \begin{table*}[!ht] \begin{center} \caption{\label{regimes} Gamma-ray energy regimes, scientific highlights and technical challenges.} {\footnotesize \begin{tabular}{p{0.6in} p{0.6in} p{1.9in} p{2.6in}} \hline\hline Regime & Energy Range & Primary Science Drivers & Requirements/Limitations \\ \hline {\bf multi-GeV}: & $\leq$50~GeV & extragalactic sources (AGN, GRBs) at cosmological distances ($z>1$), Microquasars, Pulsars & very large aperture or dense arrays of IACTs, preferably high-altitude operation \& high-quantum-efficiency detectors required; angular resolution and energy resolution will be limited by shower fluctuations; cosmic-ray background rejection utilizing currently available technologies is inefficient. \\ {\bf sub-TeV}: & 50~GeV -- 200~GeV & extragalactic sources at intermediate redshifts ($z < 1$), search for dark matter, Galaxy Clusters, Pair Halos, Fermi sources & very-large-aperture telescopes or dense arrays of mid-size telescopes and high light detection efficiency required; limited (but improving with energy) cosmic-ray background rejection based on imaging analysis. For gamma-ray bursts, a high-altitude EAS array. \\ {\bf TeV}: & 200~GeV -- 10~TeV & nearby galaxies (dwarf, starburst), nearby AGN, detailed morphology of extended galactic sources (SNRs, GMCs, PWNe) & large arrays of IACTs: best energy flux sensitivity, best angular and energy resolutions, best cosmic-ray hadron background rejection; new backgrounds from cosmic-ray electrons may ultimately limit sensitivity in some regions of the energy interval. At the highest-energy end, an irreducible background may be due to single-pion sub-showers. EAS arrays for mapping Galactic diffuse emission, AGN flares, and sensitivity to extended sources. \\ {\bf sub-PeV}: & $\geq$10~TeV & Cosmic Ray PeVatrons (SNRs, PWNe, GC, ...), origin of galactic cosmic rays & requires very large (10 km$^2$ scale) detection areas; large arrays of IACTs equipped with very wide ($\ge 6^\circ$) FoV cameras and separated by distances of several hundred meters may provide adequate technology. Background rejection is excellent and sensitivity is $\gamma$-ray count limited. Single-pion sub-showers are the ultimate background limiting sensitivity for very deep observations. Regime of best performance of present EAS arrays; large EAS arrays ($\ge 10^{5}$~m$^{2}$).
\\ \hline \end{tabular} } \end{center} \end{table*} \subsection[Design considerations for a next-generation gamma-ray detector]{Design Considerations for a Next-Generation Gamma-Ray Detector} At the core of the design of a large-scale ground-based gamma-ray observatory is the requirement to improve the integral flux sensitivity by an order of magnitude over instruments employed today in the 50~GeV--20~TeV regime, where the techniques are proven to give excellent performance. At lower energies (below 50 GeV) and at much higher energies (50-200 TeV) there is great discovery potential, but new technical approaches must be explored and the scientific benefit is in some cases less certain. For particle-detector (EAS) arrays, it is possible to simultaneously improve the energy threshold and the effective area by increasing the elevation, and the technical roadmap is relatively well defined. In considering the design of future IACT arrays, the development path allows for complementary branches to achieve the greatest sensitivity over a broad energy range from 10~GeV up to 100~TeV. Table~\ref{regimes} summarizes specific issues of the detection technique and scientific objectives for four broad energy regimes (adapted from \cite{AharT:05,AharT:08}). \subsection[Future IACT arrays]{Future IACT Arrays} \begin{figure*}[t] \begin{center} \includegraphics[angle=0,width=3.in]{future_sens.eps} \includegraphics[angle=0,width=3.in]{gamma_ang_res.eps} \caption{ \textit{Left:} Differential sensitivities calculated for present and future gamma-ray experiments. For the future IACT array, an area of $\sim$1~km$^2$, no night-sky background, a perfect point spread function \cite{Bugaev:07}, and an order of magnitude improvement in cosmic-ray rejection compared with current instruments have been assumed. All sensitivities are 5 sigma detections in quarter-decade energy intervals (chosen to be larger than the expected full-width energy resolution). \textit{Right:} Angular resolution for Fermi (GLAST) \cite{GLAST}, VERITAS \cite{Krawcz:06}, and for ideal future space-borne and ground-based \cite{Hofmann2005} gamma-ray detectors.} \label{fig:sensang} \end{center} \end{figure*} The scientific goals to be addressed with a future IACT array require a flux sensitivity at least a factor of ten better than that of present-day observatories, and an operational energy range which extends preferably into the sub-100 GeV domain in order to open up the $\gamma$-ray horizon to observations of cosmologically distant sources. These requirements can be achieved by an array with a collecting area of $\sim 1$~km$^2$ (see Fig.~\ref{fig:sensang}). The intrinsic properties of a $\sim 1$~km$^2$ IACT array could bring a major breakthrough for VHE gamma-ray astronomy, since it combines several key advantages over existing 4-telescope arrays: \begin{itemize} \item A collection area that is 20 times larger than that of existing arrays. Comparison of the collection area of a $\sim 1$ km$^2$ array with the characteristic size of the Cherenkov light pool ($\sim 5 \times 10^4$ m$^2$) suggests that the array should be populated with 50-100 IACTs. \item Fully contained events, for which the shower core falls well within the geometrical dimensions of the array, thus giving better angular reconstruction and much improved background rejection. The performance of a typical IACT array in the energy regime below a few TeV is limited by the cosmic-ray background.
The sensitivity of a future observatory could be further enhanced through improvements of its angular resolution and background rejection capabilities. It is known that the angular resolution of present-day arrays of IACTs, which typically have four telescopes, is not limited by the physics of atmospheric cascades, but by the pixelation of their cameras and by the number of telescopes simultaneously observing a $\gamma$-ray event \cite{VF2005,Hofmann2005,FV2007}. \item A low energy threshold compared to existing small arrays, since contained events provide sampling of the inner light pool, where the Cherenkov light density is highest. Lower energy thresholds (below 100~GeV) generally require larger aperture ($>15$ m) telescopes; however, a $\sim 1$ km$^2$ IACT array has an intrinsic advantage in lowering the energy threshold due to the detection of fully contained events. \item A wider field of view and the ability to operate the array as a survey instrument. \end{itemize} In order to maximize the scientific capabilities of a $\sim 1$ km$^2$ array with respect to angular resolution, background suppression, energy threshold, and field of view, it is necessary to study a range of options, including the design of the individual telescopes and the array footprint. Furthermore, it is necessary to determine the most cost-effective and appropriate technology available. The reliability of the individual telescopes is also a key consideration for minimizing operating costs. The history of the development of instrumentation for ground-based $\gamma$-ray astronomy has shown that a significant investment into the design and construction of new instruments ($\sim 10$ times the cost of previously existing ACTs) has yielded significant increases in sensitivity. For example, the construction of high-resolution cameras in the 1980s, assembled from hundreds of individual PMTs and fast electronics, made the ``imaging'' technique possible. This advancement improved the sensitivity of the observatories by a factor of 10 through the striking increase of angular resolution and cosmic-ray background rejection, and ultimately led to the detection of the first TeV source \cite{1989ApJ...342..379W}. Another factor-of-ten investment into the development of small arrays of mid-sized IACTs ($12$~m) demonstrated the benefits of ``stereoscopic'' imaging and made possible the H.E.S.S. and VERITAS observatories. The sensitivity of these instruments improved by a factor of 10 due to the increase of angular resolution and CR background discrimination, despite only a relatively modest increase in the $\gamma$-ray collecting area compared to the previous-generation Whipple $10$~m telescope. The next logical step in the evolution of the IACT technique is the $\sim 1$ km$^2$ array concept. Technological developments such as novel multi-pixel high-quantum-efficiency photo-detectors (MAPMTs, SiPMs, APDs, CMOS sensors, etc.) or PMTs with significantly improved QE, new telescope optical design(s), and modular low-cost electronics based on ASICs (Application-Specific Integrated Circuits) and intelligent trigger systems based on FPGAs (Field Programmable Gate Arrays) hold the promise to (i) significantly reduce the price per telescope, and (ii) considerably improve the reliability and versatility of IACTs. The improvement in sensitivity with a $\sim 1$ km$^2$ array is in part achieved by increasing the number of telescopes.
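The 50--100 telescope estimate above, and the three-region light-pool structure described next, can be sketched in a few lines. A minimal sketch (the density normalization, power-law index, and exponential scale are illustrative assumptions, not measured values):
\begin{verbatim}
import numpy as np

def n_cells(array_km2=1.0, pool_radius_m=125.0):
    """Number of light-pool-sized cells tiling the array area."""
    return array_km2 * 1.0e6 / (np.pi * pool_radius_m**2)  # cell ~ 5e4 m^2

def light_density(r, rho0=1.0, r1=120.0, r2=300.0, p=2.0, scale=50.0):
    """Toy Cherenkov photon density: flat core (r < r1), power-law
    fall-off (r1 < r < r2), exponential tail (r > r2)."""
    r = np.asarray(r, dtype=float)
    mid = rho0 * (r1 / np.maximum(r, r1))**p
    tail = rho0 * (r1 / r2)**p * np.exp(-(r - r2) / scale)
    return np.where(r <= r2, mid, tail)

cells = n_cells()
print(f"~{cells:.0f} cells per km^2; with 3-5 telescopes viewing each "
      f"event: ~{3*cells:.0f}-{5*cells:.0f} IACTs")
print(light_density([50.0, 160.0, 400.0]))
\end{verbatim}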
Simple scaling suggests that a factor of 10 improvement in sensitivity requires a factor of $10^2$ increase in the number of telescopes and observatory cost. However, this is not the case for the $\sim 1$ km$^2$ IACT array concept, since it inherently provides a better event reconstruction, so that the sensitivity improves far beyond the simple scaling arguments. For the current generation of small arrays, the shower core mostly falls outside the physical array dimensions. A $\sim 1$ km$^2$ array could, for the first time, fully constrain the air shower based on many viewpoints from the ground. This leads to several substantial improvements and can be understood by considering the Cherenkov light density distribution at the ground. The Cherenkov light pool from an atmospheric cascade consists of three distinct regions: an inner region ($r<120$~m) in which the photon density is roughly constant, an intermediate region ($120$~m $<r<$ $300$ m) where the density of the Cherenkov photons declines as a power law, and an outer region where the density declines exponentially. A small array (VERITAS, H.E.S.S.) samples the majority of cascades in the intermediate and outer regions of the light pool. A $\sim 1$ km$^2$ array samples, for its mostly contained events, the inner, intermediate, and outer regions of the light pool, and it allows a much larger number of telescopes to participate in the event reconstruction, with several important consequences: \begin{itemize} \item First of all, at the trigger level this results in a lower energy threshold, since there are always telescopes that fall into the inner region, where the light density is highest. For example, the $12$~m reflectors of the VERITAS array sample the majority of $100$ GeV $\gamma$ rays at distances of $\sim 160$~m and collect $\sim 105$ PEs per event. The same median number of photons would be collected by $9.3$ m reflectors if the atmospheric cascades were sampled within a distance of $\sim 120$ m. A $\sim 1$ km$^2$ array of IACTs with fully contained events could therefore operate effectively at energies below 100~GeV despite having a telescope aperture smaller than that of VERITAS~\cite{VF2005,JKBF2005}. Reducing the telescope size translates into a reduction of the cost per telescope and of the total cost of a future observatory. \item The second factor which significantly affects the sensitivity and cost of future IACT arrays is the angular resolution for $\gamma$-rays. Due to the small footprint of the VERITAS and H.E.S.S. observatories, the majority of events above $\sim 100$~GeV are sampled outside the boundaries of the array, limiting the accuracy to which the core of the atmospheric cascade can be triangulated. Even higher-resolution pixels will not help to improve the angular resolution below $\sim 9$ arcminutes~\cite{Bugaev:07} for small arrays. However, contained events in a $\sim 1$ km$^2$ array of IACTs provide a nearly ideal reconstruction based on simultaneous observations of the shower from all directions while sampling multiple core distances. Simulations of idealized (infinite) large arrays of IACTs equipped with cameras composed of pixels of different angular sizes suggest that the angular resolution of the reconstructed arrival direction of $\gamma$-rays improves with finer pixelation up to the point at which the typical angular scale, determined by the transverse size of the shower core, is reached~\cite{FV2007}.
Figure~\ref{fig:sensang} shows the angular resolution (a few arcminutes) that can be achieved with an ideal ``infinite'' array of IACTs when instrumental effects are neglected \cite{Hofmann2005}. \item The third factor improving the sensitivity of $\sim 1$ km$^2$ arrays of IACTs is enhanced background discrimination. For atmospheric cascades contained within the array footprint, it is possible to determine both the depth of the shower maximum and the cascade energy relatively accurately, thereby enabling better separation of hadronic and electromagnetic cascades. Multiple viewpoints from the ground at different core distances also allow the detection of fluctuations in light density and further improve background rejection. Additional improvements extending to energies below 200~GeV may be possible by picking up muons from hadronic cascades, a technique that is used in air shower arrays. A ``muon veto'' signal present in the images obtained by a large array could improve the technique even further. Another method to reject the cosmic-ray background at the lowest energies and low light levels \cite{FK:1995} is based on the parallactic displacement of images. The images viewed from multiple viewpoints at the ground show significant fluctuations in lateral displacement for hadronic showers, and simulations indicate appreciable $\gamma$/hadron separation capabilities in a regime where faint Cherenkov light images can no longer be resolved for the calculation of standard image parameters. This technique could become effective close to the trigger threshold of large arrays. \end{itemize} In summary, the ``large IACT array'' concept provides strongly improved sensitivity at mid-energies, $\sim 1$ TeV, not only due to the increased collecting area, but also due to enhanced angular resolution and CR background rejection. It also presents a cost-effective solution for increasing the collecting area of the observatory at lower energies. For energies above $10$~TeV, the collecting area of the $\sim 1$ km$^2$ IACT array will be approximately two times larger than its geometrical area due to events impacting beyond the perimeter of the array. It must be noted that in this energy regime the observatory is no longer background limited, and therefore its sensitivity scales inversely with the collecting area and exposure. Clearly, versatility is another virtue of a ``large IACT array''. If the astrophysics goal is only to measure the high-energy part of the spectrum ($>10$~TeV) of a given source, e.g. the Crab Nebula or the Galactic Center, only $1/10^{\mathrm{th}}$ of the observatory telescopes, spaced on a grid of $\sim 300$~m, would be required to participate in the study to attain the required sensitivity (see the sketch below), while at the same time other observation programs could be conducted. The flexibility of a large array also allows operation in a sky-survey mode to detect transient galactic or extragalactic sources~\cite{VF2005}. In this mode of operation, a large field of view would be synthesized by partially overlapping the fields of view of individual telescopes. Survey observations, in which collecting area has been traded for wide solid-angle coverage, could then be followed up by more sensitive ``narrow-field'' observations for detailed source studies. Although the design considerations outlined above are relevant for any ``large IACT array'', realistic implementations of this concept could vary.
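The subarray fraction mentioned above can be checked with a toy layout: thinning a uniform $\sim 100$~m grid (roughly the density implied by 50--100 telescopes per km$^2$) to a $\sim 300$~m grid keeps about one telescope in eight to ten. A minimal sketch (the 100~m baseline spacing is an assumption for illustration):
\begin{verbatim}
import numpy as np

def grid(spacing_m, side_m=1000.0):
    """Telescope positions on a uniform square grid over side_m x side_m."""
    c = np.arange(0.0, side_m + 1.0, spacing_m)
    return [(x, y) for x in c for y in c]

full = grid(100.0)    # dense array, 121 telescopes
sparse = grid(300.0)  # high-energy subarray on a ~300 m grid
print(len(full), len(sparse), f"fraction: {len(sparse)/len(full):.2f}")
\end{verbatim}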
An alternative to an array consisting of identical telescopes is being developed based on an extrapolation from the small arrays H.E.S.S. and VERITAS, and is known as the hybrid array concept. In this approach, the cost of the future observatory is limited through a design with multiple types of IACTs, each addressing a different energy range. For example, a central core composed of a few very-large-aperture telescopes ($\sim 20$~m) equipped with fine-pixel cameras (or of mid-size reflectors with a very high spatial density~\cite{JKBF2005}) provides the low-energy response of the array. A significantly larger, $\sim 1$~km$^2$, ring area around the array core is populated with VERITAS-class telescopes ($>12$~m) to ensure improved collecting area and performance at mid-energies, $\sim 1$ TeV. Finally, a third ring surrounds the 1~km$^2$ array with a very spread-out array of inexpensive, small ($2$~m aperture), wide-field IACTs outfitted with coarsely pixelated cameras ($0.25^{\circ}$), which would cover areas up to $10$~km$^2$. On the order of $100$ telescopes with $300$~m spacing might be required to attain the desired response at the highest energies ($> 10$ TeV)~\cite{stamatescu07}. The hybrid array concept with a central region of several large-aperture telescopes is motivated by significant changes in the distribution of Cherenkov photons at energies considerably smaller than $\sim 100$ GeV. At very low energies, $\sim 10$ GeV, the Cherenkov light is distributed over a relatively large area, but with lower overall density. Therefore, large-aperture telescopes arranged in an array with significant separation between them may provide a cost-effective solution to improve the low-energy response. Independently of the exact implementation of the IACT array layout, the sensitivity of future ground-based observatories could be improved through an increase of both the camera pixelation and the number of telescopes. The low-energy sensitivity will also be affected by the telescope aperture. Therefore, a trade-off optimization of these factors should be performed under a constraint of constant observatory cost. For example, if the camera dominates the overall cost of an IACT significantly, then a reduction of the camera pixelation and an increase of the number of telescopes is suggested for optimizing the cost. If the telescope optical and positioning systems dominate the cost, then reducing the number of telescopes and improving their angular resolution is preferable for achieving the highest sensitivity. The cost per pixel and the cost of an individual telescope of a given aperture are the most critical parameters required for future observatory design decisions. Through the design and construction of H.E.S.S., VERITAS, and MAGIC, considerable experience has been gained in understanding the cost and technical challenges of constructing prime-focus, Davies-Cotton (DC) and parabolic reflectors and of assembling cameras from hundreds of individual PMTs. The relatively inexpensive DC telescope design has been used successfully in ground-based $\gamma$-ray astronomy for almost fifty years and provides an excellent baseline option for a future observatory. For example, the H.E.S.S. 13~m aperture telescopes have an optical point-spread function of better than 0.05$\deg$ FWHM over a 4$\deg$ field of view and a pixel size of 0.15$\deg$, demonstrating that this telescope design could in principle accommodate a camera resolution of a few arcminutes.
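The physical pixel size implied by such cameras follows from the plate scale of a prime-focus reflector, $s \approx F\theta$, with $F$ the focal length. A minimal sketch using the telescope parameters quoted earlier in this section (the formula is the standard small-angle approximation):
\begin{verbatim}
import numpy as np

def pixel_size_mm(aperture_m, f_number, pixel_deg):
    """Physical pixel size at the focal plane: s = F * theta,
    with focal length F = f_number * aperture."""
    focal_length_mm = f_number * aperture_m * 1000.0
    return focal_length_mm * np.radians(pixel_deg)

print(f"VERITAS-like (12 m, f/1.0, 0.15 deg): "
      f"{pixel_size_mm(12.0, 1.0, 0.15):.0f} mm")   # ~31 mm
print(f"H.E.S.S.-like (13 m, f/1.2, 0.16 deg): "
      f"{pixel_size_mm(13.0, 1.2, 0.16):.0f} mm")   # ~44 mm
\end{verbatim}
A shorter focal length at a fixed pixel angular size therefore shrinks the camera linearly, which is the cost lever exploited by the two-mirror designs discussed next.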
To reach significantly better angular resolution in conjunction with wider field-of-view systems, alternative designs are being considered. An alternative telescope design that could be used in a future IACT array is based on the Schwarzschild-Couder (SC) optical system (see Fig.~\ref{fig:vass_fig2.ps})~\cite{Vass:07}, which consists of two mirrors configured to correct spherical and coma aberrations and to minimize astigmatism. For a given light-collecting area, the SC optical system has a considerably shorter focal length than the DC optical system and is compatible with small-sized, integrated photo-sensors, such as multi-anode PMTs (MAPMTs) and possibly silicon PMs (SiPMs). Although the SC telescope optical system, based on aspheric mirrors, is more expensive than that of a DC design of similar aperture and angular resolution, it offers a reduction in the costs of focal-plane instrumentation by using pixels that are physically substantially smaller. In addition, the SC telescope offers a wide, unvignetted, 6$\deg$ field of view, unprecedented for ACTs, which can be further extended up to 12$\deg$, if necessary, when a modest degradation of imaging and loss of light-collecting area can be tolerated. Unlike a DC telescope, the two-mirror aplanatic SC design does not introduce wavefront distortions, allowing the use of fast ($>1$ GHz) electronics to exploit the very short intrinsic time scale of Cherenkov light pulses ($<3$ nsec). The Schwarzschild telescope design was proposed in 1905~\cite{Schwarzschild1905}, but the construction of an SC telescope has only recently become technologically possible, due to fundamental advances in the fabrication of aspheric mirrors utilizing replication processes such as glass slumping, electroforming, etc. It is evident that the SC design requires novel technologies and is scientifically attractive. Prototyping and a demonstration of its performance and cost are required to fully explore its potential and scientific capabilities. To summarize, the ``large IACT array'' concept provides the means to achieve the required factor of 10 sensitivity improvement over existing instruments. Significant simulations and design studies are required to make an informed decision on the exact array implementation, such as deciding between uniform or graded arrays. Two telescope designs, DC and SC, offer options for the largest collecting area, largest aperture, and highest angular resolution IACT arrays. Studies of the trade-offs between performance, cost, and robustness of operation are necessary for design conclusions. \begin{figure*}[t] \begin{center} \includegraphics[angle=0,width=6.1cm]{effarea_7_sc_b.eps} \includegraphics[angle=0,width=8.8cm]{array-conf.eps} \caption{ \textit{Left:} Effective area vs. energy for a single cell for different telescope spacings; for a very large array with a fixed number of telescopes, the total effective area will be proportional to this number.
\textit{Center, Right:} Two possible array configurations, showing a uniform array and one where the central cluster of telescopes is more densely packed to achieve a balance between the desire for a low threshold and a large effective area at higher energies.} \label{fig:array1} \end{center} \end{figure*} \begin{figure*}[t] \begin{center} \includegraphics[angle=0,width=4.0in]{vass_fig2.eps} \caption{\label{fig:optics} A future Cherenkov telescope array may use conventional Davies-Cotton or parabolic optical reflectors similar to the ones used by VERITAS, MAGIC, and H.E.S.S., or may use novel Schwarzschild-Couder optical designs that combine wide fields of view with excellent point spread functions and a reduction of the plate scale, and thus of the camera size, weight, and costs. The image shows the cross-section of an exemplary Schwarzschild-Couder design (from \cite{Vass:07}).} \label{fig:vass_fig2.ps} \end{center} \end{figure*} \subsection[Future EAS observatory]{Future EAS Observatory} The success of EAS observatories in gamma-ray astronomy is relatively recent, with the first detection of new sources within the last couple of years \cite{Abdo:07}, as compared to the over-20-year history of successes with IACTs. However, EAS observatories have capabilities that are unique and complementary to those of IACTs. The strengths of the technique lie in the ability to perform unbiased all-sky surveys (not simply of limited regions such as the Galactic plane), to measure spectra up to the highest energies, to detect extended sources and very extended regions of diffuse emission such as the Galactic plane, and to monitor the sky for the brightest transient emission from active galaxies and gamma-ray bursts and search for unknown transient phenomena. The instantaneous field of view of an EAS detector is $\approx$2 sr and is limited by the increasing depth of the atmosphere that must be traversed by the extensive air shower at larger zenith angles. However, for higher-energy gamma rays, the showers are closer to shower maximum and have more particles; thus the resolution improves. As the Earth rotates, all sources that pass within $\approx$45 degrees of the detector's zenith are observed for up to 6 hours. For a source with a Crab-like spectrum, the flux sensitivity of an EAS detector varies by less than 30\% for all sources located within $\approx$2$\pi$ sr. The angular resolution, energy resolution, and $\gamma$-hadron separation capabilities of the EAS technique are limited by the fact that the detectors sample the particles in the tail of the shower development, well past the shower maximum. The angular resolution improves at higher energies ($>$ 10 TeV); the best single-photon angular resolution achieved to date, 0.35$^{\circ}$, was obtained with the highest-energy observations of Milagro. Placing an extensive air shower detector at a higher elevation allows the particles to be detected closer to the shower maximum. For example, an observatory at 4100 m above sea level detects 5-6 times as many particles for the same energy primary as an observatory at 2650 m (the elevation of Milagro). Also, increasing the size of a detector will increase the collection area and thus the sensitivity. As both signal and background are increased, the relative sensitivity would scale proportionally to Area$^{0.5}$ if there were no other improvements. However, the effectiveness of the gamma-hadron cuts improves drastically with detector size, because the lateral shower distribution is more thoroughly sampled.
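The gap between the naive ${\rm Area}^{0.5}$ scaling above and the simulated ${\rm Area}^{0.8}$ scaling reported below is substantial for large arrays. A minimal sketch comparing the two for a Milagro-pond-sized baseline scaled to 300~m $\times$ 300~m (exponents from the text; the absolute normalization is arbitrary):
\begin{verbatim}
def sensitivity_gain(area_ratio, exponent):
    """Relative sensitivity gain when the detector area grows
    by area_ratio."""
    return area_ratio**exponent

ratio = (300.0 * 300.0) / (60.0 * 80.0)   # ~18.75x the pond area
print(f"naive Area^0.5 gain    : {sensitivity_gain(ratio, 0.5):.1f}x")  # ~4.3x
print(f"simulated Area^0.8 gain: {sensitivity_gain(ratio, 0.8):.1f}x")  # ~10x
\end{verbatim}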
Background hadron-induced showers can be efficiently rejected through the identification of muons, hadrons, and secondary electromagnetic cores, because the large transverse momentum of hadronic interactions spreads the shower secondaries over a much larger area on the ground than in gamma-ray-initiated showers. Detailed simulations using CORSIKA to simulate the air showers and GEANT4 to simulate a water Cherenkov observatory show that most background hadronic showers can be rejected by identifying large energy deposits separated from the shower core \cite{Smith_GLAST:07}. Simulations of larger versions of such a detector demonstrate that the sensitivity scales as Area$^{0.8}$, at least up to 300 m $\times$ 300 m. The high-energy sensitivity of all gamma-ray detectors is limited by the total exposure, because the flux of gamma rays decreases with energy. An EAS detector has a very large exposure from observing every source every day. For example, a detector of area 2 $\times$ 10$^4$ m$^2$ will, after 5 years, have over 1 km$^2 \times$ 100 hours of exposure. As the energy increases, EAS observatories become background-free, because the lateral distribution of muons, hadrons, and secondary cores in hadronic showers is better sampled. The low-energy response of EAS detectors is very different from that of IACTs, again because only the tail of the longitudinal distribution of the shower is observed. Past shower maximum, the number of particles in the shower decreases with each radiation length, whereas the probability of a primary penetrating several radiation lengths prior to its first interaction in the atmosphere decreases exponentially with radiation length. These two facts, together with the fact that the number of particles at shower maximum is proportional to the primary energy, imply that the effective area increases with energy $E$ as $E^{2.6}$ up to a threshold energy at which the shower can be detected even if the primary interacts within the first radiation length in the atmosphere. Therefore, EAS detectors can have an effective area of up to 100 m$^2$ at energies as low as $\sim$ 100 GeV. This area is considerably larger than Fermi's $\sim$ 1 m$^2$ and is sufficient to observe bright extragalactic sources such as active galactic nuclei and possibly gamma-ray bursts. The wide field of view of EAS observatories is required to obtain long-term monitoring of these transient sources, and EAS observatories search their data in real time for such transient events in order to send notifications within a few seconds to IACTs and observers at other wavelengths. The HAWC (High Altitude Water Cherenkov) observatory is the next logical step in the development of EAS observatories \cite{dingus:07}. It will be located in Mexico at Sierra Negra at an altitude of 4100 m and will have 10-15 times the sensitivity of Milagro. The HAWC observatory will reuse the existing photomultiplier tubes from Milagro in an approximately square array of 900 large water tanks. The tanks will be made of plastic, similar to the Auger tanks, but will be larger: 5 m in diameter and 4.3 m tall. An 8'' diameter PMT would be placed at the bottom of each tank and look up into the water volume under $\approx$4 m of water. The array would enclose 22,500 m$^2$ with $\approx$75\% active area. Thus, unlike in Milagro, the same layer of PMTs would be used both to reconstruct the direction of the primary gamma ray and to discriminate against the cosmic-ray background. The optical isolation of each PMT in a separate tank allows a single layer to accomplish both objectives.
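The quoted $\approx$75\% active area is consistent with the tank geometry given above, as the following arithmetic check shows:
\begin{verbatim}
import numpy as np

n_tanks = 900
tank_diameter_m = 5.0
enclosed_area_m2 = 22500.0

active = n_tanks * np.pi * (tank_diameter_m / 2.0)**2
print(f"active area: {active:.0f} m^2, "
      f"fraction: {active / enclosed_area_m2:.0%}")   # ~79%
\end{verbatim}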
A single tank has been tested in conjunction with Milagro, and its performance agrees with Monte Carlo simulation predictions. The optical isolation also improves the background discrimination (especially at the trigger level) and the angular and energy resolution of the detector. The performance of HAWC is shown in Figure~\ref{fig:hawc} and is compared to Milagro. These detailed calculations use the same Monte Carlo simulations that accurately predict the performance of Milagro. The top panel shows the large increase in the effective area at lower energies, as expected from the increase in altitude from 2650 m to 4100 m. At higher energies, the geometric area of HAWC is similar to the geometric area of Milagro with its outrigger tanks. However, the improved sampling of the showers over this area with the continuous array of HAWC tanks results in improved angular resolution and a major increase in background rejection efficiency. Therefore, the combined sensitivity improvement for a Crab-like source is a factor of 10-15 over Milagro. This implies that the Crab can be detected in one day, as compared to three months with Milagro. \begin{figure*}[ht] \begin{center} \includegraphics[height=5.2in]{hawc.eps} \caption{\label{fig:hawc} The sensitivity of HAWC and Milagro versus primary gamma-ray energy. Panel (a) shows the effective area, (b) the angular resolution, and (c) the efficiency with which the hadronic background showers are rejected when half of the gamma-ray events are accepted. } \end{center} \end{figure*} The water Cherenkov EAS detector can be extrapolated to enclose even larger areas, and the sensitivity of such a detector is relatively straightforward to calculate. Earlier work in this area discussed an array enclosing 100,000 m$^2$, with two layers of PMTs \cite{Sinnis2004, Sinnis2005}. Recent work indicates that a single deep layer (as in the HAWC design) will perform as well as the previous two-layer design. For example, a detector with an active detection area of 100,000 m$^2$ (HAWC100), located at 5200 m above sea level, would have an effective area at 100 GeV of $\sim$10,000 m$^2$ for showers from zenith. The low-energy response allows for the detection of gamma-ray bursts at larger redshifts than current instruments ($z\sim$1 for HAWC compared to $z\sim$0.3 for Milagro if, at the source, the TeV fluence is equal to the keV fluence). While current instruments, such as Milagro, indicate that the typical TeV fluence from a GRB is less than the keV fluence, instruments such as HAWC100 and HAWC would be sensitive to a TeV fluence 2-3 orders of magnitude smaller than the keV fluence of the brightest gamma-ray bursts. \subsection[Technology roadmap]{Technology Roadmap} \label{sec:RoadMap} The recent successes of TeV $\gamma$-ray astronomy, both in terms of scientific accomplishments and in terms of instrument performance, have generated considerable interest in next-generation instruments. Part of the excitement originates from the fact that an order-of-magnitude sensitivity improvement seems to be within reach, at acceptable cost, by making use of existing technologies. New technologies could result in even larger sensitivity improvements. A roadmap for IACT instruments over the next 3 years should focus on design studies to understand the trade-offs between performance, cost, and reliability of operation of IACT arrays, and on carrying out prototyping and the required research and development.
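Returning to the HAWC sensitivity estimate above: in the background-limited regime, significance grows as $\sqrt{t}$, so a sensitivity improvement by a factor $s$ shortens the exposure needed for equal significance by $s^2$. A minimal sketch with the numbers quoted above:
\begin{verbatim}
def detection_time(baseline_days, sensitivity_factor):
    """Exposure for equal significance when instantaneous sensitivity
    improves by sensitivity_factor (significance ~ sqrt(t))."""
    return baseline_days / sensitivity_factor**2

# Milagro: ~3 months on the Crab; HAWC: 10-15x more sensitive.
for s in (10.0, 15.0):
    print(f"{s:.0f}x sensitivity: {detection_time(90.0, s):.1f} days")
\end{verbatim}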
It is anticipated that, at the end of this R\&D phase, a full proposal for construction of an observatory would be submitted. A next-generation instrument could be built on a time scale of $\sim$5 years and then be operated for between 5 years (experiment-style operation) and several decades (observatory-style operation). For IACT instruments, the following R\&D should be performed: \begin{itemize} \item Monte Carlo simulations of the performance of large IACT arrays to optimize array configuration parameters such as array type (hybrid or homogeneous), array layout, aperture(s) of the telescope(s), and pixelation of the cameras, under a fixed cost constraint. The effects of these parameters on the energy threshold, angular resolution, and sensitivity of the observatory should be fully understood, together with the associated cost implications. \item The conservative Davies-Cotton telescope design with $f = F/D \sim 1$ should be considered as a baseline option for the future observatory. However, the limitations of this design and the benefits and cost impact of alternative options should be investigated. These alternatives include large focal length Davies-Cotton or parabolic prime-focus reflectors with $f\sim 2$ and aplanatic two-mirror optical systems, such as Schwarzschild-Couder and Ritchey-Chr\'{e}tien telescopes. The latter designs have the potential to combine significantly improved off-axis point spread functions, large fields of view, and isochronicity with reduced plate scales and consequently reduced costs of focal plane instrumentation. Prototyping of elements of the optical system of SC or RC telescopes is required to assess cost, reliability and performance improvement. Mechanical engineering feasibility studies of large focal length prime-focus telescopes and two-mirror telescopes should be conducted. \item The development and evaluation of different camera options should be continued. Of particular interest are alternative photo-detectors (photomultiplier tubes with ultra-high quantum efficiency, multi-anode photomultipliers, micro-channel plates, Si photomultipliers, Geiger-mode Si detectors, and hybrid photodetectors with semiconductor photocathodes such as GaAsP or InGaN) and a modular design of the camera which reduces assembly and maintenance costs. The compatibility of these options with different telescope designs, their reliability of operation, and their cost impact should be evaluated. \item The development of ASIC-based front-end electronics should be continued to further reduce the power consumption and price of the readout per pixel. \item A next-generation experiment should offer the flexibility to operate in different configurations, so that specific telescope combinations can be used to achieve certain science objectives. Such a system requires the development of a flexible trigger system. Furthermore, the R\&D should explore the possibility of combining the trigger signals of closely spaced telescopes to synthesize a single telescope of larger aperture. A smart trigger could be used to reduce various backgrounds based on the parallactic displacements of Cherenkov light images \cite{FK:1995}. \item The telescope design has to be optimized to allow for mass production and to minimize maintenance costs. \item The telescopes should largely run in robotic operation mode to enable a small crew to operate the entire system.
The reliability of operation of large IACT arrays should be specifically researched, including tests of instrumentation failure rates and weathering, to evaluate the required maintenance costs. \end{itemize} A roadmap for EAS arrays over the next 5 years (HAWC) is well defined by the benefits of moving the experiment to high altitude and enlarging the detection area. The cost of this path is $<$ \$10M USD. A site in Mexico has been identified a few km from the Large Millimeter Telescope; it is a 2 hour drive from the international airport in Puebla and has existing infrastructure of roads, electricity, and internet. The HAWC project will be a joint US and Mexican collaboration with scientists from Milagro, Auger, and other astronomical and high-energy physics projects. The R\&D could be finalized on a time scale of between 3 years (IACTs) and 5 years (EAS arrays). The R\&D should go hand in hand with the establishment of a suitable experimental site and the build-up of basic infrastructure. Ideally, the site should offer an easily accessible area exceeding 1 km$^2$. For an IACT array, an altitude between 2 km and 3.5 km will give the best tradeoff between low energy thresholds, excellent high-energy sensitivity, and ease of construction and operation. The U.S.\ teams have pioneered the field of ground-based $\gamma$-ray astronomy during the last 50 years. The U.S. community has formed the ``AGIS'' collaboration (Advanced Gamma ray Imaging System) to optimize the design of a future $\gamma$-ray detector. A similar effort is currently under consideration in Europe by the CTA (Cherenkov Telescope Array) group, and the Japanese/Australian groups building CANGAROO are also exploring avenues for future progress. Given the scope of a next-generation experiment, the close collaboration of the US teams with the European and Japanese/Australian groups should be continued and intensified. If funded appropriately, the US teams are in an excellent position to lead the field to new heights. \section{Technology} \label{grb-subsec} \input{7-technology.tex}
\section{Introduction} It has long been known that there is a deep and rich connection between supersymmetric nonlinear $\sigma$-models and complex geometry. From a string theory point of view, a two-dimensional nonlinear $\sigma$-model -- whose basic objects are maps $X: \Sigma \rightarrow {\cal M}$ -- corresponds to the field theory on the world-sheet $\Sigma$ of a string, which is embedded into some target space ${\cal M}$ by an embedding map $X$. It is the existence of extended supersymmetry on $\Sigma$ which then relates to additional structure on ${\cal M}$. Here we focus on $N=(2,2)$ supersymmetry. Apart from the obvious motivation coming from string theory, this is clearly a problem worthy of study in its own right. This is especially so when, again motivated by string theory, the target space is not simply a Riemannian manifold endowed with a metric $g$, but is also equipped with a torsionful connection, where the torsion is derived from an NS-NS three-form $H$. Usually, a superspace description of the world-sheet theory helps in unraveling this intricate connection with geometrical structures on ${\cal M}$. In $N=(2,2)$ superspace on a surface without boundary, corresponding to the world-sheet of a closed string, the situation is by now quite well understood: the target space geometry allowing for $N=(2,2)$ world-sheet supersymmetry is bihermitian \cite{Gates:1984nk} or -- in the language of generalized complex geometry -- generalized K\"ahler \cite{Gualtieri:2003dx}, and can always be parameterized by at least one of three types of $N=(2,2)$ superfields, called chiral, twisted chiral and semi-chiral superfields \cite{Lindstrom:2005zr}. Things are still much less understood for world-sheets with boundaries, corresponding to open strings, where the boundary of $\Sigma$ is mapped to a submanifold ${\cal N}$ of ${\cal M}$ wrapped by a D-brane. The presence of a boundary breaks at least part of the world-sheet supersymmetry, and it is the case where half of the supersymmetry is preserved that is important for understanding D-branes which are BPS in target space. We are thus led to the study of open string boundary conditions in a bihermitian background which preserve half of the world-sheet supersymmetry. It is however important to note that solving for these boundary conditions does not necessarily lead to BPS D-branes. To achieve this, the boundary conditions should also respect quantum conformal invariance. Exactly this fact also serves as an important motivation for understanding the extended superspace description of D-branes. Indeed, one way of acquiring quantum conformal invariance is by requiring the world-sheet $\beta$-functions to vanish. Achieving this at higher order in perturbation theory involves higher loop quantum field theory calculations, and is greatly facilitated by having a superspace formulation at hand \cite{Grisaru:1986px,Nevens:2006ht}. Although superspace approaches to $\sigma$-models with boundaries have been the subject of much investigation, an appropriate superspace formulation is still lacking. Here, we review how the notion of a boundary superspace promises to resolve this problem. $N=1$ boundary superspace was introduced and explored in \cite{Koerber:2003ef}. In \cite{Sevrin:2007yn} an $N=2$ boundary superspace formalism was used to recover the full classification of A and B branes on K\"ahler manifolds, which are parameterized exclusively by either chiral or twisted chiral superfields.
Especially for coisotropic A branes \cite{Kapustin:2001ij,Lindstrom:2002jb} the solution proved to be quite subtle and elegant. In \cite{Sevrin:2008tp} this was taken one step further to cover bihermitian geometries with commuting complex structures, which generically have a local description in terms of chiral and twisted chiral fields simultaneously. In this note we also discuss some interesting aspects of the classification when semi-chiral superfields are included. The full classification will appear elsewhere \cite{wip}. For a far more detailed account of these matters and a more complete list of references, we invite the reader to consult \cite{Sevrin:2007yn,Sevrin:2008tp}. Here we will only provide a rough sketch of the boundary superspace formalism and the way it is used to classify $N=2$ boundary conditions. But first we briefly summarize some facts about the relation between supersymmetry and geometry in the absence of world-sheet boundaries. \section{$N=(2,2)$ superspace and geometry} Given a target space ${\cal M}$ with metric $g$ and three-form $H$, the dynamics of a closed supersymmetric string propagating in such a background is encoded in the action \begin{eqnarray} {\cal S}=2\int d^2 \sigma \, d \theta^+ d\theta^- \,D_+X^aD_-X^b\left(g_{ab}+b_{ab} \right),\label{an11} \end{eqnarray} where we used that locally $H = db$. Here we wrote down the action in $N=(1,1)$ superspace, which required the introduction of two real Grassmann coordinates $\theta^\pm$ on top of the usual (bosonic) world-sheet coordinates $\tau$ and $\sigma$. The corresponding supercovariant derivatives are denoted by $D_\pm$ and satisfy a standard superalgebra; see for example \cite{Sevrin:2008tp}. Furthermore $X^a$, $g$ and $b$ are $N=(1,1)$ superfields. The $N=(1,1)$ superspace action eq. \rref{an11} can be written down for any Riemannian manifold $({\cal M}, g)$. One can however ask for what kind of backgrounds this action exhibits additional supersymmetries. For $N=(2,2)$ supersymmetry, this implies that there exist additional symmetries of the form \begin{eqnarray} \delta X^a= \varepsilon ^+\,J_+^a{}_b(X)\,D_+X^b+\varepsilon ^-\,J_-^a{}_b(X)\,D_-X^b,\label{tr22} \end{eqnarray} which are of the most general form consistent with dimensions and super Poincar\'e invariance. Here, $\varepsilon^\pm$ are real Grassmann supersymmetry parameters and $J_\pm$ are a priori arbitrary (1,1)-tensors which are allowed to depend on the superfields $X^a$. On-shell closure of the $N=(2,2)$ algebra and invariance of \rref{an11} under \rref{tr22} however require that $J_\pm$ are complex structures, that $g$ is hermitian with respect to both of them and that they are covariantly constant with respect to a torsionful connection: $\nabla_\pm J_\pm = 0$, where $\nabla_\pm = \nabla_g \pm g^{-1}H$, and $\nabla_g$ is the Levi-Civita connection. Note that hermiticity of the metric with respect to $J_\pm$ implies the existence of two two-forms $\omega_\pm = g J_\pm$, which are however not closed when $H$ is nonzero. Hence, these geometries are K\"ahler only when $H=0$. More generally, such geometries are called bihermitian. The result of the previous paragraph has been known for over twenty years \cite{Gates:1984nk}, but only recently, a better understanding of such geometries has been achieved. The first reason for this was the development of generalized complex geometry (GCG) \cite{Hitchin:2004ut,Gualtieri:2003dx}. 
Indeed, in \cite{Gualtieri:2003dx} it was shown that bihermitian geometry is equivalent to what in the language of GCG is called generalized K\"ahler geometry. A generalized complex structure (GCS) is an automorphism ${\cal J}$ of $T{\cal M} \oplus T^\ast{\cal M}$ that squares to minus the identity, preserves the natural pairing defined on $T{\cal M} \oplus T^\ast{\cal M}$ and is involutive with respect to the Courant bracket. A generalized K\"ahler structure then requires the existence of two commuting GCSs ${\cal J}_1$ and ${\cal J}_2$, such that their product defines a definite inner product on $T{\cal M} \oplus T^\ast{\cal M}$. Up to a b-transform, the GCSs ${\cal J}_1$ and ${\cal J}_2$ are completely specified by the complex structures $J_\pm$, together with $g$ and $H$,\footnote{Another approach is to twist the GCSs with respect to $H$ and put $b=0$ in their concrete expression given here.} \begin{eqnarray} {\cal J}_{1,2} = \frac 1 2 \left( \begin{array}{cc} 1 & 0 \\ b & 1 \end{array}\right) \left( \begin{array}{cc} J_+ \pm J_- & -(\omega^{-1}_+ \mp \omega^{-1}_-) \\ \omega_+ \mp \omega_- & -(J^t_+ \pm J^t_-) \end{array}\right) \left( \begin{array}{cc} 1 & 0 \\ -b & 1 \end{array}\right).\label{gcs} \end{eqnarray} It is easy to see that, for instance, for $J_+ = J_- = J$ the GCS ${\cal J}_1$ corresponds to a complex structure $J$ and ${\cal J}_2$ corresponds to a symplectic structure $\omega = gJ$. This example obviously corresponds to a K\"ahler structure. A second, related development was the establishment that the most general bihermitian target space can be parameterized as follows \cite{Sevrin:1996jr,Lindstrom:2005zr}. Locally, the target space can be decomposed as $\ker(J_+ - J_-) \oplus \ker(J_+ + J_-) \oplus \operatorname{coker}[J_+, J_-]$. From an $N=(2,2)$ superspace point of view, it turns out that each component of this decomposition can be parameterized by a different kind of superfield. To understand this better, let us introduce two more Grassmann coordinates $\hat\theta^\pm$ (and corresponding supercovariant derivatives $\hat D_\pm$) on top of $\theta^\pm$ to form an $N=(2,2)$ superspace. On dimensional grounds, the most general $N=(2,2)$ superspace action must necessarily be of the following form: \begin{eqnarray} {\cal S}=\int\,d^2 \sigma \,d\theta^+ \,d\theta^- \,d \hat \theta^+ \,d \hat \theta^- \, V(X), \label{an22} \end{eqnarray} where $V$ is a real dimensionless scalar potential. To obtain some dynamics, one must impose constraint equations on the superfields. Imposing linear constraints of the form $\hat D_\pm X^a = J_\pm{}^a{}_b\, D_\pm X^b$ is consistent provided $J_\pm$ are commuting complex structures. Thus, one expects such constrained superfields to parameterize $\ker[J_+, J_-]$. Indeed, diagonalizing $J_+$ and $J_-$ simultaneously and introducing complex coordinates with respect to $J_+$, we find that $\ker(J_+ - J_-)$ is parameterized by chiral fields $z^\alpha$ and anti-chiral fields $z^{\bar\alpha}$, \begin{eqnarray} \hat D_\pm z^\alpha = i D_\pm z^\alpha, \quad \hat D_\pm z^{\bar\alpha} = -i D_\pm z^{\bar\alpha}, \end{eqnarray} while $\ker(J_+ + J_-)$ is parameterized by twisted chiral fields $w^\mu$ and twisted anti-chiral fields $w^{\bar\mu}$, \begin{eqnarray} \hat D_\pm w^\mu = \pm i D_\pm w^\mu, \quad \hat D_\pm w^{\bar\mu} = \mp i D_\pm w^{\bar\mu}. \label{n22tc} \end{eqnarray} These superfields have exactly the same number of components as $N=(1,1)$ superfields.
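As a consistency check of eq. \rref{gcs}, the K\"ahler limit just mentioned can be made fully explicit: setting $J_+=J_-=J$ and $b=0$ there gives \begin{eqnarray} {\cal J}_{1} = \left( \begin{array}{cc} J & 0 \\ 0 & -J^t \end{array}\right), \qquad {\cal J}_{2} = \left( \begin{array}{cc} 0 & -\omega^{-1} \\ \omega & 0 \end{array}\right), \end{eqnarray} which are precisely the generalized complex structures canonically associated to the complex structure $J$ and to the symplectic form $\omega = gJ$.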
In contrast to the chiral and twisted chiral fields just described, since the supersymmetry algebra only closes off-shell up to terms involving $[J_+,J_-]$, we expect to need auxiliary fields in the $N=(2,2)$ parameterization of $\operatorname{coker}[J_+, J_-]$, and thus ``less constrained'' superfields in these directions. These superfields form a semi-chiral multiplet $l^{\tilde\alpha}$, $r^{\tilde\mu}$, and a semi-anti-chiral multiplet $l^{\bar{\tilde\alpha}}$ and $r^{\bar{\tilde\mu}}$, which obey \begin{eqnarray} \hat D_+ l^{\tilde\alpha} = i D_+ l^{\tilde\alpha}, &&\quad \hat D_+ l^{\bar{\tilde\alpha}} = -i D_+ l^{\bar{\tilde\alpha}},\\ \hat D_- r^{\tilde\mu} = i D_- r^{\tilde\mu}, &&\quad \hat D_- r^{\bar{\tilde\mu}} = -i D_- r^{\bar{\tilde\mu}}, \end{eqnarray} where consistency requires an equal number of $l^{\tilde\alpha}$ and $r^{\tilde\mu}$. The result of \cite{Lindstrom:2005zr} was essentially that only the above types of superfield are required to parameterize the most general bihermitian target space. All necessary data defining the background, i.e. the complex structures $J_\pm$, the metric $g$ and the three-form $H$, can be derived from the single Lagrangian density $V$, which is consequently called the generalized K\"ahler potential. In the chiral and twisted chiral sectors, these data derive linearly from $V$, while in the semi-chiral sector, the analogous relations are highly nonlinear. \section{Boundary superspace} It has been argued that supersymmetric D-branes in generalized K\"ahler backgrounds are generalized complex submanifolds $({\cal N}, {\cal F} )$ -- where ${\cal N}$ is a submanifold and $d{\cal F} = H$ on ${\cal N}$ -- with respect to either ${\cal J}_1$ or ${\cal J}_2$ \cite{Gualtieri:2003dx,Zabzine:2004dp}. For K\"ahler manifolds parameterized by chiral fields, it is not difficult to see that with the conventions from the previous section, this notion with respect to ${\cal J}_1$ leads to B branes, while the same notion with respect to ${\cal J}_2$ leads to A branes \cite{Gualtieri:2003dx}. Equivalently, one can consider only ${\cal J}_1$ and instead exchange chiral fields for twisted chiral fields to obtain A branes. It is this latter approach we will adopt in the boundary superspace formalism. Unfortunately, the geometric understanding of D-branes as generalized complex submanifolds is much less developed once $H$ differs from zero. Nevertheless, using boundary superspace, we can give a fairly concrete local description of D-branes in bihermitian backgrounds. In the absence of boundaries, the action \rref{an11} is manifestly $N=(1,1)$ supersymmetric. However, in the presence of a boundary -- at $\sigma = 0$ say -- part of the translation invariance and therefore half of the world-sheet supersymmetry is necessarily broken. Introducing a new basis, $D = D_+ + D_-$ and $D' = D_+ - D_-$, we can choose the broken supersymmetry to correspond to $D'$, while $D$ is preserved. It now turns out that the action \cite{Lindstrom:2002mc,Koerber:2003ef} \begin{eqnarray} {\cal S}=-\int d^2 \sigma \, d \theta \,D'\left(D_+X^aD_-X^b\left(g_{ab}+b_{ab}\right)\right),\label{an1} \end{eqnarray} differs from (\ref{an11}) only by a boundary term, while being manifestly invariant under the $N=1$ supersymmetry corresponding to $D$. This is thus called an action in $N=1$ boundary superspace. The boundary term in the variation of (\ref{an1}) disappears by imposing either Dirichlet boundary conditions, $\delta X^a = 0$, or Neumann boundary conditions, $D'X^a = b^a{}_b DX^b$.
To describe branes of intermediate dimensions one introduces an almost product structure; see \cite{Sevrin:2008tp} and references therein. Notice that the action \rref{an1} is not unique: one can add a boundary term of the form \begin{eqnarray} {\cal S}_b=2i\,\int d \tau \, d \theta\, (A_a\,DX^a + B_a D' X^a). \label{Eva} \end{eqnarray} The first term simply leads to a replacement $b \rightarrow {\cal F} = b + F$, where $F=dA$, in \rref{an1}. The second term seems to be problematic at first sight. A term like this arises naturally when considering twisted chiral or semi-chiral superfields. It turns out, however, that it always reduces to the form of the first term in \rref{Eva} when appropriate boundary conditions (e.g. Neumann conditions $D'X = {\cal F} DX$) are imposed. To similarly go from $N=(2,2)$ to $N=2$ superspace, we again introduce operators $D$, $D'$, $\hat D$ and $\hat D'$, where again $\hat D = \hat D_+ + \hat D_-$ and $\hat D' = \hat D_+ - \hat D_-$. We take $D$ and $\hat D$ to correspond to preserved supersymmetries, while the other combinations are broken. By the same construction that led us to the $N=1$ action, we find that the action \begin{eqnarray} {\cal S}=\int d^2 \sigma\, d \theta d \hat \theta\, D' \hat D'\, V(X, \bar X)+ i\,\int d \tau \,d \theta d \hat \theta \,W( X, \bar X), \label{an2} \end{eqnarray} has manifest $N=2$ supersymmetry and differs from (\ref{an22}) only by a boundary term. The symbol $X$ here stands collectively for chiral, twisted chiral and semi-chiral superfields. Note that we were able to add a term with a boundary potential $W$, which turns out to be crucial for the consistency of the formalism. For instance, eq. (\ref{an22}) is invariant under generalized K\"ahler transformations \begin{eqnarray} V \rightarrow V + F + \bar F + G + \bar G\,, \quad F \equiv F(z,w,l)\,, ~G \equiv G(z, \bar w, r). \end{eqnarray} This invariance only survives in the presence of a boundary if at the same time $W$ transforms as \begin{eqnarray} W \rightarrow W -i( F - \bar F) +i (G - \bar G). \end{eqnarray} Again, the variation of the action with respect to the superfields will result in a boundary term. $N=(2,2)$ supersymmetry puts strong conditions on the kinds of Dirichlet and Neumann conditions that can be imposed. This then results in a classification of the possible D-branes in a bihermitian background. To get a feeling for what kind of boundary conditions we can impose, we can already learn a lot by analyzing the $N=2$ superfield content one obtains starting from the constrained superfields in $N=(2,2)$ superspace. In this respect chiral fields turn out to be quite different from the other types of superfield, while twisted and semi-chiral superfields are in some important ways very similar. Starting from an $N=(2,2)$ chiral superfield $z$ and its complex conjugate $\bar z$, we end up with the $N=2$ superfields $z$, $\bar z$, $D'z$ and $D'\bar z$. Note however that in $N=2$ superspace these fields are still constrained. The situation is very different for a twisted chiral field $w$. From eq. \rref{n22tc} we get $N=2$ superspace relations like $\hat D w = i D' w$ which relate the $N=2$ superfields $w$ and $D'w$. This implies that in $N=2$ superspace, we only end up with the superfields $w$ and $\bar w$, which are unconstrained. Roughly the same happens for a semi-chiral multiplet $l$, $r$, $\bar l$ and $\bar r$: in $N=2$ superspace we end up with the unconstrained superfields $l$, $r$, $\bar l$ and $\bar r$, and some (unconstrained) auxiliary fields.
Knowing this, a lot can already be understood about possible boundary conditions. Since for chiral fields $Dz$ and $D'z$ are unrelated, Dirichlet and Neumann conditions can be imposed independently, while every condition implies its complex conjugate. So in chiral directions, one finds an even number of Dirichlet and an even number of Neumann boundary conditions. For example, on K\"ahler manifolds parameterized by chiral fields exclusively, one finds holomorphically embedded submanifolds, $[J,R] = 0$, with a holomorphic $U(1)$ bundle with field strength $F_{\alpha\bar\beta} = -i \partial_\alpha \partial_{\bar\beta} W$, namely the well known B branes. In contrast, for twisted chiral and semi-chiral fields, one has two options. First of all, imposing a Dirichlet condition on a twisted chiral field automatically implies a Neumann condition, basically because of the constraint equation $\hat D w = i D' w$. For semi-chiral fields it turns out that Dirichlet conditions always come in even numbers, while they still automatically imply (an even number of) Neumann conditions.\footnote{Note that this implies that D0-branes preserving $N=2$ world-sheet supersymmetry can only exist on K\"ahler manifolds, while D1-branes only exist on manifolds parameterized by one twisted chiral field, any number of chiral fields and their complex conjugates.} Restricting to the case where chiral fields are absent allows us to be a bit more concrete. The boundary term in the variation of the action can in that case be written as \begin{eqnarray} \delta {\cal S}\vert_{bdy} = \int d\tau d\theta d\hat\theta \, (B_a \delta X^a + \delta W), \label{bv} \end{eqnarray} where the $B_a$ are certain expressions involving the bulk potential $V$. The exterior derivative of the one-form with components $B_a$ turns out to be $\Omega$, the symplectic form which exists in the absence of chiral fields, \begin{eqnarray} \Omega = 4g (J_+ - J_-)^{-1}. \end{eqnarray} Let us take the extremal case where we impose a maximal number of Dirichlet conditions, so that ${\cal N}$ has half the dimensionality of ${\cal M}$. Introducing local coordinates $\sigma^i$ on the brane, the vanishing of \rref{bv} implies the condition $\partial_i W = - B_a \partial_i X^a$. The integrability condition for this is precisely that the pull-back of the symplectic form $\Omega$ to the brane vanishes, so that the brane wraps a Lagrangian cycle with respect to $\Omega$. On the other hand, since twisted and semi-chiral superfields are unconstrained at the boundary, we can choose to impose a boundary constraint on them. Taking again the extreme case where such a constraint is imposed on all $X^a$ in eq. \rref{bv}, we get \begin{eqnarray} \hat D X^a = K^a{}_b (X) D X^b, \label{bc} \end{eqnarray} so that all $X^a$ become chiral at the boundary. Note that this results in a space-filling brane. As before, consistency of an equation like this immediately implies that $K$ is a complex structure (a short check is given below). One can show that $\Omega$ is a $(2,0)+(0,2)$-form with respect to $K$, which implies that the target space needs to be $4n$-dimensional, where $n\in {\mathbb N}$. This type of brane also requires a non-vanishing world-volume flux of the form $F = \Omega K$. Summarizing, these are the conditions for a maximally coisotropic brane, again with respect to $\Omega$. The intermediate cases describe coisotropic branes of intermediate dimension.
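To make this last point explicit, here is a minimal check that eq. \rref{bc} forces $K^2=-1$, assuming the standard boundary algebra $D^2 = \hat D{}^2$ with $\{D,\hat D\}=0$, and taking $K$ constant for simplicity (for field-dependent $K$ the extra terms produce the integrability condition that turns $K$ into a complex structure): \begin{eqnarray} \hat D{}^2 X^a = \hat D\left(K^a{}_b\, D X^b\right) = -K^a{}_b\, D\hat D X^b = -K^a{}_b K^b{}_c\, D^2 X^c\,, \end{eqnarray} so that $D^2 X^a = -(K^2)^a{}_c\, D^2 X^c$, i.e. $K^2=-1$.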
In the special case where only twisted chiral fields are present, this story reduces to the known one for A branes on K\"ahler manifolds \cite{Kapustin:2001ij}. Our results illustrate in a very concrete way that for bihermitian geometries of symplectic type (although no longer necessarily K\"ahler), there exists a class of D-branes, which are generalized complex submanifolds with respect to the symplectic structure,\footnote{That ${\cal J}_1$ is indeed of symplectic type follows from the fact that $\omega_+^{-1} - \omega_-^{-1}$ is proportional to $\Omega^{-1}$ and thus invertible.} as was anticipated in \cite{Gualtieri:2003dx}. As such, this generalizes the notion of A branes on K\"ahler manifolds. An important difference with respect to the purely chiral case is that in this symplectic case, the boundary potential $W$ is essentially fixed by consistency requirements. This is in stark contrast with B branes, where $W$ is completely independent of $V$ and serves in a very straightforward way as a potential for the gauge field living on the brane. This leaves only the issue of adding chiral fields to the mix. In \cite{Sevrin:2008tp} this was solved in the case where only chiral and twisted chiral fields are present. As we hinted at before, a nice geometric interpretation like the one in the previous paragraph is still lacking for this case. Nevertheless, very concrete boundary conditions were obtained also in this case. For example, if Neumann conditions are chosen in all twisted chiral directions, one can combine the boundary constraints on the twisted chiral fields with the constraints on the chiral fields along the brane to obtain \begin{eqnarray} \hat D X^M = {\cal K}^M{}_N DX^N, \quad \mbox{where} \quad {\cal K} = \left( \begin{array}{cc} J & 0 \\ L & K \end{array} \right) \label{calK}, \end{eqnarray} and the upper (lower) components of $X^N$ are (twisted) chiral fields. Notice that nonzero $L$ now leads to components of $F$ with one leg in chiral and one leg in twisted chiral directions. It seems very likely that the same general structure persists once we include semi-chiral fields. \section{Outlook and applications} The full set of possible boundary conditions for models including the three types of superfield is very close to being fully understood and will appear in a forthcoming paper \cite{wip}. A complete understanding of the geometry behind these boundary conditions, possibly with the help of generalized complex geometry, is certainly desirable and is being investigated. Here, a more geometric understanding of the role of the boundary potential $W$ could be very important. Once a full classification is obtained, we have the necessary tools for studying more general -- e.g. six-dimensional -- examples as well as duality transformations between them. In a second part of this write-up, we focus on some four-dimensional examples and applications of this general formalism. In particular, we discuss 3-branes on the WZW model on $SU(2)\times U(1)$. This is then used as a starting point for using the power of superspace to perform explicit T-duality transformations which map these branes to various branes on K\"ahler manifolds. A particularly interesting application of this is the construction of new examples of coisotropic branes. We show, for instance, how to construct a maximally coisotropic brane on a K\"ahler manifold which is not hyperk\"ahler. \begin{acknowledgement} We thank Ulf Lindstr\"om, Martin Ro\v{c}ek and Maxim Zabzine for useful discussions and suggestions.
All authors are supported in part by the European Commission FP6 RTN programme MRTN-CT-2004-005104. AS and WS are supported in part by the Belgian Federal Science Policy Office through the Interuniversity Attraction Pole P6/11, and in part by the ``FWO-Vlaanderen'' through project G.0428.06. AW is supported in part by grant 070034022 from the Icelandic Research Fund. \end{acknowledgement}
\section{Introduction} The detection of gravitational waves (GWs) will have many implications in physics and astrophysics. Besides confirming the general theory of relativity, it will allow the investigation of several astrophysical phenomena, such as the existence of black holes and the mass and abundance of neutron stars, thus opening new scientific frontiers. The most promising sources for the detection of GWs are neutron stars and black holes. These objects emit waves in a very wide spectrum of frequencies determined by their quasi-normal mode oscillations \cite{Marranghello}. With the goal of analyzing the possibility of a future detection of the quasi-normal modes of compact stars by resonant mass detectors (RMDs), we focus our attention on the region of the spectrum in the range 0.8-3.4 kHz, which is the operation region of the antennas ALLEGRO, EXPLORER, NAUTILUS, AURIGA \cite{astone2007}, SCHENBERG, and MiniGrail. In particular we will work with the frequency band of the spherical detectors SCHENBERG and MiniGrail (2.8-3.4 kHz) \cite{minigrail2007,costa2008}. SCHENBERG, installed at the Physics Institute of the University of Sao Paulo (Sao Paulo, Brazil), is the second spherical detector ever built and the first equipped with a set of parametric transducers. It underwent its first test run on September 8, 2006, with three transducers operational. Recent information on the present status of this detector can be found in reference \cite{cqg2008}. It is worth stressing also that among all known GW detectors, the spherical ones are the only ones capable of determining the direction of the incoming wave \cite{lenzi20081,lenzi20082}. In this paper we present an extension of the work of Marranghello \cite{Marranghello}, where results were shown for a restricted frequency band that includes the SCHENBERG and MiniGrail bandwidth. In the present work we analyze the case of a possible future detection by all resonant antennas and compare the mass and radius ranges obtained from the frequency bands of the GW modes to the masses and radii calculated with several relativistic equation of state (EoS) models. In section 2 we introduce the f and p$_I$-modes, in section 3 we show the mass-radii diagrams for the f and p$_I$-modes and compare them with the ones obtained with different relativistic EoS models, in section 4 we introduce the damping time of the f-mode and its mass-radii diagram and finally, in section 5, we present our concluding remarks. \section{The quasi-normal modes: f and p$_I$-modes} Neutron stars have a rich spectrum of frequencies because the fluid perturbation oscillates in many different modes. From the GW point of view the most important quasi-normal modes are the fundamental mode of the fluid oscillation (f-mode), the first pressure mode (p$_{I}$-mode), the first GW mode (w$_{I}$-mode) \cite{kokkotas92} and the r-modes which, under certain circumstances, can be an important source of GWs \cite{kokkotas99}. In this work we concentrate on the f and p$_{I}$-modes. The fundamental mode is governed by the density distribution inside the star, while for the p-modes pressure is the restoring force.
In reference \cite{Benhar2004}, the authors obtained empirical formulae for the frequencies of these two modes as functions of the mass and radius, using a wide sample of equations of state: \begin{equation}\label{eq1} \nu_f = (0.79\pm 0.09) +(33\pm 2)\sqrt{\frac{M}{R^3}}, \end{equation} \begin{equation}\label{eq2} \nu_p = \frac{1}{M}\left[(-1.5\pm 0.8) + (79\pm 4)\frac{M}{R}\right], \end{equation} where the masses and radii are given in km (recall that $M_\odot \thickapprox 1.477$ km), while $\nu_f$ and $\nu_p$ are given in kHz. Using the empirical relations (\ref{eq1}) and (\ref{eq2}), we have calculated the radii $R$ of the stars for a given interval of masses $M$ in the range of frequencies that includes the bandwidth (0.8-3.4 kHz) of all RMDs in operation. In Table (\ref{table1}) we list the frequency bands of these detectors. \begin{table}[ht] \begin{center} \caption{Frequency bands of the RMDs in operation in the world.} \begin{tabular}{llcc} \hline \hline Antenna & Location & Freq.(Hz) & Type \\ \hline ALLEGRO \ \ \ \ \ \ \ & Baton Rouge & 890-920 & Bar \\ EXPLORER & CERN & 895-920 & Bar \\ NAUTILUS & Frascati & 905-925 & Bar \\ AURIGA & Legnaro & 850-930 & Bar \\ SCHENBERG & S\~ao Paulo & \ \ \ 3100-3300 \ \ \ & Spherical \\ MiniGrail & Leiden & 2800-3000 & Spherical \\ \hline \end{tabular}\label{table1} \end{center} \end{table} Through these relations we have obtained diagrams for the p$_I$ and f-modes that relate the GW frequency to the masses and radii of the sources. These diagrams allow us to determine, from the compactness of the star, the best candidates for a future detection by resonant antennas. We can see in figures (1) and (2) the f and p$_I$-mode mass-radii diagrams, where different gray scales identify the different frequencies. \section{Comparison of the mass-radius diagrams with the ones obtained by relativistic EoS models} To determine which relativistic models of compact stars emit GWs in the frequency bands of the RMDs, we compare the diagrams of relations (\ref{eq1}) and (\ref{eq2}) with neutron star mass and radius sequences obtained from different relativistic models that generate several equations of state for hadronic matter, such as the models NP, NPH, NPHQ with and without the isovector-scalar $\delta$ \cite{Constanca01}, namely \begin{enumerate} \item[-] the models with parameter sets GM1, GM3, NL3, TM1 \cite{Constanca02,Constanca03} \end{enumerate} and some for strange quark matter, such as \begin{enumerate} \item[-] the Nambu-Jona-Lasinio model (NJL) \cite{Constanca4}, the color-flavor locked phase (CFL) \cite{Sanjay} and the MIT bag model for different values of the bag constant \cite{MITbag}, as well as hybrid star EoS. \end{enumerate} Relativistic hadronic models have been widely used in order to describe nuclear matter, finite nuclei and stellar matter properties, and recently the high temperature regime produced in heavy ion collisions \cite{debora}. Many variations of the well known quantum hadrodynamic model \cite{sw} have been developed and used over the last decades. Some of them rely on density dependent couplings between the baryons and the mesons \cite{original1,original2,tw,br,gaitanos,twring1,twring2} while others use constant couplings \cite{nl3,tm1,glen}. Still another possibility of including density dependence in the Lagrangian density is through derivative couplings among mesons and baryons \cite{delf1,delf2,chiappa1} or the coupling of the mediator mesons among themselves \cite{nlwr1,nlwr2,nlwr3}.
The relativistic model couplings are adjusted in order to fit expected nuclear properties such as binding energy, saturation density, compressibility and symmetry energy at saturation density, particle energy levels, etc. These same relativistic models are extrapolated to higher densities, as in stellar matter, and the results obtained for the neutron star masses and radii compare quite well with astronomical observations. In the case of bare quark stars, the strange matter inside the star is usually described by the MIT bag model, a Fermi gas of free quarks with a vacuum energy known as the bag constant, or by chiral models like the Nambu-Jona-Lasinio (NJL) model, which has a dynamical chiral symmetry breaking mechanism that generates mass for the quarks. Recently, the possibility that quarks can be paired at high densities and be in a color superconductive phase has originated new quark matter equations of state that, depending on the pairing interaction, can be quite stiff and produce large star masses and radii \cite{MM1,MM2,MM3}. The main feature of quark stars, since they are bound by the strong force and not by gravity, is that they are more compact and have a smaller mass to radius ratio than neutron stars. As we will see, it is the fact that strange stars can have small radii that explains the high frequency GW modes produced by this type of star. We can see in diagrams (1) and (2) that the frequency bands of the RMDs are in the dark region, where GWs are expected to be generated by less compact neutron stars in both diagrams. This shows the impossibility of a future detection, by cylindrical antennas, of relativistic neutron star candidates emitting gravitational waves in the p$_I$ or f-mode. However, we can see in the diagrams that the spherical detectors, MiniGrail and Schenberg, have some candidates near their resonant frequencies. The most probable source would correspond to a very compact object with radius smaller than 10 km. The models that fulfill this condition are models of strange quark stars, as predicted in \cite{Marranghello}. This is confirmed when we compare with the compact star sequences generated from the MIT bag model (with bag constant $B^{1/4} = 170$ MeV), the NJL model and the CFL phase of quark matter. On the other hand, the p$_I$-mode would only be expected to come from less compact neutron stars. \section{The damping time} How can we distinguish the f-mode in a putative detection? And how can we determine the mass and radius of the star? The damping time is the answer to these questions \cite{Marranghello}. In \cite{Benhar2004} the authors obtained an empirical relation for the f-mode damping time as a function of the radius and mass, described by: \begin{equation}\label{eq3} \tau_f = \frac{R^4}{cM^3}\left[ (8.7 \pm 0.2)\cdot10^{-2} + (-0.271 \pm 0.009) \frac{M}{R} \right]^{-1}. \end{equation} Even though the RMDs cannot determine the damping time with small errors, we use the empirical relation (\ref{eq3}) to calculate the damping time given the intervals of radii and masses $(R,M)$ obtained with relation (\ref{eq1}). We thus get a new mass-radius diagram, now distinguishing the damping time. We compare the diagram with models of quark stars, the CFL phase and the MIT bag model with bag constant equal to 170 MeV. The results obtained can be seen in figure (3). Through these results we can estimate the masses and radii of the stars by solving the inverse problem, as shown in \cite{kokkotas99}.
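A minimal numerical sketch of this inverse problem, using the central values of eqs. (\ref{eq1}) and (\ref{eq3}) (the bracketing interval for the radius and the test star below are illustrative choices, not values taken from the text):
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

C = 2.998e5  # speed of light in km/s; M and R below are in km

def nu_f(M, R):
    # Empirical f-mode frequency in kHz, eq. (1), central values.
    return 0.79 + 33.0 * np.sqrt(M / R**3)

def tau_f(M, R):
    # Empirical f-mode damping time in seconds, eq. (3), central values.
    return R**4 / (C * M**3) / (8.7e-2 - 0.271 * M / R)

def invert(nu_obs, tau_obs):
    # Eq. (1) fixes M/R^3; eq. (3) then determines R by root finding.
    x2 = ((nu_obs - 0.79) / 33.0) ** 2          # = M / R^3
    g = lambda R: tau_f(x2 * R**3, R) - tau_obs
    R = brentq(g, 6.0, 10.4)  # bracket on the compact-star branch (illustrative)
    return x2 * R**3, R

M0, R0 = 2.07, 10.0           # a 1.4 solar-mass, 10 km test star (M in km)
print(invert(nu_f(M0, R0), tau_f(M0, R0)))   # recovers (M0, R0)
\end{verbatim}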
\section{Summary} The RMD bandwidths lie in spectrum regions with few (or no) neutron star candidates emitting GWs through their f and p$_I$-modes. However, the spherical detectors are in a region where the f-modes of very compact objects can be detected. All sequences of neutron stars described by quark matter models are in the region near the MiniGrail and Schenberg bandwidths, but the MIT bag model with bag constant equal to 170 MeV and the CFL phase of quark matter with bag constant 200 MeV and gap 100 MeV are the best candidates for these detectors, as we can see in figures (1) and (3). On the other hand, the detection of the f and p$_I$-modes of neutron stars by bar detectors is unlikely, because their bandwidths are located at low frequencies. \begin{acknowledgements} This work was partially supported by FEDER and FCT (Portugal) under the project PDCT/FP/64707/2006. CHL, MM and RMM acknowledge the financial support given by CAPES through the fellowship 2071/07-0 and the international cooperation program Capes-Grices between Brazil and Portugal. \end{acknowledgements}
\section{Introduction} The Standard Model of particle physics (SM) has been successful in explaining experimental data so far. However, the Higgs sector of the model is experimentally unknown. This fact has led many theorists to suggest that the sector could be non-minimal, and today it is common to study extensions of the SM scalar sector. The simplest extension is the Two Higgs Doublet Model (2HDM)\cite{mssm,HHG}, which involves two scalar doublets in the process of electroweak symmetry breaking. After the symmetry breaking, there are five physical Higgs particles: two charged Higgs $H^\pm$, two CP-even $H^0$, $h^0$, and one CP-odd $A^0$. The charged particles $H^\pm$ are characteristic of models with two Higgs doublets and their discovery would be a clear signal of physics beyond the SM.\\ A promising alternative for the search for new physics lies in the collisions of photons that will be available at the ILC (International Linear Collider). The ILC is the next major project to be developed after the LHC (Large Hadron Collider). The photons will be produced by Compton backscattering, and earlier studies show that the production of neutral scalars in photon collisions presents a considerable probability of detection. Besides, photons couple directly to charged particles, so $\gamma\gamma$ high energy collisions could provide a better understanding of several aspects of the SM and its extensions\cite{rdr}.\\ In this work we study the process $\gamma\gamma\to A^0\to (W^-\to l\nu) (H^+\to f_if_j)$ in the framework of a general 2HDM. The first section presents a brief overview of the 2HDM, focusing on the third type (2HDM-III). The next section contains the expressions used in the calculations. The third section contains the results obtained and finally the conclusions are drawn. \section{The Two Higgs Doublet Model type III} The minimal extension of the Higgs sector of the SM consists in adding a second scalar doublet with the same quantum numbers as the first one\cite{HHG,RDiaz}. We denote them by: \begin{eqnarray} \Phi_1= \left(\begin{array}{c} \phi_1^+\\ \phi_1^0 \end{array}\right) \hspace*{1cm} \Phi_2= \left(\begin{array}{c} \phi_2^+\\ \phi_2^0 \end{array}\right), \end{eqnarray} with hypercharge $Y_{\Phi_1}=Y_{\Phi_2}=1$. Both doublets acquire vacuum expectation values different from zero: \begin{equation} \left<\Phi_1\right>_0=\frac{v_1}{\sqrt{2}},\hspace*{1cm} \left<\Phi_2\right>_0=\frac{v_2}{\sqrt{2}}. \end{equation} The most general gauge invariant lagrangian which couples both Higgs fields to fermions is: \begin{eqnarray} -\mathcal L_Y&=& \eta_{ij}^{U,0}\bar Q_{iL}^0\tilde\Phi_1U_{jR}^0 +\eta_{ij}^{D,0}\bar Q_{iL}^0\Phi_1D_{jR}^0\nonumber\\ &+&\xi_{ij}^{U,0}\bar Q_{iL}^0\tilde\Phi_2U_{jR}^0 +\xi_{ij}^{D,0}\bar Q_{iL}^0\Phi_2D_{jR}^0\nonumber\\ &+&\eta_{ij}^{E,0}\bar l_{iL}^0\tilde\Phi_1E_{jR}^0 +\xi_{ij}^{E,0}\bar l_{iL}^0\Phi_2E_{jR}^0+h.c. \end{eqnarray} where $\eta^{U,D}$ and $\xi^{U,D}$ are non-diagonal $3\times3$ mixing matrices, $\tilde\Phi_i=i\sigma_2\Phi_i^*$, $(U,D)_R$ are right-handed fermion singlets, $Q_L$ are left-handed fermion doublets, and the index 0 indicates that the fields are not mass eigenstates.\\ In the most general case, both Higgs doublets couple to the up and down sectors, and therefore they contribute simultaneously to the process of mass generation for the quarks. This case leads to FCNC (Flavor Changing Neutral Currents) at tree level, because it is impossible to diagonalize both matrices $\eta$ and $\xi$ simultaneously.
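A quick numerical illustration of this obstruction (random real stand-ins for the Yukawa matrices, purely for illustration): the bi-unitary transformation that diagonalizes one matrix generically leaves the other non-diagonal, which is the origin of the tree-level FCNC.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
eta = rng.normal(size=(3, 3))   # stand-in for eta (illustrative, random)
xi = rng.normal(size=(3, 3))    # stand-in for xi

# Bi-unitary (here bi-orthogonal) transformation diagonalizing eta via SVD.
U, s, Vh = np.linalg.svd(eta)
print(np.round(U.T @ eta @ Vh.T, 3))  # diagonal by construction
print(np.round(U.T @ xi @ Vh.T, 3))   # generically NOT diagonal -> FCNC
\end{verbatim}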
This general case, in which both doublets couple to both sectors, is known as the 2HDM type III. However, FCNC processes at tree level are highly suppressed experimentally. In order to avoid them, Glashow and Weinberg\cite{RDiaz} devised the following set of discrete symmetries: \begin{eqnarray} &\Phi_1\to\Phi_1 \mbox{ and } \Phi_2\to-\Phi_2,\nonumber\\ &D_{jR}\to\mp D_{jR} \mbox{ and } U_{jR}\to - U_{jR}. \end{eqnarray} The condition of invariance under this discrete symmetry leads to two cases: \begin{itemize} \item By using $D_{jR}\to -D_{jR}$, the matrices $\eta^{U,0}$ and $\eta^{D,0}$ have to be eliminated from the lagrangian. In this case $\Phi_1$ decouples in the Yukawa sector and only $\Phi_2$ gives masses to the up and down sectors. This case is known as the 2HDM type I. \item By using $D_{jR}\to D_{jR}$, the matrices $\eta^{U,0}$ and $\xi^{D,0}$ must be eliminated from the lagrangian. In this case $\Phi_1$ couples to the down sector and $\Phi_2$ gives masses to the up sector. This case is known as the 2HDM type II. \end{itemize} The 2HDM-III is the only 2HDM that allows FCNC processes at tree level. Precision tests of the SM show good agreement with the FCNC parameters, except for the phenomenon of neutrino oscillation. Besides, FCNC processes do not seem to violate any fundamental law. We study the 2HDM-III because it has a richer phenomenology, and it is possible to obtain the first two types as limiting cases of this one.\\ In the 2HDM-III, a rotation of the scalar fields does not change the physical content of the model\cite{RDiaz}. This rotation can eliminate the VEV of one doublet. If we take $\left<\Phi_2\right>=0$, it is found that $\tan\beta=0$. This is known as the {\bf fundamental parameterization}. We denote the VEV of the first doublet as $\left<\Phi_1\right>=v$.\\ For a better study of the FCNC processes, Cheng, Sher and Yuan (CSY) proposed an ansatz for the Yukawa matrices such that \begin{equation} \xi_{ij} = \frac{\sqrt{m_im_j}}{v}\lambda_{ij} \end{equation} This ansatz reflects the fact that the couplings between fermions and the Higgs particle in the SM are proportional to the mass of the fermion. The parameters $\lambda_{ij}$ could change the hierarchy between fermionic couplings, and because of this it is expected that they are $\sim 1$. Restrictions on these parameters have been obtained in references \cite{RDiaz,bounds,jimenez}. The most relevant are: \begin{eqnarray} \xi_{\mu\tau}^2 &\in&[7.62 \times 10^{-4} ; 4.44 \times 10^{-2}]\nonumber\\ \xi_{\tau\tau} &\in&[-1.8 \times 10^{-2} ; 2.2 \times 10^{-2}]\nonumber\\ \xi_{\mu\mu} &\in& [-0.12;0.12]\nonumber\\ \xi_{\mu e} &\in& [-0.39;0.39]\nonumber\\ \lambda_{bb} &\in& [-6;6]\nonumber\\ \lambda_{tt} &\in& [-\sqrt{8};\sqrt{8}].\label{restrictions} \end{eqnarray} \section{The process $\gamma\gamma\to A^0 \to W^+H^-\to l\nu f_if_j$} The loops contributing to neutral Higgs production are shown in Figure \ref{fig:loops}, and the decay $A^0\to H^\pm W^\mp$ exists at tree level in the framework of the 2HDM-III.
The process $H^-\to q_i \bar{q_j}$ has been studied in the 2HDM-III\cite{hcardenas}, and under the restrictions mentioned above it has been found that the most relevant decay is $H^-\to t\bar b$.\\ \begin{figure}[htb] \centering \includegraphics[width=.22\linewidth]{loopCircle}\hspace*{2cm} \includegraphics[width=.22\linewidth]{loopTriangle}\\ \caption{Contributing diagrams in the process $\gamma\gamma\to A^0$} \label{fig:loops} \end{figure} The decay width of the process $\gamma\gamma\to A^0$ is given by \begin{eqnarray} \Gamma\left(\gamma\gamma\to A^0\right) = \frac{\alpha^2 g^2}{1024\pi^3}\frac{m_{A^0}^3}{m_W^2}\left| \sum_i N_Ce_i^2F(\tau_i)R_i^{A^0} \right|^2. \end{eqnarray} The factor $R_i^{A^0}$ is the relative coupling between the 2HDM-III and the SM, the kinematical factor is $\tau_i=4m_i^2/m_{A^0}^2$, the function $F(\tau)$ is defined as: \begin{equation} F(\tau)=-2\tau\left(1+(1-\tau)f(\tau)\right), \end{equation} and the function $f$ is: \begin{eqnarray} f(\tau)=\left\lbrace \begin{array}{ll} -\frac14\left|\mbox{Ln}\left(\frac{1+\sqrt{1-\tau}}{1-\sqrt{1-\tau}}\right)-i\pi\right|^2&\tau<1\\ \mbox{ArcSin}\left(\sqrt{\frac{1}{\tau}}\right)^2&\tau\ge 1 \end{array} \right.. \end{eqnarray} \begin{figure}[htb] \centering \includegraphics[width=.5\linewidth]{DecayWidthggA0} \caption{\small Decay width of the process $\gamma\gamma\to A^0$ to one loop for $\lambda_{bb}=1$ and $\lambda_{tt}=\sqrt{8}$} \label{fig:decaywidth} \end{figure} Figure \ref{fig:decaywidth} shows the decay width of the process $\gamma\gamma\to A^0$ for several values of the parameters $\lambda_{bb}$ and $\lambda_{tt}$, according to the restrictions given in equation (\ref{restrictions}). It is found that changes in $\lambda_{bb}$ do not have a big impact on the results, while changes in $\lambda_{tt}$ are considerable. It is also found that the decay width increases with the $A^0$ mass. We will therefore consider $\lambda_{bb}=6$ and values $m_{A^0}>600$ GeV.\\ The decay width for the process $A^0\to W^-H^+$ is given by \begin{eqnarray} \Gamma(A^0\to W^-H^+)=\frac{\cos^2\alpha m_{H^+}^3}{64\pi v^2}\times\hspace*{2.5cm}\nonumber\\ \left[\sqrt{1-\left(\frac{m_{W^-}+m_{A^0}}{m_{H^+}}\right)^2}\right. \left.\sqrt{1-\left(\frac{m_{W^-}-m_{A^0}}{m_{H^+}}\right)^2}\right]^3, \end{eqnarray} where $\alpha$ is the mixing angle of the Higgs eigenstates and $v$ is the vacuum expectation value as defined in the fundamental parameterization.\\ It has been found that the most relevant decay of the charged Higgs into fermions in this model is the decay into $t\bar b$ quarks\cite{hcardenas}. This is given by \begin{eqnarray} \Gamma(H^+\to t\bar b) = \frac{3m_{H^+}^2K_{tb}^2}{16\pi v^2}\times\hspace*{3.5cm}\nonumber\\ \left[\left(1-\frac{m_t^2+m_b^2}{m_{H^+}^2}\right)-4\lambda_{tt}\lambda_{bb}\frac{m_t^2m_b^2}{m_{H^+}^2}\right]\times \left|\vec p_H(m_t,m_b)\right|, \end{eqnarray} where $K_{tb}$ is the CKM matrix element, and $\left|\vec p_H(m_1,m_2)\right|$ is defined as: \begin{eqnarray} \left|\vec p_H(m_1,m_2)\right|=\hspace*{5cm}\nonumber\\ \sqrt{1-\left(\frac{m_{1}+m_{2}}{m_{H}}\right)^2} \sqrt{1-\left(\frac{m_{1}-m_{2}}{m_{H}}\right)^2}. \end{eqnarray}
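A minimal numerical sketch of the loop functions $f(\tau)$ and $F(\tau)$ and of $\Gamma(\gamma\gamma\to A^0)$ as defined above, keeping only the top-quark loop; the numerical inputs ($\alpha$, $g$, $m_W$, $m_t$) are standard approximate values, and the relative coupling $R_t^{A^0}=1$ is a placeholder, not a value taken from the text.
\begin{verbatim}
import numpy as np

ALPHA, G = 1 / 137.0, 0.65      # QED coupling and SU(2) coupling g (approx.)
MW, MT = 80.4, 173.0            # W and top masses in GeV (approx.)

def f(tau):
    # f(tau) as defined in the text, for tau < 1 and tau >= 1.
    if tau >= 1.0:
        return np.arcsin(np.sqrt(1.0 / tau)) ** 2
    s = np.sqrt(1.0 - tau)
    return -0.25 * abs(np.log((1.0 + s) / (1.0 - s)) - 1j * np.pi) ** 2

def F(tau):
    return -2.0 * tau * (1.0 + (1.0 - tau) * f(tau))

def gamma_gg_A0(mA, R_t=1.0):
    # Top loop only: N_C = 3, e_t = 2/3; R_t stands for R_t^{A0} (placeholder).
    tau_t = 4.0 * MT**2 / mA**2
    amp = 3 * (2.0 / 3.0) ** 2 * F(tau_t) * R_t
    return ALPHA**2 * G**2 / (1024 * np.pi**3) * mA**3 / MW**2 * abs(amp) ** 2

print(gamma_gg_A0(600.0))       # width in GeV for m_A0 = 600 GeV
\end{verbatim}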
The cross section for the whole process is calculated as \begin{eqnarray} \sigma(\gamma\gamma\to A^0\to \tau\nu_{\tau}t\bar b)=\hspace*{4cm}\\ 8\pi\frac{\Gamma(\gamma\gamma\to A^0)\Gamma(A^0\to\tau\nu_{\tau}t\bar b)}{(E_{\gamma\gamma}^2-m_{A^0}^2)^2+\Gamma_{A^0}^2m_{A^0}^2}\,g(\lambda,\lambda')\nonumber \end{eqnarray} Figures \ref{fig:sigma600} and \ref{fig:sigma800} show the cross section for the process $\gamma\gamma \to A^0\to W^+H^- \to (\tau\nu_\tau)(t\bar b)$, using $E_{\gamma\gamma}=1000$ GeV. \section{Conclusions} \begin{figure}[htb] \centering \includegraphics[width=.5\linewidth]{CrossSection_mA600_Log} \caption{\small Cross section (in barns) of the process $\gamma\gamma\to A^0\to W^-H^+ \to e\nu_et\bar b$ for different values of $\lambda_{tt}$, using $m_{A^0}=600$ GeV and $\lambda_{bb}=6$} \label{fig:sigma600} \end{figure} \begin{figure}[htb] \centering \includegraphics[width=.5\linewidth]{CrossSection_mA800_Log} \caption{\small Cross section (in barns) of the process $\gamma\gamma\to A^0\to W^-H^+ \to e\nu_et\bar b$ for different values of $\lambda_{tt}$, using $m_{A^0}=800$ GeV and $\lambda_{bb}=6$} \label{fig:sigma800} \end{figure} We found the cross section for the process $\gamma\gamma \to A^0\to W^+H^- \to (\tau\nu_\tau)(t\bar b)$ in the framework of the 2HDM-III. The results are shown in Figures \ref{fig:sigma600} and \ref{fig:sigma800}. It is found that the cross section takes values between 1 pb and 10 pb for the parameters $\lambda_{bb}=6$, $\lambda_{tt}=\sqrt{8}$ and $m_{A^0}=600$ GeV. For a higher value of $m_{A^0}$ we get higher values of the cross section, between 10 pb and 100 pb. For lower values of the parameter $\lambda_{tt}$, we still get cross section values between 0.1 pb and 10 pb.\\ Finally, we can say that in two Higgs doublet models of type III the contribution of the process $\gamma\gamma\to A^0\to W^-H^+ \to e\nu_et\bar b$ is important, even though it is loop mediated. Earlier studies show that the contribution of this kind of process in models such as the MSSM vanishes\cite{asakawa}. The presence of this process would help to differentiate between the MSSM and a more general 2HDM. Besides, evidence of the existence of a charged Higgs would demonstrate the multi-doublet structure of the Higgs sector.\\ \begin{acknowledgments} R.M. acknowledges Banco de la Rep\'ublica for the financial support in the development of this work. \end{acknowledgments}
\section{Introduction} \label{xxsec0} \bigskip The spectral radius (also called the Frobenius-Perron dimension) of a matrix is an elementary and extremely useful invariant in linear algebra, combinatorics, topology, probability and statistics. For instance, one can classify all finite simple connected graphs by applying the spectral radius to their adjacency matrices \cite{DG}. The Frobenius-Perron dimension of an object in a semisimple finite tensor (or fusion) category was introduced by Etingof-Nikshych-Ostrik in 2005 \cite{ENO} (see also \cite{EG, EGO, N}). Since then it has become an extremely useful invariant in the study of fusion categories and representations of semisimple (weak and/or quasi-)Hopf algebras. In 2017, the Frobenius-Perron dimension of an endofunctor of a category was introduced by the authors in \cite{CG1}. It can be viewed as a generalization of the Frobenius-Perron dimension of an object in a fusion category introduced by Etingof-Nikshych-Ostrik \cite{ENO}. It was shown in \cite{CG1, CG2, ZZ} that the Frobenius-Perron dimension has strong connections with the representation type of a category. To gain a better understanding of the Frobenius-Perron dimension of an endofunctor, Wicks \cite{W} calculated the Frobenius-Perron dimension of the representation category of a modified $ADE$ bound quiver algebra with arrows in a certain direction. It was shown that the Frobenius-Perron dimension of this category is equal to the maximum number of loops at a vertex, and the question was raised of what happens if the directions of the arrows are changed. In this paper, we study the Frobenius-Perron theory of the representation categories of bound quiver algebras containing loops, and find a way to calculate the Frobenius-Perron dimension of these algebras when they satisfy the commutativity condition of loops. In particular, we give the Frobenius-Perron dimension of bound quiver algebras containing loops which have representation-directed or canonical algebras as their quotient algebras. As an application, we prove that the Frobenius-Perron dimension of the representation category of a modified $ADE$ bound quiver algebra is equal to the maximum number of loops at a vertex no matter which directions of the arrows we choose, which gives an explicit answer to the question asked in \cite{W}. Finally, we calculate the Frobenius-Perron dimension of the representation category of a polynomial algebra and prove that it is equal to the number of indeterminates, which is precisely the number of loops in the quiver. Therefore, we show that there also exist infinite dimensional algebras whose Frobenius-Perron dimension equals the maximal number of loops. \subsection{Conventions} \label{xxsec0.8} \begin{enumerate} \item[(1)] Throughout let $\Bbbk$ be an algebraically closed field, and let everything be over $\Bbbk$. \item[(2)] Usually $Q$ denotes a finite connected quiver. \item[(3)] If $A$ is an algebra over the base field $\Bbbk$, then mod $A$ denotes the category of finite dimensional left $A$-modules. \end{enumerate} \bigskip The paper is organized as follows. In Section 1, we introduce the background and summarize the main work of this paper. In Section 2, we review the definition of the Frobenius-Perron dimension of a $\Bbbk$-linear category. In Section 3, we study the loop-extended algebras (see Definition \ref{def3.1}) and describe the properties of the extension spaces over the representation categories of these algebras.
In Section 4, we find a way to obtain the Frobenius-Perron dimension of loop-extended algebras of representation-directed algebras, which include $ADE$ quiver algebras as special cases. In Section 5, we study the Frobenius-Perron dimension of a tube. In Section 6, we calculate the Frobenius-Perron dimension of loop-extended algebras of canonical algebras and give some examples. In Section 7, we give the Frobenius-Perron dimension of the representation categories of polynomial algebras.

The following two theorems are the main results of this paper; they are proved in Theorem \ref{xxthm3.2} and Theorem \ref{thm6.3}, respectively.

\begin{theorem}
\label{thm1.1}
Let $A=\Bbbk Q/\mathcal{I}$ be the bound quiver algebra of a finite quiver $Q$, where $\mathcal{I}$ is an admissible ideal satisfying the commutativity condition of loops, and let $B=A/J$ be the quotient algebra, where $J$ is generated by all the loops in $Q$. The following hold.

$(1)$ If $M,N$ are two $B$-modules with $\Hom_B(M,N)=0$, then
\[ \Ext_A^1(M,N)\cong\Ext_B^1(M,N). \]

$(2)$ If $M$ is a brick in mod $B$ which is not simple, then
\[ \Ext_A^1(M,M)\cong\Ext_B^1(M,M). \]
\end{theorem}

\begin{theorem}
Keep the notation as in Theorem \ref{thm1.1}. The following hold.

$(1)$ If $B$ is representation-directed, then
\[ \fpdim({\rm mod}\ A)=\max_{P\in Q_0}N_P, \]
where $N_P$ is the number of loops at $P$.

$(2)$ If $B$ is a canonical algebra of type $ADE$, then
\[ \fpdim({\rm mod}\ A)\in [n_{max},n_{max}+1), \]
where $n_{max}$ is the maximum of the numbers of loops at vertices of the quiver.
\end{theorem}

\section{Preliminaries}
\label{xxsec1}

\subsection{$\Bbbk$-linear categories}
\label{xxsec1.2}
If ${\mathcal C}$ is a $\Bbbk$-linear category, then $\Hom_{\mathcal C}(M,N)$ is a $\Bbbk$-module for all objects $M,N$ in ${\mathcal C}$. If ${\mathcal C}$ is also abelian, then $\Ext^i_{\mathcal C}(M,N)$ are $\Bbbk$-modules for all $i\geq 0$. Let $\dim$ be the $\Bbbk$-vector space dimension. Throughout the rest of the paper, let ${\mathcal C}$ denote a $\Bbbk$-linear category. A functor between two $\Bbbk$-linear categories is assumed to preserve the $\Bbbk$-linear structure. For simplicity, $\dim(A,B)$ stands for $\dim \Hom_{\mathcal C}(A,B)$ for any objects $A$ and $B$ in ${\mathcal C}$.

The set of finite subsets of nonzero objects in ${\mathcal C}$ is denoted by $\Phi$, and the set of subsets of $n$ nonzero objects in ${\mathcal C}$ is denoted by $\Phi_n$ for each $n\geq 1$. It is clear that $\Phi=\bigcup_{n\geq 1} \Phi_n$. We do not consider the empty set as an element of $\Phi$.

\begin{definition}\cite[Definition 1.2]{CG1}
\label{xxdef2.1}
Let ${\mathcal C}$ be a $\Bbbk$-linear abelian category and let $\phi:=\{X_1, X_2, \cdots,X_n\}$ be a finite subset of nonzero objects in ${\mathcal C}$, namely, $\phi\in \Phi_n$.
\begin{enumerate}
\item[(1)]
The {\it adjacency matrix} of $\phi$ is defined to be
$$A(\phi):=(a_{ij})_{n\times n}, \quad {\text{where}}\;\; a_{ij}:=\dim\Ext^1_{\mathcal C}(X_i, X_j) \;\;\forall i,j.$$
\item[(2)]
An object $M$ in ${\mathcal C}$ is called a {\it brick} if
\begin{equation}
\notag
\Hom_{\mathcal C}(M,M)=\Bbbk.
\end{equation}
\item[(3)]
$\phi\in \Phi$ is called a {\it brick set} if each $X_i$ is a brick and
$$\dim(X_i, X_j)=\delta_{ij}$$
for all $1\leq i,j\leq n$. The set of brick $n$-object subsets is denoted by $\Phi_{n,b}$. We write $\Phi_{b}=\bigcup_{n\geq 1} \Phi_{n,b}$.
\end{enumerate}
\end{definition}

Let $A$ be an $n\times n$-matrix over the complex numbers ${\mathbb C}$.
The {\it spectral radius} of $A$ is defined to be
$$\rho(A):=\max\{ |r_1|, |r_2|, \cdots, |r_n|\}\quad \in {\mathbb R},$$
where $\{r_1,r_2,\cdots, r_n\}$ is the complete multi-set of eigenvalues of $A$.

\begin{definition}\cite[Definition 2.3]{CG1}
\label{xxdef2.3}
Retain the notation as in Definition \ref{xxdef2.1}, and use $\Phi_{b}$ as the set of testing objects.
\begin{enumerate}
\item[(1)]
The {\it $n$th Frobenius-Perron dimension} of $\mathcal{C}$ is defined to be
$$\fpdim^n (\mathcal{C}):=\sup_{\phi\in \Phi_{n,b}}\{\rho(A(\phi))\}.$$
If $\Phi_{n,b}$ is empty, then by convention $\fpdim^n(\mathcal{C})=0$.
\item[(2)]
The {\it Frobenius-Perron dimension} of $\mathcal{C}$ is defined to be
$$\fpdim (\mathcal{C}):=\sup_n \{\fpdim^n(\mathcal{C})\} =\sup_{\phi\in \Phi_{b}} \{\rho(A(\phi)) \}.$$
\end{enumerate}
\end{definition}

\subsection{Representations of bound quivers}
Let $Q$ be a finite connected quiver and let $\mathcal{I}$ be an admissible ideal of $\Bbbk Q$. Then $(Q,\mathcal{I})$ is called a {\it bound quiver} and $A=\Bbbk Q/\mathcal{I}$ is the {\it bound quiver algebra} of $Q$ with respect to $\mathcal{I}$. A {\it representation} of $(Q,\mathcal{I})$ is a tuple $M:=(M_a,M_\alpha)_{a\in Q_0,\alpha\in Q_1}$ satisfying:

$(R1)$ to each point $a$, $M_a$ is a finite dimensional $\Bbbk$-vector space;

$(R2)$ to each arrow $\alpha:a\rightarrow b$, $M_\alpha$ is a $\Bbbk$-linear map from $M_a$ to $M_b$;

$(R3)$ if $\sum\limits_{i=1}^m \lambda_i\alpha_{i,1}\cdots\alpha_{i,n_i}$ belongs to $\mathcal{I}$, where $\lambda_i\in \Bbbk$ (not all zero), then $\sum\limits_{i=1}^m \lambda_i M_{\alpha_{i,1}}\cdots M_{\alpha_{i,n_i}}=0$.

Assume $M=(M_a,M_\alpha)$ and $N=(N_a,N_\alpha)$ are two representations of $(Q,\mathcal{I})$. A morphism $f:M\rightarrow N$ from $M$ to $N$ is a tuple $f=(f_a: M_{a}\rightarrow N_{a})_{a\in Q_0}$ of $\Bbbk$-linear maps such that $f_b\circ M_\alpha=N_\alpha\circ f_a$ holds for each arrow $\alpha:a\rightarrow b$. Denote by rep$(Q,\mathcal{I})$ the category of representations of $(Q,\mathcal{I})$. The following theorem is from \cite{ASS}.

\begin{theorem} \cite[Ch.\uppercase\expandafter{\romannumeral3}, Theorem 1.6] {ASS}
Let $(Q,\mathcal{I})$ be a bound quiver and let $A=\Bbbk Q/\mathcal{I}$ be the bound quiver algebra of $Q$ with respect to $\mathcal{I}$. There exists a $\Bbbk$-linear equivalence of categories
\[ F:{\rm mod}\ A\Tilde{\longrightarrow} {\rm rep}(Q,\mathcal{I}). \]
\end{theorem}

\section{Representation category of a bound quiver containing loops}

\begin{definition}
\label{def3.1}
Let $A=\Bbbk Q/\mathcal{I}$ be the bound quiver algebra of a finite quiver $Q$, where $\mathcal{I}$ is an admissible ideal. We say $\mathcal{I}$ \emph{satisfies the commutativity condition of loops} if the following conditions hold:

$(a)$ any product $\gamma\alpha$, where $\gamma$ is a loop and $\alpha$ is an arrow but not a loop, belongs to $\mathcal{I}$;

$(b)$ any product $\beta\gamma$, where $\gamma$ is a loop and $\beta$ is an arrow but not a loop, belongs to $\mathcal{I}$;

$(c)$ for any two loops $\gamma_1,\gamma_2$ based at the same vertex, $\gamma_1\gamma_2-\gamma_2\gamma_1$ belongs to $\mathcal{I}$.

In this case, let $J$ be the ideal of $A$ generated by all the loops. The quotient algebra $B:=A/J$ is called the {\it loop-reduced algebra} of $A$, and $A$ is called a {\it loop-extended algebra} of $B$.
\end{definition}

In this paper, we only consider quivers satisfying the commutativity condition of loops. We will show that the algebras of such quivers have many useful properties.
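As an informal numerical aside (not part of the formal development; the matrix below is chosen by hand for illustration), the spectral radius appearing in Definition \ref{xxdef2.3} is straightforward to evaluate with standard software. The following Python sketch computes $\rho(A(\phi))$ for a hypothetical two-object brick set whose adjacency matrix has the shape $\begin{pmatrix} 2&1\\1&0 \end{pmatrix}$, a shape that reappears in the examples of later sections.

\begin{verbatim}
import numpy as np

def spectral_radius(A):
    """Spectral radius rho(A): the largest absolute value of an eigenvalue."""
    return max(abs(ev) for ev in np.linalg.eigvals(np.asarray(A, dtype=float)))

# Adjacency matrix of a hypothetical two-object brick set {X_1, X_2} with
# dim Ext^1(X_1, X_1) = 2 (two loops) and one extension in each direction.
A = [[2, 1],
     [1, 0]]
print(spectral_radius(A))  # (2 + sqrt(2^2 + 4)) / 2 = 1 + sqrt(2) ~ 2.4142
\end{verbatim}

The output agrees with the closed form $\frac{N+\sqrt{N^2+4}}{2}$ for a matrix $\begin{pmatrix} N&1\\1&0 \end{pmatrix}$ with $N=2$, namely $1+\sqrt{2}$.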
The following proposition is from \cite{W}; we present a categorical proof here.

\begin{proposition}(\cite[Proposition 3.4]{W})
\label{xxcor2.2}
Let $A=\Bbbk Q/\mathcal{I}$ be the bound quiver algebra of $Q$, where $\mathcal{I}$ is an admissible ideal of $\Bbbk Q$ satisfying the commutativity condition of loops. Assume $B$ is the loop-reduced algebra of $A$. Then there is a one-to-one correspondence between the isomorphism classes below:
\[ \{\text{brick}\ A\text{-modules}\} \leftrightarrow\{\text{brick}\ B\text{-modules}\}. \]
Moreover, for any two brick $B$-modules $M,N$, there exists a natural isomorphism
\[ \Hom_A(M,N)\cong\Hom_{B}(M,N). \]
\end{proposition}

\begin{proof}
Since $B$ is a quotient algebra of $A$, we have $\Hom_{A}(M,M)\cong\Hom_{B}(M,M)=\Bbbk$ for any brick $B$-module $M$. So \{brick $B$-modules\} is a subset of \{brick $A$-modules\}. Conversely, if there exists a brick $A$-module $N$ that does not belong to the set \{brick $B$-modules\}, then we can find a vertex $P$ in $Q$ and a loop $\gamma_0$ at $P$ such that $N_{\gamma_0}\ne0$. Consider the map $f: N\rightarrow N$ defined as follows:
\[ f_R=\begin{cases} 0, & R\in Q_{0}\backslash \{P\},\\ N_{\gamma_0}, & R=P. \end{cases} \]
It is not hard to verify that $f$ is an endomorphism of $N$, because $\mathcal{I}$ satisfies the commutativity condition of loops. Since $N_{\gamma_0}$ is nilpotent ($\mathcal{I}$ is admissible), $f$ is linearly independent of $1_N$, which implies that $N$ is not a brick, contradicting the assumption. Therefore, there is a one-to-one correspondence between the isomorphism classes \{{\rm{brick}}$\ A$-{\rm{modules}}\} and \{{\rm{brick}}$ \ B$-{\rm{modules}}\}.

Furthermore, by viewing brick $B$-modules $M,N$ as brick $A$-modules, every $B$-module morphism from $M$ to $N$ can be seen as an $A$-module morphism. Conversely, an $A$-module morphism $f:M\rightarrow N$ is well-defined as a $B$-module morphism. Thus, we have a natural isomorphism
\[ \Hom_A(M,N)\cong\Hom_{B}(M,N). \]
\end{proof}

Proposition \ref{xxcor2.2} shows that, to calculate the Frobenius-Perron dimension of a bound quiver algebra satisfying the commutativity condition, we need only consider the brick sets after removing the loops. The remaining problem is how the extension spaces change when we remove the loops.

\begin{theorem}
\label{xxthm2.3}
Keep the notation as in Proposition \ref{xxcor2.2}. Then the following hold.

$(1)$ If $M,N$ are two $B$-modules with $\Hom_B(M,N)=0$, then
\[ \Ext_A^1(M,N)\cong\Ext_B^1(M,N). \]

$(2)$ If $M$ is a brick in mod $B$ and $M$ is not simple, then
\[ \Ext_A^1(M,M)\cong\Ext_B^1(M,M). \]
\end{theorem}

\begin{proof}
$(1)$ Since mod $B$ is a full subcategory of mod $A$, each element of $\Ext_B^1(M,N)$ can be viewed as an element of $\Ext_A^1(M,N)$. If $\Ext_A^1(M,N)\ncong\Ext_B^1(M,N)$, then there exists an element $\eta$ that belongs to $\Ext_A^1(M,N)$ but not to $\Ext_B^1(M,N)$. Note that every element of $\Ext_A^1(M,N)$ can be represented by a short exact sequence; $\eta$ corresponds to an exact sequence
\[ \eta: \ \ 0\longrightarrow N \stackrel{\varphi}{\longrightarrow} L\stackrel{\psi}{\longrightarrow} M\longrightarrow 0, \]
where $L$ is in mod $A$ but not in mod $B$. This means that there exists a loop $\gamma$ such that $L_{\gamma}$ is a nonzero linear map.
Assume that $\gamma$ is located at the vertex $P$, that the non-loop arrows with target $P$ are $\alpha_i$ ($i=1,\cdots,m$), that the non-loop arrows with source $P$ are $\beta_i$ ($i=1,\cdots,n$), and that the loops at $P$ besides $\gamma$ are $\gamma_i$ ($i=1,\cdots,l$). We emphasize that this assumption includes the cases where there is no non-loop arrow with target $P$, no non-loop arrow with source $P$, or no loop at $P$ besides $\gamma$. So the local part at $P$ is as follows:
\[ \begin{tikzpicture} \node (1) at (0,0) {$P$}; \node (2) at (-2,1) {$S_1$}; \node (3) at (-2,0){}; \node (4) at (-2,-1){$S_m$}; \node (5) at (2,1){$T_1$}; \node (6) at (2,0) {}; \node (7) at (2,-1) {$T_n$}; \draw[->] (2) --node[above ]{$\alpha_1$} (1); \draw[->][dashed] (3) --node{} (1); \draw[->] (4) --node[below]{$\alpha_m$} (1); \draw[<-] (5) --node[above right]{$\beta_1$} (1); \draw[<-][dashed] (6) --node{} (1); \draw[<-] (7) --node[below right]{$\beta_n$} (1); \draw [->] (0.2,0.2) arc (-75:255:0.5)node[above]{$\gamma_1$}; \draw [->][dashed] (0.25,0.2) arc (-75:255:0.7); \draw [->] (-0.2,-0.2) arc (105:425:0.5)node[below]{$\gamma_l$}; \end{tikzpicture} \]
Therefore, for each $\beta\in\{\beta_1,\beta_2,\cdots,\beta_n\}$, we have the following commutative diagram with exact rows
\[ \begin{tikzpicture} \node (1) at (-3,0) {$M_P$}; \node (2) at (0,0) {$L_P$}; \node (3) at (3,0){$N_P$}; \node (4) at (-3,-2){$M_T$}; \node (5) at (0,-2){$L_T$}; \node (6) at (3,-2) {$N_T$}; \draw[->] (1) --node[above ]{$\varphi_P$} (2); \draw[->] (2) --node[above ]{$\psi_P$} (3); \draw[->] (4) --node[above]{$\varphi_T$} (5); \draw[->] (5) --node[above ]{$\psi_T$} (6); \draw[->] (1) --node[left]{$M_\beta$} (4); \draw[->] (2) --node[left]{$L_\beta$} (5); \draw[->] (3) --node[left]{$N_\beta$} (6); \draw [->] (-2.8,0.2) arc (-75:255:0.5)node[above]{$M_\gamma$}; \draw [->] (0.2,0.2) arc (-75:255:0.5)node[above]{$L_\gamma$}; \draw [->] (3.2,0.2) arc (-75:255:0.5)node[above]{$N_\gamma$}; \end{tikzpicture} \]
where $T$ is the target of $\beta$ and $M_\gamma=N_\gamma=0$. Since, as vector spaces, $L_P\cong M_P\oplus N_P$ and $L_T\cong M_T\oplus N_T$, we may set
\[ \varphi_P=\begin{pmatrix} 1\\0 \end{pmatrix},\psi_P=(0,1), \]
\[ \varphi_T=\begin{pmatrix} 1\\0 \end{pmatrix},\psi_T=(0,1). \]
Suppose that
\[ L_\gamma=\begin{pmatrix} f_{11}&f_{12} \\ f_{21}&f_{22} \end{pmatrix}, \qquad L_\beta=\begin{pmatrix} g_{11}&g_{12} \\ g_{21}&g_{22} \end{pmatrix}. \]
Since $L_\gamma\varphi_P=\varphi_PM_\gamma=0$, we get
\[ \begin{pmatrix} f_{11}&f_{12} \\ f_{21}&f_{22} \end{pmatrix}\begin{pmatrix} 1\\0 \end{pmatrix}=\begin{pmatrix} f_{11}\\f_{21} \end{pmatrix}=0. \]
Similarly, we have $f_{22}=0$. So $L_\gamma=\begin{pmatrix} 0&f_{12} \\ 0&0 \end{pmatrix}$. Since $L_\beta L_\gamma=0$, we get
\[ \begin{pmatrix} g_{11}&g_{12} \\ g_{21}&g_{22} \end{pmatrix}\begin{pmatrix} 0&f_{12} \\ 0&0 \end{pmatrix}=\begin{pmatrix} 0&g_{11}f_{12} \\ 0&g_{21}f_{12} \end{pmatrix}=0. \]
Also, by $L_\beta \varphi_P=\varphi_T M_\beta$, we get
\[ \begin{pmatrix} g_{11}&g_{12} \\ g_{21}&g_{22} \end{pmatrix}\begin{pmatrix} 1\\0 \end{pmatrix}=\begin{pmatrix} 1\\0 \end{pmatrix}M_\beta. \]
Thus $g_{11}=M_\beta$ and $g_{21}=0$. It follows that $M_\beta f_{12}=0$.
Dually, for each $\alpha\in\{\alpha_1,\alpha_2,\cdots,\alpha_m\}$, we have the following commutative diagram with exact rows
\[ \begin{tikzpicture} \node (1) at (-3,0) {$M_S$}; \node (2) at (0,0) {$L_S$}; \node (3) at (3,0){$N_S$}; \node (4) at (-3,-2){$M_P$}; \node (5) at (0,-2){$L_P$}; \node (6) at (3,-2) {$N_P$}; \draw[->] (1) --node[above ]{$\varphi_S$} (2); \draw[->] (2) --node[above ]{$\psi_S$} (3); \draw[->] (4) --node[above]{$\varphi_P$} (5); \draw[->] (5) --node[above ]{$\psi_P$} (6); \draw[->] (1) --node[left]{$M_\alpha$} (4); \draw[->] (2) --node[left]{$L_\alpha$} (5); \draw[->] (3) --node[left]{$N_\alpha$} (6); \draw [->] (-0.2,-2.2) arc (105:425:0.5)node[below]{$L_\gamma$}; \draw [->] (-3.2,-2.2) arc (105:425:0.5)node[below]{$M_\gamma$}; \draw [->] (2.8,-2.2) arc (105:425:0.5)node[below]{$N_\gamma$}; \end{tikzpicture} \]
where $S$ is the source of $\alpha$. By a similar argument as above, we have $f_{12}N_\alpha=0$. Now we can construct a nonzero homomorphism $\theta:N\rightarrow M$ such that $\theta_P=f_{12}$ and $\theta_{P'}=0$ for each $P'\in Q_{0}\backslash \{P\}$. This contradicts the condition $\Hom_B(M,N)=0$. Therefore $\Ext_A^1(M,N)\cong\Ext_B^1(M,N)$.

$(2)$ If $\Ext_A^1(M,M)\ncong\Ext_B^1(M,M)$, then by a similar argument to $(1)$ we can construct a nonzero homomorphism $\theta$ from $M$ to $M$. Since $M$ is not a simple module, there exist at least two vertices $P_1,P_2$ such that $M_{P_1},M_{P_2}\ne0$. But there is only one vertex $P_0$ with $\theta_{P_0}\ne0$, so $\theta$ is not an isomorphism. This means that $\theta$ and $1_{M}$ are linearly independent. It follows that $\dim_{\Bbbk}\Hom_B(M,M)\ge2$, contradicting the condition that $M$ is a brick. Thus, $\Ext_A^1(M,M)\cong\Ext_B^1(M,M)$.
\end{proof}

\begin{remark}
For a simple $B$-module $S_P$ at a vertex $P$ in $Q_0$, by \cite[Ch.\uppercase\expandafter{\romannumeral3}, Lemma 2.12] {ASS}, the value of $\dim\Ext^1_A(S_P,S_P)$ is equal to the number of loops at $P$.
\end{remark}

\section{Loop-extended algebras of representation-directed algebras}

Let $A$ be an algebra. Recall that a {\it path} in ${\rm mod}\ A$ is a sequence
\[ M_0\xrightarrow{f_1}M_1\xrightarrow{f_2}M_2\rightarrow\cdots\rightarrow M_{t-1}\xrightarrow{f_t}M_t, \]
where $t\ge1$, $M_0,M_1,\cdots,M_t$ are indecomposable $A$-modules and $f_1,\cdots,f_t$ are non-zero non-isomorphism homomorphisms. A path in ${\rm mod}\ A$ is called a {\it cycle} if its source module $M_0$ is isomorphic to its target module $M_t$. An indecomposable $A$-module that lies on no cycle in ${\rm mod}\ A$ is called a {\it directing module}. An algebra is called {\it representation-directed} if every indecomposable $A$-module is directing.

\begin{theorem}
\label{xxthm3.2}
Let $A=\Bbbk Q/\mathcal{I}$ be a bound quiver algebra for some finite quiver $Q$, where $\mathcal{I}$ is an admissible ideal satisfying the commutativity condition of loops. Assume $B$ is the loop-reduced algebra of $A$. If $B$ is representation-directed, then we have
\[ \fpdim({\rm mod} \ A)=\max_{P\in Q_0}N_P, \]
where $N_P$ denotes the number of loops at $P$.
\end{theorem}

\begin{proof}
First we prove that for every brick set $\phi=\{M_i\}_{i=1}^n$ of ${\rm mod}$ $B$, the adjacency matrix is a strictly upper triangular matrix; that is to say, there exists a permutation $\sigma$ of $\{1,\cdots,n\}$ such that for $i\le j$,
\begin{equation}
\label{E1.1.1}
\Hom_B(M_{\sigma(i)},\tau M_{\sigma(j)})=0.
\end{equation}
We argue by induction on $n$. If $n=1$, there is only one element $M_1$ in $\phi$.
If $\Hom_B(M_1,\tau M_1)\ne 0$, then there exists a non-zero non-isomorphism homomorphism $f:M_1\rightarrow \tau M_1$. Note that the Auslander-Reiten sequence $\tau M_1\rightarrow \oplus_{i=1}^k N_i\rightarrow M_1$ gives a path $\tau M_1\rightarrow N_1\rightarrow M_1$, so we get a cycle $M_1\rightarrow\tau M_1\rightarrow N_1\rightarrow M_1$, which contradicts the assumption that $B$ is a representation-directed algebra.

Assume that (\ref{E1.1.1}) holds for $n=k-1$. When $n=k$, there exists $i_0$ such that $\Hom_B(M_{i_0},\tau M_j)=0$ for $j=1,2,\cdots, n$. Otherwise, for each $i$ there exists $j$ such that $\Hom_B(M_i,\tau M_j)\ne0$, and we can get a path
\[ M_{i_1}\rightarrow \tau M_{i_2}\rightarrow N_{i_2}\rightarrow M_{i_2}\rightarrow \tau M_{i_3}\rightarrow \cdots \]
Since the brick set is finite, there exist $s\ne t$ such that $i_s=i_t$, and we then get a cycle, which contradicts the assumption that $B$ is a representation-directed algebra. Define $\sigma_1$ to be the permutation exchanging $i_0$ and $1$ and fixing the other numbers. By the induction hypothesis, we can define a permutation $\sigma_2$ such that $\Hom_B(M_{\sigma_2\sigma_1(i)},\tau M_{\sigma_2\sigma_1(j)})=0$ for $1<\sigma_1(i)\le\sigma_1(j)$. Letting $\sigma=\sigma_2 \sigma_1$, we get what we need.

Now let us turn to mod $A$. By Proposition \ref{xxcor2.2},
\[ \{\text{brick}\ A\text{-modules}\} \leftrightarrow\{\text{brick}\ B\text{-modules}\}.\]
For a brick set $\phi$, by Theorem \ref{xxthm2.3}$(1)$ we know that the adjacency matrix $A(\phi_{A})$ of $\phi$ in mod $A$ coincides with the adjacency matrix $A(\phi_{B})$ of $\phi$ in mod $B$ except possibly on the diagonal. So $A(\phi_{A})$ is an upper triangular matrix. Assume $M\in\phi$. By Theorem \ref{xxthm2.3}$(2)$, we have
\[ \Ext^1_A(M,M)=0 \ {\rm{if}}\ M \rm{\ is \ not \ simple}, \]
and
\[ \dim\Ext^1_A(M,M)=N_P \ {\rm{for}} \ M=S_P, \]
where $S_P$ denotes the simple module at the vertex $P$ and $N_P$ denotes the number of loops at $P$. Thus a diagonal entry of $A(\phi_{A})$ is nonzero exactly when the corresponding module is simple and there are loops at the corresponding vertex. Since the eigenvalues of an upper triangular matrix are its diagonal entries, the spectral radius of the adjacency matrix of a brick set in mod $A$ is the maximum of the numbers of loops at vertices, that is,
\[ \fpdim({\rm mod}\ A)=\max_{P\in Q_0}N_P. \]
\end{proof}

We now give some examples.

\begin{example}
Define the quiver $Q'$ as follows.
\[ \begin{tikzpicture} \node (1) at (0,0) {1}; \node (2) at (1.5,1) {2}; \node (3) at (1.5,-1){3}; \node (4) at (3,0){4}; \draw[->] (2) --node[above ]{$\alpha$} (1); \draw[->] (3) --node[below ]{$\gamma$} (1); \draw[->] (4) --node[above]{$\beta$} (2); \draw[->] (4) --node[below ]{$\delta$} (3); \end{tikzpicture} \]
Let $Q$ be the quiver formed from $Q'$ by adding $N_i$ loops to each vertex $i$. Let $A=\Bbbk Q/\langle\mathcal{I}\cup \{\alpha\beta\}\rangle$, where $\mathcal{I}$ is the admissible ideal satisfying conditions (a), (b), (c) in Definition \ref{def3.1}, and let $B$ be the loop-reduced algebra of $A$. We can draw the Auslander-Reiten quiver of $B$ as follows.
\[ \begin{tikzpicture} \node (1) at (0+8,.3) {1}; \node (2) at (0+8,-.3) {0}; \node (3) at (.3+8,0){0}; \node (4) at (-.3+8,0){0}; \node (1) at (0,.3) {1}; \node (2) at (0,-.3) {0}; \node (3) at (.3,0){0}; \node (4) at (-.3,0){0}; \node (1) at (2,2.3) {1}; \node (2) at (2,1.7) {1}; \node (3) at (2.3,0+2){1}; \node (4) at (1.7,0+2){1}; \node (1) at (0+4,.3+4) {2}; \node (2) at (0+4,-.3+4) {1}; \node (3) at (.3+4,0+4){1}; \node (4) at (-.3+4,0+4){1}; \node (1) at (2+4,2.3+4) {1}; \node (2) at (2+4,1.7+4) {1}; \node (3) at (2.3+4,0+2+4){1}; \node (4) at (1.7+4,0+2+4){0}; \node (1) at (2+6,2.3+6) {1}; \node (2) at (2+6,1.7+6) {0}; \node (3) at (2.3+6,0+2+6){1}; \node (4) at (1.7+6,0+2+6){0}; \node (1) at (0,.3+4) {0}; \node (2) at (0,-.3+4) {1}; \node (3) at (.3,0+4){0}; \node (4) at (-.3,0+4){1}; \node (1) at (2,2.3+4) {1}; \node (2) at (2,1.7+4) {1}; \node (3) at (2.3,0+2+4){0}; \node (4) at (1.7,0+2+4){1}; \node (1) at (2+6-4,2.3+6) {0}; \node (2) at (2+6-4,1.7+6) {1}; \node (3) at (2.3+6-4,0+2+6){0}; \node (4) at (1.7+6-4,0+2+6){0}; \node (1) at (2-4,2.3+4) {0}; \node (2) at (2-4,1.7+4) {0}; \node (3) at (2.3-4,0+2+4){0}; \node (4) at (1.7-4,0+2+4){1}; \node (1) at (2+6-4-4,2.3+6) {1}; \node (2) at (2+6-4-4,1.7+6) {0}; \node (3) at (2.3+6-4-4,0+2+6){0}; \node (4) at (1.7+6-4-4,0+2+6){1}; \node (1) at (0+4,.3) {0}; \node (2) at (0+4,-.3) {1}; \node (3) at (.3+4,0){1}; \node (4) at (-.3+4,0){1}; \node (1) at (2+4,2.3) {1}; \node (2) at (2+4,1.7) {1}; \node (3) at (2.3+4,0+2){1}; \node (4) at (1.7+4,0+2){1}; \node (1) at (0+4+4,.3+4) {0}; \node (2) at (0+4+4,-.3+4) {1}; \node (3) at (.3+4+4,0+4){1}; \node (4) at (-.3+4+4,0+4){0}; \node (1) at (2+4+4,2.3+4) {0}; \node (2) at (2+4+4,1.7+4) {0}; \node (3) at (2.3+4+4,0+2+4){1}; \node (4) at (1.7+4+4,0+2+4){0}; \draw[->] (0.5,0.5) -- (1.5,1.5); \draw[->] (2.5,2.5) -- (3.5,3.5); \draw[->] (4.5,4.5) -- (5.5,5.5); \draw[->] (6.5,6.5) -- (7.5,7.5); \draw[->] (4.5,0.5) -- (5.5,1.5); \draw[->] (6.5,2.5) -- (7.5,3.5); \draw[->] (8.5,4.5) -- (9.5,5.5); \draw[->] (0.5,4.5) -- (1.5,5.5); \draw[->] (2.5,6.5) -- (3.5,7.5); \draw[->] (-1.5,6.5) -- (-0.5,7.5); \draw[->] (0.5,7.5) -- (1.5,6.5); \draw[->] (2.5,5.5) -- (3.5,4.5); \draw[->] (4.5,3.5) -- (5.5,2.5); \draw[->] (6.5,1.5) -- (7.5,0.5); \draw[->] (4.5,7.5) -- (5.5,6.5); \draw[->] (6.5,5.5) -- (7.5,4.5); \draw[->] (-1.5,5.5) -- (-.5,4.5); \draw[->] (0.5,3.5) -- (1.5,2.5); \draw[->] (2.5,1.5) -- (3.5,0.5); \draw[->] (8.5,7.5) -- (9.5,6.5); \draw[dashed](0,1) -- (0,-1); \draw[dashed](8,1) -- (8,-1); \draw(2,2.3) -- (1.7,2); \draw(6,2.3) -- (6.3,2); \end{tikzpicture} \]
It is easy to see that there is a brick set, consisting of the two modules at the bottom of the Auslander-Reiten quiver, whose corresponding matrix is not an upper triangular matrix. Explicitly, the corresponding matrix is $\begin{pmatrix} N_2 & 1 \\ 1 & 0 \end{pmatrix}$, whose spectral radius is $\dfrac{N_2+\sqrt{N_2^2+4}}{2}$. Notice that the corresponding matrices of all the other brick sets in ${\rm mod}\ A$ are upper triangular matrices. By the same argument as in Theorem \ref{xxthm3.2}, we have
\[\fpdim({\rm mod}\ A)=\max\{\dfrac{N_2+\sqrt{N_2^2+4}}{2},N_1,N_3,N_4\}.\]
\end{example}

\begin{remark}
For any $ADE$ quiver algebra, whatever directions of the arrows we choose, the algebra is representation-directed. Hence all modified $ADE$ bounded quiver algebras are loop-extended algebras of representation-directed algebras.
Therefore, by Theorem \ref{xxthm3.2}, the Frobenius-Perron dimension of the representation category of these algebras is equal to the maximum number of loops at a vertex, which answers the question in \cite{W}.
\end{remark}

\section{Some properties of tubes}

In the next two sections, we consider the loop-extended algebras of canonical algebras of type $ADE$ and calculate the Frobenius-Perron dimension of the corresponding representation categories. Since there are several tubes in the representation category of a canonical algebra, we first give some properties of tubes; compare \cite[Section 2.2]{CG2}.

Recall that in \cite{HJ}, a matrix $A$ is called {\it irreducible} if there is no permutation matrix $P$ such that
\[ P^TAP=\begin{pmatrix} B&C\\ 0&D \end{pmatrix}, \]
where $B$ and $D$ are nonzero matrices.

\begin{lemma}
\label{lem5.1}
Let $\mathcal{T}$ be a tube, let $\phi$ be a brick set in $\mathcal{T}$, and let $A$ be the adjacency matrix of $\phi$. If $A$ is irreducible, then $A$ is similar to the matrix
$$\begin{pmatrix} 0&1\\ &0&\ddots\\ &&\ddots&1\\ 1&&&0 \end{pmatrix}.$$
\end{lemma}

\begin{proof}
Assume $\phi=\{M_1,M_2,\cdots,M_n\}$. For each $M_i$, there exists some $j\in\{1,\cdots,n\}$ such that $\Ext_{\mathcal{T}}(M_i,M_j)\ne0$, since $A$ is irreducible. If there existed $j'$ ($j'\ne j$) such that $\Ext_{\mathcal{T}}(M_i,M_{j'})\ne0$, then by Serre duality we would have
\[ \Hom_{\mathcal{T}}(M_{j'},\tau M_{i})\cong \Ext_{\mathcal{T}}(M_i,M_{j'})\ne0. \]
Combined with
\[ \Hom_{\mathcal{T}}(M_{j'},M_{i})=0, \]
this shows that $M_{j'}$ lies on a coray of $\mathcal{T}$. Similarly, we have
\[ \Hom_{\mathcal{T}}(M_{j},\tau M_{i})\cong \Ext_{\mathcal{T}}(M_i,M_{j})\ne0 \]
and
\[ \Hom_{\mathcal{T}}(M_{j},M_{i})=0. \]
Then $M_j$ and $M_{j'}$ must lie on the same coray of $\mathcal{T}$, which contradicts the fact that $\phi$ is a brick set. Therefore, $M_j$ is the unique object in $\phi$ such that $\Ext_{\mathcal{T}}(M_i,M_j)\ne0$.

We claim that $\Ext_{\mathcal{T}}(M_i,M_j)=\Bbbk$. In fact, if $\dim \Ext_{\mathcal{T}}(M_i,M_j)\ge2$, then we would have
\[ \dim \Hom_{\mathcal{T}}(M_j,M_j)\ge \dim \Hom_{\mathcal{T}}(M_j,\tau M_i)=\dim \Ext_{\mathcal{T}}(M_i,M_j)\ge2, \]
which is impossible. By a similar argument, there also exists a unique object $M_k\in\phi$ such that $\Ext_{\mathcal{T}}(M_k,M_i)\ne0$, and we have $\Ext_{\mathcal{T}}(M_k,M_i)=\Bbbk$. Since $A$ is irreducible, it follows that $A$ is similar to the matrix
$$\begin{pmatrix} 0&1\\ &0&\ddots\\ &&\ddots&1\\ 1&&&0 \end{pmatrix}.$$
\end{proof}

\begin{corollary}
\label{cor5.2}
Let $\mathcal{T}$ be a tube. Then we have
\[ \fpdim(\mathcal{T})=1. \]
\end{corollary}

\begin{proof}
Let $\phi$ be a brick set in $\mathcal{T}$ and write $\phi=\phi_1\cup\cdots\cup\phi_s$ such that the adjacency matrix of $\phi$ is
\[ \begin{pmatrix} A_1&*&\cdots&*\\ 0&A_2&\cdots&*\\ \vdots&\vdots&&\vdots\\ 0&0&\cdots&A_s \end{pmatrix}, \]
where $A_i$ is the adjacency matrix of $\phi_i$ and each $A_i$ is irreducible, $i=1,\cdots,s$. Then each $A_i$ has the form
\[ \begin{pmatrix} 0&1\\ &0&\ddots\\ &&\ddots&1\\ 1&&&0 \end{pmatrix} \text{ , } (0) \text{ or } (1). \]
So $\rho(A)=\max\{\rho(A_1),\cdots,\rho(A_s)\}\le1$, and it follows that $\fpdim(\mathcal{T})\le 1$. On the other hand, the set consisting of all the simple objects in $\mathcal{T}$ is a brick set, and its adjacency matrix is
\[ \begin{pmatrix} 0&1\\ &0&\ddots\\ &&\ddots&1\\ 1&&&0 \end{pmatrix}, \]
whose spectral radius is 1.
Therefore, $\fpdim(\mathcal{T})= 1.$
\end{proof}

\begin{lemma}
\label{lem5.3}
Let $\mathcal{T}$ be a tube. If there exist two different simple objects $S_1,S_2$ in $\mathcal{T}$ with $\Ext(S_1,S_2)=0$, then we can find an object $M$ such that
\[ \Ext(S_1,M)=\Bbbk,\quad \Ext(M,S_2)=\Bbbk, \]
and $\{S_1,S_2,M\}$ is a brick set.
\end{lemma}

\begin{proof}
We have
\[ \Hom(-,\tau S_1)\cong\Ext(S_1,-)=\Bbbk, \quad \Hom(\tau^{-1} S_2,-)\cong\Ext(-,S_2)=\Bbbk, \]
which give a ray starting at $\tau^{-1} S_2$ and a coray ending at $\tau S_1$ on the tube. Let $M$ be the intersection of the ray and the coray; by Serre duality, we get what we need.
\end{proof}

\bigskip

\section{Loop-extended algebras of canonical algebras of type ADE}

Before calculating the Frobenius-Perron dimension of this type of algebras, we need some lemmas.

\begin{lemma}
\label{lem6.1}
Let $f(x)=(x-n_1)^{r_1}(x-n_2)^{r_2}\cdots (x-n_s)^{r_s}\in \mathbb{R}[x]$, where $r_1,\cdots,r_s\in \mathbb{Z}_{>0}$ and $n_1,\cdots,n_s\in \mathbb{R}$ satisfy $0\le n_1< n_2< \cdots <n_{s-1}\le n_s-1$. Let $\{x_i\}_{i=1}^{r_1+\cdots+r_s}$ be the complete set of complex roots of $f(x)-1$, and denote the value $\max\{|x_i|\}_{i=1}^{r_1+\cdots+r_s}$ by $\rho(f(x)-1)$. Then the following hold.

$(1)$ $f(x)-1$ has a unique real root $x_0$ in $(n_s,n_s+1]$, and $\rho(f(x)-1)=x_0$.

$(2)$ Assume $0\le m\le n_s-1$; then $\rho((x-m)f(x)-1)<\rho(f(x)-1)$.

$(3)$ Assume $s>1$; then $\rho((x-n_s)f(x)-1)>\rho(f(x)-1)$.
\end{lemma}

\begin{proof}
$(1)$ Since $f(n_s)-1<0$ and $f(n_s+1)-1\ge0$, $f(x)-1$ has a real root $x_0$ in $(n_s,n_s+1]$. Moreover,
\begin{align*} f'(x)&=r_1(x-n_1)^{r_1-1}[(x-n_2)^{r_2}\cdots (x-n_{s-1})^{r_{s-1}}\cdot (x-n_s)^{r_s}]\\& +\cdots+[(x-n_1)^{r_1}(x-n_2)^{r_2}\cdots (x-n_{s-1})^{r_{s-1}}]\cdot r_s(x-n_s)^{r_s-1}, \end{align*}
so $f'(x)>0$ on $(n_s,n_s+1]$, and $x_0$ is the unique real root in $(n_s,n_s+1]$. If $f(x)-1$ had a complex root $z_0$ with $|z_0|>x_0$, then $|z_0-n_j|>|x_0-n_j|$ for $j=1,\cdots,s$. Therefore $|f(z_0)|>|f(x_0)|=1$, which contradicts the condition $f(z_0)=1$. Hence $\rho(f(x)-1)=x_0$.

$(2)$ By $(1)$, we need only consider the real root in $(n_s,n_s+1]$. Denote $(x-m)f(x)$ by $g(x)$. Then $f(x)<g(x)$ always holds on $(n_s,n_s+1]$, so $g(x)$ reaches $1$ before $f(x)$ does on $(n_s,n_s+1]$. Hence $\rho(g(x)-1)<\rho(f(x)-1)$.

$(3)$ The root of $f(x)-1$ in $(n_s,n_s+1]$ is equal to the root of $f(x)^{\frac{1}{n_s}}-1$ in $(n_s,n_s+1]$. Set $h(x)=(x-n_1)^{r_1}(x-n_2)^{r_2}\cdots (x-n_{s-1})^{r_{s-1}}$. Then
$$f(x)^{\frac{1}{n_s}}=h(x)^{\frac{1}{n_s}}\cdot (x-n_s), \qquad [(x-n_s)f(x)]^{\frac{1}{n_s+1}}=h(x)^{\frac{1}{n_s+1}}\cdot (x-n_s).$$
Hence we have
\[h(x)^{\frac{1}{n_s}}\cdot (x-n_s)>h(x)^{\frac{1}{n_s+1}}\cdot (x-n_s),\]
so $h(x)^{\frac{1}{n_s}}\cdot (x-n_s)$ reaches $1$ before $h(x)^{\frac{1}{n_s+1}}\cdot (x-n_s)$ does on $(n_s,n_s+1]$. Therefore $\rho((x-n_s)f(x)-1)>\rho(f(x)-1)$.
\end{proof}

\begin{example}
Let $f(x)=x(x-2)-1$, $g(x)=x(x-1)(x-2)-1$ and $h(x)=x(x-2)^2-1$. Since $g(x)+1=(x-1)(f(x)+1)$, by Lemma \ref{lem6.1}(2) we have $\rho(g(x))<\rho(f(x))$. We also have $h(x)+1=(x-2)(f(x)+1)$, so it follows that $\rho(h(x))>\rho(f(x))$ by Lemma \ref{lem6.1}(3). Therefore we get
\[ \rho(g(x))<\rho(f(x))<\rho(h(x)).
\]
\end{example}

\bigskip

Recall that a bound quiver algebra $B$ is called a canonical algebra of type $ADE$ if $B$ is one of the following algebras:

$(1)$ $A(n,m)=\Bbbk Q_A$ for $n,m\ge0$;
\[\begin{tikzpicture} \node (1) at (-1,0) {$Q_A:$}; \node (1) at (0,0) {0}; \node (2) at (1,1) {(1,1)}; \node (n) at (3,1) {(1,n)}; \node (n+1) at (1,-1) {(2,1)}; \node (n+m) at (3,-1) {(2,m)}; \node (n+m+1) at (4,0) {$0'$}; \draw[->] (n+m+1) --node[right]{} (n+m); \draw[->][dashed] (n+m) -- (n+1); \draw[->] (n+1) --node[left]{} (1); \draw[->] (n+m+1) --node[right]{} (n); \draw[->] (2) --node[left]{} (1); \draw[->][dashed] (n) -- (2); \end{tikzpicture}\]

$(2)$ $D_{\mathcal{I}}(n)=\Bbbk Q_D/\mathcal{I}$ for $n\ge4$, where $\mathcal{I}$ is the admissible ideal of $\Bbbk Q_D$ generated by $\alpha_1\cdots\alpha_{n-2}+\beta_1\beta_2+\gamma_1\gamma_2$;
\[ \begin{tikzpicture} \node (1) at (-1,0) {$Q_D:$}; \node (1) at (0,0) {0}; \node (2) at (1,2) {(1,1)}; \node (n-2) at (3,2) {(1,n-3)}; \node (n-1) at (2,1) {(2,1)}; \node (n) at (2,-1) {(3,1)}; \node (n+1) at (4,0) {$0'$}; \draw[->] (n+1) --node[above ]{} (n-1); \draw[->] (n+1) --node[below right]{} (n); \draw[->] (n-1) --node[above ]{} (1); \draw[->] (n) --node[below left]{} (1); \draw[->] (n+1) --node[above right]{} (n-2); \draw[->] (2) --node[above left]{} (1); \draw[->][dashed] (n-2) -- (2); \end{tikzpicture}\]

$(3)$ $E_{\mathcal{I}}(n)=\Bbbk Q_E/\mathcal{I}$ for $n=6,7,8$, where $\mathcal{I}$ is the admissible ideal of $\Bbbk Q_E$ generated by $\alpha_1\cdots\alpha_{n-3}+\beta_1\beta_2+\gamma_1\gamma_2\gamma_3$;
\[ \begin{tikzpicture} \node (1) at (-1,0) {$Q_E:$}; \node (1) at (0,0) {0}; \node (2) at (1,1.5) {(1,1)}; \node (n-3) at (3,1.5) {(1,n-4)}; \node (n-2) at (2,0) {(2,1)}; \node (n-1) at (1,-1.5) {(3,1)}; \node (n) at (3,-1.5) {(3,2)}; \node (n+1) at (4,0) {$0'$}; \draw[->] (n+1) --node[above]{} (n-2); \draw[->] (n+1) --node[below right]{} (n); \draw[->] (n-2) --node[above ]{} (1); \draw[->] (n) --node[below left]{} (n-1); \draw[->] (n-1) --node[below left]{} (1); \draw[->] (n+1) --node[above right]{} (n-3); \draw[->] (2) --node[above left]{} (1); \draw[->][dashed] (n-3) -- (2); \end{tikzpicture} \]

We call $A(n,m)$ for $n\ge1,m\ge0$, $D_{\mathcal{I}}(n)$ for $n\ge4$ and $E_{\mathcal{I}}(n)$ for $n=6,7,8$ the canonical algebras of type $A$, $D$ and $E$, respectively.

Let $A=\Bbbk Q/\mathcal{I}$ be a bound quiver algebra for some finite quiver $Q$, where $\mathcal{I}$ is an admissible ideal satisfying the commutativity condition of loops and the loop-reduced algebra $B$ of $A$ is a canonical algebra of type $ADE$. Denote the number of loops at the sink vertex $0$ by $n_0$, the number of loops at the source vertex $0'$ by $n_{0'}$, and the number of loops at any other vertex $(i,j)$ by $n_{ij}$. Let $S=\{n_{ij}\}_{i,j}\cup\{n_0,n_{0'}\}$ and $n_{max}=\max S$.

\begin{theorem}
\label{thm6.3}
Let $A$ be the algebra defined as above. Then the following hold.

$(1)$ If $n_{max}=\max\{n_0,n_{0'}\}$ and $n_{max}>\max\{n_{ij}\}_{i,j}$, then we have
\[ \fpdim({\rm mod}\ A)=n_{max}. \]

$(2)$ If $n_{max}=n_{i_0j_0}$ for some $(i_0,j_0)$, and $n_{max}>n_{ij}$ for all $(i,j)\ne (i_0,j_0)$, then we have
\[ \fpdim({\rm mod}\ A)=\dfrac{n_{max}+\sqrt{4+n_{max}^2}}{2}. \]

$(3)$ If
$\begin{cases} n_{ij}=n_{max}, & (i,j)=(i_0,j_0),(i_0,j_0+1),\cdots,(i_0,j_0+s-1),\\ n_{ij}<n_{max}, & \text{otherwise}, \end{cases}$\\
then we get
\[ \fpdim({\rm mod}\ A)=\rho(x(x-n_{max})^s-1).
\]
\end{theorem}

\begin{proof}
By Proposition \ref{xxcor2.2}, we only need to consider the brick sets in mod $B$. Each indecomposable object of mod $B$ lies in $\mathcal{P}$, $\mathcal{R}$ or $\mathcal{I}$, where $\mathcal{P}$, $\mathcal{R}$, $\mathcal{I}$ are, respectively, the postprojective, regular and preinjective components of mod $B$, and there are no non-zero morphisms from $\mathcal{R}$ to $\mathcal{P}$, from $\mathcal{I}$ to $\mathcal{P}$, or from $\mathcal{I}$ to $\mathcal{R}$. In other words, mod $B$=$\mathcal{P}\vee \mathcal{R}\vee\mathcal{I}$. Hence $\Ext^1_A(\mathcal{R},\mathcal{P})=0$ and $\Ext_A^1(\mathcal{I},\mathcal{R})=0$, and we have
\[ \fpdim( {\rm mod}\ A)=\max\{\fpdim{\mathcal{P}},\fpdim{\mathcal{R}},\fpdim{\mathcal{I}}\}.\]
In addition, any adjacency matrix of a brick set in $\mathcal{P}$ or $\mathcal{I}$ is an upper triangular matrix (after a suitable ordering of the bricks), and a diagonal entry is nonzero if and only if the corresponding module is simple and there are loops at the corresponding vertex. By Lemma \ref{lem5.1}, any irreducible adjacency matrix of a brick set in $\mathcal{R}$ has the following form (after a suitable ordering of the bricks)
\[ \begin{pmatrix} d_1&1\\ &d_2&\ddots\\ &&\ddots&1\\ 1&&&d_N \end{pmatrix}, \]
and a diagonal entry is nonzero if and only if the corresponding module is simple and there are loops at the corresponding vertex.

$(1)$ If $n_{max}=n_0$, then $\dim \Ext^1_A(S_{0},S_{0})=n_0$, where $S_{0}$ is the simple module at the sink vertex $0$. Hence we have $\fpdim{\mathcal{I}}=n_{max}$. On the other hand, we have $\fpdim{\mathcal{R}}<\max\{n_{ij}\}_{i,j}+1\le n_{max}$ and $\fpdim{\mathcal{P}}\le n_{max}$. Therefore, $\fpdim({{\rm mod}\ A})=n_{max}.$ Similarly, if $n_{max}=n_{0'}$, then $\fpdim({{\rm mod}\ A})=n_{max}.$

$(2)$ Suppose $n_{max}=n_{i_0j_0}$. First, we have $\fpdim{\mathcal{P}}\le n_{max}$ and $\fpdim{\mathcal{I}}\le n_{max}$. Then we consider the brick sets in $\mathcal{R}$. According to Lemma \ref{lem5.3}, we can find a module $M$ such that $\{M,S_{i_0,j_0}\}$ is a brick set with adjacency matrix
$$ \begin{pmatrix} 0&1\\1&n_{max} \end{pmatrix}, $$
whose spectral radius is $\dfrac{n_{max}+\sqrt{4+n_{max}^2}}{2}$. For any other irreducible adjacency matrix of a brick set in $\mathcal{R}$ containing $S_{i_0,j_0}$, the set of diagonal elements, denoted by $Diag$, must contain $\{0,n_{max}\}$, and $\max(Diag-\{n_{max},0\})\le n_{max}-1$. Therefore, according to Lemma \ref{lem6.1}$(2)$, its spectral radius is less than $\dfrac{n_{max}+\sqrt{4+n_{max}^2}}{2}$. Hence we have $\fpdim({{\rm mod}\ A})=\dfrac{n_{max}+\sqrt{4+n_{max}^2}}{2}$.

$(3)$ According to Lemma \ref{lem5.3}, we can find a brick module $M$ forming a brick set $\{M,S_{i_0,j_0},\cdots,S_{i_0,j_0+s-1}\}$ with adjacency matrix
$$ \begin{pmatrix} 0&1\\ &n_{max}&\ddots\\ &&\ddots&1\\ 1&&&n_{max} \end{pmatrix}. $$
By Lemma \ref{lem6.1}$(3)$, we know
\[ \rho(x(x-n_{max})^s-1)>\rho(x(x-n_{max})^{s-1}-1)>\cdots>\rho(x(x-n_{max})-1). \]
It follows that $\fpdim({{\rm mod}\ A})=\rho(x(x-n_{max})^s-1)$.
\end{proof}

\bigskip

For a set $\{n_{ij}\}_{i,j}$ not satisfying conditions $(1)$, $(2)$ or $(3)$ of Theorem \ref{thm6.3}, there is no obvious relationship among the spectral radii: sometimes the numbers of loops smaller than $n_{max}$ matter, and sometimes they do not. In such cases, we have to obtain the Frobenius-Perron dimension by direct calculation. We present some examples as follows.

\begin{example}
Keep the notation as in Theorem \ref{thm6.3}, and denote the quiver of $B$ by $Q_0$.
(1) Suppose the quiver of $B$ is the following.
\[ \begin{tikzpicture} \node (1) at (0,0) {1}; \node (2) at (1,1.5) {2}; \node (3) at (3,1.5){3}; \node (4) at (5,1.5){4}; \node (5) at (6,0){5}; \draw[->] (2) --node[above ]{} (1); \draw[->] (3) --node[below ]{} (2); \draw[->] (4) --node[above]{} (3); \draw[->] (5) --node[below ]{} (4); \draw[->] (5) --node[below ]{} (1); \end{tikzpicture} \]
The numbers of loops at the vertices $1,2,3,4,5$ are $n_1=0,n_2=2,n_3=1,n_4=2,n_5=0$. We find that
\[ \rho((x-2)^2(x-1)x-1)>\rho((x-2)x-1), \]
so we get
\[ \fpdim({\rm mod}\ A)=\rho((x-2)^2(x-1)x-1). \]
In this case, the Frobenius-Perron dimension is determined not only by $n_{max}$, but also by the numbers of loops at the other vertices.

(2) Suppose the quiver of $B$ is the following.
\[ \begin{tikzpicture} \node (1) at (0,0) {1}; \node (2) at (1,1.5) {2}; \node (3) at (3,1.5){3}; \node (4) at (5,1.5){4}; \node (5) at (7,1.5){5}; \node (6) at (8,0){6}; \draw[->] (2) --node[above ]{} (1); \draw[->] (3) --node[below ]{} (2); \draw[->] (4) --node[above]{} (3); \draw[->] (5) --node[below ]{} (4); \draw[->] (6) --node[below ]{} (5); \draw[->] (6) --node[below ]{} (1); \end{tikzpicture} \]
The numbers of loops at the vertices $1,2,3,4,5,6$ are $n_1=0,n_2=3,n_3=1,n_4=1,n_5=3,n_6=0$. We find that
\[ \rho((x-3)^2(x-1)^2x-1)<\rho((x-3)x-1), \]
so we get
\[ \fpdim({\rm mod}\ A)=\rho((x-3)x-1). \]
In this case, the Frobenius-Perron dimension is determined by $n_{max}$ alone.

(3) Suppose the quiver of $B$ is the same as in (2), and the numbers of loops at the vertices $1,2,3,4,5,6$ are $n_1=0,n_2=6,n_3=4,n_4=5,n_5=6,n_6=0$. We find that
\[ \rho((x-6)^2(x-5)(x-4)x-1)>\rho((x-6)x-1), \]
so we get
\[ \fpdim({\rm mod}\ A)=\rho((x-6)^2(x-5)(x-4)x-1). \]
In this case, note that the ratios of the numbers of loops at vertices $3,4$ to $n_{max}=n_2=n_5$ are larger than in $(2)$. Although the quiver of $B$ is the same as in $(2)$, the Frobenius-Perron dimension is determined not only by $n_{max}$, but also by the numbers of loops at the other vertices.
\end{example}

\bigskip

\section{Polynomial algebras}

In this section, we focus on the polynomial algebras $\Bbbk[x_1,x_2,\cdots,x_r]$, which are infinite-dimensional algebras, and calculate the Frobenius-Perron dimension of their representation categories. It is known that the Auslander-Reiten quiver of the category of finite dimensional representations of $\Bbbk[x]$ consists of tubes, so the Frobenius-Perron dimension is $1$ by Corollary \ref{cor5.2}. Next, we consider the representation category of $\Bbbk[x,y]$, which is denoted by $\mathcal{REP}$ in this section. A representation in $\mathcal{REP}$ can be written as
\[ \begin{tikzpicture} \node (1) at (0,0) {$V$}; \node (1) at (.8,.6) {$A_1$}; \node (1) at (0.7,-.6) {$A_2$}; \draw [->] (0.2,0.2) arc (-75:255:0.5)node[above]{}; \draw [->] (-0.2,-0.2) arc (105:425:0.5)node[below]{}; \end{tikzpicture} \]
where $V$ is a $\Bbbk$-linear space of dimension $n$ and $A_1,A_2$ are $n\times n$ matrices satisfying $A_1A_2=A_2A_1$. For simplicity, we denote this representation by $(V,A_1,A_2)$.

\begin{lemma}
\label{lem7.1}
A representation in $\mathcal{REP}$ is a brick if and only if it is one-dimensional. In addition, there is no nonzero morphism between two different bricks, so any set of bricks constitutes a brick set.
\end{lemma}

\begin{proof}
Obviously, a one-dimensional representation is a brick. Conversely, let $M=(V,A_1,A_2)$ be a brick in $\mathcal{REP}$ and let $B$ be an endomorphism of $M$, i.e.
\[ \begin{tikzpicture} \node (2) at (0,0) {$V$}; \node (1) at (.8,.6) {$A_1$}; \node (1) at (0.7,-.6) {$A_2$}; \draw [->] (0.2,0.2) arc (-75:255:0.5)node[above]{}; \draw [->] (-0.2,-0.2) arc (105:425:0.5)node[below]{}; \node (3) at (1+2,0) {$V$}; \node (1) at (1+2.8,.6) {$A_1$}; \node (1) at (3.7,-.6) {$A_2$}; \draw [->] (3.2,0.2) arc (-75:255:0.5)node[above]{}; \draw [->] (1+2-0.2,-0.2) arc (105:425:0.5)node[below]{}; \draw[->] (2) --node[above ]{$B$} (3); \end{tikzpicture} \]
Then we can take $B=\mathrm{Id}$, $B=A_1$ or $B=A_2$. Since $M$ is a brick, it follows that $A_1=\lambda \mathrm{Id}$ and $A_2=\mu \mathrm{Id}$ for some $\lambda,\mu\in\Bbbk$. In this case, any decomposition $V=V_1\oplus V_2$ yields a decomposition $M=M_1 \oplus M_2$. Therefore, $V$ must be one-dimensional.

For two bricks $(\Bbbk,\lambda_1,\mu_1),(\Bbbk,\lambda_2,\mu_2)\in \mathcal{REP}$ ($\lambda_1,\lambda_2,\mu_1,\mu_2\in\Bbbk$), assume there is a morphism as follows
\[ \begin{tikzpicture} \node (2) at (0,0) {$\Bbbk$}; \node (1) at (.8,.6) {$\lambda_1$}; \node (1) at (0.7,-.6) {$\mu_1$}; \draw [->] (0.2,0.2) arc (-75:255:0.5)node[above]{}; \draw [->] (-0.2,-0.2) arc (105:425:0.5)node[below]{}; \node (3) at (1+2,0) {$\Bbbk$}; \node (1) at (1+2.8,.6) {$\lambda_2$}; \node (1) at (3.7,-.6) {$\mu_2$}; \draw [->] (3.2,0.2) arc (-75:255:0.5)node[above]{}; \draw [->] (1+2-0.2,-0.2) arc (105:425:0.5)node[below]{}; \draw[->] (2) --node[above ]{$\nu$} (3); \end{tikzpicture} \]
with $\nu\ne 0$. Then, by the commutation relations, we get $\lambda_1=\lambda_2$ and $\mu_1=\mu_2$.
\end{proof}

We now calculate the extension spaces between bricks.

\begin{lemma}
\label{lem7.2}
For two representations $(V,A_1,A_2),(W,B_1,B_2)\in \mathcal{REP}$ and $\lambda,\mu\in\Bbbk$, there is an isomorphism of $\Bbbk$-linear spaces
\[ \Ext^1_\mathcal{REP}((V,A_1,A_2),(W,B_1,B_2))\cong \Ext^1_\mathcal{REP}((V,A'_1,A'_2),(W,B'_1,B'_2)), \]
where $A'_1=A_1+\lambda \mathrm{Id}$, $A'_2=A_2+\mu \mathrm{Id}$, $B'_1=B_1+\lambda \mathrm{Id}$, $B'_2=B_2+\mu \mathrm{Id}$.
\end{lemma}

\begin{proof}
Any element of $\Ext^1_\mathcal{REP}((V,A_1,A_2),(W,B_1,B_2))$ is represented by a short exact sequence as follows (after choosing suitable bases of the $\Bbbk$-linear spaces)
\[ \begin{tikzpicture} \node (-2) at (-2,0) {0}; \node (-1) at (8,0) {0}; \node (2) at (0,0) {$W$}; \draw[->] (-2) --node[above ]{} (2); \node (1) at (.8,.6) {$B_1$}; \node (1) at (0.7,-.6) {$B_2$}; \draw [->] (0.2,0.2) arc (-75:255:0.5)node[above]{}; \draw [->] (-0.2,-0.2) arc (105:425:0.5)node[below]{}; \node (3) at (1+2,0) {$W\oplus V$}; \node (1) at (1+2.8,.6) {$C_1$}; \node (1) at (3.7,-.6) {$C_2$}; \draw [->] (3.2,0.2) arc (-75:255:0.5)node[above]{}; \draw [->] (1+2-0.2,-0.2) arc (105:425:0.5)node[below]{}; \draw[->] (2) --node[above ]{$(1,0)^T$} (3); \node (4) at (3+1+2,0) {$V$}; \draw[->] (4) --node[above ]{} (-1); \node (1) at (1+3+2.8,.6) {$A_1$}; \node (1) at (6.7,-.6) {$A_2$}; \draw [->] (6.2,0.2) arc (-75:255:0.5)node[above]{}; \draw [->] (3+1+2-0.2,-0.2) arc (105:425:0.5)node[below]{}; \draw[->] (3) --node[above ]{$(0,1)$} (4); \end{tikzpicture} \]
satisfying
$$B_1B_2=B_2B_1,\quad C_1C_2=C_2C_1,\quad A_1A_2=A_2A_1,$$
$$(1,0)^T B_1=C_1(1,0)^T,\quad (1,0)^T B_2=C_2(1,0)^T,$$
$$(0,1)C_1=A_1(0,1),\quad (0,1)C_2=A_2(0,1).$$
Then the following is an element of $\Ext^1_\mathcal{REP}((V,A'_1,A'_2),(W,B'_1,B'_2))$:
\[ \begin{tikzpicture} \node (-2) at (-2,0) {0}; \node (-1) at (8,0) {0}; \node (2) at (0,0) {$W$}; \draw[->] (-2) --node[above ]{} (2); \node (1) at (.8,.6) {$B_1+\lambda \mathrm{Id}$}; \node (1) at (0.7,-.6) {$B_2+\mu \mathrm{Id}$}; \draw [->] (0.2,0.2) arc (-75:255:0.5)node[above]{}; \draw [->] (-0.2,-0.2) arc (105:425:0.5)node[below]{}; \node (3) at (1+2,0) {$W\oplus V$}; \node (1) at (1+2.8,.6) {$C_1+\lambda \mathrm{Id}$}; \node (1) at (3.7,-.6) {$C_2+\mu \mathrm{Id}$}; \draw [->] (3.2,0.2) arc (-75:255:0.5)node[above]{}; \draw [->] (1+2-0.2,-0.2) arc (105:425:0.5)node[below]{}; \draw[->] (2) --node[above ]{$(1,0)^T$} (3); \node (4) at (3+1+2,0) {$V$}; \draw[->] (4) --node[above ]{} (-1); \node (1) at (1+3+2.8,.6) {$A_1+\lambda \mathrm{Id}$}; \node (1) at (6.7,-.6) {$A_2+\mu \mathrm{Id}$}; \draw [->] (6.2,0.2) arc (-75:255:0.5)node[above]{}; \draw [->] (3+1+2-0.2,-0.2) arc (105:425:0.5)node[below]{}; \draw[->] (3) --node[above ]{$(0,1)$} (4); \end{tikzpicture} \]
Conversely, each element
\[\begin{tikzpicture} \node (-2) at (-2,0) {0}; \node (-1) at (8,0) {0}; \node (2) at (0,0) {$W$}; \draw[->] (-2) --node[above ]{} (2); \node (1) at (.8,.6) {$B'_1$}; \node (1) at (0.7,-.6) {$B'_2$}; \draw [->] (0.2,0.2) arc (-75:255:0.5)node[above]{}; \draw [->] (-0.2,-0.2) arc (105:425:0.5)node[below]{}; \node (3) at (1+2,0) {$W\oplus V$}; \node (1) at (1+2.8,.6) {$C_1$}; \node (1) at (3.7,-.6) {$C_2$}; \draw [->] (3.2,0.2) arc (-75:255:0.5)node[above]{}; \draw [->] (1+2-0.2,-0.2) arc (105:425:0.5)node[below]{}; \draw[->] (2) --node[above ]{$(1,0)^T$} (3); \node (4) at (3+1+2,0) {$V$}; \draw[->] (4) --node[above ]{} (-1); \node (1) at (1+3+2.8,.6) {$A'_1$}; \node (1) at (6.7,-.6) {$A'_2$}; \draw [->] (6.2,0.2) arc (-75:255:0.5)node[above]{}; \draw [->] (3+1+2-0.2,-0.2) arc (105:425:0.5)node[below]{}; \draw[->] (3) --node[above ]{$(0,1)$} (4); \end{tikzpicture} \]
of $\Ext^1_\mathcal{REP}((V,A'_1,A'_2),(W,B'_1,B'_2))$ corresponds to the element
\[ \begin{tikzpicture} \node (-2) at (-2,0) {0}; \node (-1) at (8,0) {0}; \node (2) at (0,0) {$W$}; \draw[->] (-2) --node[above ]{} (2); \node (1) at (.8,.6) {$B'_1-\lambda \mathrm{Id}$}; \node (1) at (0.7,-.6) {$B'_2-\mu \mathrm{Id}$}; \draw [->] (0.2,0.2) arc (-75:255:0.5)node[above]{}; \draw [->] (-0.2,-0.2) arc (105:425:0.5)node[below]{}; \node (3) at (1+2,0) {$W\oplus V$}; \node (1) at (1+2.8,.6) {$C_1-\lambda \mathrm{Id}$}; \node (1) at (3.7,-.6) {$C_2-\mu \mathrm{Id}$}; \draw [->] (3.2,0.2) arc (-75:255:0.5)node[above]{}; \draw [->] (1+2-0.2,-0.2) arc (105:425:0.5)node[below]{}; \draw[->] (2) --node[above ]{$(1,0)^T$} (3); \node (4) at (3+1+2,0) {$V$}; \draw[->] (4) --node[above ]{} (-1); \node (1) at (1+3+2.8,.6) {$A'_1-\lambda \mathrm{Id}$}; \node (1) at (6.7,-.6) {$A'_2-\mu \mathrm{Id}$}; \draw [->] (6.2,0.2) arc (-75:255:0.5)node[above]{}; \draw [->] (3+1+2-0.2,-0.2) arc (105:425:0.5)node[below]{}; \draw[->] (3) --node[above ]{$(0,1)$} (4); \end{tikzpicture} \]
of $\Ext^1_\mathcal{REP}((V,A_1,A_2),(W,B_1,B_2))$. As these constructions are inverse to each other, we obtain the desired isomorphism.
\end{proof}

\begin{lemma}
\label{lem7.3}
For two bricks $(\Bbbk,\lambda_1,\lambda_2),(\Bbbk,\mu_1,\mu_2)\in \mathcal{REP}$, there is a nontrivial extension between them if and only if $\lambda_1=\mu_1$ and $\lambda_2=\mu_2$. In this case, we have
\[ \dim \Ext_\mathcal{REP}^1((\Bbbk,\lambda_1,\lambda_2),(\Bbbk,\lambda_1,\lambda_2))=2. \]
\end{lemma}

\begin{proof}
Without loss of generality, by Lemma \ref{lem7.2}, we may assume $\lambda_1=\lambda_2=0$. Then $\Ext_\mathcal{REP}^1((\Bbbk,0,0),(\Bbbk,\mu_1,\mu_2))\ne0$ implies that we have
\[ \begin{tikzpicture} \node (2) at (0,0) {$\Bbbk$}; \node (1) at (.8,.6) {$\mu_1$}; \node (1) at (0.7,-.6) {$\mu_2$}; \draw [->] (0.2,0.2) arc (-75:255:0.5)node[above]{}; \draw [->] (-0.2,-0.2) arc (105:425:0.5)node[below]{}; \node (3) at (1+2,0) {$\Bbbk^2$}; \node (1) at (1+2.8,.6) {$C_1$}; \node (1) at (3.7,-.6) {$C_2$}; \draw [->] (3.2,0.2) arc (-75:255:0.5)node[above]{}; \draw [->] (1+2-0.2,-0.2) arc (105:425:0.5)node[below]{}; \draw[->] (2) --node[above ]{$(1,0)^T$} (3); \node (4) at (3+1+2,0) {$\Bbbk$}; \node (1) at (1+3+2.8,.6) {$0$}; \node (1) at (6.7,-.6) {$0$}; \draw [->] (6.2,0.2) arc (-75:255:0.5)node[above]{}; \draw [->] (3+1+2-0.2,-0.2) arc (105:425:0.5)node[below]{}; \draw[->] (3) --node[above ]{$(0,1)$} (4); \node (-2) at (-2,0) {0}; \node (-1) at (8,0) {0}; \draw[->] (-2) --node[above ]{} (2); \draw[->] (4) --node[above ]{} (-1); \end{tikzpicture} \]
Hence $C_1=\begin{pmatrix} \mu_1&\mu_3\\0&0 \end{pmatrix}$ and $C_2=\begin{pmatrix} \mu_2&\mu_4\\0&0 \end{pmatrix}$ for some $\mu_3,\mu_4\in\Bbbk$.
If $\mu_1\ne0$, then applying the base change $\begin{pmatrix} 1&-\mu_3/\mu_1\\0 &1 \end{pmatrix}$ to $\Bbbk^2$, we get
\[ \begin{tikzpicture} \node (2) at (0,0) {$\Bbbk$}; \node (1) at (.8,.6) {$\mu_1$}; \node (1) at (0.7,-.6) {$\mu_2$}; \draw [->] (0.2,0.2) arc (-75:255:0.5)node[above]{}; \draw [->] (-0.2,-0.2) arc (105:425:0.5)node[below]{}; \node (3) at (1+2,0) {$\Bbbk^2$}; \node (1) at (1+2+1,1.3) {$\begin{pmatrix} \mu_1&0\\0&0 \end{pmatrix}$}; \node (1) at (3.7,-1.6) {$\begin{pmatrix} \mu_2&\mu_4-\mu_2 \mu_3/\mu_1\\0&0 \end{pmatrix}$}; \draw [->] (3.2,0.2) arc (-75:255:0.5)node[above]{}; \draw [->] (1+2-0.2,-0.2) arc (105:425:0.5)node[below]{}; \draw[->] (2) --node[above ]{$(1,0)^T$} (3); \node (4) at (3+1+2,0) {$\Bbbk$}; \node (1) at (1+3+2.8,.6) {$0$}; \node (1) at (6.7,-.6) {$0$}; \draw [->] (6.2,0.2) arc (-75:255:0.5)node[above]{}; \draw [->] (3+1+2-0.2,-0.2) arc (105:425:0.5)node[below]{}; \draw[->] (3) --node[above ]{$(0,1)$} (4); \node (-2) at (-2,0) {0}; \node (-1) at (8,0) {0}; \draw[->] (-2) --node[above ]{} (2); \draw[->] (4) --node[above ]{} (-1); \end{tikzpicture} \]
Since $C_1C_2=C_2C_1$, we have $\mu_4-\mu_2 \mu_3/\mu_1=0$, which contradicts $\Ext_\mathcal{REP}^1((\Bbbk,0,0),(\Bbbk,\mu_1,\mu_2))\ne0$. It follows that $\mu_1=0$ and $\mu_2=0$. By Lemma \ref{lem7.2}, we have
$$\Ext_\mathcal{REP}^1((\Bbbk,\lambda_1,\lambda_2),(\Bbbk,\lambda_1,\lambda_2))\cong \Ext_\mathcal{REP}^1((\Bbbk,0,0),(\Bbbk,0,0))\cong \Bbbk^2.$$
\end{proof}

\begin{theorem}
We have
\[ \fpdim(\mathcal{REP})=2. \]
\end{theorem}

\begin{proof}
The conclusion follows directly from Lemma \ref{lem7.1} and Lemma \ref{lem7.3}.
\end{proof}

\begin{corollary}
For the representation category $\mathcal{REP}_n=\mathrm{rep}\ \Bbbk[x_1,x_2,\cdots,x_n]$, we have
\[ \fpdim(\mathcal{REP}_n)=n. \]
\end{corollary}

\noindent {\bf Acknowledgements.}\quad This work is supported by the National Natural Science Foundation of China (Grant Nos. 11971398, 12131018 and 12161141001).
\section{Introduction}\label{s:introduction}
The improvement in computing power and an overwhelming wealth of data have enabled the viability of statistical learning methods, which have achieved unprecedented performance in several fields of science, engineering, and finance. Artificial Intelligence (AI)-based techniques are also being incorporated by federal agencies to make critical infrastructure safe, secure, and robust~\cite{ai_secure}, and healthcare critical infrastructure is no exception. From first-generation rule-based healthcare AIs, the sector is slowly moving towards ML-based methods to investigate, understand, and use complex data patterns for clinical diagnosis~\cite{ai_health}. ML-based healthcare predictions are currently far from having generalizable diagnostic abilities. For example, trained ML models for disease prediction are extremely data dependent and cannot be seamlessly transferred as in the case of image classification tasks. One such disease prediction application, with profound impact, is cancer (tumor) detection. An early and fast diagnosis of cancer (cancer inference) by a trained ML model can save precious time in the prognosis and treatment of a patient.

Precision medicine is the process of tailoring the right diagnosis and treatment for the right person at the right place. One of the biggest components of precision medicine is to incorporate patients' genetic information into diagnosis, treatment, and decision making \cite{precision}. In the context of precision cancer medicine, distinguishability between the genetic mutations of normal and malignant tissues is the crux of cancer genomics. The genetic changes developed during a person's life in malignant tumor cells are called somatic (or acquired) changes and account for more than $90\%$ of cancer cases. Somatic Single-Nucleotide Variations (SNVs) and Copy-Number Variations (CNVs) on protein-coding genes, especially on oncogenes, tumor suppressors and cell cycle regulators, are known to cause tumor formation and progression. However, heterogeneity at various levels makes it difficult to understand precisely which gene is involved in which cancer type. The somatic mutation rate differs across cancer types; even within a single type, this rate differs across patients~\cite{mut_rate}. Conversely, the origins of distinct cancer types have been found to share similarities, making distinguishability even harder~\cite{similar_origin}. It is therefore important to find explainable relationships between somatic mutations, the genes that they affect, and the type of cancer. We explore the problem of cancer genomics using a real-world dataset consisting of more than 2 million CNV and SNV records for 11 different cancer types~\cite{idash20}.

While the use of genomics in cancer detection seems promising for a comprehensive understanding of the disease and its treatment, there are major privacy-related concerns. Genomic data is extremely characterizing and identifying, and the data may pinpoint the exact patient. It is also permanent and cannot be changed like other private data (passwords, credit cards, etc.). A partial leak of genomic data may reveal important information about an individual and may also be used to reconstruct their genome \cite{genome_attack}. A patient waiting for their cancer diagnosis from a server-hosted, state-of-the-art ML-based predictive model should not be subjected to such privacy risks.
The high privacy risk of genomic data leakage calls for the data to remain encrypted throughout inference. Homomorphic Encryption (HE) is an encryption scheme that allows computation on encrypted data, i.e. the encrypted genomic data can be sent to the ML model on the server and the patient can receive the encrypted diagnosis. The first Fully HE (FHE) scheme for arbitrary computation, proposed by Gentry~\cite{gentry}, had prohibitive computational overheads for real-world applications like ML-based inference. Since the non-linear activation functions were the major bottleneck for private ML-based inference, researchers have mainly focused on approximating the non-linear function using the square function \cite{cryptonets}, the first terms of a Taylor expansion \cite{access_imputation}, or piece-wise linearization \cite{minionn}, for fast private inference. A possible solution to this problem is to bypass the non-linear function using small models like logistic regression (as we show in Section \ref{sss:method_approx}). However, the inherent high dimensionality of genomic data makes the encrypted linear operation impractical for privacy-preserving genomics. Using an ML model on the raw cancer dataset would require multiplication of matrices with millions of columns, which is extremely expensive in HE.

The intuitive route towards faster private inference is to reduce the number of computations. This translates to reducing the number of features via feature selection, as can be seen in other HE applications in genomics \cite{access_imputation,ultrafast}. For dimensionality reduction, we develop a feature engineering methodology involving feature (gene) selection and genetic (mutation) information encoding, based on a combination of biological intuition and statistical tests. Naively using statistical scores may result in overfitting, especially for genomic datasets where the number of predictors $f$ is several times larger than the number of samples $|X|$, i.e. $f\gg|X|$. Domain knowledge is increasingly being used for feature engineering in healthcare predictive models \cite{domain_healthcare}, because such methods are not only more interpretable but also integrate years of medical research and case studies. Healthcare ML models need to be interpretable for sustainability \cite{healthcare_explainability}. Therefore, it is preferable to use explainable models (like SVM, logistic regression, etc.) rather than Deep Neural Networks (DNNs), which are essentially a black box. This practically eliminates the automatic feature engineering capabilities of DNNs.

Using our methodology of somatic mutation encoding, we reduce the dimensionality of the task (from over 2 million mutations to 43K features), but the genomic data still remains high-dimensional compared to benchmark ML datasets (like MNIST with an image size of $28\times28=784$ features or CIFAR with an image size of $32\times32\times3=3072$ features). Since our application \textit{needs} several thousands of features for accurate predictions, not only is our time budget completely exhausted by the linear operation, but standard matrix multiplication also does not offer the performance needed. Another drawback of current HE-based implementations of private inference is that they are designed to maximize throughput, computing on thousands of inputs together to improve efficiency cumulatively. However, they suffer in latency, i.e. the algorithms take the same time to compute on just one input as they would on thousands of inputs.
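To make the polynomial-approximation idea above concrete, the following Python sketch (purely illustrative; the interval $[-8,8]$, the polynomial degree, and the grid size are our own arbitrary choices rather than parameters from any cited system) compares the two common HE-friendly surrogates for the sigmoid: a bounded-range least-squares fit and a truncated Taylor expansion.

\begin{verbatim}
import numpy as np

# HE schemes evaluate only additions and multiplications, so the sigmoid
# used by logistic regression must be replaced by a polynomial surrogate.
xs = np.linspace(-8, 8, 1000)          # assumed range of encrypted scores
sigmoid = 1.0 / (1.0 + np.exp(-xs))

# Degree-3 least-squares fit over the whole interval.
coeffs = np.polyfit(xs, sigmoid, deg=3)
lsq = np.polyval(coeffs, xs)

# Degree-3 Taylor expansion of the sigmoid around 0.
taylor = 0.5 + xs / 4 - xs**3 / 48

print("max error, least-squares fit:", np.max(np.abs(sigmoid - lsq)))
print("max error, Taylor expansion :", np.max(np.abs(sigmoid - taylor)))
# The Taylor polynomial is accurate near 0 but diverges toward the
# interval ends, which is why bounded-range fits are often preferred.
\end{verbatim}

The sketch illustrates why the choice of approximation interval matters as much as the polynomial degree when an activation is evaluated under encryption.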
But the real-world application we consider in this work benefits from improved latency, as it is a common use case to have to analyze the genome of only a single patient rather than wait for thousands of patients to have their tests done. In summary, to enable practical real-world private inference, we need the ability to compute on high-dimensional data in the encrypted domain with low latency and high throughput. \section{Related work}\label{s:related} The study of genomic data to understand the changes that led to cancer, or the genomic alterations that happened as a result of cancer, is of utmost importance for initial diagnosis, predicting stage, cancer growth, metastasis, treatment, drug response, and planning a path to recovery. Therefore, there have been several studies focused on the genomic dynamics of individual cancer types. Researchers from TCGA have delved into genomic and molecular characterization studies for several individual cancer types: glioblastoma \cite{glioblastoma}, ovarian carcinoma \cite{ovarian_carcinoma}, lung \cite{lung}, endometrial carcinoma \cite{endometrial}, renal cell carcinoma \cite{renal}, and urothelial carcinoma \cite{urothelial}. In 2012, TCGA launched a pan-cancer dataset collection, i.e., a coherent dataset collection over 12 tumor types, each profiled using 6 different platforms: reverse-phase protein arrays measuring protein and phosphoprotein abundance (RPPA), DNA methylation, microarray-based measurement of copy number, single-nucleotide and structural variant mutations using whole exome sequencing, microRNA sequencing, and RNA sequencing with microarray gene expression analysis~\cite{pan_cancer}. This opened up several new avenues of cancer analysis using genomic data, like the identification of genes that drive carcinogenesis~\cite{driver_genes}, studies on the metastatic nature of cancer types~\cite{metastatis}, and the use of genomics for precision medicine \cite{precision}. All of these studies that focus on detecting the cancer type analyze the genomic data to help develop biological intuition towards understanding a tumor type. Therefore, we also perform the encoding of genomic mutation data based on biological intuition to retain this wealth of semantic information. Prediction of cancer type using machine learning on genomic data has gained interest in the last few years because of the possibility of computation on the huge volume of genomic data. Researchers have aimed at finding information about the cell of origin for all 33 different tumor types \cite{similar_origin}. Jiao et al. propose a deep learning-based framework to predict 24 tumor types of unknown primary site by analyzing somatic passenger mutations. Machine learning has also been used to analyze genomic data for immunotherapy on pan-cancer datasets \cite{inhibition}. Deep learning has also been used for the prediction of tumor type using gene filtering \cite{gdl} and mutation frequency, sparsity reduction, and cluster gene filtering \cite{deep_gene} as pre-processing steps. Another approach to finding and understanding useful features is through auto-encoders, as proposed for individual cancer detection \cite{encoder_liver, encoder_breast} or for the classification of 40 different cancer types, achieving an area under the curve between 0.54 and 0.97 for individual types~\cite{auto}. Our work also revolves around efficient tumor detection, but we observe that a combination of biological intuition and statistical tests is required for higher performance in tumor type detection.
Amidst the growing privacy concerns for sensitive data, private inference has been a topic of interest in recent years. Cryptonets \cite{cryptonets}, one of the earlier research articles on generic HE-based private inference, implemented a DNN by approximating non-linear layers with lower-order polynomial terms. Cryptonets, like many HE algorithms, is also optimized for throughput, predicting class labels for 8192 images together. The performance was improved by several other solutions, with Gazelle~\cite{gazelle}, a low-latency framework, being able to achieve state-of-the-art performance of 800 ms for one inference. HE-based ML/DNN implementations focus on lowering the cost of non-linear functions like ReLUs, as the feature space (and thus the cost of matrix multiplication) is smaller. For example, the largest inputs considered by these solutions are from the CIFAR dataset, with 3K features, whereas our model needs a minimum of 30K features. In this work, we focus on a different problem, where we require a high number of features, high throughput, as well as low latency. Private inference on genomic data is a challenging problem because of the nature of the data. Several challenges in HE-based genome privacy were posed in iDASH (integrating Data for Analysis, Anonymization, and SHaring), like private statistic calculation~\cite{idash15}, privacy-preserving querying~\cite{idash16}, and logistic regression training on a dataset of 18 features and 2 labels~\cite{idash17}, each focusing on a different bottleneck of the genome privacy road-map. iDASH19 and iDASH20 focused on private inference challenges. We observe that rigorous feature engineering is often performed on the original data to reduce the number of features to a bare minimum, from 16K features to 10--40 features \cite{access_imputation,ultrafast}. Authors have performed genomic analysis using several different libraries and types of FHE like CKKS, BFV, and TFHE, using different models like SVM, LR, and shallow DNNs~\cite{ultrafast}, and also using partial homomorphic encryption \cite{access_imputation}. Naturally, as the number of features is reduced by three orders of magnitude, there is a reduction in test accuracy, which is justified as a performance vs.\ accuracy trade-off. For a different application (genome-wide association studies involving a large number of individuals), researchers have also explored approximation techniques in logistic regression algorithms using a semi-parallel implementation \cite{idash18}, improving evaluation time by 30\% ($\approx$ 6 hours on a single node). Our privacy-preserving inference methodology is specifically directed at problems where a large number of features is indeed required. \section{Preliminaries}\label{s:prelim} \subsection{Dataset}\label{ss:dataset} We use the cancer classification dataset from iDASH 2020 competition Task I \cite{idash20} that was collected for private tumor classification. This data is curated from a centralized database, The Cancer Genome Atlas (TCGA)~\cite{TCGA}, using patients from 11 different cancer types. As noted in section \ref{s:related}, several subsets of TCGA resulted in different types of studies. In our work, we study the impact of somatic mutations on the prediction of cancer. Our dataset consists of two types of somatic alteration information (which may be considered as two subsets of features): Single-Nucleotide Variations (SNVs) and Copy Number Variations (CNVs) on protein-coding genes. In the SNV subset, four different characteristics are given for each somatic SNV of a gene.
These characteristics represent the chromosome location, denote whether the mutation is a single-nucleotide polymorphism, and give the effect of the mutation (using two different measures). The effect of the mutation is calculated using the Ensembl Variant Effect Predictor (VEP)~\cite{vep} and is reported in two ways: 1) a mutation can be assigned one of the following categorical values: high, moderate, modifier, or low, followed by a real number denoting the impact of the mutation; 2) a mutation can qualitatively be denoted as tolerated or deleterious, based on the Sorting Intolerant from Tolerant (SIFT) pathogenicity prediction. All of this information reflects the importance of a mutation, i.e., VEP scores help transform an observation of a mutation into its possible impact on the development of the tumor. VEP scores help in developing the biological intuition for our feature engineering methodology, which is required as this subset of SNV features contains 2,044,328 somatic mutation rows. In the copy number subset, each gene for each sample (patient) is given a copy number value depending on whether there has been a change from their parents' genes: 0 for no alteration, 1 or 2 for duplication, and -1 or -2 for deletion from one or both parents, respectively. For each sample, there are 25,128 genes, and thus there are 25,128 features per sample. The dataset comes from 2713 patients belonging to 11 different cancer types. The dataset is unbalanced, with the maximum number of patients belonging to Bronchus and Lung ($23.51\%$). We randomly select $80\%$ of the dataset for training and reserve the rest for testing, maintaining the distribution of the data, i.e., the distribution of training and test data for each label remains the same. \subsection{Homomorphic encryption} Homomorphic Encryption (HE) is a type of encryption that allows for computation on encrypted data without decryption. Let us consider a function $f(\cdot)$ operating on plaintext operands $p_1,p_2$, and the equivalent function $f_{enc}(\cdot)$ operating on the corresponding ciphertexts $c_1,c_2$, such that $c_1 = Enc(p_1)$ and $c_2 = Enc(p_2)$, where $Enc(\cdot)$ is the encryption function. Then, the computation of the function $f(\cdot)$ on the plaintext operands $p_1,p_2$ is the decryption of the computation of the function $f_{enc}$ on the ciphertexts, i.e., using HE, we can say that $f(p_1,p_2)=Dec(f_{enc}(c_1,c_2))$, where $Dec(\cdot)$ is the decryption function. Depending on the type of computation possible in an encryption scheme, there are several types of HE schemes. For linear models with unencrypted weights, Partial Homomorphic Encryption (PHE) schemes like Paillier \cite{Paillier} can be used. Nevertheless, the encryption and decryption operations, which consist of modular exponentiations, hinder the performance of ML models with larger inputs or outputs. In addition, although it is possible to encode several plaintexts into a ciphertext in Paillier for certain applications, the density of plaintexts per ciphertext is much lower than in Somewhat Homomorphic Encryption (SHE) or Fully Homomorphic Encryption (FHE). A better approach comes from using SHE/FHE schemes like BFV (Brakerski/Fan-Vercauteren) \cite{bfv} or CKKS (Cheon-Kim-Kim-Song) \cite{ckks}. CKKS enables fixed-point arithmetic and is the standard choice for ML applications. During computation, CKKS drops the lower bits of the plaintext after each operation, reducing the precision of the result.
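To make the homomorphic property $f(p_1,p_2)=Dec(f_{enc}(c_1,c_2))$ concrete, the following toy sketch uses textbook RSA, which is multiplicatively homomorphic; the parameters are insecure toy values, and this is only an illustration of the property, not one of the schemes used in this work.
\begin{verbatim}
# Toy illustration of f(p1, p2) = Dec(f_enc(c1, c2)) with textbook
# RSA, which is multiplicatively homomorphic. Insecure toy parameters.
p, q, e = 61, 53, 17
N = p * q                          # 3233
d = pow(e, -1, (p - 1) * (q - 1))  # 2753, the private exponent

Enc = lambda m: pow(m, e, N)
Dec = lambda c: pow(c, d, N)

p1, p2 = 7, 6
c1, c2 = Enc(p1), Enc(p2)
f_enc = (c1 * c2) % N          # multiply the ciphertexts...
assert Dec(f_enc) == p1 * p2   # ...decrypts to the plaintext product
\end{verbatim}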
BFV, on the other hand, works on integers (modular arithmetic), where we can emulate fixed-point arithmetic by scaling up double-precision floating-point numbers into integers. As with CKKS, there is a limit to how much precision a BFV ciphertext can provide. However, since BFV computes with modular arithmetic, we can use the Chinese Remainder Theorem (CRT) to break our values into several smaller values, each one under a unique modulus coprime to all the other moduli. Each smaller value is then encrypted under a different plaintext modulus. \section{Threat model} In this work we focus on privacy-preserving inference. In our threat model, the training dataset is public and is comprised of individuals who have agreed to share their data. The Cancer Genome Atlas database is one such example \cite{TCGA}. This database catalogues genomic information, prognosis, and diagnosis, along with personal information like age, gender, race, and ethnicity of the individuals, who are identified by case numbers. We do not aim to protect the training dataset; rather, we use it to build models for cancer prediction. However, when a new patient wants to test for cancer using their genome, their data must be protected. In our threat model, thus, the training is not privacy-preserving, but the inference is private. Fig. \ref{fig:threat_model} summarizes our threat model. \begin{figure} \centering \includegraphics[scale=0.5]{Imgs/threat_model.png} \caption{Our threat model for private inference.} \label{fig:threat_model} \end{figure} \section{Methodology}\label{s:methods} Since computations in the encrypted domain are expensive, private inference on any type of data should prioritize a low number of computations. This translates to a low number of features and smaller ML models (for example, SVM or logistic regression instead of deep networks). Our methodology can be divided into two parts. In the first part (subsection \ref{ss:feature_encoding}), we focus on making our dataset compact by encoding mutations and reducing the number of features with biological intuition and statistical tests. This reduces the number of features from over 2 million to 43K. In the second part (subsection \ref{ss:mat_mul}), we propose a matrix multiplication algorithm particularly tailored towards implementing a faster version of privacy-preserving logistic regression-based cancer inference. \subsection{Somatic mutation encoding}\label{ss:feature_encoding} For the cancer prediction to be correlated to both CNV and SNV information, the CNV and SNV features can be concatenated together and used to train an ML model. The CNV subset has 25K features, but the SNV subset corresponds to over 2 million mutation rows, which may equate to over 2 million features if each of these mutations is analyzed separately. The concatenated dataset, thus, consists of more than 2 million features. Therefore, instead of representing a mutation as a feature, we represent a gene as a feature with the encoded mutation as the value of that feature. However, this approach faces the challenge of compacting mutation information of over 2 million data points into 25K data points (corresponding to 25K genes). \subsubsection{Step 1: Gene/feature selection using SNV frequency} Previous studies~\cite{deep_gene, gdl} on cancer detection using somatic mutations observed that the frequency of mutation of a gene correlates with higher prediction accuracy. The genes with a higher number of mutations are more likely to be involved in the development of a cancerous tumor.
This makes sense because if there are more mutations in a gene, then the expression of that gene is likely disrupted by those mutations. There is also a correlation between the frequency of a mutation in a patient cohort and the likelihood of observing the cancer in that cohort. This also makes sense because if a mutation is observed in a large number of patients with the same cancer type diagnosis, then that mutation is likely either correlated with or causal to that cancer type. Therefore, as the first step of feature selection, we choose genes with a higher number of mutations. However, each cancer type corresponds to higher mutation counts in different genes. We first rank the genes based on their SNV frequency in the patient cohort for each cancer type, also taking into consideration genes with more than one SNV on them. We then combine the ranked genes from different cancer types, finally selecting the top 10,000 genes with the highest SNV frequency for each cancer type as our features. This step also ensures the removal of genes with low SNV frequency and reduces the dimensionality of our feature space. With all cancer types combined, we have a total of 18,606 genes. \subsubsection{Step 2: Encoding scheme} Encoding techniques, both for the feature vectors and the target variables, aid training towards better accuracy. Efficient encoding techniques have been proven to result in better performance in genomic classification tasks as well \cite{encoding}. As mentioned in the dataset section, each SNV on a gene is represented by multiple characteristics. This information needs to be meaningfully merged with the CNV information of each gene. We explore the following encoding schemes based on biological intuition, as explained below: \begin{enumerate} \item Using the presence of an SNV on a gene: The genes selected using frequency are merged for all cancer types. In this encoding, we aim to study whether a particular gene (mutation) is highly correlated with a cancer type. We assign a binary value $\{0,1\}$ to each gene of a patient to denote the presence or absence of one or more SNVs. This allows us to encode SNV information in a categorical manner. This binary value per gene is used as a feature. \item Using the type and impact of an SNV: As described in subsection \ref{ss:dataset}, the impact of a mutation of a gene is calculated using VEP. For each SNV in the dataset, we have two types of measure: 1) a qualitative measure indicating whether an SNV has a tolerated or deleterious effect; 2) a quantitative real-valued measure ($s_{i,j}$) representing the strength of the impact and the qualitative confidence with which the mutation can be attributed to the diagnosis. Here, $s_{i,j}$ is the strength of the $i^{th}$ SNV in patient $j$. The categories of each measure are assigned equidistant real values between 0 and 1. We also experimented with different ranges but did not find any improvement in test accuracy. We believe that both of these measures are useful in classifying the tumor type. We first encode the first measure by assigning the values [deleterious, deleterious (low confidence), tolerated (low confidence), tolerated] as [1.0, 0.75, 0.5, 0.25] for each $m_{i,j}$. The reasoning behind this encoding is that a detrimental effect is given the highest value in cancer prediction, and similarly, a tolerated effect is given the lowest feature value. Either of the effects, when estimated with lower confidence, is given a lower value. Here, $m_{i,j}$ is the effect of the $i^{th}$ SNV in patient $j$.
We then combine this encoding with the second measure as $m_{i,j} \times s_{i,j}$. The final effect value of the SNV impact of a gene is the summation of the impacts of all the SNVs on that gene, if there is more than one SNV. The resulting value per gene is used as a feature. In addition to the strength ($s$) of an SNV, the qualitative confidence of the effect of an SNV on a gene is also given as a categorical variable with values [high, moderate, modifier, low]. We encode them as [1, 0.4, 0.7, 0.1] since, intuitively, we want to assign a higher importance to a high mutation effect. A gene with no mutation is assigned 0 effect. Similar to the previous feature, for a gene with multiple SNVs at different locations, the values are added to represent the effective value of all the SNVs in a gene. The resulting value per gene is used as a feature (Figure 1). \item Using the CNV of a gene: The CNV of a gene is represented as an integer between -2 and 2, indicating whether both, one, or no copy of the gene is deleted or duplicated. We scale these values to integers between 0 and 4, as statistical feature selection methods (for example, the $\chi^2$ test) often require positive values. Again, the resulting value per gene is used as a feature. Overall, the top 10,000 genes per cancer type are merged into a total of 18,606 genes. For each patient, the 18,606 genes with their associated SNV encoding are concatenated with the 25,128 genes with their CNV information. In total, we have 43,734 \textit{features}, which undergo the following statistical tests. \end{enumerate} \subsubsection{Step 3: Feature selection using the $\chi^2$ test} The previous steps of feature selection incorporate biological intuition. In this step, we explore statistical tests for evaluating feature importance. Statistical methods like the $\chi^2$ test not only inform feature importance but also help reduce the dimensionality of the feature space, enabling faster computation (both during training and inference). The test has previously been used in genetic information-based disease prediction studies \cite{chi_cancer}. We choose the $\chi^2$ test as a feature selection metric since it achieves the best accuracy when compared to feature selection with mutual information or f-score statistics (as reported in Section \ref{s:results}). To decide if a feature is independent of the target label (i.e., type of cancer), we perform the $\chi^2$ test, where we calculate the $\chi^2$ value of each feature with respect to the target variable. The $\chi^2$ value of a feature is given as $\sum_i \frac{(O_i-E_i)^2}{E_i}$, where $E_i$ represents the expected frequency, $O_i$ the observed frequency, and $i$ ranges over the instances of a $\chi^2$ test between a feature and a target. Note that the expected values of a variable are calculated using the distribution of feature values. We run the $\chi^2$ test on all genes (CNV and SNV concatenated together) and sort the features in decreasing order of $\chi^2$ values. The top $n$ features are selected and used to train a classification model. We also use this step to analyze the selected genes and their relative \textit{importance} in cancer type prediction in Section \ref{ss:predictive}. \subsection{Privacy-preserving cancer inference}\label{ss:mat_mul} \subsubsection{Model selection for cancer prediction} For the selection of the ML model that can correlate encoded somatic mutations to cancer type, we train a variety of ML models, while considering the difficulty of their implementation in the HE domain.
In plaintext, we experiment with Support Vector Machines (SVMs) (with radial basis function, polynomial, and linear kernels), logistic regression, and Deep Neural Networks (DNNs with two fully connected hidden layers and ReLU activation) as possible classification models. We start with the top 1,000 features selected by the $\chi^2$ test and increase the number of features by 1,000 in each iteration. In our search for the best-performing model, we train models using different numbers of features, different statistical tests for feature selection, different kernels (if applicable), several regularization techniques, and different optimization techniques, cross-validated over 5 folds. We report the best-performing models in Section \ref{s:results}. \textbf{Measures to reduce overfitting:} We find that the logistic regression model performs best for several encoding schemes, with the best test accuracy of $83.61\%$. In logistic regression, the probability that a sample $x$ belongs to class $k$ is given by $P(y=k|x)=\frac{e^{z_k}}{\sum^K_{l=1}e^{z_l}}$, where $z_k$ is a linear combination of the features with coefficients $\beta_j$. However, the number of features is much larger than the number of samples, since each sample has effectively $\approx$ 43,000 features from CNV and SNV, whereas we have a total of 2,173 samples. Therefore, to avoid over-fitting in this high-dimensional setting, we introduce a Lasso ($l1$) penalty \cite{lasso} to the logistic loss function during training, such that the features that are unlikely to contribute to the prediction are penalized and weighted zero. Therefore, if the logistic loss function is given by $L(\beta_j)$, where $\beta_j$ represents the coefficients of the features, the loss function after the Lasso penalty in the Lagrangian form becomes $L(\beta_j) + \lambda \sum_{j=1}^n|\beta_j|$, which is minimized during training. Further, we use k-fold cross-validation on the training set ($k=5$) as another safeguard against over-fitting to the training data. We iterate for 10,000 epochs to converge to a prediction model. \textbf{Metrics to detect problems of unbalanced data:} High test accuracy on unbalanced datasets (with a higher percentage of samples from a particular label) can give a false sense of performance, as a naive guess (of the label with the highest number of samples) may also result in high accuracy. For a holistic performance evaluation of our classifiers, we plot the Receiver Operating Characteristic (ROC) curve and report the individual area under the curve for each class and the Micro-average Area Under Curve (MAUC) for the classifier. Although our dataset is unbalanced for a couple of classes, we still report both test accuracy and ROCs for all the classes. \textbf{Binary models:} Our cancer type prediction model uses the somatic genetic information to predict the cancer type from 11 classification labels. We also build models to predict each cancer type separately (like the specific models in \cite{gdl}). These models are supplementary to our main prediction model and focus on one type of cancer. The features for each single-cancer-type model are chosen following the steps described above, and the classifier is trained using a binary label: 0 for all of the other cancer types and 1 for the cancer type of interest. We create separate models for each of the 11 cancer types and call these models binary models, since the prediction is converted into a binary classification task.
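For concreteness, the plaintext training pipeline described above ($\chi^2$ selection followed by $l1$-penalized multinomial logistic regression with 5-fold cross-validation) can be sketched with scikit-learn as follows; \texttt{X\_train}, \texttt{y\_train}, and the hyper-parameter values shown are illustrative assumptions, not the exact configuration used in our experiments.
\begin{verbatim}
# Sketch of the plaintext pipeline: chi^2 feature selection followed
# by L1-penalized (Lasso) multinomial logistic regression.
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline

pipe = Pipeline([
    ("select", SelectKBest(chi2, k=34000)),   # top-n features by chi^2
    ("clf", LogisticRegression(penalty="l1",  # Lasso penalty
                               solver="saga", # supports l1 + multinomial
                               max_iter=10000)),
])
print(cross_val_score(pipe, X_train, y_train, cv=5))  # 5-fold CV
pipe.fit(X_train, y_train)                            # final model
\end{verbatim}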
It should be noted here that the \textit{binary} classification ability represented by the ROC curves of individual diseases (in our main prediction model) and that of the binary models for individual diseases are different because of the feature selection steps. In our binary models, the genes important to a specific disease are selected. However, for our main prediction model, the genes which are cumulatively important for all 11 labels are selected. Hence, we report both analyses in section \ref{s:results}. From analyzing the nature of genomic mutation data and the trends in accuracy (details in section \ref{s:results}), we observe that regardless of the ML model selected, the matrix multiplication involves high-dimensional matrices (a dot product between the weights and several thousand features is common to all the ML models explored in our work). Therefore, using standard matrix multiplication would require a large number of multiplications corresponding to this high-dimensional dataset. The private cancer prediction methodology is characterized by a private inference protocol, proposed in subsection \ref{sss:protocol}, and the fast matrix multiplication methodology, the crux of the private ML algorithm, proposed in subsection \ref{sss:mat_mul_algo}. \subsubsection{Private inference protocol}\label{sss:protocol} Fig. \ref{fig:protocol} shows the overview of our inference protocol. It consists of input encoding and encryption, weight and bias encoding, computation, decryption, and decoding. The client starts with a $|X| \times f$ matrix $X$ containing the input values represented with double-precision floating-point numbers, where $|X|$ is the number of inputs and $f$ is the number of features. The values of matrix $X$ are multiplied by a scaling factor $2^{s_x}$ in order to be converted into integers, a requirement of the BFV encryption scheme. This effectively converts our HE operations into fixed-point arithmetic. The scaled matrix of inputs $X_s$ is then encoded into a matrix of polynomial plaintexts $\bar{X}$, where each polynomial contains $n$ coefficients. We pack $n$ features of each row into a polynomial. This leads to an encoded matrix of dimensions $|X| \times \ceil{f/n}$. \begin{figure} \centering \includegraphics[scale=0.5]{Imgs/protocol.png} \caption{Overview of the proposed inference protocol. The accent $\bar{\cdot}$ represents encoded data, while $\widehat{\cdot}$ denotes encrypted data.} \label{fig:protocol} \end{figure} It is worth noting that our model requires more precision than what can be represented in the plaintext modulus $t$. From experimental results, we determined that our inputs and weights require 14 bits of precision. Since our inputs are in the interval $0 \leq x < 2^8$, we set $s_x = 6$ to represent the inputs in 14 bits. Meanwhile, the weights are in the interval $0 \leq w < 1$, which leads to $s_w = 14$. Due to the fixed-point arithmetic, the biases must be scaled by $2^{s_x+s_w}$. After the computation, the client will receive outputs that are scaled by a factor of $2^{s_x+s_w}$, like the biases, but that require $8 + s_x + s_w + \ceil{\log_{2}(f)}$ bits of precision for representation. For $f = 40960$, that translates to 44 bits. This is above what a secure BFV ciphertext with enough noise budget for our computation can support. To cope with that without hindering the accuracy of our model, we use the Chinese Remainder Theorem (CRT) to break our plaintext into a pair of smaller plaintexts, each one under its own modulus.
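A minimal pure-Python sketch of this CRT decomposition follows, using the two plaintext moduli defined in the next paragraph; the helper names are illustrative.
\begin{verbatim}
# CRT decomposition of a wide value into residues under two coprime
# plaintext moduli, and its reconstruction (moduli from the text).
t0, t1 = 1073872897, 114689        # ~30-bit and ~16-bit moduli

def crt_split(v):
    return v % t0, v % t1          # the pair of smaller plaintexts

def crt_combine(r0, r1):
    m = t0 * t1
    term0 = r0 * t1 * pow(t1, -1, t0)  # r0 mod t0, 0 mod t1
    term1 = r1 * t0 * pow(t0, -1, t1)  # r1 mod t1, 0 mod t0
    return (term0 + term1) % m

v = 2**44 - 12345   # a 44-bit result, too wide for either modulus
assert crt_combine(*crt_split(v)) == v
\end{verbatim}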
We define our plaintext moduli $T = \{t_0, t_1\}$ as $t_0 = 1073872897$, which provides 30 bits of precision, and $t_1 = 114689$, offering 16 bits. This means that for every $n$ features encoded into a polynomial, we are actually encoding into a pair of polynomials, one with coefficient modulus $t_0$ and another with coefficient modulus $t_1$. For simplicity, we refer to this pair as a plaintext polynomial. The encoded matrix of inputs $\bar{X}$ is then encrypted with the client's public key $pk$. The encrypted matrix $\hat{X}$ is sent to the server together with the public values $\{n, T, s_x\}$. Afterwards, the server scales and encodes the transpose of the matrix of weights $W$, which has dimensions $f \times |Y|$, where $|Y|$ is the number of outputs. The transposition is a requirement of our computation. It packs several feature weights of an output into a plaintext polynomial, leading to an encoded matrix of weights $\bar{W}$ of dimensions $|Y| \times \ceil{f/n}$. Biases are encoded differently: each bias is encoded into a plaintext polynomial, filling all slots with its value. Finally, the server performs the matrix multiplication of encrypted inputs by encoded weights, followed by the addition of encoded biases, $\hat{Y} = \hat{X} \times \bar{W} + \bar{B}$. The resulting matrix $\hat{Y}$ is returned to the client together with the public value $s_w$. The client simply decrypts, decodes, and descales $\hat{Y}$ with its secret key $sk$ and obtains the result of the inference in plaintext. \subsubsection{Matrix multiplication algorithm}\label{sss:mat_mul_algo} Our privacy-preserving matrix multiplication algorithm, optimized for implementation in HE, is displayed in Algorithm \ref{alg:mm}. It receives three arguments: the encrypted matrix of inputs $\hat{X}$, the encoded matrix of weights $\bar{W}$, and the polynomial degree $n$. Each row of $\hat{X}$ represents an input, while each row of $\bar{W}$ represents one output. The computation of the dot product for each input is independent, making this algorithm highly parallelizable. We start the dot product by performing the column-wise multiplication of each row of $\hat{X}$ with each row of $\bar{W}$ and append the result for each row-row pair into a vector (lines 5-11). Next, we add together all elements of the resulting vector (line 13) and execute $\log_2{n}$ ciphertext rotations and additions to finalize the dot product (lines 14-17). This results in a ciphertext where all slots contain the result of the dot product of the row-row pair. In order to save memory and reduce communication time, we aim at packing several dot product results into a single ciphertext. For this, we first need to clear the ciphertext slots in all but one carefully chosen position. We do this by multiplying the resulting ciphertext $\hat{c}$ by a plaintext polynomial $\bar{p}$ with one at that specific position and zero in the remaining slots (lines 18-22). Finally, we can compress the dot product results $\hat{R_0}$ by adding them together (lines 25-42). If there are more dot product results than slots in a ciphertext, i.e., $|\hat{X}| \cdot |\bar{W}| > n$, then ciphertexts are appended to the output vector $\hat{Y}$. Lastly, we return the result $\hat{Y}$ (line 43). We provide the mathematical representation of the algorithm in the \nameref{s:appendix}. \input{algorithm} \subsubsection{Approximation of the non-linear function in private inference}\label{sss:method_approx} Tumor prediction is a classification problem, which we address using multinomial Logistic Regression (LR).
An LR model is trained by minimizing the logistic loss function. During inference, the probability that an input $x \in \mathbb{R}^{d}$, with $d$ features, belongs to a class $k$ is given by $P(y=k|x) = \frac{e^{z_k}}{\sum^K_{l=1} e^{z_l}}$, where $z=Wx+b$, $W \in \mathbb{R}^{K\times d}$ is the weight matrix, and $b \in \mathbb{R}^{K}$ is the bias. The predicted class $k_p$ is the class with the highest probability, i.e., $k_p=\mathop{argmax}({P(y=k_i|x)})$, where $k_i \in \{1,\ldots,K\}$. This non-linear logistic function is computationally expensive in HE; thus, we perform the following approximation for building an ML model that can be used for encrypted inference. Since the logistic function is monotonically increasing, $P(y=k|x)$ for a class is higher if $z_k$ is higher, and since the predicted label depends only on the relative probability values, the predicted label can also be calculated as $\mathop{argmax}({z_{k_i}})$. Therefore, during inference, a test input $x_{test}$ only needs to be multiplied with the weight matrix to get the final prediction, i.e., the predicted class is $k_p = \mathop{argmax}(W \times x_{test}+b)$. Effectively, for efficient inference, the matrix multiplication between the test inputs and the weight matrix must be fast. Note that the size of $W$ depends on the number of features, i.e., the higher the dimension of an input, the larger the $W$ matrix, and the more time-consuming the matrix multiplication. \begin{figure*}[] \centering \includegraphics[width=0.32\linewidth]{Imgs/auc_snv.png} \includegraphics[width=0.32\linewidth]{Imgs/auc_cnv.png} \includegraphics[width=0.32\linewidth]{Imgs/auc_all7.png} \caption{We report the ROC curves and the micro-average scores for the best performing models using different types of genetic information: (a) Using the presence of mutation in a gene as a feature for top 15,000 genes, (b) using CNV information in a gene as a feature for top 17,000 genes, (c) using both encoded SNV and CNV features with top 34,000 features.} \label{fig:MAUC_all} \end{figure*} \begin{table*}[] \centering \caption{Performance of cancer prediction models on the 11-class task. For each machine learning model and feature selection combination, we report the configuration with the highest performance. The best-performing model among them, according to test accuracy, is in boldface. Feature selection denotes the statistical feature selection method.
Accuracy denotes the test accuracy.} \label{tab:all_results} \begin{tabular}{cccccccccc} & \multicolumn{1}{c}{Feature type} & \#features & Model type & Feature selection & Accuracy & MAUC & Precision & Recall & F-score \\ \hline \hline & SNV presence & 15,000 & Logistic regression & $\chi^2$ & 66.85 & 0.928 & 0.670 & 0.668 & 0.660 \\ & CNV only & 17,000 & Logistic regression & $\chi^2$ & 71.27 & 0.940 & 0.718 & 0.712 & 0.711 \\ & CNV + encoded SNV & 13,000 & SVM (linear) & $\chi^2$ & 68.13 & 0.942 & 0.685 & 0.681 & 0.680 \\ & CNV + encoded SNV & 37,000 & SVM (rbf) & $\chi^2$ & 64.82 & 0.949 & 0.685 & 0.648 & 0.634 \\ & CNV + encoded SNV & 34,000 & SVM (polynomial) & $\chi^2$ & 69.98 & 0.954 & 0.712 & 0.699 & 0.698 \\ & CNV + encoded SNV & 43,000 & DNN & $\chi^2$ & 63.16 & 0.925 & 0.658 & 0.631 & 0.628 \\ & \textbf{CNV + encoded SNV} & \textbf{34,000} & \textbf{Logistic regression} & \textbf{$\chi^2$} & \textbf{83.61} & \textbf{0.976} & \textbf{0.834} & \textbf{0.836} & \textbf{0.834} \\ & CNV + encoded SNV & 32,000 & Logistic regression & MI & 81.95 & 0.972 & 0.822 & 0.819 & 0.818 \\ & CNV + encoded SNV & 40,000 & Logistic regression & f-score & 82.68 & 0.974 & 0.827 & 0.826 & 0.824\\ \hline \end{tabular} \end{table*} \begin{table}[t] \centering \caption{Percentage of samples belonging to a class and the corresponding test accuracy of the class} \label{tab:classwise} \begin{tabular}{lcc} & \begin{tabular}[c]{@{}c@{}}Percent of \\ samples\end{tabular} & Accuracy \\ \hline \hline Class 0 (Bladder) & 9.76 & 78.26 \\ Class 1 (Breast) & 7.46 & 74.35 \\ Class 2 (Bronchus and lung) & 23.08 & 91.24 \\ Class 3 (Cervix uteri) & 5.71 & 64.0 \\ Class 4 (Colon) & 9.53 & 87.75 \\ Class 5 (Corpus uteri) & 7.6 & 85.18 \\ Class 6 (Kidney) & 5.39 & 90.625 \\ \begin{tabular}[c]{@{}l@{}}Class 7 (Liver and intrahepatic \\ bile ducts)\end{tabular} & 6.63 & 93.33 \\ Class 8 (Ovary) & 5.85 & 70.83 \\ Class 9 (Skin) & 9.44 & 87.75 \\ Class 10 (Stomach) & 9.49 & 65.11\\ \hline \end{tabular} \end{table} \section{Experimental evaluation}\label{s:results} The evaluation of our private cancer prediction methodology depends on the precision of the plaintext model and the performance of the private inference protocol. We report the evaluation of somatic mutation encoding towards accurate cancer prediction in plaintext in subsection \ref{ss:results_part1} and the performance of the private ML model (based on our matrix multiplication methodology) in subsection \ref{ss:results_part2}. \subsection{Evaluation of plaintext cancer prediction}\label{ss:results_part1} \subsubsection{Using only the presence of an SNV on a gene} We report different performance metrics and information on the features used for the different models tested in Table \ref{tab:all_results}. We observe that our model achieves a test accuracy of $66.85\%$ and a micro-average area under curve of $0.928$ with the top 15,000 features. We also plot a Receiver Operating Characteristic (ROC) curve (Fig.~\ref{fig:MAUC_all}) for each class and observe that skin cancer (class 9) detection has the highest area under the curve, $0.994$, while stomach cancer (class 10) prediction has the lowest, $0.754$. Although on slightly different datasets, other ML-based cancer prediction methods achieved similar test accuracies of $65.5\%$ \cite{deep_gene} and $70.08\%$ \cite{gdl}. These methods also used the SNV frequency to prune the number of features.
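As a side note, the micro-average AUC (MAUC) values reported throughout can be computed as in the following scikit-learn sketch; \texttt{y\_test} and \texttt{scores} are illustrative names for the test labels and the per-class decision scores.
\begin{verbatim}
# Sketch of the micro-average AUC computation for the 11-class task.
import numpy as np
from sklearn.preprocessing import label_binarize
from sklearn.metrics import roc_auc_score

y_bin = label_binarize(y_test, classes=np.arange(11))  # one-hot labels
mauc = roc_auc_score(y_bin, scores, average="micro")   # scores: (n, 11)
\end{verbatim}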
\subsubsection{Using only the CNV of the genes}\label{ss:cnv_results} We also experiment with just the copy number information for all 25,128 genes and run a $\chi^2$ test to select the top genes. We achieve a slightly higher test accuracy of $71.27\%$ with 17,000 features. From Fig.~\ref{fig:MAUC_all}, we observe that the micro-average area also improves to $0.94$, with the lowest area under curve for the detection of breast cancer (class 1) at $0.87$ (from $0.754$). The higher test accuracy and MAUC show that CNVs have more distinguishing power for the type of cancer than SNVs, when considered individually. \begin{table*}[t] \centering \caption{Performance of individual models for each cancer type in terms of test accuracy and micro-average area under curve.} \begin{tabular}{lccccc} & Accuracy & MAUC & Precision & Recall & F-score \\ \hline \hline Class 0 (Bladder) & 95.58 & 0.982 & 0.952 & 0.955 & 0.952 \\ Class 1 (Breast) & 94.65 & 0.983 & 0.942 & 0.946 & 0.944 \\ Class 2 (Bronchus and lung) & 91.34 & 0.974 & 0.911 & 0.913 & 0.912 \\ Class 3 (Cervix uteri) & 97.42 & 0.989 & 0.972 & 0.974 & 0.973 \\ Class 4 (Corpus uteri) & 97.42 & 0.997 & 0.974 & 0.974 & 0.974 \\ Class 5 (Colon) & 96.86 & 0.991 & 0.967 & 0.968 & 0.968 \\ Class 6 (Kidney) & 97.97 & 0.996 & 0.979 & 0.979 & 0.979 \\ \begin{tabular}[c]{@{}l@{}}Class 7 (Liver and \\ intrahepatic bile duct)\end{tabular} & 95.39 & 0.992 & 0.952 & 0.953 & 0.953 \\ Class 8 (Ovary) & 97.60 & 0.993 & 0.974 & 0.976 & 0.975 \\ Class 9 (Skin) & 97.42 & 0.995 & 0.973 & 0.974 & 0.973 \\ Class 10 (Stomach) & 96.13 & 0.982 & 0.958 & 0.961 & 0.958 \\ \hline \end{tabular} \label{tab:binary} \end{table*} \subsubsection{Final model: Using both SNV and CNV information} We select the top $n$ features using both SNV and CNV information as described above and train several models to evaluate different machine learning algorithms for the tumor classification task. SVMs with linear, rbf, and polynomial kernels achieve their best test accuracies of $68.13\%$, $64.82\%$, and $69.98\%$ with 13,000, 37,000, and 34,000 features, respectively. Therefore, SVM does not show much improvement over our baseline that used only the presence of SNVs as features. The logistic regression model with the Lasso penalty shows the best performance across all performance metrics. We achieve a test accuracy of $83.61\%$ with 34,000 features, thereby also reducing the number of features. We also observe an improvement in the micro-average area to $0.976$ when compared with models using just CNV or just the presence of SNVs. ROC curves for individual classes also show higher scores, with the lowest area under curve of $0.94$ for cervical cancer (class 3). All the classes achieve an ROC area under curve of more than $0.9$. This experiment also shows that although CNVs are more informative, using both CNVs and SNVs results in the highest prediction accuracy. We also test our logistic regression model using mutual information and f-score as feature selection methods, but we achieve lower test accuracies of $81.95\%$ with 32,000 features and $82.68\%$ with 40,000 features, respectively (Table~\ref{tab:all_results}). \subsubsection{Binary models} We build 11 specific models, one for each cancer type. We use both CNV and SNV information to select the top 34,000 features. In these experiments, we train 11 different models, where each one detects a particular tumor. We report the performance (test accuracy and micro-average score) in Table \ref{tab:binary}.
We can see that all of the individual models have a test accuracy of more than $90\%$ and a micro-average area under curve of more than $0.9$. The best-performing classifier is for kidney, with a test accuracy of $97.97\%$ and a micro-average score of $0.996$. Even the worst-performing binary model (for bronchus and lung) has a test accuracy of $91.34\%$ and a micro-average score of $0.974$. Binary models have also been evaluated in other somatic genetic information-based cancer classification tasks \cite{gdl}, where the authors also achieve high performance for the individual tasks but suffer in performance for cancer prediction using all labels. \subsubsection{Predictive genes}\label{ss:predictive} We discuss our findings on the top genes selected. 17,962 genes are selected for CNV data and 16,038 are selected for SNV data as the most informative features. However, more than $60\%$ of these genes are common, i.e., 11,133 genes are selected based on both CNV and SNV information. Considering the $\chi^2$ scores, we also observe that the top 1,030 genes come from the SNV information. We plot histograms of the $\chi^2$ statistic scores in Fig.~\ref{fig:feature_imp} and observe that the distribution for genes selected based on SNVs is flatter, i.e., there are more genes with higher scores. We also verified that the highest score of a gene selected based on SNVs is $\approx 10 \times$ higher than the highest value of a gene selected based on CNVs. Therefore, from the feature selection perspective using the $\chi^2$ test, SNV data on the selected genes are statistically \textit{more important} than their CNV counterparts. But CNV information improves the test accuracy by adding potentially more relevant biological information. When we investigated the top 10 most informative genes based on the SNV information (Table \ref{tab:top_genes}), we found the ``PTEN'' and ``APC'' genes, which are known tumor suppressors; the ``MUC16'' gene, which is a biomarker for ovarian cancer; the ``ZFHX3'' gene, which is implicated in prostate cancer; and the ``CCDC168'' gene, which is known to be associated with Prostate Carcinoma and Uterine Body Mixed Cancer. The other 5 genes in this list are also implicated in important cellular activities that could potentially be related to cancer. These genes and their corresponding Gene Ontology (GO) term enrichment are depicted in Table \ref{tab:top_genes}. \begin{table}[] \caption{Genes selected by our model.
The left column denotes the selected genes and the right column represents their GO enrichment terms.} \label{tab:top_genes} \begin{tabular}{ll} \begin{tabular}[c]{@{}l@{}}CNV-based \\ top 10 genes\end{tabular} & GO enrichment analysis \\ \hline \hline RB1 & \begin{tabular}[c]{@{}l@{}}DNA-binding transcription factor activity and \\ enzyme binding\end{tabular} \\ CDKN2A & transcription factor binding \\ LINC00441 & NA \\ DGKH & \begin{tabular}[c]{@{}l@{}}NAD+ kinase activity and diacylglycerol kinase \\ activity\end{tabular} \\ RCBTB2 & Ran guanyl-nucleotide exchange factor activity \\ CDKN2B-AS1 & NA \& Intracranial Aneurysm and Periodontitis \\ LPAR6 & G protein-coupled receptor activity \\ AKAP11 & \begin{tabular}[c]{@{}l@{}}protein kinase A binding and protein phosphatase \\ 1 binding\end{tabular} \\ CDKN2B & \begin{tabular}[c]{@{}l@{}}protein kinase binding and cyclin-dependent \\ protein serine/threonine kinase inhibitor activity\end{tabular} \\ ITM2B & amyloid-beta binding \\ \hline \begin{tabular}[c]{@{}l@{}}SNV-based\\ top 10 genes\end{tabular} & GO enrichment analysis \\ \hline \hline TTN & nucleic acid binding and identical protein binding \\ PTEN & protein kinase binding and magnesium ion binding \\ APC & microtubule binding \\ MUC16 & metabolism \\ DST & calcium ion binding and actin binding \\ ZFHX3 & \begin{tabular}[c]{@{}l@{}}nucleic acid binding and sequence-specific DNA \\ binding\end{tabular} \\ CCDC168 & NA \\ ATRX & chromatin binding and helicase activity \\ DNAH5 & ATPase activity and microtubule motor activity \\ PIK3R1 & GTP binding and transcription factor binding \\ \hline \end{tabular} \end{table} The first gene in the top 10 most informative genes based on the CNV information (Table \ref{tab:top_genes}) is the first ever known tumor suppressor, ``RB1''. Similarly, the ``CDKN2A'' and ``CDKN2B'' genes are also known tumor suppressors. The ``RCBTB2'' gene is known to be repressed in prostate cancer. The ``CDKN2B-AS1'' gene has silencing power over many other genes in the genome and is strongly implicated in various cancer types. The other 5 genes in this list are also implicated in important cellular activities that could potentially be related to cancer. \begin{figure}[t] \centering \includegraphics[scale=0.48]{Imgs/feature_importance.png} \caption{The distribution of feature scores for genes contributing CNV and SNV data in the final model.} \label{fig:feature_imp} \end{figure} \subsubsection{Insights} The first insight is that biological intuition combined with high-dimensional data analysis methods can together achieve high accuracy (MAUC) while reducing the effective number of features. We further validate, using other studies, that our ML model indeed selects features that are biologically relevant. Using domain expertise, our ML model achieved a high performance of 0.98 MAUC with a logistic regression model. \begin{figure} \centering \includegraphics[scale=0.5]{Imgs/accuracyVSfeatures.png} \caption{Variation of the achieved test accuracy of the trained models with different number of features selected using $\chi^2$ test.} \label{fig:acc_vs_features} \end{figure} We further plot the variation of test accuracy (of models trained using the best hyper-parameters) with the number of features in Fig. \ref{fig:acc_vs_features} and observe that the test accuracy clearly rises when the number of features is lower than the number of samples and begins to saturate only after 30K features. From Fig.
\ref{fig:acc_vs_features}, we see that to achieve a test accuracy of more than $80\%$, we need at least 20K features, and to achieve the highest test accuracy, we need more than 30K features. Therefore, private genomic analysis is one application where extremely high-dimensional data must be processed. This second insight motivates the development of our matrix multiplication algorithm, which, when implemented using BFV, results in fast yet private cancer prediction. \subsection{Privacy-preserving model evaluation}\label{ss:results_part2} We evaluate our privacy-preserving cancer prediction model on an AMD Ryzen Threadripper 3960X 24-core processor with 128 GB RAM using 24 threads, running Ubuntu 20.04 LTS. Encryption and computation operations are threaded, while decryption runs on a single core. We implement our model using the E3 framework \cite{e3} with the underlying Microsoft SEAL library \cite{seal} and encryption parameters set as: polynomial degree $n = 8192$ and plaintext moduli $t_0 = 1073872897$ and $t_1 = 114689$, with a required security level of 128 bits. The cancer prediction model is hosted on the server and the client sends the encrypted genomic data to the server. As a use case, we privately compute the cancer labels for 543 patients, which constitutes $20\%$ of the dataset. We compare our private logistic regression model with a private logistic regression model implemented using standard matrix multiplication (dubbed standard LR). Note that all private models are implemented using the BFV scheme with the E3 framework. \subsubsection{Timing evaluation} We report the encryption, decryption, and computation time required for private cancer prediction in Fig. \ref{fig:results}. The time taken to calculate the final cancer label, which is effectively the result of the matrix multiplication $(Wx_{test}+b)$, is denoted by computation. Computation, understandably, is the most costly operation in private cancer prediction. We observe that even when the number of features increases from 16K to more than 40K ($2.5\times$), the computation time only increases from 33.44 seconds to 35.52 seconds ($1.06\times$), while yielding an $\approx7\%$ increase in test accuracy. Therefore, the matrix multiplication does not become a bottleneck for private cancer prediction as the number of features grows. The time needed for the encryption of the test samples increases with the number of features, from 3.87 seconds for 16K features to 10.40 seconds for 40K features ($2.68\times$), which indicates a roughly linear increase in encryption time as a function of the number of features. Decryption is the least expensive operation (less than 1 second) compared to encryption and computation; the values for decryption time are labelled in Fig. \ref{fig:results}. The maximum total time for private inference of the entire test dataset is required when processing 40,960 features, and is 46.77 seconds. \begin{figure} \centering \includegraphics[scale=0.5]{Imgs/pp_cancer.png} \caption{Performance of our private LR-based cancer prediction model as a function of features.} \label{fig:results} \end{figure} \subsubsection{Latency} \begin{figure} \centering \includegraphics[scale=0.5]{Imgs/latency.png} \caption{Timing for different operations as a function of number of test samples.} \label{fig:latency} \end{figure} As mentioned in section \ref{s:introduction}, private computations using HE are generally designed for high throughput, since popular FHE schemes support batching. For our application, we also prioritize latency, i.e., the evaluation of a single sample.
We report our findings in Fig. \ref{fig:latency}. From the figure, we observe that the total amount of time to privately compute the cancer label for a single sample is 1.08 seconds, and the time increases linearly with the number of samples. \begin{figure}[t] \centering \includegraphics[scale=0.5]{Imgs/synthetic_data.png} \caption{Latency comparison of our matrix multiplication algorithm with standard privacy-preserving matrix multiplication.} \label{fig:latency_comparison} \end{figure} \subsubsection{Comparison to standard LR} In order to accurately quantify the performance benefits of our proposed methodology, we implement a privacy-preserving version of standard matrix multiplication using BFV. For this experiment, we generate synthetic data in the same format (CNVs and SNVs) for 8192 individuals and measure the time for the encryption, computation, and decryption operations. We plot the timing results in a log-log graph in Fig. \ref{fig:latency_comparison}. We observe that the total time required for private inference implemented using standard matrix multiplication for a similar number of individuals as the test set is approximately 10 minutes, approximately $10\times$ more than with our methodology. Also, the total time required for private inference on 1 individual is 598.25 seconds (a similar time is required for thousands of individuals), which is $550\times$ more than the time required by our algorithm. Therefore, as compared to standard matrix multiplication, commonly used for the implementation of ML models, our algorithm has lower latency and higher throughput. \subsubsection{Generalizing high-dimensional private inference} Healthcare models are difficult to port trivially across datasets (as discussed in section \ref{s:introduction}). Cancer detection ML models are no exception. However, our matrix multiplication algorithm does not depend on the input data or weight values (as quantization-based DNN design techniques \cite{qnn} do) and can thus be reused for datasets requiring HE-based high-dimensional inference. The transferability of our private inference algorithm across applications is an added advantage. \section{Conclusion}\label{s:conclusion} Current solutions for HE-based privacy-preserving inference suffer from impractical overheads, which are further aggravated when dealing with high-dimensional genomic data. In this work, we develop a solution for privacy-preserving cancer inference on genomic data. We first leverage biological intuition to structure the mutation data and reduce the dimensionality to a practicable limit. For our privacy-preserving ML model, we propose a matrix multiplication algorithm to implement a logistic regression model, optimized for high throughput and low latency. Our analysis on a real-world genomic dataset shows that our solution achieves a cancer prediction MAUC of 0.98 on the test dataset and can be computed on encrypted genomic data at $\approx$ 1 second/patient. \printbibliography \section*{Appendix}\label{s:appendix} Here we describe the matrix multiplication $\hat{Y}=\hat{X} \times \bar{W}$, where $\hat{X}$ is the encrypted input matrix (encoded genomic data) and $\bar{W}$ is the encoded matrix of LR weights. The polynomial degree is $n$, $|X|$ is the number of inputs, $|Y|$ is the number of outputs, and $f$ is the number of features. The operator $\times$ stands for standard matrix multiplication, while $\otimes$ represents our algorithm, $[\cdot]_n$ is modular reduction over $n$, and the intervals $[a,b)$ and $[a,b]$ represent elements packed in a ciphertext.
When $b<a$, there is a rotation of the $n$ elements of the ciphertext. Function $\rho(\cdot)$ is the element-wise addition of all rotations of a ciphertext, and function $\alpha(\cdot)$ represents the compression part of the algorithm, where one slot of $n$ ciphertexts is selected and combined into a new ciphertext. \input{matrix} \end{document} \endinput
\section{Introduction} We describe a procedure to compute a projection of a point in \(\mathds{R}^n\) into the intersection of the set of \(k\)-sparse vectors with a box centered at a \(k\)-sparse vector. Specifically, let \(\Delta \mathds{B}_\infty\) be the \(\ell_{\infty}\)-norm ball of radius \(\Delta \geq 0\) and centered at the origin, and \(x + \Delta \mathds{B}_\infty\) be the same ball centered at \(x \in \mathds{R}^n\). The set of \(k\)-sparse vectors in \(\mathds{R}^n\), otherwise known as the \(\ell_0\)-pseudonorm ``ball'' of radius \(k \in \{0, 1, \ldots, n\}\), is denoted \(k \mathds{B}_0\) and is the set of vectors with at most \(k\) nonzero components. Assume that \(x \in k \mathds{B}_0\). For given \(w \in \mathds{R}^n\), we seek to compute \begin{equation}% \label{eq:proj} p(w) \in P(w) := \mathop{\textup{argmin}} \ \{ \|w - y\|_2 \mid y \in C \} \qquad C := k \mathds{B}_0 \cap (x + \Delta \mathds{B}_\infty). \end{equation} Because \(C\) is closed, \(P(w) \neq \varnothing\), but because \(C\) is nonconvex, \(P(w)\) may contain several elements. In~\eqref{eq:proj}, we seek a global minimum---local nonglobal minima sometimes exist, but are of no particular interest here. Although it may appear as though the problem has exponential complexity due to the combinatorial nature of \(k\)-sparsity, we show that a solution may be found in \(O(n \log(n))\) operations. We describe our Julia implementation and illustrate our procedure in the context of two trust-region methods for nonsmooth regularized optimization. \subsection*{Context} The computation of~\eqref{eq:proj} occurs in the evaluation of proximal operators encountered during the iterations of the trust-region method of \citet{aravkin-baraldi-orban-2021} for nonsmooth regularized optimization. Their method is designed for problems of the form \begin{equation}% \label{eq:minf+h} \minimize{x \in \mathds{R}^n} \ f(x) + h(x), \end{equation} where \(f: \mathds{R}^n \to \mathds{R}\) has Lipschitz-continuous gradient and \(h: \mathds{R}^n \to \mathds{R} \cup \{ \pm \infty \}\) is lower semi-continuous and proper. In large-scale data fitting and signal reconstruction problems, \(h(x) = \chi(x \mid k \mathds{B}_0)\) encodes sparsity constraints and is of interest if one is to recover a solution with at most \(k\) nonzero elements, where \(\chi(\cdot \mid A)\) is the indicator of \(A \subseteq \mathds{R}^3\), i.e., \[ \chi(x \mid A) = \begin{cases} 0 & \text{if } x \in A, \\ \infty & \text{otherwise}. \end{cases} \] All iterates \(x_j\) generated are feasible in the sense that \(x_j \in k \mathds{B}_0\). At iteration \(j\), a step \(s\) is computed in \[ \argmin{u} \tfrac{1}{2} \|u - v\|_2^2 + h(x_j + u) + \chi(u \mid \Delta_j \mathds{B}_\infty), \] where \(v \in \mathds{R}^n\) is given and \(\Delta_j \mathds{B}_\infty\) is the trust region centered at the origin of radius \(\Delta_j > 0\). With the change of variables \(z := x_j + u\), we may rewrite the above as \[ \argmin{z} \ \tfrac{1}{2} \|z - w\|_2^2 + \chi(z \mid k \mathds{B}_0) + \chi(z \mid x_j + \Delta_j \mathds{B}_\infty) - \{x_j\}, \] where \(w := x_j + v\), which precisely amounts to~\eqref{eq:proj} with \(x_j\) in the role of \(x\) and \(\Delta_j\) in the role of \(\Delta\) because the two indicators may be combined into the indicator of the intersection. Because nonsmooth regularized problems often involve a nonlinear least squares smooth term, \citet{aravkin-baraldi-orban-2021b} develop a Levenberg-Marquardt variant of their trust-region method.
The latter requires the same projections as just described. \subsection*{Notation} Let \(\mathop{\textup{supp}}(x) := \{i = 1, \ldots, n \mid x_i \neq 0\}\) be the \emph{support} of \(x\). If \(A \subseteq \mathds{R}^n\) is closed and \(A \neq \varnothing\), we denote \[ \mathop{\textup{proj}}(w \mid A) := \mathop{\textup{argmin}} \ \{ \|w - y\|_2 \mid y \in A \}, \] the projection of \(w\) into \(A\), which is a set with at least one element. When the projection of \(w\) into \(A\) is unique, as happens when \(A\) is convex, we slightly abuse notation and write \(y = \mathop{\textup{proj}}(w \mid A)\) instead of \(\{y\} = \mathop{\textup{proj}}(w \mid A)\). If \(B \subseteq \mathds{R}^n\), the notation \(\mathop{\textup{proj}}(\mathop{\textup{proj}}(w \mid A) \mid B)\) refers to the set \(\{z \mid z \in \mathop{\textup{proj}}(y \mid B) \text{ for some } y \in \mathop{\textup{proj}}(w \mid A)\}\). If \(S \subseteq \{1, \ldots, n\}\), the cardinality of \(S\) is denoted \(|S|\), and its complement is \(S^c\). For such \(S\) and for \(x \in \mathds{R}^n\), we denote by \(x_S\) the subvector of \(x\) indexed by \(S\) and \(A_S := \{x \in \mathds{R}^n \mid x_{S^c} = 0\}\). Clearly, \(0 \in A_S\) for any such \(S\). Because \[ k \mathds{B}_0 = \bigcup \ \{ A_S \mid S \subseteq \{1, \ldots, n\}, \ |S| = k \}, \] \citep[p.~\(175\)]{beck-2017}, we refer to \(A_S\) as a \emph{piece} of \(k \mathds{B}_0\). \subsection*{Related Research} \citet{duchi-2008} describe how to project efficiently into the \(\ell_1\)-norm ball. The \(\ell_1\)-norm is probably the most widely used convex approximation of the \(\ell_0\) pseudonorm, as minimizing \(\|x\|_1\) promotes sparsity under certain conditions---see, e.g., \citep{candes-romberg-tao-2006} and the vast ensuing compressed sensing literature. \citet{gupta2010l1} describe how to project into the intersection of an \(\ell_1\)-norm ball with a box, which may be seen as a relaxation of~\eqref{eq:proj}. \citet{thom-palm-2013} and \citet{thom-rapp-palm-2015} propose a linear-time, constant-space algorithm to compute a projection into a hypersphere with a prescribed sparsity, where sparsity is measured by the ratio of the \(\ell_1\) to the \(\ell_2\) norm. \citet{beck-eldar-2013} provide optimality conditions for the minimization of a smooth function over \(k \mathds{B}_0\). \citet{beck-hallak-2016} provide optimality conditions for problems of the form~\eqref{eq:proj} where the box is replaced with a symmetric set satisfying certain conditions. Unfortunately,~\eqref{eq:proj} does not satisfy those conditions unless \(x = 0\), at which point it is easy to see that a solution simply consists in chaining the projection into \(k \mathds{B}_0\) with that into \(\Delta \mathds{B}_\infty\). That is what \citet[Proposition~\(4.3\)]{luss-teboulle-2013} do with \(1\mathds{B}_2\) instead of \(\Delta \mathds{B}_\infty\). \citet[Proposition~\(4\)]{bolte-sabach-teboulle-2014} show how to project into the intersection of \(k \mathds{B}_0\) with the nonnegative orthant. \citet{pmlr-v28-kyrillidis13} explain how to compute a sparse projection into the simplex, which is probably the research most closely related to ours. The simplex necessarily intersects all pieces of \(k \mathds{B}_0\), which need not be the case for~\eqref{eq:proj}. \section{Geometric Intuition} Naively chaining the projection into one set with that into the other, in either order, does not necessarily yield a point in the intersection of the two sets, even if the latter are convex.
\Cref{fig:1B0,fig:2projs} illustrate two situations that we may encounter when \(k = 1\) and \(n = 2\). A few simple observations about \Cref{fig:1B0,fig:2projs} reveal some difficulties associated with the computation of \(p(w)\): \begin{enumerate} \item because both components of \(w_1\) are equal in absolute value, as indicated by the thin diagonal in \Cref{fig:2projs}, \(\mathop{\textup{proj}}(w_1 \mid k \mathds{B}_0)\) is a set with two elements, and projecting those into \(x + \Delta \mathds{B}_\infty\) yields \(p(w_1)\) (the correct global minimum) and \(p_2\) (a spurious local minimum); \item moving \(w_1\) up slightly would preserve \(p(w_1)\), but projecting into \(1 \mathds{B}_0\) first would lead to \(p_2\); \item moving \(w_1\) slightly to the right would result in a projection that is slightly to the right of \(p(w_1)\) on the figure, but projecting into \(1 \mathds{B}_0\) first would lead to \(p_2\); \item moving \(w_1\) further to the right would result in \(P(w_1) = \{p(w_1), p_2\}\) and moving it further still would result in \(P(w_1) = \{p_2\}\); \item in the rightmost plot, chaining the projections either way leads to a point that does not even lie in the intersection. \end{enumerate} \begin{figure}[ht] \centering \begin{tikzpicture}[scale=0.75] \draw[thin] (-4, 0) -- (4, 0); \draw[thin] (0, -5) -- (0, 3); \draw[green, ultra thick] (-3, 0) -- (3, 0); \draw[green, ultra thick] (0, 2) -- (0, -4); \fill[black] (0, -1) circle (2pt) node[right] {$x$}; \draw[very thick] (-3, 2) -- (3, 2) -- (3, -4) -- (-3, -4) -- (-3, 2); \fill[black] (-2.5, 2.5) circle (2pt) node[above right] {$w_1$}; \fill[black] (-4, -3.5) circle (2pt) node[below] {$w_2$}; \fill[black] (3, 2) circle (2pt) node[right] {$w_3$}; \fill[black] (-3, 0) circle (2pt) node[above left] {$p_1$}; \fill[black] (-2.5, 0) circle (2pt) node[below right] {$p(w_1)$}; \fill[black] (0, 2) circle (2pt) node[above right] {$p_2$}; \fill[black] (3, 0) circle (2pt) node[above right] {$p_3 \in P(w_3)$}; \fill[black] (0, -4) circle (2pt) node[above right] {$p_4$}; \end{tikzpicture} \qquad \begin{tikzpicture}[scale=0.75] \draw[thin] (-4, 0) -- (4, 0); \draw[thin] (0, -5) -- (0, 3); \draw[green, ultra thick] (0, -1) -- (0, -4); \fill[black] (0, -2.5) circle (2pt) node[right] {$x$}; \draw[very thick] (-1.5, -1) -- (1.5, -1) -- (1.5, -4) -- (-1.5, -4) -- (-1.5, -1); \fill[black] (2, 1) circle (2pt) node[right] {$w$}; \fill[black] (0, -1) circle (2pt) node[above left] {$p_1 \in P(w)$}; \fill[black] (0, -4) circle (2pt) node[above right] {$p_2$}; \end{tikzpicture} \caption{% \label{fig:1B0} The set composed of the two axes is \(1 \mathds{B}_0\) in \(\mathds{R}^2\), the box is \(x + \Delta \mathds{B}_\infty\) and the green set is their intersection. Left: \(P(w_1) = \{p(w_1)\}\), and \(P(w_2) = \{p_1\}\). With respect to \(w_1\), the other cardinal points are \(p_2\), a local minimum, \(p_3\), a local maximum, and \(p_4\), a global maximum. Right: the intersection of \(1 \mathds{B}_0\) with \(x + \Delta \mathds{B}_\infty\) is entirely determined by \(\mathop{\textup{supp}}(x)\); \(P(w) = \{p_1\}\), while \(p_2\) is a global maximum.
} \end{figure} \begin{figure}[ht] \centering \begin{tikzpicture}[scale=0.75] \draw[thin] (-4, 0) -- (4, 0); \draw[thin] (0, -5) -- (0, 3); \draw[green, ultra thick] (-3, 0) -- (3, 0); \draw[green, ultra thick] (0, 2) -- (0, -4); \fill[black] (0, -1) circle (2pt) node[right] {$x$}; \draw[very thick] (-3, 2) -- (3, 2) -- (3, -4) -- (-3, -4) -- (-3, 2); \fill[black] (-2.5, 2.5) circle (2pt) node[above right] {$w_1$}; \draw (0, 0) -- (-3, 3); \draw[red, ->, very thick] (-2.5, 2.5) -- (0, 2.5); \draw[red, ->, very thick] (0, 2.5) -- (0, 2); \draw[red, ->, very thick] (-2.5, 2.5) -- (-2.5, 0); \fill[black] (-4, -3.5) circle (2pt) node[below] {$w_2$}; \draw[red, ->, very thick] (-4, -3.5) -- (-3, -3.5); \draw[red, ->, very thick] (-3, -3.5) -- (0, -3.5); \draw[blue, ->, very thick] (-4, -3.5) -- (-4, 0); \draw[blue, ->, very thick] (-4, 0) -- (-3, 0); \fill[black] (-3, 0) circle (2pt) node[above left] {$p(w_2)$}; \fill[black] (-2.5, 0) circle (2pt) node[below right] {$p(w_1)$}; \fill[black] (0, 2) circle (2pt) node[above right] {$p_2$}; \end{tikzpicture} \qquad \begin{tikzpicture}[scale=0.75] \draw[thin] (-4, 0) -- (4, 0); \draw[thin] (0, -5) -- (0, 3); \draw[green, ultra thick] (0, -1) -- (0, -4); \fill[black] (0, -2.5) circle (2pt) node[right] {$x$}; \draw[very thick] (-1.5, -1) -- (1.5, -1) -- (1.5, -4) -- (-1.5, -4) -- (-1.5, -1); \fill[black] (3, 1) circle (2pt) node[right] {$w$}; \fill[black] (0, -1) circle (2pt) node[above left] {$p_1 \in P(w)$}; \draw[red, ->, very thick] (3, 1) -- (3, 0); \draw[red, ->, very thick] (3, 0) -- (1.5, -1); \draw[blue, ->, very thick] (3, 1) -- (1.5, -1); \draw[blue, ->, very thick] (1.5, -1) -- (1.5, 0); \end{tikzpicture} \caption{% Simply composing the projection into \(1 \mathds{B}_0\) with that into \(x + \Delta \mathds{B}_\infty\), in either order, may lead to an erroneous projection. \label{fig:2projs} } \end{figure} Note that \(1 \mathds{B}_0\) is a special case for any value of \(n\): its intersection with \(x + \Delta \mathds{B}_\infty\) consists of either a single line segment or \(n\) segments. Indeed, the first possibility is that the nonzero component \(x_i\) of \(x\) satisfies \(|x_i| > \Delta\). In that case, any \(y \in 1 \mathds{B}_0\) with \(y_j \neq 0\) and \(i \neq j\) satisfies \(\|y - x\|_\infty \geq |x_i| > \Delta\), and therefore \(y \not \in x + \Delta \mathds{B}_\infty\). The only other possibility is that \(|x_i| \leq \Delta\), in which case \(0 \in x + \Delta \mathds{B}_\infty\), and therefore, all pieces of \(1 \mathds{B}_0\) intersect the box. For \(1 < k < n\), however, the intersection may consist of any number of pieces between \(1\) and \({n \choose k}\). \Cref{fig:B0inR3} illustrates situations that may arise for \(k = 1\) or \(2\) and \(n = 3\).
\begin{figure}[ht] \centering \begin{tikzpicture}[scale=0.75] \draw[gray!10, fill=gray!10] (-2, 0, 5) -- (0, 0, 5) -- (0, 0, 1) -- (-2, 0, 1) -- (-2, 0, 5); \draw[gray!30, fill=gray!30] (0, -2, 5) -- (0, -2, 1) -- (0, 2, 1) -- (0, 2, 5) -- (0, -2, 5); \draw[gray!10, fill=gray!10] (0, 0, 5) -- (2, 0, 5) -- (2, 0, 1) -- (0, 0, 1) -- (0, 0, 5); \draw[green, ultra thick] (0, 0, 1) -- (0, 0, 5); \fill[black] (0, 0, 3) circle (2pt) node[below right] {$x$}; \draw[very thick] (-2, -2, 5) -- (2, -2, 5) -- (2, 2, 5) -- (-2, 2, 5) -- (-2, -2, 5); \draw[very thick] (-2, 2, 5) -- (-2, 2, 1) -- (2, 2, 1) -- (2, -2, 1) -- (2, -2, 5); \draw[very thick] (2, 2, 5) -- (2, 2, 1); \end{tikzpicture} \hfill \begin{tikzpicture}[scale=0.75] \draw[gray!20, fill=gray!20] (0, 0, 0) -- (-2, 0, 0) -- (-2, -2, 0) -- (0, -2, 0) -- (0, 0, 0); \draw[gray!10, fill=gray!10] (-2, 0, 3) -- (0, 0, 3) -- (0, 0, -1) -- (-2, 0, -1) -- (-2, 0, 3); \draw[gray!20, fill=gray!20] (0, 2, 0) -- (-2, 2, 0) -- (-2, 0, 0) -- (0, 0, 0) -- (0, 2, 0); \draw[gray!30, fill=gray!30] (0, -2, 3) -- (0, -2, -1) -- (0, 2, -1) -- (0, 2, 3) -- (0, -2, 3); \draw[gray!20, fill=gray!20] (0, 0, 0) -- (2, 0, 0) -- (2, -2, 0) -- (0, -2, 0) -- (0, 0, 0); \draw[gray!10, fill=gray!10] (0, 0, 3) -- (2, 0, 3) -- (2, 0, -1) -- (0, 0, -1) -- (0, 0, 3); \draw[gray!20, fill=gray!20] (0, 0, 0) -- (0, 2, 0) -- (2, 2, 0) -- (2, 0, 0) -- (0, 0, 0); \draw[green, ultra thick] (0, 0, -1) -- (0, 0, 3); \draw[green, ultra thick] (-2, 0, 0) -- (2, 0, 0); \draw[green, ultra thick] (0, -2, 0) -- (0, 2, 0); \fill[black] (0, 0, 1) circle (2pt) node[left] {$x$}; \draw[very thick] (-2, -2, 3) -- (2, -2, 3) -- (2, 2, 3) -- (-2, 2, 3) -- (-2, -2, 3); \draw[very thick] (-2, 2, 3) -- (-2, 2, -1) -- (2, 2, -1) -- (2, -2, -1) -- (2, -2, 3); \draw[very thick] (2, 2, 3) -- (2, 2, -1); \end{tikzpicture} \hfill \begin{tikzpicture}[scale=0.75] \draw[green!30, fill=green!30] (-1, 0, 5) -- (3, 0, 5) -- (3, 0, 1) -- (-1, 0, 1) -- (-1, 0, 5); \draw[green!30, fill=green!30] (0, -2, 5) -- (0, -2, 1) -- (0, 2, 1) -- (0, 2, 5) -- (0, -2, 5); \draw[gray!10, ultra thick] (0, 0, 1) -- (0, 0, 5); \fill[black] (1, 0, 3) circle (2pt) node[below right] {$x$}; \draw[very thick] (-1, -2, 5) -- (3, -2, 5) -- (3, 2, 5) -- (-1, 2, 5) -- (-1, -2, 5); \draw[very thick] (-1, 2, 5) -- (-1, 2, 1) -- (3, 2, 1) -- (3, -2, 1) -- (3, -2, 5); \draw[very thick] (3, 2, 5) -- (3, 2, 1); \end{tikzpicture} \caption{% \label{fig:B0inR3} Left: the green segment represents a possible intersection of \(1 \mathds{B}_0\) with a box in \(\mathds{R}^3\). The gray plane sections only serve to position the segment visually in three dimensions. Center: another possible intersection of \(1 \mathds{B}_0\) with a box in \(\mathds{R}^3\). The box either intersects a single axis, or all of them. Right: the green region is a possible intersection of \(2 \mathds{B}_0\) with a box in \(\mathds{R}^3\). The gray segment only serves as a visual aid and is part of the intersection. } \end{figure} \section{Background and Preliminary Results} The unique projection \(y\) of any \(w\) into \(x + \Delta \mathds{B}_\infty\) has components \[ y_i = \max(x_i - \Delta, \min(w_i, x_i + \Delta)), \quad i = 1, \ldots, n. \] Given \(S \subseteq \{1, \ldots, n\}\), we obtain the unique projection of any \(w\) into \(A_S\) by setting \(w_i = 0\) for all \(i \in S^c\). 
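In Julia, these two elementary projections take a few lines each. The following is a minimal sketch with hypothetical helper names, not the API of any package:
\begin{jllisting}
# proj(w | x + Δ𝔹∞): componentwise clamping into the box.
proj_box(w, x, Δ) = clamp.(w, x .- Δ, x .+ Δ)

# proj(w | A_S): zero out the components with indices outside S.
function proj_piece(w, S)
    y = zero(w)      # vector of zeros with the same length and eltype as w
    y[S] .= w[S]
    return y
end
\end{jllisting}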
A projection \(y\) of any \(w\) into \(k \mathds{B}_0\) is a vector that has the same \(k\) largest components in absolute value as \(w\), and the rest of its components set to zero \citep[Lemma~\(6.71\)]{beck-2017}. In the vein of \citet{beck-eldar-2013}, it is possible to state necessary optimality conditions for the more general problem \begin{equation}% \label{eq:min-sparse-box} \minimize{y \in \mathds{R}^n} \ f(y) \quad \textup{subject to} \ y \in C, \end{equation} of which~\eqref{eq:proj} is a special case. Despite the fact that our algorithm is not based on such necessary conditions, they are relevant in their own right, and we now review and specialize them to~\eqref{eq:proj}. \begin{lemma}% \label{lem:bf} Let \(y^\star\) be a solution of~\eqref{eq:min-sparse-box} where \(f\) is continuously differentiable. \begin{enumerate} \item If \(\|y^\star\|_0 < k\), then for all \(i = 1, \ldots, n\), \[ \frac{\partial f(y^\star)}{\partial y_i} \begin{cases} \leq 0 & \text{ if } y^\star_i = x_i + \Delta \\ \geq 0 & \text{ if } y^\star_i = x_i - \Delta \\ = 0 & \text{ otherwise;} \end{cases} \] \item if \(\|y^\star\|_0 = k\), the same conditions hold for all \(i \in \mathop{\textup{supp}}(y^\star)\). \end{enumerate} \end{lemma} \begin{proof} The proof follows that of \citep[Theorem~\(2.1\)]{beck-eldar-2013}. If \(\|y^\star\|_0 < k\), then for all \(i = 1, \ldots, n\), \[ 0 \in \argmin{t \in \mathds{R}} \ \{g(t) \mid \|y^\star + t e_i - x\|_\infty \leq \Delta\}, \] where \(e_i\) is the \(i\)-th column of the identity, and \(g(t) := f(y^\star + t e_i)\). Because \(y^\star \in x + \Delta \mathds{B}_\infty\), the constraint above reduces to \(|y^\star_i + t - x_i| \leq \Delta\). The conclusion follows directly from the standard KKT conditions by noting that \(g'(0) = \partial f(y^\star) / \partial y_i\). If \(\|y^\star\|_0 = k\), the same reasoning goes for all \(i \in \mathop{\textup{supp}}(y^\star)\). \end{proof} By analogy with \citep[Theorem~\(2.1\)]{beck-eldar-2013}, a candidate satisfying the conditions of \Cref{lem:bf} is called a \emph{basic feasible} point. The following corollary follows directly from \Cref{lem:bf} with \(f(y) := \tfrac{1}{2} \|w - y\|_2^2\). \begin{corollary}% \label{cor:bf} Let \(y^\star\) be a solution of~\eqref{eq:proj}. \begin{enumerate} \item If \(\|y^\star\|_0 < k\), then for all \(i = 1, \ldots, n\), \[ y^\star_i \begin{cases} \leq w_i & \text{ if } y^\star_i = x_i + \Delta \\ \geq w_i & \text{ if } y^\star_i = x_i - \Delta \\ = w_i & \text{ otherwise;} \end{cases} \] \item if \(\|y^\star\|_0 = k\), the same conditions hold for all \(i \in \mathop{\textup{supp}}(y^\star)\). \end{enumerate} \end{corollary} \Cref{lem:bf} and \Cref{cor:bf} are only necessary conditions, and they are rather weak; there often exist vectors satisfying the conditions stated that are not solutions of~\eqref{eq:min-sparse-box} or~\eqref{eq:proj}. Consider for example \(k = 1\) in \(\mathds{R}^2\), \(x = (0, -1)\), \(\Delta = 2\), and \(w = (2, 3)\). Then, \(y = (2, 0)\) satisfies the conditions of \Cref{cor:bf}: \(\|y\|_0 = 1\), \(\mathop{\textup{supp}}(y) = \{1\}\) and \(y_1 = x_1 + \Delta \leq w_1\). However, \(P(w) = \{(0, 1)\}\). Indeed, \(\|w - (0, 1)\| = 2 \sqrt{2} < 3 = \|w - y\|\). Observe that thanks to \citep[Lemma~\(2.1\)]{beck-eldar-2013}, the number of basic feasible points of~\eqref{eq:proj} is finite. Therefore, so is the cardinality of \(P(w)\). 
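The numerical example above is straightforward to verify; here is a hypothetical snippet doing so:
\begin{jllisting}
using LinearAlgebra
w = [2.0, 3.0]            # k = 1, x = (0, -1), Δ = 2
norm(w - [2.0, 0.0])      # 3.0: distance to the basic feasible point (2, 0)
norm(w - [0.0, 1.0])      # 2√2 ≈ 2.828: distance to the projection (0, 1)
\end{jllisting}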
For a constant \(L > 0\), \citeauthor{beck-eldar-2013} define \(y \in C\) to be \(L\)-stationary for~\eqref{eq:min-sparse-box} if it satisfies \(y \in \mathop{\textup{proj}}(y - L^{-1} \nabla f(y) \mid C)\), a condition inspired by optimality conditions for convex problems. They state the following result, whose proof remains valid for~\eqref{eq:min-sparse-box}. \begin{lemma}[{\protect \citealp[Lemma~\(2.2\)]{beck-eldar-2013}}]% \label{lem:L-stat} For any \(L > 0\), \(y \in \mathds{R}^n\) is \(L\)-stationary for~\eqref{eq:min-sparse-box} if and only if \(y \in C\) and \[ \frac{\partial f(y)}{\partial y_i} = 0 \quad (i \in \mathop{\textup{supp}}(y)) \qquad \text{and} \qquad \left| \frac{\partial f(y)}{\partial y_i} \right| \leq L M_k(y) \quad (i \not \in \mathop{\textup{supp}}(y)), \] where \(M_k(y)\) is the \(k\)th largest component of \(y\) in absolute value. \end{lemma} With \(f(y) := \tfrac{1}{2} \|w - y\|_2^2\), \(L\)-stationarity reads \(y \in \mathop{\textup{proj}}(y - L^{-1} (y - w) \mid C)\). Due to the simple form of \(\nabla f(y) = y - w\), \Cref{lem:L-stat} specializes as follows. \begin{corollary}% \label{cor:L-stat} For any \(L > 0\), \(y \in \mathds{R}^n\) is \(L\)-stationary for~\eqref{eq:proj} if and only if \(y \in C\) and \[ w_i = y_i \quad (i \in \mathop{\textup{supp}}(y)), \qquad \text{and} \qquad |w_i| \leq L M_k(y) \quad (i \not \in \mathop{\textup{supp}}(y)). \] \end{corollary} As a special case of \Cref{cor:L-stat}, if \(\|y\|_0 < k\), then \(M_k(y) = 0\) and we obtain \(w_i = 0\) for \(i \not \in \mathop{\textup{supp}}(y)\). In that case, \(L\)-stationarity turns out to be independent of \(L\) and requires that \(y = w\), i.e., there is a unique \(L\)-stationary point if \(w \in C\), and there are no \(L\)-stationary points if \(w \not \in C\). \(L\)-stationarity is stronger than basic feasibility in the sense that if \(y\) is \(L\)-stationary for~\eqref{eq:min-sparse-box} for some \(L > 0\), then \(y\) is also a basic feasible point \citep[Corollary~\(2.1\)]{beck-eldar-2013}. Under a Lipschitz assumption, solutions of~\eqref{eq:min-sparse-box} are \(L\)-stationary, as stated in the following result. \begin{proposition}[{\protect \citealp[Theorem~\(2.2\)]{beck-eldar-2013}}]% \label{prop:L-stat} Assume \(\nabla f\) is Lipschitz continuous with constant \(L_f\) and \(y\) solves~\eqref{eq:min-sparse-box}. Then, for any \(L > L_f\), \begin{enumerate} \item \(y\) is \(L\)-stationary; \item \(\mathop{\textup{proj}}(y - L^{-1} \nabla f(y) \mid C)\) is a singleton. \end{enumerate} \end{proposition} \begin{proof} The proof follows by verifying that \citep[Lemma~\(2.4\)]{beck-eldar-2013} continues to hold for~\eqref{eq:min-sparse-box} and the proof of \citep[Theorem~\(2.2\)]{beck-eldar-2013} holds unchanged. \end{proof} \Cref{prop:L-stat} clearly applies to~\eqref{eq:proj} as the gradient of \(f(y) := \tfrac{1}{2} \|w - y\|_2^2\) is Lipschitz continuous with constant \(L_f = 1\). Thus, solutions of~\eqref{eq:proj} are \(L\)-stationary for \(L > 1\). Based on \(L\)-stationarity, \citet{beck-eldar-2013} study the iteration \(y^+ \in \mathop{\textup{proj}}(y - L^{-1} \nabla f(y) \mid C)\) and show convergence to an \(L\)-stationary point for~\eqref{eq:min-sparse-box} under the assumption that \(\nabla f\) is Lipschitz continuous. Unfortunately, in the case of~\eqref{eq:proj}, solving the subproblem is as difficult as solving~\eqref{eq:proj} directly.
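As an aside, the two conditions of \Cref{cor:L-stat} are easy to check numerically for a candidate \(y\). The following hypothetical sketch assumes that membership of \(y\) in \(C\) has been verified separately:
\begin{jllisting}
# Check the two conditions of the corollary for a given y (assumed to lie in C).
function is_L_stationary(y, w, L, k)
    S  = findall(!iszero, y)
    Mk = sort(abs.(y), rev=true)[k]    # kth largest component of y in absolute value
    cond_on  = all(w[i] == y[i] for i in S)
    cond_off = all(abs(w[i]) <= L * Mk for i in setdiff(eachindex(y), S))
    return cond_on && cond_off
end
\end{jllisting}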
Finally, \citeauthor{beck-eldar-2013} define the concept of componentwise (CW) optimality as follows: \(y \in C\) is a CW-minimum for~\eqref{eq:min-sparse-box} if \begin{enumerate} \item \(\|y\|_0 < k\) and \(f(y) = \min_t f(y + t e_i)\) for \(i = 1, \dots, n\), or \item \(\|y\|_0 = k\) and \(f(y) \leq \min_t f(y - y_i e_i + t e_j)\) for \(i \in \mathop{\textup{supp}}(y)\) and \(j = 1, \dots, n\). \end{enumerate} They observe that any solution is a CW-minimum \citep[Theorem~\(2.3\)]{beck-eldar-2013} and that any CW-minimum is a basic feasible point \citep[Lemma~\(2.5\)]{beck-eldar-2013}. The concept of CW-minimum allows them to show that any solution of~\eqref{eq:min-sparse-box} is \(L\)-stationary for a value \(L\) that can be significantly smaller than \(L_f\). Based on those observations, they propose two coordinate descent-type methods that converge to a CW-minimum. In the next section, we present a number of properties of~\eqref{eq:proj} and an algorithm that identifies a solution directly, without resort to the above stationarity conditions. \section{Computing the Projection} We begin with a few simple observations. \begin{lemma}% \label{lem:projAS} Let \(S \subseteq \{1, \ldots, n\}\) be such that \(|S| = k\) and \(|x_i| \leq \Delta\) for all \(i \in S^c\). If \(y \in x + \Delta \mathds{B}_\infty\) and \(z = \mathop{\textup{proj}}(y \mid A_S)\), then \(z \in x + \Delta \mathds{B}_\infty\). If, in addition, \(\mathop{\textup{supp}}(x) \subseteq S\), then \(\|z - x\|_\infty \leq \|y - x\|_\infty\). \end{lemma} \begin{proof} Without loss of generality, we may write \(z = (y_S, 0)\), so that \[ \|z - x\|_\infty = \max(\|y_S - x_S\|_\infty, \|x_{S^c}\|_\infty). \] The first term is at most \(\|y - x\|_\infty \leq \Delta\) and the second is at most \(\Delta\) by assumption, so that \(z \in x + \Delta \mathds{B}_\infty\). If \(\mathop{\textup{supp}}(x) \subseteq S\), then \(x_{S^c} = 0\) and the above yields \(\|z - x\|_\infty = \|y_S - x_S\|_\infty \leq \|y - x\|_\infty\). \end{proof} The membership conclusion of \Cref{lem:projAS} holds because of the geometry of \(k \mathds{B}_0\) relative to \(\mathds{B}_\infty\) and is specific to the \(\ell_\infty\)-norm. Indeed, consider for example a ball defined in the \(\ell_2\)-norm and set \(x = (0, -1) \in 1 \mathds{B}_0\) and \(\Delta = 2\). For \(y_1 = (\tfrac{3}{4}, -\tfrac{1}{4})\), we have \(z_1 = \mathop{\textup{proj}}(y_1 \mid 1 \mathds{B}_0) = (\tfrac{3}{4}, 0)\) and \(\|z_1 - x\|_2 > \|y_1 - x\|_2\). In this example, \(z_1 \in x + \Delta \mathds{B}_2\), but consider now \(y_2 = (2, -1)\). Then, \(z_2 = \mathop{\textup{proj}}(y_2 \mid 1 \mathds{B}_0) = (2, 0) \not \in x + \Delta \mathds{B}_2\). \begin{lemma}% \label{lem:proj1proj2} If \(w \in x + \Delta \mathds{B}_\infty\) and \(|x_i| \leq \Delta\) for \(i = 1, \ldots, n\), then \(P(w) = \mathop{\textup{proj}}(w \mid k \mathds{B}_0)\). \end{lemma} \begin{proof} Any \(y \in \mathop{\textup{proj}}(w \mid k \mathds{B}_0)\) has the same \(k\) largest components in absolute value as \(w\), and the rest of its components set to zero. Thus, there must exist \(S \subseteq \{1, \ldots, n\}\) with \(|S| = k\) such that \(y = \mathop{\textup{proj}}(w \mid A_S)\). Because \(|x_i| \leq \Delta\) for all \(i\), \Cref{lem:projAS} applies to any such \(S\) and yields \(y \in x + \Delta \mathds{B}_\infty\), and hence, \(y \in C\). If there were \(z \in C\) such that \(\|z - w\|_2 < \|y - w\|_2\), because \(z \in k \mathds{B}_0\), there would be a contradiction with the definition of \(y\). Therefore, \(y\) is a closest point to \(w\) in \(C\). Conversely, any element of \(P(w)\) is at the same distance from \(w\) as \(y\) and lies in \(k \mathds{B}_0\), and therefore belongs to \(\mathop{\textup{proj}}(w \mid k \mathds{B}_0)\). \end{proof} \begin{lemma}% \label{lem:suppx} Suppose that \(\|x\|_0 = k < n\). Then \(C = A_{\mathop{\textup{supp}}(x)} \cap (x + \Delta \mathds{B}_\infty)\) if and only if \(|x_i| > \Delta\) for all \(i \in \mathop{\textup{supp}}(x)\).
\end{lemma} \begin{proof} Suppose first that \(|x_i| > \Delta\) for all \(i \in \mathop{\textup{supp}}(x)\). For any \(i \in \mathop{\textup{supp}}(x)\), there is no \(y \in x + \Delta \mathds{B}_\infty\) with \(y_i = 0\); indeed, \(y_i = 0\) implies \(\|y - x\|_\infty \geq |y_i - x_i| = |x_i| > \Delta\). Thus every \(y \in C\) satisfies \(\mathop{\textup{supp}}(x) \subseteq \mathop{\textup{supp}}(y)\), and, because \(\|y\|_0 \leq k = \|x\|_0\), \(\mathop{\textup{supp}}(y) = \mathop{\textup{supp}}(x)\), i.e., \(y \in A_{\mathop{\textup{supp}}(x)}\). Conversely, if \(|x_i| \leq \Delta\) for some \(i \in \mathop{\textup{supp}}(x)\), then necessarily \(\Delta > 0\), and for any \(j \not \in \mathop{\textup{supp}}(x)\), which exists because \(k < n\), the vector \(y := x - x_i e_i + \Delta e_j\) satisfies \(\|y - x\|_\infty = \max(|x_i|, \Delta) = \Delta\) and \(\|y\|_0 = k\), so that \(y \in C\) but \(y \not \in A_{\mathop{\textup{supp}}(x)}\). \end{proof} \begin{lemma}% \label{lem:proj-suppx} For any \(S \subseteq \{1, \ldots, n\}\) such that \(|x_i| \leq \Delta\) for all \(i \in S^c\), so that \(A_S \cap (x + \Delta \mathds{B}_\infty) \neq \varnothing\), and any \(w \in \mathds{R}^n\), \[ \mathop{\textup{proj}}(w \mid A_S \cap (x + \Delta \mathds{B}_\infty)) = \mathop{\textup{proj}}(\mathop{\textup{proj}}(w \mid x + \Delta \mathds{B}_\infty) \mid A_S) = \mathop{\textup{proj}}(\mathop{\textup{proj}}(w \mid A_S) \mid x + \Delta \mathds{B}_\infty), \] whose unique element is the vector \(y\) such that \(y_S = \mathop{\textup{proj}}(w_S \mid x_S + \Delta \mathds{B}_\infty)\) and \(y_{S^c} = 0\). In particular, if \(C = A_{\mathop{\textup{supp}}(x)} \cap (x + \Delta \mathds{B}_\infty)\), then \(P(w) = \{\mathop{\textup{proj}}(\mathop{\textup{proj}}(w \mid x + \Delta \mathds{B}_\infty) \mid A_{\mathop{\textup{supp}}(x)})\}\). \end{lemma} \begin{proof} The projection is unique because \(A_S \cap (x + \Delta \mathds{B}_\infty)\) is nonempty, closed and convex. If \(y := \mathop{\textup{proj}}(w \mid x + \Delta \mathds{B}_\infty)\), observe that \(z := \mathop{\textup{proj}}(y \mid A_S) \in A_S \cap (x + \Delta \mathds{B}_\infty)\) by \Cref{lem:projAS}. In order to show that \(z \in \mathop{\textup{proj}}(w \mid A_S \cap (x + \Delta \mathds{B}_\infty))\), pick any other \(\bar{z} \in A_S \cap (x + \Delta \mathds{B}_\infty)\). By construction, \(\bar{z} = (\bar{y}_S, 0)\) for some \(\bar{y}\). Among the infinitely many possible \(\bar{y}\), we may choose the one such that \(\bar{y}_{S^c} = y_{S^c}\), which lies in \(x + \Delta \mathds{B}_\infty\) because its components over \(S\) are those of \(\bar{z}\) and its components over \(S^c\) are those of \(y\). Then, \[ \|w - y\|_2^2 = \|w_S - y_S\|_2^2 + \|w_{S^c} - y_{S^c}\|_2^2 = \|w_S - z_S\|_2^2 + \|w_{S^c} - y_{S^c}\|_2^2, \] and \[ \|w - \bar{y}\|_2^2 = \|w_S - \bar{y}_S\|_2^2 + \|w_{S^c} - \bar{y}_{S^c}\|_2^2 = \|w_S - \bar{z}_S\|_2^2 + \|w_{S^c} - y_{S^c}\|_2^2. \] By definition of \(y\), \(\|w - y\|_2 \leq \|w - \bar{y}\|_2\) and the above therefore implies \(\|w_S - z_S\|_2^2 \leq \|w_S - \bar{z}_S\|_2^2\). Because \(z_{S^c} = \bar{z}_{S^c} = 0\), we may add \(\|w_{S^c}\|_2^2\) to both sides of the previous inequality to obtain \(\|w - z\|_2 \leq \|w - \bar{z}\|_2\). Finally, because the projection into the box is componentwise, \(z_S = \mathop{\textup{proj}}(w_S \mid x_S + \Delta \mathds{B}_\infty)\) and \(z_{S^c} = 0\), and projecting \(\mathop{\textup{proj}}(w \mid A_S) = (w_S, 0)\) into \(x + \Delta \mathds{B}_\infty\) yields the same vector, as the components over \(S^c\) remain at zero because \(|x_i| \leq \Delta\) for \(i \in S^c\). \end{proof} \Cref{lem:suppx} provides an easily computable criterion, when \(\|x\|_0 = k\), to determine that \(C = A_{\mathop{\textup{supp}}(x)} \cap (x + \Delta \mathds{B}_\infty)\), and, thanks to \Cref{lem:proj-suppx}, we find an element of \(P(w)\) by setting all components of \(\mathop{\textup{proj}}(w \mid x + \Delta \mathds{B}_\infty)\) that are not in \(\mathop{\textup{supp}}(x)\) to zero. Such a situation is represented in the rightmost plot of \Cref{fig:1B0}. By \Cref{lem:suppx}, if \(|x_i| \leq \Delta\) for some \(i \in \mathop{\textup{supp}}(x)\), then \(x + \Delta \mathds{B}_\infty\) intersects pieces of \(k \mathds{B}_0\) other than \(A_{\mathop{\textup{supp}}(x)}\). We now determine which pieces, and their number. Let \[ s(x) := \{ i \in \mathop{\textup{supp}}(x) \mid |x_i| \leq \Delta\} \quad \text{and} \quad \ell(x) := \{ i \in \mathop{\textup{supp}}(x) \mid |x_i| > \Delta \} \] be the index sets of the \emph{small} and \emph{large} nonzero components of \(x\). In the special case where \(s(x) = \mathop{\textup{supp}}(x)\), i.e., \emph{all} nonzero components of \(x\) are small, \(x + \Delta \mathds{B}_\infty\) intersects \emph{all} pieces of \(k \mathds{B}_0\) because \(0 \in C\). Unfortunately, there are \[ {n \choose k} = \frac{n!}{k! \, (n - k)!} \] of them.
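In Julia, the index sets \(s(x)\) and \(\ell(x)\) are immediate to compute; the following is a hypothetical sketch:
\begin{jllisting}
# Split supp(x) into small and large nonzero components.
function small_large(x, Δ)
    supp = findall(!iszero, x)
    s = filter(i -> abs(x[i]) <= Δ, supp)   # s(x)
    ℓ = filter(i -> abs(x[i]) >  Δ, supp)   # ℓ(x)
    return s, ℓ
end
\end{jllisting}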
As it turns out, it is possible to compute \(p(w) \in P(w)\) for any \(w \in \mathds{R}^n\) in \(O(n \log(n))\) operations. In view of \Cref{lem:proj1proj2}, we assume that \(w \not \in x + \Delta \mathds{B}_\infty\). We may decompose~\eqref{eq:proj} as suggested in \citep{pmlr-v28-kyrillidis13} and observe that \(y_\star \in \mathop{\textup{proj}}(w \mid C)\) if and only if \(S_\star\) and \(y_\star\) are in \begin{equation}% \label{eq:proj-split} \argmin{\substack{S \subseteq \{1, \ldots, n\}\\ |S| = k}} \ \argmin{y \in A_S \cap (x + \Delta \mathds{B}_\infty)} \ \|w - y\|_2^2. \end{equation} Note that \(A_S \cap (x + \Delta \mathds{B}_\infty) \neq \varnothing\) if and only if \(|x_i| \leq \Delta\) for all \(i \in S^c\); we restrict the outer minimization to such \(S\) throughout. In the case of \(\mathds{B}_\infty\), we know that \(y \in A_S \cap (x + \Delta \mathds{B}_\infty)\) if and only if \(y \in x + \Delta \mathds{B}_\infty\) and \(y_{S^c} = 0\), i.e., if and only if \(y_S \in x_S + \Delta \mathds{B}_\infty\) and \(y_{S^c} = 0\). Thus, we may rewrite~\eqref{eq:proj-split} as \[ \argmin{\substack{S \subseteq \{1, \ldots, n\}\\ |S| = k}} \ \argmin{\substack{y \in x + \Delta \mathds{B}_\infty\\ y_{S^c} = 0}} \ \|w_S - y_S\|_2^2 + \|w_{S^c}\|^2 = \argmin{\substack{S \subseteq \{1, \ldots, n\}\\ |S| = k}} \ \argmin{\substack{y_S \in x_S + \Delta \mathds{B}_\infty\\ y_{S^c} = 0}} \ \|w_S - y_S\|_2^2 - \|w_S\|^2. \] For fixed \(S\), the unique solution of the inner problem is \(y = y(S)\) such that \(y_S = \mathop{\textup{proj}}(w_S \mid x_S + \Delta \mathds{B}_\infty)\) and \(y_{S^c} = 0\). Thus, the problem reduces to finding the optimal piece, determined by \begin{equation}% \label{eq:proj-S} S_\star \in \argmax{\substack{S \subseteq \{1, \ldots, n\}\\ |S| = k}} \ \|w_S\|^2 - \|w_S - y_S\|^2. \end{equation} Although~\eqref{eq:proj-S} appears to require examining all pieces of \(k \mathds{B}_0\), it may be solved efficiently by noting that \[ \|w_S\|^2 - \|w_S - y_S\|^2 = e^T z, \quad e = (1, 1, \ldots, 1), \quad z_i = w_i^2 - {(w_i - y_i)}^2, \ i \in S, \] i.e., the objective is the sum of the components of \(z\) with indices in \(S\). Without any further restriction on \(S\), one possibility is to compute \(y = \mathop{\textup{proj}}(w \mid x + \Delta \mathds{B}_\infty)\), \(z_i\) for all \(i = 1, \ldots, n\) and retain the \(k\) largest entries, as those will yield the largest sum. Applying the procedure described in \Cref{alg:projB0Binf} with \(L = \varnothing\) corresponds to the steps just outlined. By \(\pi^{-1}(1)\), we mean the element of \(F\) that is permuted to first position in the ordering. The main cost is the computation of \(\pi\), which can be obtained in \(O(n \log(n))\) operations. \begin{algorithm}[ht] \caption{% \label{alg:projB0Binf} Compute the projection of \(w\) into \(C := k \mathds{B}_0 \cap (x + \Delta \mathds{B}_\infty)\).
} \begin{algorithmic}[1] \Require \(w \in \mathds{R}^n\), \(w \not \in x + \Delta \mathds{B}_\infty\), \(L \subseteq \{1, \ldots, n\}\), \(|L| \leq k\) \Comment{\(\mathop{\textup{supp}}(\mathop{\textup{proj}}(w \mid C))\) must contain \(L\)} \State compute \(y := \mathop{\textup{proj}}(w \mid x + \Delta \mathds{B}_\infty)\) \If{\(|L| = k\)} \Return \(L\) and \(\mathop{\textup{proj}}(y \mid A_L)\) \Comment{\Cref{lem:suppx,lem:proj-suppx}} \EndIf \State set \(F := L^c\) and form \(w_F^2\), \(w_F - y_F\), \({(w_F - y_F)}^2\), and \(z := w_F^2 - {(w_F - y_F)}^2\) \Comment{componentwise} \State compute a permutation \(\pi\) that sorts the components of \(z\) in decreasing order \State set \(S := L \cup \{\pi^{-1}(1), \ldots, \pi^{-1}(k - |L|)\}\) \Comment{\(L\) and the indices of the \(k - |L|\) largest elements of \(z\)} \State set \(y_{S^c} = 0\) \State \Return \(S\) and \(y\). \end{algorithmic} \end{algorithm} Consider now the case where \(\ell(x) \neq \varnothing\). If \(i \in \ell(x)\), \(x + \Delta \mathds{B}_\infty\) cannot intersect any \(A_S\) such that \(i \not \in S\). Indeed, any \(y \in \mathds{R}^n\) such that \(y_i = 0\) satisfies \(\|y - x\|_\infty \geq |y_i - x_i| = |x_i| > \Delta\). If \(s(x) = \varnothing\) and \(\|x\|_0 = k\), we are in the context of \Cref{lem:suppx,lem:proj-suppx}. Thus, we may focus on the case where \(\ell(x) \neq \varnothing\) and \(|\ell(x)| < k\). Necessarily, \(|s(x)| + |\ell(x)| = \|x\|_0 \leq k\). Constraining \(S \subseteq \{1, \ldots, n\}\) to contain \(\ell(x)\) leaves \(k - |\ell(x)|\) indices to be chosen among the remaining \(n - |\ell(x)|\), for a total of \[ {n - |\ell(x)| \choose k - |\ell(x)|} = \frac{(n - |\ell(x)|)!}{(k - |\ell(x)|)! \, (n - k)!} \] possibilities. Again, it appears as though the complexity of identifying \(S\) is exponential in \(n\) in the worst case. However, the only difference with~\eqref{eq:proj-S} is that \(S_\star\) is now constrained to contain \(\ell(x)\). It follows that we may apply \Cref{alg:projB0Binf} with \(L = \ell(x)\). With \(m := n - |\ell(x)|\), the procedure has \(O(m \log(m)) = O(n \log(n))\) complexity. \section{Implementation and Numerical Results} We implemented \Cref{alg:projB0Binf} in the Julia language \citep{bezanson-edelman-karpinski-shah-2017} version \(1.7\) as part of the ShiftedProximalOperators package of \citet{baraldi-orban-shifted-proximal-operators-2022}, whose main objective, as the name implies, is to collect proximal operators of nonsmooth terms with one or two shifts, i.e., \(h(x_k + s_j + t)\), with and without a trust-region constraint, where \(x_k\) and \(s_j\) are fixed iterates set during an outer and an inner iteration. ShiftedProximalOperators is used inside the RegularizedOptimization package of \citet{baraldi-orban-regularized-optimization-2022}, which implements, among others, the trust-region methods for nonsmooth regularized problems of \citet{aravkin-baraldi-orban-2021,aravkin-baraldi-orban-2021b}. We employ \Cref{alg:projB0Binf} to solve~\eqref{eq:proj} inside two trust-region methods for nonsmooth regularized problems of the form~\eqref{eq:minf+h}. The trust region is defined in the \(\ell_\infty\)-norm in both, and provides the box \(x + \Delta \mathds{B}_\infty\), where \(x\) is the current iterate and \(\Delta\) the trust-region radius.
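Before turning to the experiments, here is a minimal, self-contained Julia sketch of \Cref{alg:projB0Binf}. The function name is hypothetical and the code is a simplified rendering, not the ShiftedProximalOperators implementation itself:
\begin{jllisting}
# Sketch of Algorithm 1: project w into k𝔹₀ ∩ (x + Δ𝔹∞), with L the set of
# indices that the support of the projection must contain (e.g., L = ℓ(x)).
function project_sparse_box(w::AbstractVector, x::AbstractVector, Δ::Real,
                            k::Integer, L::AbstractVector{<:Integer}=Int[])
    n = length(w)
    y = clamp.(w, x .- Δ, x .+ Δ)          # y := proj(w | x + Δ𝔹∞)
    if length(L) == k                       # case |L| = k: return proj(y | A_L)
        y[setdiff(1:n, L)] .= 0
        return L, y
    end
    F = setdiff(1:n, L)                     # free indices
    z = w[F] .^ 2 .- (w[F] .- y[F]) .^ 2    # gain of including each i ∈ F in S
    p = sortperm(z, rev=true)               # O(n log n): sort z in decreasing order
    S = vcat(L, F[p[1:k - length(L)]])      # L plus the k - |L| best free indices
    y[setdiff(1:n, S)] .= 0
    return S, y
end
\end{jllisting}
With \(L = \ell(x)\), a call such as \texttt{project\_sparse\_box(w, x, Δ, k, ℓ)} mirrors the steps of \Cref{alg:projB0Binf} line by line.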
At iteration \(j\) of the method of \citet{aravkin-baraldi-orban-2021}, a step \(s_j\) is computed as an approximate solution of the model \[ \minimize{s} \ q(s) + \psi(s; x_j) + \chi(s \mid \Delta \mathds{B}_\infty), \qquad q(s) := \nabla f(x_j)^T s + \tfrac{1}{2} s^T B_j s, \] where \(B_j = B_j^T \in \mathds{R}^{n \times n}\) is a limited-memory BFGS (LBFGS) or SR1 (LSR1) approximation of the Hessian of \(f\), and \(\psi(s; x_j) \approx h(x_j + s)\). Below, we choose \(\psi(s; x_j) = h(x_j + s) = \chi(x_j + s \mid k \mathds{B}_0)\) for an appropriate value of \(k \in \mathds{N}\). The step \(s_j\) is computed using an adaptive stepsize variant of the proximal gradient algorithm named R2 \citep{aravkin-baraldi-orban-2021} that generates inner iterates \(s_{j,l}\), starting with \(s_{j,0} := s_j\). At iteration \(l\) of R2, we compute a step \(t_l\) that solves \[ \minimize{t} \ \nabla q(s_{j,l-1})^T t + \tfrac{1}{2} \sigma_l \|t\|_2^2 + \psi(s_{j,l-1} + t; x_j) + \chi(s_{j,l-1} + t \mid \Delta \mathds{B}_\infty), \] where \(\sigma_l > 0\). If we complete the square and perform the change of variables \(y = x_j + s_{j,l-1} + t\), we obtain a problem of the form~\eqref{eq:proj}. We refer to the method outlined above as TR. The second trust-region method is a variant specialized to the case \(f(x) = \tfrac{1}{2} \|F(x)\|_2^2\), where \(F: \mathds{R}^n \to \mathds{R}^m\), inspired by the methods of \citet{levenberg-1944} and \citet{marquardt-1963}, in which we redefine \(q(s) := \tfrac{1}{2} \|J(x) s + F(x)\|_2^2\), with \(J\) the Jacobian of \(F\). We refer to the latter as LMTR. In both methods, the decrease in the model achieved by \(s_j\) is denoted \(\xi\). Of particular interest is the decrease achieved by \(s_{j,1}\)---the first step in the inner iterations---which is denoted \(\xi_1\). It is possible to show that \(\sqrt{\xi_1}\) may be used as a criticality measure for~\eqref{eq:minf+h}. Each method stops as soon as \(\sqrt{\xi_1} \leq \epsilon + \epsilon \sqrt{\xi_{1,0}}\) where \(\xi_{1,0}\) is the \(\xi_1\) observed at the first outer iteration and \(\epsilon = 10^{-6}\). We illustrate the behavior of the trust-region methods on the LASSO / basis pursuit denoise problem, in which we fit a linear model to noisy observations \(Ax \approx b\), where the rows of \(A \in \mathds{R}^{m \times n}\) are orthonormal. We set \(b = A x_\star + \varepsilon\), where \(\|x_\star\|_0 = k\) with its nonzero components set to \(\pm 1\) randomly and \(\varepsilon \sim \mathcal{N}(0, 0.01)\). In our experiment, we set \(m = 200\), \(n = 512\), and \(k = 10\). We formulate the problem as \begin{equation}% \label{eq:bpdn} \minimize{x \in \mathds{R}^n} \ \tfrac{1}{2} \|Ax - b\|_2^2 + \chi(x \mid k \mathds{B}_0). \end{equation} We report results in the form of the solver output in \Cref{lst:BPDN-TR-LSR1,lst:BPDN-TR-LBFGS,lst:BPDN-LMTR}, where \emph{outer} is the outer iteration counter \(j\), \emph{inner} is the number of inner R2 iterations at each outer iteration, \(f(x)\) and \(h(x)\) are the value of the smooth and nonsmooth part of the objective, respectively, \(\sqrt{\xi_1}\) is our criticality measure, \(\sqrt{\xi}\) is the square root of the decrease achieved by the step \(s_j\), \(\rho\) is the ratio of actual versus predicted reduction used to accept or reject the step, \(\Delta\) is the trust-region radius, \(\|x\|\) and \(\|s\|\) are the \(\ell_\infty\)-norm of the iterate and step, respectively, \(\|B_j\|\) is the spectral norm of \(B_j\), and \(1 / \nu\) is the regularization parameter \(\sigma_l\) in the R2 model.
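For completeness, the completion of squares invoked above is routine; abbreviating \(s := s_{j,l-1}\),
\[
\nabla q(s)^T t + \tfrac{1}{2} \sigma_l \|t\|_2^2 = \tfrac{1}{2} \sigma_l \|t + \sigma_l^{-1} \nabla q(s)\|_2^2 - \tfrac{1}{2} \sigma_l^{-1} \|\nabla q(s)\|_2^2,
\]
so that, with \(y := x_j + s + t\) and \(w := x_j + s - \sigma_l^{-1} \nabla q(s)\), and up to the positive factor \(\sigma_l\) and an additive constant, the R2 subproblem amounts to
\[
\minimize{y \in \mathds{R}^n} \ \tfrac{1}{2} \|y - w\|_2^2 + \chi(y \mid k \mathds{B}_0) + \chi(y \mid x_j + \Delta \mathds{B}_\infty),
\]
which is~\eqref{eq:proj}.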
In \Cref{lst:BPDN-TR-LSR1}, \(B_j\) is a limited-memory SR1 operator with memory \(5\). In \Cref{lst:BPDN-TR-LBFGS}, \(B_j\) is a limited-memory BFGS operator with memory \(5\). All methods use the initial guess \(x_0 = 0\). \begin{jllisting}[caption=\label{lst:BPDN-TR-LSR1} TR iterations with L-SR1 on~\eqref{eq:bpdn}.] outer inner f(x) h(x) √ξ1 √ξ ρ Δ ‖x‖ ‖s‖ ‖Bⱼ‖ 1 2 1.9e+00 0.0e+00 8.9e-01 8.9e-01 1.5e+00 1.0e+00 0.0e+00 4.7e-01 1.0e+00 2 9 7.4e-01 0.0e+00 4.8e-01 7.2e-01 1.2e+00 1.4e+00 4.7e-01 5.4e-01 1.0e+00 3 12 1.0e-01 0.0e+00 1.8e-01 2.3e-01 1.4e+00 1.6e+00 1.0e+00 3.3e-01 1.0e+00 4 17 3.0e-02 0.0e+00 8.7e-02 1.4e-01 1.0e+00 1.6e+00 1.1e+00 2.9e-01 1.0e+00 5 22 1.0e-02 0.0e+00 1.6e-02 2.6e-02 1.0e+00 1.6e+00 1.0e+00 3.3e-02 1.0e+00 6 18 9.5e-03 0.0e+00 2.7e-03 4.4e-03 1.0e+00 1.6e+00 1.0e+00 6.0e-03 1.0e+00 7 8 9.4e-03 0.0e+00 3.6e-04 3.7e-04 1.5e+00 1.6e+00 1.0e+00 2.6e-04 1.0e+00 8 10 9.4e-03 0.0e+00 2.0e-04 3.0e-04 1.0e+00 1.6e+00 1.0e+00 3.5e-04 1.0e+00 9 6 9.4e-03 0.0e+00 2.0e-05 3.3e-05 1.0e+00 1.6e+00 1.0e+00 4.5e-05 1.0e+00 10 1 9.4e-03 0.0e+00 2.1e-06 2.4e-06 1.2e+00 1.6e+00 1.0e+00 2.7e-06 1.0e+00 TR: terminating with ξ1 = 1.3038641262246793e-6 TR relative error norm(TR_out.solution - sol) / norm(sol) = 0.014710272483962346 \end{jllisting} \begin{jllisting}[caption=\label{lst:BPDN-TR-LBFGS} TR iterations with L-BFGS on~\eqref{eq:bpdn}.] outer inner f(x) h(x) √ξ1 √ξ ρ Δ ‖x‖ ‖s‖ ‖Bⱼ‖ 1 2 1.9e+00 0.0e+00 8.9e-01 8.9e-01 1.5e+00 1.0e+00 0.0e+00 4.7e-01 1.0e+00 2 18 7.4e-01 0.0e+00 3.6e-01 7.1e-01 1.0e+00 1.4e+00 4.7e-01 5.4e-01 1.7e+00 3 23 2.1e-01 0.0e+00 7.1e-02 1.0e-01 1.6e+00 1.6e+00 1.0e+00 9.0e-02 1.8e+00 4 14 2.0e-01 0.0e+00 4.3e-02 2.7e-01 1.6e+00 1.6e+00 1.0e+00 3.6e-01 2.1e+00 5 12 8.6e-02 0.0e+00 1.1e-01 2.4e-01 1.2e+00 1.6e+00 1.1e+00 5.0e-01 2.3e+00 6 18 1.6e-02 0.0e+00 2.9e-02 6.6e-02 1.2e+00 1.6e+00 1.1e+00 1.3e-01 2.5e+00 7 23 1.1e-02 0.0e+00 1.4e-02 2.7e-02 1.4e+00 1.6e+00 1.1e+00 3.2e-02 2.6e+00 8 20 9.8e-03 0.0e+00 7.4e-03 1.7e-02 1.1e+00 1.6e+00 1.0e+00 2.4e-02 2.6e+00 9 14 9.5e-03 0.0e+00 1.7e-03 3.7e-03 1.2e+00 1.6e+00 1.0e+00 5.3e-03 2.7e+00 10 14 9.4e-03 0.0e+00 7.7e-04 1.6e-03 1.2e+00 1.6e+00 1.0e+00 1.6e-03 2.5e+00 11 15 9.4e-03 0.0e+00 3.0e-04 6.1e-04 1.2e+00 1.6e+00 1.0e+00 6.1e-04 2.4e+00 12 9 9.4e-03 0.0e+00 8.9e-05 2.0e-04 1.2e+00 1.6e+00 1.0e+00 3.1e-04 2.5e+00 13 8 9.4e-03 0.0e+00 3.1e-05 6.2e-05 1.3e+00 1.6e+00 1.0e+00 7.6e-05 2.5e+00 14 8 9.4e-03 0.0e+00 1.4e-05 3.0e-05 1.2e+00 1.6e+00 1.0e+00 3.0e-05 2.5e+00 15 4 9.4e-03 0.0e+00 4.3e-06 8.6e-06 1.1e+00 1.6e+00 1.0e+00 5.5e-06 2.6e+00 16 3 9.4e-03 0.0e+00 2.5e-06 5.8e-06 1.0e+00 1.6e+00 1.0e+00 4.3e-06 2.6e+00 TR: terminating with ξ1 = 1.0999297328606739e-6 TR relative error norm(TR_out.solution - sol) / norm(sol) = 0.014709629662551134 \end{jllisting} \begin{minipage}{\linewidth} \begin{jllisting}[caption=\label{lst:BPDN-LMTR} LMTR iterations on~\eqref{eq:bpdn}.] outer inner f(x) h(x) √ξ1 √ξ ρ Δ ‖x‖ ‖s‖ 1/ν 1 9 1.9e+00 0.0e+00 8.9e-01 1.4e+00 1.0e+00 1.0e+00 0.0e+00 1.0e+00 1.0e+00 2 11 1.1e-02 0.0e+00 2.3e-02 4.1e-02 1.0e+00 3.0e+00 1.0e+00 7.0e-02 1.0e+00 3 11 9.4e-03 0.0e+00 3.8e-04 6.8e-04 1.0e+00 3.0e+00 1.0e+00 1.2e-03 1.0e+00 4 4 9.4e-03 0.0e+00 7.0e-06 1.2e-05 1.0e+00 3.0e+00 1.0e+00 1.8e-05 1.0e+00 LMTR: terminating with ξ1 = 2.797637121965124e-12 LMTR relative error norm(LMTR_out.solution - sol) / norm(sol) = 0.014710437655962767 \end{jllisting} \end{minipage} \Cref{fig:bpdn} shows the exact solution \(x_\star\), and the objective history of each solver. 
All three solvers find a solution where the amplitudes of the peaks are within \(10^{-2}\) of the correct values. It is not surprising that LMTR, which exploits the least-squares structure of~\eqref{eq:bpdn}, performs better than TR; its model is exact at each iteration, which is reflected in the fact that \(\rho = 1\) at each iteration in \Cref{lst:BPDN-LMTR}. TR also performs well, although, surprisingly, the potentially indefinite L-SR1 Hessian approximations of the positive definite Hessian \(A^T A\) yield fewer iterations than the positive-definite L-BFGS approximation. From a computational cost point of view, each outer TR iteration costs one evaluation of \(f\) and, if the step is accepted, one evaluation of \(\nabla f\). In \Cref{lst:BPDN-TR-LSR1,lst:BPDN-TR-LBFGS}, every step is accepted. Each inner R2 iteration in TR costs a product between the limited-memory quasi-Newton approximation and a vector, and an execution of \Cref{alg:projB0Binf}. Each outer LMTR iteration costs one evaluation of \(F(x)\). Each inner R2 iteration in LMTR costs a Jacobian-vector product, a transposed-Jacobian-vector product, and an execution of \Cref{alg:projB0Binf}. \begin{figure}[ht] \includetikzgraphics[width=.32\linewidth]{bpdn-solution} \hfill \includetikzgraphics[width=.32\linewidth]{bpdn-errors} \hfill \includetikzgraphics[width=.32\linewidth]{bpdn-decreases} \caption{% \label{fig:bpdn} Exact solution of~\eqref{eq:bpdn} (left), absolute errors (center), and objective decrease history as a function of the number of \(\nabla f\) evaluations (right). } \end{figure} In each method, each step is a sum of R2 steps, each of which is a projection of the form~\eqref{eq:proj}. \Cref{fig:lmtr-steps} shows the first three LMTR steps. At iteration~\(1\) (leftmost plot), the trust-region constraint is active, i.e., the step norm \(\|s\|_\infty = \Delta\), which means that at least one of the projections computed during the R2 iterations resulted in a point in \(k \mathds{B}_0\) at the boundary of \(x + \Delta \mathds{B}_\infty\). At subsequent LMTR iterations, \(\|s\|_\infty < \Delta\), which is expected in trust-region methods as convergence occurs, and means that at least the final projection computed during the R2 iterations resulted in a point lying strictly inside \(x + \Delta \mathds{B}_\infty\). \begin{figure}[ht] \begin{center} \includetikzgraphics[width=.32\linewidth]{bpdn-steps-LMTR-B0-1} \hfill \includetikzgraphics[width=.32\linewidth]{bpdn-steps-LMTR-B0-2} \hfill \includetikzgraphics[width=.32\linewidth]{bpdn-steps-LMTR-B0-3} \end{center} \caption{% \label{fig:lmtr-steps} First three steps generated during the iterations of LMTR applied to~\eqref{eq:bpdn}. At iteration~\(1\), the trust-region constraint is active (left). It is inactive at subsequent iterations. } \end{figure} \section{Closing Remarks} Although \(C\) is a nonconvex set, there exists an efficient projection into it, and the latter can be used to design proximal methods for nonsmooth regularized problems \citep{aravkin-baraldi-orban-2021,aravkin-baraldi-orban-2021b}. \Cref{alg:projB0Binf} makes it possible to solve sparsity-constrained problems by way of trust-region methods. It also makes it conceivable to tackle the more general problem~\eqref{eq:min-sparse-box} by way of one of the algorithms proposed by \citet{beck-eldar-2013}. Possible extensions of this work include balls defined by other norms, such as other \(\ell_p\) norms or elliptical norms.
However, it is not clear that \Cref{alg:projB0Binf} generalizes in a straightforward way. Indeed, the key is that the projection into \(x + \Delta \mathds{B}_\infty\) is defined componentwise. It is not difficult to sketch an example where the same procedure using the Euclidean norm yields an erroneous projection. Another possible generalization is to consider \(x \not \in k\mathds{B}_0\), as might occur in an infeasible method. The exploration of such generalizations is the subject of ongoing research. \subsection*{Acknowledgements} The author wishes to thank Aleksandr Aravkin and Robert Baraldi for fruitful discussions that made this research possible. \small \bibliographystyle{abbrvnat}
\section{Introduction}\label{sec:intro} Consider ASEP started in step initial data with one second class particle at the origin (see Figure \ref{fig:Traj}). Specifically, at time $t = 0$, each site $j \leq -1$ is occupied by a first class particle, the site $j = 0$ is occupied by a second class particle, and all sites $j > 0$ are initially unoccupied and (for the definition of the dynamics which follows) will be considered infinite class. First and second class particles have left jump rate $L$ and right jump rate $R$ where we assume that $R>L\geq 0$ and $R-L=1$. Jumps are subject to the rule that when a class $k$ particle tries to jump into a site with a class $k'$ particle, the particles switch places if and only if $k<k'$ (otherwise, they stay put). We denote this process by $\boldsymbol{\mathcal{A}}_t=(\boldsymbol{\eta}_t,\boldsymbol{X}_t)$ where $\boldsymbol{\eta}_t\in \{0,1\}^{\mathbb{Z}}$ are the occupation variables for the first class particles and $\boldsymbol{X}_t$ is the location of the second class particle (we require that $\boldsymbol{\eta}_t(\boldsymbol{X}_t)=0$ so there is no first class particle at the site of the second class particle). Initially, $\boldsymbol{\eta}_0(j)=\mathbf{1}_{j<0}$ and $\boldsymbol{X}_0=0$. Our main result, which is the positive resolution of \cite[Conjecture 1.9]{SP}, shows that for large $t$, the trajectory of $\boldsymbol{X}_t$ is almost surely asymptotically linear with slope uniform on $[-1, 1]$. In other words, the second class particle chooses a random direction in the rarefaction fan uniformly and then proceeds asymptotically in that direction (see Figure \ref{fig:Traj}). \begin{figure} \begin{center} \includegraphics[width=4in]{Traj.eps} \end{center} \caption{Illustration of \Cref{xtlimitU}.} \label{fig:Traj} \end{figure} \begin{thm}[Conjecture 1.9 of \cite{SP}] \label{xtlimitU} The limit velocity $\boldsymbol{U}:=\lim\limits_{t \rightarrow \infty}\boldsymbol{X}_t/t$ of the second class particle $\boldsymbol{X}_t$ in $\boldsymbol{\mathcal{A}}_t$ exists almost surely and its law is uniform on $[-1, 1]$. \end{thm} The distributional limit of $\boldsymbol{X}_t/t$ (which we recall below) was known to be uniform for $L=0$ from \cite{SCP}, see equation (1.5). That was generalized to all $L$ in \cite[Theorem 2.1]{FGM09}. A different proof of the distributional limit was given in \cite[Theorem 1.1]{GSZ}, based on color-position symmetries for multispecies ASEP discovered in \cite{BorWhe} and \cite{BorBuf}. \begin{prop}[\cite{SCP,FGM09,GSZ}] \label{xt1} For any $\rho \in [0, 1]$, \begin{equation*} \lim_{t \rightarrow \infty} \mathbb{P} \big[ \boldsymbol{X}_t/t \ge 1 - 2\rho \big] = \rho. \end{equation*} \end{prop} Thus, the proof of \Cref{xtlimitU} reduces to the following almost sure limit for $\boldsymbol{X}_t/t$. \begin{thm} \label{xtlimit} The limit $\boldsymbol{U}:=\lim\limits_{t \rightarrow \infty}\boldsymbol{X}_t/t$ exists almost surely. \end{thm} \Cref{xtlimitU} implies well-definedness of the ASEP speed process, confirming \cite[Conjecture 1.10]{SP}. Consider multispecies ASEP where initially at $n\in \mathbb{Z}$, we start with a class $n$ particle. Let the particles evolve as indicated above: each particle independently attempts to jump left and right with rates $L$ and $R$; those attempted jumps are achieved only if the destination is occupied by a higher class (hence lower priority) particle.
For each $n\in \mathbb{Z}$, the class $n$ particle sees an initial condition which is equivalent to a translation of the initial condition considered in \Cref{xtlimitU}. Thus \Cref{xtlimitU} applies for each particle, namely if we let $\boldsymbol{X}_t(n)$ denote the location of the particle that started in position $n\in \mathbb{Z}$ at time $t\geq 0$, then $\big(\boldsymbol{X}_t(n)-n\big)/t$ converges almost surely to a random variable $U(n)$ distributed uniformly on $[-1,1]$. Intersecting these (countably many) almost sure events implies that this holds simultaneously for all particles. Let $\mu^{{\rm ASEP}}$ denote the joint law of all $\big(U(n)\big)_{n\in \mathbb{Z}}$. \begin{cor}[Conjecture 1.10 of \cite{SP}] The ASEP speed process measure $\mu^{{\rm ASEP}}$ is well defined and translation invariant with each $U(n)$ uniform on $[-1,1]$. \end{cor} Having constructed this measure, it is natural to investigate properties of it such as the joint distributions of various $U(n)$. We will not pursue this here, but we mention that \cite{SDMA} establishes various results in this direction (for instance, related to the properties of ``convoys'' of second class particles that move at the same limiting velocity) and \cite{GSZ} probes the distribution of $\min\big(U(1),\ldots, U(n)\big)$ as a function of $n$.\smallskip In the remainder of this introduction we will discuss how our results fit with respect to previous work, and then describe the heuristics and proof ideas. The proof that we provide combines probabilistic ideas (i.e., couplings) with integrable tools (i.e., effective hydrodynamic bounds). The interplay of these two techniques allows us to prove a result that we do not know how to attain with either separately. Second class particles have been extensively studied with varying perspectives and purposes. When such a particle is started at a shock, it tracks out a microscopic version of the evolution of the shock \cite{Fer92,MSL}; when it is started in stationary initial data, it follows the characteristic velocity \cite{FF94, Rez91} and displays super-diffusive scaling around it, related to the KPZ two-point distribution \cite{PS02,FS06, CTPS, Agg18,QV07,BS10}. For step (sometimes called anti-shock) initial data, there is an entire rarefaction fan in the hydrodynamic equation and thus a continuum of characteristic velocities \cite{THMC}. The behavior of a second class particle started in such initial data (as we consider here) was first taken up in \cite{SCP} in the case $L=0$. As noted above (see \Cref{xt1}), they showed the asymptotic uniformity of the location of the second class particle in the rarefaction fan. They also proved that for any $0<s<t$ fixed, $\lim_{\varepsilon\to 0} \big( \frac{\boldsymbol{X}_{s/\varepsilon}}{s/\varepsilon}-\frac{\boldsymbol{X}_{t/\varepsilon}}{t/\varepsilon}\big) =0$ in probability. This convergence was strengthened a decade later in \cite{MG05}, which proved the almost sure limit for the velocity of a second class particle (i.e., the $L=0$ case of \Cref{xtlimitU}); alternative proofs for the same result appeared in \cite{FP05,PTCI}. The starting point for \cite{MG05} is the coupling between $L=0$ TASEP and exponential last passage percolation (LPP). The almost sure limit relied on Sepp\"al\"ainen's microscopic variational formula for TASEP \cite{Sep99} along with some LPP concentration results. This relation to LPP is valuable and relates the second class particle to the competition interface \cite{FP05}.
TASEP gaps relate to a totally asymmetric zero range process, leading to an understanding of second class particles for that model \cite{ABGM19,Gon14}. When $L>0$, the LPP variational formula no longer holds. Thus, a new set of ideas is needed to establish \Cref{xtlimitU}. We will outline these below. The proof of \Cref{xtlimitU} is given in Section \ref{Linear}, relying on all of the results developed in this paper. \smallskip \noindent \emph{Understanding the results in terms of hydrodynamics.} The uniformity of $\boldsymbol{X}_t/t$ on $[-1,1]$ is a microscopic manifestation of an observation about the hydrodynamic limit of ASEP. Recall that the evolution of the density $\rho$ of particles on macroscopic time and space scales in ASEP is governed by the weak entropy solution to the inviscid Burgers equation $$\partial_t \rho(t,x) + \partial_x\big(\rho(t,x)(1-\rho(t,x))\big)=0.$$ In particular, as $\varepsilon\to 0$, the density field for the occupation process at time $t/\varepsilon$ in location $x/\varepsilon$ should converge in a weak sense to the solution of this PDE (provided the initial data converges likewise). If we start with step initial data $\rho(0,x)=\mathbf{1}_{x\leq 0}$ versus shifted step-initial data $\rho(0,x)=\mathbf{1}_{x\leq -\delta}$, the difference of the solutions at time $t$ is a function that is essentially uniform with value $\delta/(2t)$ between $-t$ and $t$. By the basic coupling of ASEP (see Section \ref{sec:couplings}), the shift in initial data can be interpreted as the addition of many second class particles to the left of the origin, and the behavior of the hydrodynamic limit suggests the uniform distribution of the velocity of those particles. The proof of the uniform distribution in \cite{SCP} uses the fact that ASEP reaches some form of local equilibrium. This means that if the local density is $\rho$, then the local distribution of particles should be given by Bernoulli product measure with parameter $\rho$. These measures are stationary for ASEP. Assuming this local equilibrium behavior, we can start to understand why the second class particle maintains its velocity. Based on the hydrodynamic theory for step initial data, if $\boldsymbol{X}_t/t= 1-2\rho$ for some $\rho\in (0,1)$ then the density around $\boldsymbol{X}_t$ will be roughly $\rho$ and assuming local equilibrium, the occupation variables for first class particles around $\boldsymbol{X}_t$ will be close to i.i.d. Bernoulli with parameter $\rho$. In this equilibrium situation, $\boldsymbol{X}_t$ jumps left at rate $R\rho$ if position $\boldsymbol{X}_t-1$ is occupied by a first class particle and at rate $L(1-\rho)$ if $\boldsymbol{X}_t-1$ has a hole; similarly $\boldsymbol{X}_t$ jumps right at rate $L\rho$ if position $\boldsymbol{X}_t+1$ is occupied by a first class particle and at rate $R(1-\rho)$ if $\boldsymbol{X}_t+1$ has a hole. Thus the expected instantaneous velocity of $\boldsymbol{X}_t$ is $$\big(L\rho+R(1-\rho)\big)-\big(R\rho+L(1-\rho)\big)=(R-L)(1-2\rho)=1-2\rho,$$ and so in expectation $\boldsymbol{X}_t$ continues to move along the characteristic velocity $1-2\rho$. This is not the same as showing an almost sure limiting velocity. For infinite i.i.d. Bernoulli $\rho$ initial data, \cite{Fer92} showed exactly the latter. \smallskip\noindent \emph{Proof sketch \Cref{xtlimit} when $L=0$ (TASEP).} Though we are interested in the $L>0$ case, it is useful to first focus on $L=0$. The proof we describe here is different from that of \cite{MG05} and does not rely on LPP. It also extends (using two additional ingredients) to $L>0$.
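Before sketching the argument, we record for convenience the standard entropy solution emanating from step initial data $\rho(0,x)=\mathbf{1}_{x\leq 0}$: it is the rarefaction fan $$\rho(t,x) = \begin{cases} 1 & \text{ if } x\leq -t,\\ \tfrac{1}{2}\big(1-\tfrac{x}{t}\big) & \text{ if } -t<x<t,\\ 0 & \text{ if } x\geq t,\end{cases}$$ so that the characteristic with velocity $1-2\rho$ carries density $\rho$ for each $\rho\in[0,1]$.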
We start by explaining an overly optimistic approach to the proof and then explain how it can be modified to produce an actual proof. \begin{figure} \begin{center} \includegraphics[width=5.5in]{charhydro.eps} \end{center} \caption{Left: The linear characteristic lines used to solve the inviscid Burgers equation from step initial data. At time $S$ the density is perturbed in the interval $(-\varepsilon S,0)$ to match that of the left endpoint of the interval. The subsequent characteristics show how this perturbation evolves in time via the inviscid Burgers equation. Right: The densities corresponding to the characteristics on the left. At time $S$ the profile (thin line) is augmented with the bold line to have density $(1+\varepsilon)/2$ on the interval $(-\varepsilon S,0)$. The time $2S$ profile is then shown (with dotted lines transcribing the time $S$ profile).} \label{fig:charhydro} \end{figure} For step initial data TASEP, at a large time $S$, we expect the density of particles will be approximated by the solution to the Burgers equation which linearly interpolates between density one to the left of $-S$ and density zero to the right of $S$ (see the rarefaction fan at the intermediate time in \Cref{fig:charhydro}). Assume for the moment that the occupation variables at time $S$ are independent Bernoulli with parameters given by this hydrodynamic profile, and also assume that $\boldsymbol{X}_S=0$ so it lies along a zero velocity characteristic. (If $\boldsymbol{X}_S$ were along another characteristic, we would need to work in a moving reference frame.) Under these assumptions, we can couple our time $S$ system to another TASEP where the Bernoulli parameter profile is augmented to the left of the origin (i.e., the location of $\boldsymbol{X}_S$) as in \Cref{fig:charhydro}. Under the basic coupling, this corresponds to adding extra second class particles to the left of $\boldsymbol{X}_S$ to create the augmented profile. Importantly, these additional second class particles remain to the left of $\boldsymbol{X}_t$ at all times $t>S$ (this is the step that fails when $L>0$). Using the above observation, we see that in order to lower-bound the motion of $\boldsymbol{X}_t$ for $t>S$, it suffices to control the locations of the extra second class particles. While it is hard to control individual particles, we know how to control lots of them by use of hydrodynamic limit theory. Consider adding in enough second class particles so as to make a macroscopic change in the density profile. For example, on the interval $(-\varepsilon S,0)$ we can change the density to equal $(1+\varepsilon)/2$, as depicted on the right of \Cref{fig:charhydro}. At time $2S$ (top of \Cref{fig:charhydro}) this perturbation will evolve so as to only perturb the density on the interval $(-2\varepsilon S,0)$. This suggests that with high probability, of the $O(S)$ added second class particles, all but $o(S)$ of them will be to the right of $-2\varepsilon S$ and hence $\boldsymbol{X}_{2S}$ will be to the right of $-2\varepsilon S$ as well. Since $\varepsilon$ was arbitrary, this suggests that $\boldsymbol{X}_t$ should maintain a velocity at least 0 (and by particle-hole symmetry, the opposite should follow too). There are a number of issues with the argument above. The perturbation should really be on a spatial interval of size $o(S)$. This is because the above argument permits the velocity to drop by $\varepsilon$ on the time increment $S$ to $2S$, and if we repeat on doubling time intervals ($2S$ to $4S$, etc) the net drop may compound to become unbounded.
This can be remedied by perturbing instead on an interval like $(-S^{1-\gamma},0)$ for some small $\gamma>0$. Assuming our hydrodynamic results extend to this scale, we should be able to bound the total drop in $\boldsymbol{X}_t$ at times of the form $S_n=2^{n}S$ for $n=0,1,\ldots$. However, at intermediate times $\boldsymbol{X}_t$ could wander in a manner that would prevent the velocity from having a limit. To remedy this, we instead consider a sequence of times that grows like $S_n= S e^{\sqrt {n}}$ (in fact, by choosing $S_{n+1} = S_n+ S_n/\log S_n$). By a Poisson bound (from the basic coupling) the intermediate wandering of $\boldsymbol{X}_t$ does not change the velocity much compared to the $S_n$ times. Besides these modifications, there is still the issue of justifying the simplistic assumptions we made based on hydrodynamic theory considerations. This is done by making use of \emph{effective} versions of hydrodynamic limit results that quantify with exponential decay how close the actual number of particles is to the hydrodynamic limit profile on spatial and fluctuation scales that are $o(S)$. For example, for step initial data if we look at the number of particles at time $S$ in an interval $[X,Y]$ with $-S<X<Y<S$, we expect that it will be approximately $S$ times the integral from $X/S$ to $Y/S$ of the hydrodynamic profile function $(1-z)/2$. An effective hydrodynamic concentration inequality would say that for some $\alpha\in (0,1)$ the probability that the deviation of this number of particles around what we expect it to be will exceed $s S^{\alpha}$ is bounded above by $c^{-1} e^{-c s}$ for some $c>0$. (The optimal $\alpha$ should be $1/3$ and the decay should actually be faster than $e^{-cs}$ for any $c>0$, though we do not need or pursue this.) We also make use of similar bounds for other types of initial data such as the perturbed one, though these can be deduced from bounds for the class of step-Bernoulli initial data via coupling arguments. We use the exponential decay in these bounds when taking union bounds to control the hydrodynamic comparison at each $S_n$. The step initial data effective hydrodynamic result is present in the literature. We quote \cite[Theorem 13.2]{LPDL} and \cite[Proposition 4.1 and Proposition 4.2]{ASFTLPM} (see \Cref{l0estimate} below) for this result. In fact, \cite{LPDL} essentially relies on \cite{CTPS} which uses Fredholm determinant asymptotics as well as Widom's trick to establish the lower and upper tail bounds, respectively. In general, for determinantal models like TASEP, one tail often follows directly from showing decay of the kernel of the Fredholm determinant while the other is typically more complicated to demonstrate and requires tools like Widom's trick or Riemann-Hilbert problems \cite{BDMMZ}. \smallskip\noindent \emph{Proving \Cref{xtlimit} when $L>0$ (ASEP).} It is easy to see (e.g. by considering a two-particle system) that the presence of additional second class particles to the left of $\boldsymbol{X}_t$ may affect its motion, and hence the simple coupling used above for TASEP fails. In its place, we make use of a more sophisticated coupling that was introduced in \cite[Section 4]{MSL} (see \Cref{prop:Rez} below). It says that for $t>S$, $\boldsymbol{X}_t$ can be stochastically lower bounded by the motion of a second class particle chosen uniformly at random among those added to the left of $\boldsymbol{X}_S$ at time $S$. This enables us to implement for ASEP a similar sort of hydrodynamic argument as given above for TASEP.
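Although simulations play no role in our proofs, the dynamics described in these sketches are easy to experiment with. The following minimal Python sketch (all names and parameters in it are ours, chosen purely for illustration) runs multi-class ASEP on a large finite window via a Gillespie-type scheme: first class particles are encoded as $2$, the second class particle as $1$ and holes as $0$, and a swap across a bond occurs at rate $R$ when the higher class sits on the left and at rate $L$ when it sits on the right.
\begin{verbatim}
import random

R, L = 1.0, 0.0   # TASEP; take e.g. R, L = 1.5, 0.5 for ASEP (so R - L = 1)

def step_init(n):
    # step initial data on [-n, n): first class particles (2) on negative
    # sites, the second class particle (1) at the origin, holes (0) elsewhere;
    # site j is stored at index j + n
    return [2] * n + [1] + [0] * (n - 1)

def simulate(conf, t_max, rng=random.Random(0)):
    t = 0.0
    while True:
        # Gillespie step: list every bond where a swap can occur, with its rate
        jumps = []
        for i in range(len(conf) - 1):
            if conf[i] > conf[i + 1] and R > 0:
                jumps.append((R, i))  # higher class jumps right at rate R
            elif conf[i] < conf[i + 1] and L > 0:
                jumps.append((L, i))  # higher class jumps left at rate L
        total = sum(r for r, _ in jumps)
        if total == 0.0:
            break
        t += rng.expovariate(total)  # exponential waiting time
        if t > t_max:
            break
        u, acc = rng.random() * total, 0.0
        for r, i in jumps:           # pick a bond proportionally to its rate
            acc += r
            if u < acc:
                conf[i], conf[i + 1] = conf[i + 1], conf[i]
                break
    return conf

n, t_max = 400, 100.0  # keep n much larger than t_max (finite propagation speed)
conf = simulate(step_init(n), t_max)
print("X_t =", conf.index(1) - n)
\end{verbatim}
Histogramming $\boldsymbol{X}_t/t$ over independent seeds should reproduce, approximately, the uniform law on $[-1,1]$ from \Cref{xt1}.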
In addition to the above coupling, we also need to develop effective hydrodynamic concentration inequalities for ASEP. Due to reduction and coupling arguments, it suffices for us to demonstrate these in the case of step Bernoulli initial data. Distributional limit theorems for step initial data ASEP go back to \cite{AASIC}, and for step Bernoulli initial data to \cite{ASC}, where the one-point distribution of the height function (which captures the integrated occupation variables) was analyzed directly. In \cite{BCS} it was realized that the ASEP height function $q$-Laplace transform admits a simpler form as a Fredholm determinant. The $q$-Laplace transform asymptotically captures the tails of the probability distribution. Our effective hydrodynamic results require both upper and lower tail control. As is typical in such formulas, one tail (typically called the upper tail) is readily accessible from the Fredholm determinant formula via decay of the kernel therein (see also \cite{DZ21} which derives the corresponding large deviation principle for this tail). The other (lower) tail requires a different type of argument. As mentioned earlier, in determinantal models, this is sometimes achieved via Widom's trick or Riemann-Hilbert problems, and in related random matrix theory contexts, other tools like electrostatic variational problems or tridiagonal matrices can be used for such bounds. The first instance of a positive temperature model for which the lower tail was bounded in a manner adapted to KPZ scaling was the KPZ equation. This was achieved in \cite{CG} using a remarkable rewriting in \cite{BG} of the KPZ Laplace transform Fredholm determinant formula proved in \cite{ACQ}. Through this formula the Laplace transform for the KPZ equation was matched to a certain multiplicative functional of the determinantal Airy point process. From this, \cite{CG} derived tail bounds by controlling the behavior of the Airy points (something achievable through existing techniques). There is a similar identity from \cite{AEPDPP} which relates the $q$-Laplace transform for ASEP to the expectation of a multiplicative functional of a certain discrete Laguerre determinantal point process (see also \cite{Bor18} which proves a more general result higher in the hierarchy of stochastic vertex models). From this identity it should be possible to extract fairly tight lower tail bounds. However, we do not need to use the full strength of this identity. In fact, the behavior of this multiplicative functional can be upper bounded by the behavior of its lowest particle, which ends up being equal in law to the TASEP height function. Thus, through this identity we can deduce the ASEP tail from existing knowledge of that of TASEP. \smallskip \noindent \emph{Outline.} Section \ref{sec:couplings} contains the definition of the basic coupling as well as key consequences such as attractivity (\Cref{xizeta1}), finite speed of propagation (\Cref{xizetaequal}) and monotonicity (\Cref{xizeta2}). We also recall as \Cref{prop:Rez} the coupling from \cite{MSL}, the proof of which is provided in Section \ref{sec:Rez} for completeness. Section \ref{sec:mde} contains our effective hydrodynamic concentration estimates, which mainly stem from \Cref{hetaxi} -- these include \Cref{hetaxi2} and \Cref{hetalinear}. \Cref{hetaxi} is proved in Appendices \ref{sec:modDevproof} and \ref{RightKernel}. Section \ref{Linear} contains the proof of our main result, \Cref{xtlimit} (which combined with \Cref{xt1} implies \Cref{xtlimitU} immediately).
\Cref{xti} gives the main technical result that controls the motion of the second class particle between two times. This result translates into \Cref{cor:almostthere} and then into \Cref{xtlimit}. Section \ref{couple} proves \Cref{xti} by setting up a coupling as outlined in the proof sketch above and then showing (as \Cref{zti}) that most of the additional second class particles move at a speed close to that of the characteristic. Section \ref{LimitProcess} proves \Cref{zti} by utilizing the effective hydrodynamic concentration estimates from Section \ref{sec:mde}. \smallskip \noindent \emph{Notation.} We fix $R > L \ge 0$ with $R - L = 1$. Unless specified otherwise we assume all constants and parameters are real valued, with the exception of indices which are obviously integer valued. When we introduce constants (the value of which may change despite using the same symbol), we will generally specify upon which parameters they depend by writing $c=c(\cdots)$ with the dependence inside the parentheses. We do not attempt to track constants through the paper or optimize our estimates (e.g. in concentration inequalities) beyond what is needed to reach our main result. We will typically use the sans-serif font $\mathsf{E}$ for events and write $\mathsf{E}^c$ for the complement of $\mathsf{E}$ and $\mathbf{1}_{\mathsf{E}}$ for the indicator function, which is $1$ on the event $\mathsf{E}$ and $0$ otherwise. We typically use $\eta,\zeta,\xi$ to denote elements of $\{0,1\}^{\mathbb{Z}}$, i.e., occupation variables. We will use bold-faced letters such as $\boldsymbol{\eta},\boldsymbol{X}$ to denote random variables. For real $x\leq y$ define $\llbracket x,y\rrbracket := \big[\lfloor x\rfloor, \lceil y\rceil\big]\cap \mathbb{Z}$; if $x>y$ define $\llbracket x,y\rrbracket =\varnothing$, the empty set. \smallskip \noindent \emph{Acknowledgements.} We thank Gidi Amir, Omer Angel, James B. Martin and Peter Nejjar for helpful comments. A.A. was partially supported by a Clay Research Fellowship and gratefully acknowledges support from the Institute for Advanced Study. I.C. was partially supported by the NSF through grants DMS:1937254, DMS:1811143, DMS:1664650, as well as through a Packard Fellowship in Science and Engineering, a Simons Fellowship, a Miller Visiting Professorship from the Miller Institute for Basic Research in Science, and a W.M. Keck Foundation Science and Engineering Grant. A.A., I.C. and P.G. also wish to acknowledge the NSF grant DMS:1928930 which supported their participation in a fall 2021 semester program at MSRI in Berkeley, California, as well as the CRM in Montreal, Canada where this work was initiated in the 2019 conference on ``Faces of Integrability''. \section{Couplings}\label{sec:couplings} The (single class) ASEP can be described as a Markov process on occupation variables or ordered particle location variables. The {\it occupation process} $\boldsymbol{\eta}_t = \big(\boldsymbol{\eta}_t (j)\big)_{j \in \mathbb{Z}}\!\in\! \{0,1\}^\mathbb{Z}$ has infinitesimal generator $\mathcal{L}$ which acts on local functions $f(\eta)$ as $$ \mathcal{L} f(\eta) = \sum_{j\in \mathbb{Z}} \big(R\cdot \eta(j)(1-\eta(j+1)) + L\cdot \eta(j+1)(1-\eta(j))\big) \big(f(\eta^{j,j+1})-f(\eta)\big) $$ where $\eta^{j,j+1}$ swaps the values of $\eta(j)$ and $\eta(j+1)$ (so $\eta^{j,j+1}(i)=\eta(i)$ for $i\neq j,j+1$, $\eta^{j,j+1}(j)=\eta(j+1)$ and $\eta^{j,j+1}(j+1)=\eta(j)$).
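As an illustration of how the generator acts (a standard computation, included only for orientation), take $f(\eta) = \eta(0)$. Only the bonds $(-1,0)$ and $(0,1)$ contribute, and simplifying the products of occupation variables yields
\begin{equation*}
\mathcal{L} f(\eta) = R\,\eta(-1)\big(1-\eta(0)\big) + L\,\eta(1)\big(1-\eta(0)\big) - R\,\eta(0)\big(1-\eta(1)\big) - L\,\eta(0)\big(1-\eta(-1)\big).
\end{equation*}
Taking the expectation when the $\eta(j)$ are i.i.d. Bernoulli with parameter $\rho$ gives $R\rho(1-\rho)+L\rho(1-\rho)-R\rho(1-\rho)-L\rho(1-\rho)=0$, consistent with the stationarity of the Bernoulli product measures mentioned in the introduction.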
In words, particles jump left and right according to independent exponential clocks of rates $L$ and $R$, respectively, provided that the destination site is unoccupied. The sites $j$ where $\boldsymbol{\eta}_t(j)=1$ are said to be occupied by particles, and otherwise (when $\boldsymbol{\eta}_t(j)=0$) by holes. As mentioned previously, we will always assume that $R>L\geq 0$ so that there is a net drift to the right. \begin{rem} \label{etaeta} Observe that the ASEP is preserved under interchanging particles and holes and reversing all jump directions. Stated alternatively, suppose that $\boldsymbol{\eta}_t$ is an ASEP with left jump rate $L$ and right jump rate $R$; then, the process $\check{\boldsymbol{\eta}}_t$ defined by setting $\check{\boldsymbol{\eta}}_t (j) = 1 - \boldsymbol{\eta}_t (-j)$ for all $j\in \mathbb{Z}$ is also an ASEP with left jump rate $L$ and right jump rate $R$. This is sometimes referred to as \emph{particle-hole symmetry}. \end{rem} The {\it basic coupling} provides a single probability space upon which the evolution for all initial data for ASEP can simultaneously be defined (see \cite[VIII.2]{Liggett85}). Moreover, that coupling enjoys the properties of being {\it attractive} and {\it monotone} (these are recorded below), and hence allows us to define second (and more general) class particles. This construction is easily seen to match with the dynamics explained in the introduction. The basic coupling comes from the {\it graphical} construction of ASEP which we now recall (see also Figure \ref{fig:graphical}). To every site $j\in\mathbb{Z}$ we associate two Poisson point processes on $[0,\infty)$, one of rate $L$ and one of rate $R$. Call the rate $L$ process the {\it left arrows} and the rate $R$ process the {\it right arrows}. All of these (between sites and at the same site) are independent. Above every site $j\in \mathbb{Z}$ we draw a vertical line representing time and draw left and right arrows out of $j$ at heights corresponding to the points in the left and right arrow point processes just defined. For any initial data $\boldsymbol{\eta}_0$, we define the time evolution $\boldsymbol{\eta}_t$ in the following manner. Particles initially occupy the sites $j$ where $\boldsymbol{\eta}_0(j)=1$ and remain in place until they encounter an arrow out of their site. At that time, they follow the arrow, provided that the destination site is unoccupied; otherwise, they remain at their site until the next arrow. The basic coupling can also be defined directly in terms of the generator of dynamics on multiple choices of initial data -- see Section \ref{sec:Rez} for such generators. \begin{figure} \begin{center} \includegraphics[width=3in]{graphical.eps} \end{center} \caption{The graphical construction of ASEP. Arrows are given by Poisson point processes and particles follow them provided the destination is unoccupied.} \label{fig:graphical} \end{figure} \begin{lem}[Attractivity] \label{xizeta1} Let $\boldsymbol{\eta}_t$ and $\boldsymbol{\zeta}_t$ denote two versions of ASEP with the same jump rates and with initial data such that $\boldsymbol{\eta}_0 (j) \leq \boldsymbol{\zeta}_0 (j)$ for each $j \in \mathbb{Z}$. Then, under the basic coupling, almost surely $\boldsymbol{\eta}_t (j) \le \boldsymbol{\zeta}_t (j)$ for all $j \in \mathbb{Z}$ and $t \ge 0$.
\end{lem} Attractivity allows us to define the first and second class particle process $(\boldsymbol{\eta}_t,\boldsymbol{\alpha}_t)$ by the relation $\boldsymbol{\zeta}_t = \boldsymbol{\eta}_t + \boldsymbol{\alpha}_t$ (see Figure \ref{fig:coupling}). By attractivity, $\boldsymbol{\alpha}_t\in \{0,1\}^{\mathbb{Z}}$, and hence can be thought of as occupation variables for second class particles. We write $\mathbb{P}^{\boldsymbol{\eta}_0,\boldsymbol{\alpha}_0}$ for the probability measure associated to the $(\boldsymbol{\eta}_t,\boldsymbol{\alpha}_t)$ process with initial data $(\boldsymbol{\eta}_0,\boldsymbol{\alpha}_0)$. When there is a single second class particle (our particular interest), i.e., $\sum_{i \in \mathbb{Z}} \boldsymbol{\alpha}_0(i)=1$, we denote its location at time $t$ by $\boldsymbol{X}_t$ (so that $\boldsymbol{\alpha}_t(\boldsymbol{X}_t)=1$ and $\boldsymbol{\alpha}_t(j)=0$ for all other $j$) and write $\mathbb{P}^{\boldsymbol{\eta}_0,\boldsymbol{X}_0}$ for the probability measure associated to the $(\boldsymbol{\eta}_t,\boldsymbol{X}_t)$ process with initial data $(\boldsymbol{\eta}_0,\boldsymbol{X}_0)$. \begin{rem} \label{rem:secondclassduality} The particle-hole symmetry noted in \Cref{etaeta} extends to two-species ASEP. In particular, if we reverse all jump directions and swap first class particles and holes, while keeping second class particles as is, then the two-species ASEP is preserved. Stated alternatively, suppose that $(\boldsymbol{\eta}_t,\boldsymbol{\alpha}_t)$ records the first and second class particle occupation variables; then the pair $(\check{\boldsymbol{\eta}}_t,\check{\boldsymbol{\alpha}}_t)$ defined by $\check{\boldsymbol{\eta}}_t(j) = 1- \boldsymbol{\eta}_t(-j)$ and $\check{\boldsymbol{\alpha}}_t(j) = \boldsymbol{\alpha}_t(-j)$ for all $j\in \mathbb{Z}$ is also a two-species ASEP with left jump rate $L$ and right jump rate $R$. \end{rem} For $x\in \mathbb{Z}$, $\boldsymbol{\eta}_0\in \{0,1\}^{\mathbb{Z}}$ and $N\in \mathbb{Z}_{\geq 1}$, let $A^{\leq }(x,\boldsymbol{\eta}_0,N)$ denote the set of $\boldsymbol{\alpha}_0\in \{0,1\}^\mathbb{Z}$ such that $\sum_{j\in \mathbb{Z}} \boldsymbol{\alpha}_0(j)=N$, $\boldsymbol{\eta}_0+\boldsymbol{\alpha}_0\in \{0,1\}^{\mathbb{Z}}$, $\boldsymbol{\alpha}_0(x)=1$, and $\boldsymbol{\alpha}_0(w)= 1$ only if $w\leq x$ (note that the ``only if'' is not ``if and only if''). In words, this means that we start with $N$ second class particles relative to the first class particles at $\boldsymbol{\eta}_0$, with the rightmost one at site $x$. Associate to $\boldsymbol{\alpha}_0\in A^{\leq}(x,\boldsymbol{\eta}_0,N)$ its ordered particle vector $\boldsymbol{Z}_0=(\boldsymbol{Z}_0(1)>\cdots>\boldsymbol{Z}_0(N))$ so that $\boldsymbol{\alpha}_0(w)=1$ if and only if $w\in \{\boldsymbol{Z}_0(1),\ldots, \boldsymbol{Z}_0(N)\}$. Let $\boldsymbol{Z}_t=(\boldsymbol{Z}_t(1)>\cdots>\boldsymbol{Z}_t(N))$ be the ordered locations at time $t$ of $\boldsymbol{\alpha}_t$. The following result can be extracted from \cite[Section 4]{MSL} (we provide a proof of it in Appendix \ref{sec:Rez} for completeness). It says that to control the location of a single second class particle, we can introduce several second class particles to the left and control the location of a typical (uniformly chosen) one of those (see the caption of Figure \ref{fig:coupling}). \begin{figure} \begin{center} \includegraphics[width=3in]{coupling.eps} \end{center} \caption{Top: ASEP with first class particles (black bullets) and one second class particle (open disk).
Bottom: ASEP with four additional second class particles added to the left of the top figure's second class particle. \Cref{prop:Rez} shows that we can couple the two versions of ASEP so that the top second class particle stays to the right of a uniformly randomly chosen particle among the second class particles in the bottom figure.} \label{fig:coupling} \end{figure} \begin{prop}\label{prop:Rez} For any $y\in \mathbb{Z}$, $\boldsymbol{X}_0\in \mathbb{Z}$ and $\boldsymbol{\eta}_0\in \{0,1\}^{\mathbb{Z}}$ with $\boldsymbol{\eta}_0(\boldsymbol{X}_0)=0$, and for any $N\in \mathbb{Z}_{\geq 1}$ and $\boldsymbol{\alpha}_0\in A^{\leq}(\boldsymbol{X}_0,\boldsymbol{\eta}_0,N)$, \begin{equation}\label{eq:Rezleq} \mathbb{P}^{\boldsymbol{\eta}_0,\boldsymbol{X}_0}[\boldsymbol{X}_t\leq y] \leq \frac{1}{N} \sum_{j=1}^{N} \mathbb{P}^{\boldsymbol{\eta}_0,\boldsymbol{\alpha}_0}[\boldsymbol{Z}_t(j)\leq y]. \end{equation} \end{prop} Another consequence of the graphical construction is ASEP's finite speed of propagation. \begin{lem} \label{xizetaequal} Let $U \le V$, $T \ge 0$, and $\boldsymbol{\xi}$ and $\boldsymbol{\zeta}$ be two versions of ASEP (each with left and right jump rates $L$ and $R$, respectively). If $\boldsymbol{\xi}_0 (j) = \boldsymbol{\zeta}_0 (j)$ for each $j \in \llbracket U, V\rrbracket$, then under the basic coupling we have that $\boldsymbol{\xi}_t (j) = \boldsymbol{\zeta}_t (j)$ for each $j \in \llbracket U + 4RT, V - 4RT\rrbracket$ and $t \in [0, T]$, off of an event of probability at most $4 e^{-T/3}$. \end{lem} \begin{proof} This follows from large deviation bounds on the sum of exponential random variables, which control how particles from outside an interval can affect the behavior far inside it. \end{proof} The final general result we derive from the coupling is {\it monotonicity}. It deals with the integrated occupation variables, sometimes called the height function or current. Let $\boldsymbol{\xi}_t$ denote ASEP and identify the {\it ordered particle locations} by $\cdots < \boldsymbol{Y}_t(1)< \boldsymbol{Y}_t(0) < \boldsymbol{Y}_t(-1) < \cdots$ where the indexing is such that initially $\boldsymbol{Y}_0 (0) \leq 0 < \boldsymbol{Y}_{0}(-1)$ (subsequently, the $\boldsymbol{Y}_t(j)$ track these indexed particles as they jump). For any $x \in \mathbb{Z}$, we define \begin{flalign} \label{jtx} \mathfrak{h}_t (x; \boldsymbol{\xi}) = \displaystyle\sum_{i\in \mathbb{Z}} \big( \textbf{1}_{\boldsymbol{Y}_0(i) \le 0} \textbf{1}_{\boldsymbol{Y}_t(i) > x} - \textbf{1}_{\boldsymbol{Y}_0(i) > 0} \textbf{1}_{\boldsymbol{Y}_t(i) \le x} \big) \end{flalign} and extend $\mathfrak{h}_t (x; \boldsymbol{\xi})$ to a continuous function in $x$ by linear interpolation. For $x,y\in \mathbb{Z}$ with $x\leq y$, \begin{flalign}\label{eq:heightdiff} \mathfrak{h}_t (\llbracket x,y\rrbracket; \boldsymbol{\xi}):= \mathfrak{h}_t (x; \boldsymbol{\xi})-\mathfrak{h}_t (y; \boldsymbol{\xi}) = \sum_{i=x+1}^{y} \boldsymbol{\xi}_t(i) \end{flalign} from which it is clear that for $j\in \mathbb{Z}$, \begin{equation}\label{eqhdiff} \boldsymbol{\xi}_t(j) = \mathfrak{h}_t (j-1;\boldsymbol{\xi})-\mathfrak{h}_t (j;\boldsymbol{\xi}).
\end{equation} In particular, if $t=0$ we will use the short-hand $\mathfrak{h}(x; \boldsymbol{\xi})=\mathfrak{h}_0 (x; \boldsymbol{\xi})$ and have that \begin{equation}\label{eqhsum} \mathfrak{h}(x; \boldsymbol{\xi}) =\mathfrak{h}_0(x; \boldsymbol{\xi}) = \begin{cases} -\displaystyle\sum_{i=1}^{x} \boldsymbol{\xi}_0(i)&\textrm{if } x\geq 1, \\ 0&\textrm{if } x=0,\\ \displaystyle\sum_{i=x+1}^{0} \boldsymbol{\xi}_0(i)&\textrm{if } x\leq -1.\end{cases} \end{equation} At most one of the two summands on the right side of \eqref{jtx} is nonzero. Observe that $\mathfrak{h}_t (x;\boldsymbol{\xi})$ has the following combinatorial interpretation: Color all particles initially to the right of $0$ red, and all particles initially at or to the left of $0$ blue. Then, $\mathfrak{h}_t (x;\boldsymbol{\xi})$ equals the number of red particles at or to the left of $x$ at time $t$ subtracted from the number of blue particles to the right of $x$ at time $t$. The following shows that if we start with two height functions that are coupled so that they are either ordered pointwise (up to a vertical shift by some $H$) or close to each other (within $K$), then this property persists under the basic coupling. In the first statement, the shift by $H$ may be necessary since our height functions are zeroed out to satisfy $\mathfrak{h}_0 (0;\boldsymbol{\xi})=0$; observe that the second statement of the below lemma follows from the first. \begin{lem}[Monotonicity] \label{xizeta2} Let $\boldsymbol{\xi}_t$ and $\boldsymbol{\zeta}_t$ be two ASEPs with the same jump rates. \begin{enumerate}[leftmargin=*] \item If for some $H\in \mathbb{Z}$ we have $\mathfrak{h}_0 (x; \boldsymbol{\xi})+H \ge \mathfrak{h}_0 (x; \boldsymbol{\zeta})$ for each $x \in \mathbb{Z}$, then under the basic coupling we almost surely have $\mathfrak{h}_t (x; \boldsymbol{\xi})+H \ge \mathfrak{h}_t (x; \boldsymbol{\zeta})$ for all $x \in \mathbb{Z}$ and $t \ge 0$. \item If for some $K \in \mathbb{Z}$ we have $\big| \mathfrak{h}_0 (x; \boldsymbol{\xi}) - \mathfrak{h}_0 (x; \boldsymbol{\zeta}) \big| \le K$ for each $x \in \mathbb{Z}$, then under the basic coupling we almost surely have $\big| \mathfrak{h}_t (x; \boldsymbol{\xi}) - \mathfrak{h}_t (x; \boldsymbol{\zeta}) \big| \le K$ for all $x \in \mathbb{Z}$ and $t \ge 0$. \end{enumerate} \end{lem} \begin{figure}[t] \begin{center} \includegraphics[width=2.5in]{monotone.eps} \end{center} \caption{Two height functions are depicted. The grey one is determined by the values of $\boldsymbol{\zeta}_0$ while the black one is determined by the values of $\boldsymbol{\xi}_0$. If the latter is shifted by $H$ it pointwise exceeds the former. Provided this occurs at time $0$, Lemma \ref{xizeta2} shows that this property persists for all time.} \label{fig:monotone} \end{figure} \section{Some effective hydrodynamic concentration estimates} \label{sec:mde} This section establishes uniform estimates that upper bound the maximal deviations that ASEP height functions can have from their hydrodynamic limits. The key to establishing these concentration bounds is an understanding of the fluctuations under the stationary measure (which just boils down to bounds on sums of i.i.d. Bernoulli random variables) and under step-Bernoulli initial data. This latter result is contained in \Cref{hetaxi} and proved later in Section \ref{sec:modDevproof}. These are put together using attractivity of the basic coupling. We begin with the following definition describing random particle configurations distributed according to a product measure.
Such configurations will often serve as initial data for the versions of ASEP we consider. Throughout, all versions of ASEP will have the same left jump rate $L$ and right jump rate $R$, for $R > L \ge 0$ with $R - L = 1$. \begin{definition} \label{distributedinitial} Fix a finite interval $I = \llbracket A, B\rrbracket$ with integer endpoints $A< B$, as well as a function $\varphi : \mathbb{R} \rightarrow [0, 1]$. We say that a particle configuration $\boldsymbol{\eta} = \big( \boldsymbol{\eta} (x) \big)_{x \in \mathbb{Z}}$ is \emph{$\varphi$-distributed} on $I$ if its coordinates $\big\{ \boldsymbol{\eta} (x) \big\}$ are all mutually independent and \begin{flalign*} \mathbb{P} \big[ \boldsymbol{\eta} (A + x) = 1 \big] = \varphi \bigg( \displaystyle\frac{x}{B - A} \bigg), \qquad \text{for each $x \in \mathbb{Z}$}. \end{flalign*} We say that $\boldsymbol{\eta}$ is $\varphi$-distributed on $\mathbb{Z}$ if its coordinates $\boldsymbol{\eta} (x)$ are mutually independent and \begin{flalign*} \mathbb{P} \big[ \boldsymbol{\eta} (x) = 1 \big] = \varphi (x), \qquad \text{for each $x \in \mathbb{Z}$}. \end{flalign*} These two notations are somewhat at odds since the former (involving finite $I$) involves rescaling while the latter does not. We hope the reader will excuse us for this. \end{definition} When using \Cref{distributedinitial}, we will often (although not always; see, for instance, the formulation of the lemma below) take $I = \llbracket-K, K\rrbracket$ for some integer $K \ge 1$ and $\varphi$ to be some piecewise linear function which takes value zero outside the interval $[0, 1]$. This will guarantee that $\boldsymbol{\eta}$ only has particles on $\llbracket -K, K\rrbracket$. The following is a concentration inequality for $\varphi$-distributed particle configurations. \begin{lem} \label{distributionconcentration} Adopt the notation of \Cref{distributedinitial} and assume that $I=\mathbb{Z}$. For any $s \in \mathbb{R}_{\ge 1}$ and $X, Y \in \mathbb{Z}$, we have \begin{flalign} \label{hx1} \mathbb{P} \bigg[ \Big| \mathfrak{h} (X; \boldsymbol{\eta}) - \mathfrak{h} (Y; \boldsymbol{\eta}) - \displaystyle\sum_{j = X}^{Y} \varphi(j) \Big| \ge s |Y - X|^{1/2} \bigg] \le 2 e^{-s^2}. \end{flalign} Now consider the case where $I = \llbracket A, B\rrbracket$ is finite and $\varphi(x)\equiv 0$ for all $x\notin[0, 1]$. Then, \begin{flalign} \label{hxy2} \mathbb{P} \bigg[ \displaystyle\max_{\substack{X, Y \in \mathbb{Z} \\ X \le Y}} \Big| \mathfrak{h} (X; \boldsymbol{\eta}) - \mathfrak{h} (Y; \boldsymbol{\eta}) - \displaystyle\sum_{j = X}^{Y} \varphi \Big( \displaystyle\frac{j - A}{B - A} \Big) \Big| \ge s (B - A)^{1/2} \bigg] \le 2 (B-A+1)^2 e^{-s^2 }. \end{flalign} \end{lem} \begin{proof} Observe that \eqref{hx1} follows immediately from Hoeffding's inequality and the fact that $\boldsymbol{\eta}$ is $\varphi$-distributed. Next, assume that $I = \llbracket A, B\rrbracket$ is a finite interval and that $\varphi$ is supported on $[0, 1]$. Using the fact that for $X, Y \in I$ we have $|Y - X| \le B - A$, Hoeffding's inequality and a union bound yield \begin{equation*} \mathbb{P} \bigg[ \displaystyle\max_{\substack{X, Y \in \llbracket A,B\rrbracket \\ X \le Y}} \Big| \mathfrak{h} (X; \boldsymbol{\eta}) - \mathfrak{h} (Y; \boldsymbol{\eta}) - \displaystyle\sum_{j = X}^{Y} \varphi \Big( \displaystyle\frac{j - A}{B - A} \Big) \Big| \ge s (B - A)^{1/2} \bigg] \le 2 (B - A + 1)^2 e^{-s^2}.
\end{equation*} The bound \eqref{hxy2} follows from combining the above with the fact that, since $\varphi$ is supported on $[0, 1]$, we have for $X < A$ and $Y > B$ that $ \mathfrak{h} (A; \boldsymbol{\eta}) - \mathfrak{h} (X; \boldsymbol{\eta}) = 0 = \mathfrak{h} (B; \boldsymbol{\eta}) - \mathfrak{h} (Y; \boldsymbol{\eta}). $ \end{proof} We now specify two choices we will commonly take for $\varphi$ from \Cref{distributedinitial}. \begin{definition} \label{lambdarhofunctions} Fix real numbers $0 \le \lambda \le \rho \le 1$. Define the piecewise constant function $\Xi^{(\rho; \lambda)} : \mathbb{R} \rightarrow [\lambda,\rho]$ and the piecewise linear function $\Upsilon^{(\rho; \lambda)} : \mathbb{R} \rightarrow \mathbb{R}$ by setting \begin{equation*} \Xi^{(\rho; \lambda)} (z)= \begin{cases} \rho&\textrm{if } z\leq 0,\\\lambda&\textrm{if } z>0,\end{cases}\qquad\qquad \Upsilon^{(\rho; \lambda)} (z)= \begin{cases} \rho&\textrm{if } z\leq 1-2\rho,\\ (1-z)/2&\textrm{if } 1-2\rho \leq z\leq 1-2\lambda,\\ \lambda&\textrm{if } z\geq 1-2\lambda.\end{cases} \end{equation*} \end{definition} We say that an ASEP $\boldsymbol{\eta}_t$ has \emph{$(\rho; \lambda)$-Bernoulli initial data} if $\boldsymbol{\eta}_0$ is $\Xi^{(\rho; \lambda)}$-distributed on $\mathbb{Z}$. Observe in particular that $(1; 0)$-Bernoulli initial data is equivalent to step initial data, and that $(\rho; \rho)$-Bernoulli initial data is stationary for the ASEP; we call the latter \emph{$\rho$-stationary initial data}. The $\Upsilon^{(\rho; \lambda)}$-distributed initial data is meant to model the profile that one gets after running $\Xi^{(\rho; \lambda)}$-distributed initial data for a long time (with a linearly interpolating rarefaction fan from density $\rho$ to density $\lambda$). The assumption $\lambda \leq \rho$ ensures that the hydrodynamic limit does not have shocks. The following is a key concentration estimate for $(\rho; 0)$-Bernoulli initial data ASEP. This estimate is not optimal, either in the error bound $T^{2/3}$ or in the probability decay $e^{-cs}$. (In the case of step initial data, we believe that the $T^{1/3}$ scale is optimal, but the decay is not.) Note that for our purposes, it is sufficient that we have a bound of the form $T^{\alpha}$ for some $\alpha<1$. A proof of this result is given in Section \ref{sec:modDevproof}. \begin{prop} \label{hetaxi} For any $\varepsilon > 0$, there exists $c = c(\varepsilon) > 0$ such that the following holds. Let $\rho \in [\varepsilon, 1]$ and let $\boldsymbol{\eta}_t$ be a $(\rho; 0)$-Bernoulli initial data ASEP. For any $T > 1$ and $s \in [0, T]$, \begin{flalign} \label{hetaxilambda0} \displaystyle\max_{\substack{|X/T| \le 1 - \varepsilon \\ |Y/T| \le 1 - \varepsilon}}\mathbb{P} \Bigg[ \bigg| \mathfrak{h}_T(\llbracket X,Y\rrbracket; \boldsymbol{\eta}) - T \displaystyle\int\limits_{X/T}^{Y/T} \Upsilon^{(\rho; 0)} (z) dz \bigg| \ge s T^{2/3} \Bigg] \le c^{-1} T e^{-c s}. \end{flalign} For step initial data (when $\rho=1$) \eqref{hetaxilambda0} holds with the term $sT^{2/3}$ replaced by $sT^{1/3}$. The constants $c=c(\varepsilon)$ can be chosen so as to weakly decrease as $\varepsilon$ decreases to $0$. \end{prop} From \Cref{hetaxi} and monotonicity, we deduce the following corollary showing that \eqref{hetaxilambda0} also holds under $(\rho; \lambda)$-Bernoulli initial data for any $0 \le \lambda \le \rho \le 1$. \begin{cor} \label{hetaxi2} For any $\varepsilon\in(0,1)$, there exists $c = c(\varepsilon) > 0$ such that the following holds.
For any $\lambda \in [0, 1 - \varepsilon]$ and $\rho \in [\varepsilon, 1]$ with $\lambda \le \rho$, let $\boldsymbol{\eta}_t$ denote $(\rho; \lambda)$-Bernoulli initial data ASEP. Then, for any $T>1$ and $s \in [0, T]$, \begin{flalign} \label{hetaxilambdarho} \mathbb{P} \Bigg[ \displaystyle\max_{\substack{|X/T| \le 1 - \varepsilon \\ |Y/T| \le 1 - \varepsilon}} \bigg| \mathfrak{h}_T(\llbracket X,Y\rrbracket; \boldsymbol{\eta}) - T \displaystyle\int\limits_{X/T}^{Y/T} \Upsilon^{(\rho; \lambda)} (z) dz \bigg| \ge s T^{2/3} \Bigg] \le c^{-1} T^3 e^{-c s}. \end{flalign} The constants $c=c(\varepsilon)$ can be chosen so as to weakly decrease as $\varepsilon$ decreases to $0$. \end{cor} \begin{proof} By the particle-hole symmetry in \Cref{etaeta}, along with \Cref{hetaxi} applied to $(1 - \lambda; 0)$-Bernoulli initial data ASEP, \eqref{hetaxilambdarho} holds if $(\rho; \lambda) = (1; \lambda)$. Now consider the case where $\lambda = \rho$. Then $\boldsymbol{\eta}_t$ is stationary in time, and so $\boldsymbol{\eta}_T$ is also $\Xi^{(\rho; \rho)}$-distributed on $\mathbb{Z}$. Hence, the $\varphi = \Xi^{(\rho; \rho)}$ case of \eqref{hx1} together with a union bound over all integer $X, Y \in \llbracket -T, T\rrbracket$ yields \begin{flalign*} \mathbb{P} \Bigg[ \displaystyle\max_{|X|, |Y| < T} \bigg| \mathfrak{h}_T(\llbracket X,Y\rrbracket; \boldsymbol{\eta}) - \rho (Y - X) \bigg| \ge s T^{1/2} \Bigg] \le 50 T^2 e^{-s^2/2}, \end{flalign*} ($50$ is not tight, but sufficiently large), which verifies \eqref{hetaxilambdarho}. Now suppose that $(\rho; \lambda)$ is arbitrary satisfying $\lambda \in [0, 1 - \varepsilon]$ and $\rho \in [\varepsilon, 1]$, with $\lambda \le \rho$. By a union bound, to show \eqref{hetaxilambdarho} it suffices to show that there exists $c = c(\varepsilon) > 0$ such that for any integers $X\leq Y$ with $|X/T|, |Y/T| \le 1 - \varepsilon$, \begin{flalign} \label{probability1rholambda} \begin{aligned} & \mathbb{P} \bigg[ \mathfrak{h}_T(\llbracket X,Y\rrbracket; \boldsymbol{\eta}) \ge T \displaystyle\int\limits_{X/T}^{Y/T} \Upsilon^{(\rho; \lambda)} (z) dz + s T^{2/3} \bigg] \le c^{-1} T e^{-c s}, \\ & \mathbb{P} \bigg[\mathfrak{h}_T(\llbracket X,Y\rrbracket; \boldsymbol{\eta}) \le T \displaystyle\int\limits_{X/T}^{Y/T} \Upsilon^{(\rho; \lambda)} (z) dz - s T^{2/3} \bigg] \le c^{-1} T e^{-c s}. \end{aligned} \end{flalign} We only establish the first bound in \eqref{probability1rholambda}, as the proof of the latter is entirely analogous. To that end, let $\boldsymbol{\xi}_t$ and $\boldsymbol{\zeta}_t$ denote two ASEPs started with $(1; \lambda)$-Bernoulli initial data and $\rho$-stationary initial data, respectively. Since $\lambda \le \rho \le 1$, we may couple the Bernoulli initial data $\boldsymbol{\eta}_0, \boldsymbol{\xi}_0$ and $\boldsymbol{\zeta}_0$ on the same probability space so that $\boldsymbol{\eta}_0 (x) \le \min \big\{ \boldsymbol{\xi}_0 (x), \boldsymbol{\zeta}_0 (x) \big\}$ for each $x \in \mathbb{Z}$, almost surely. (This is a microscopic form of the ordering illustrated in \Cref{fig:Corollary35}.) The basic coupling used in \Cref{xizeta1} implies the existence of a coupling between $\boldsymbol{\eta}_t, \boldsymbol{\xi}_t$ and $\boldsymbol{\zeta}_t$ such that $\boldsymbol{\eta}_t (x) \le \min \big\{ \boldsymbol{\xi}_t (x), \boldsymbol{\zeta}_t (x) \big\}$ holds for each $x \in \mathbb{Z}$ and $t \in \mathbb{R}_{\ge 0}$, almost surely.
In particular, using \eqref{eq:heightdiff} we almost surely have that \begin{flalign*} \mathfrak{h}_T(\llbracket X,Y\rrbracket; \boldsymbol{\eta}) =\displaystyle\sum_{j = X+1}^{Y} \boldsymbol{\eta}_T (j) & \le \min \Bigg\{ \displaystyle\sum_{j = X+1}^Y \boldsymbol{\xi}_T (j), \displaystyle\sum_{j = X+1}^Y \boldsymbol{\zeta}_T (j) \Bigg\} \\ & = \min \big\{ \mathfrak{h}_T (X; \boldsymbol{\xi}) - \mathfrak{h}_T (Y; \boldsymbol{\xi}), \mathfrak{h}_T (X; \boldsymbol{\zeta}) - \mathfrak{h}_T (Y; \boldsymbol{\zeta}) \big\}. \end{flalign*} \begin{figure}[t] \begin{center} \includegraphics[width=5in]{Corollary35.eps} \end{center} \caption{The initial data $\Xi^{(\rho,\lambda)}$ (on the left) can be bounded above by the minimum of $\Xi^{(1,\lambda)}$ and $\Xi^{(\rho,\rho)}$, and likewise $\Upsilon^{(\rho,\lambda)}$ (on the right) can be bounded above by the minimum of $\Upsilon^{(1,\lambda)}$ and $\Upsilon^{(\rho,\rho)}$.} \label{fig:Corollary35} \end{figure} By \Cref{lambdarhofunctions} we have (see \Cref{fig:Corollary35}) that $\Upsilon^{(\rho; \lambda)} (z) = \min \big\{ \Upsilon^{(1; \lambda)} (z), \Upsilon^{(\rho; \rho)} (z) \big\}$. Therefore, to establish the first bound in \eqref{probability1rholambda}, it suffices to show that \begin{flalign} \label{xizetaprobability1rholambda} \begin{aligned} & \mathbb{P} \bigg[ \mathfrak{h}_T(\llbracket X,Y\rrbracket; \boldsymbol{\xi}) \ge T \displaystyle\int\limits_{X/T}^{Y/T} \Upsilon^{(1; \lambda)} (z) dz + s T^{2/3} \bigg] \le c^{-1} T e^{-c s}, \\ & \mathbb{P} \bigg[\mathfrak{h}_T(\llbracket X,Y\rrbracket; \boldsymbol{\zeta}) \ge T \displaystyle\int\limits_{X/T}^{Y/T} \Upsilon^{(\rho; \rho)} (z) dz + s T^{2/3} \bigg] \le c^{-1} T e^{-c s}. \end{aligned} \end{flalign} Since the first and second estimates in \eqref{xizetaprobability1rholambda} follow from the already established $(\rho; \lambda) = (1; \lambda)$ and $(\rho; \lambda) = (\rho; \rho)$ cases of the corollary, we deduce the first inequality in \eqref{probability1rholambda}. The second inequality in \eqref{probability1rholambda} follows similarly to the above, by lower bounding by $(\rho; 0)$-Bernoulli and $\lambda$-stationary initial data. This completes the proof of \eqref{hetaxilambdarho} and hence the corollary. \end{proof} The rest of this section establishes effective hydrodynamic concentration inequalities for ASEP with initial data given by specific piecewise linear functions (though the methods apply more generally) defined below and illustrated in \Cref{fig:Prop41withoutpsi}. \begin{figure}[t] \begin{center} \includegraphics[width=4in]{Prop41withoutpsi.eps} \end{center} \caption{$\Phi_{\varepsilon; \beta}^{(\rho)}$ and $\Upsilon_{\varepsilon}^{(\rho)}$ (with slight vertical shifts to make them easier to distinguish).} \label{fig:Prop41withoutpsi} \end{figure} \begin{definition}\label{linearconstant} Fix any $\varepsilon \in \big( 0, \frac{1}{2} \big)$ and $\rho \in [\varepsilon, 1 - \varepsilon]$. Define $\Upsilon_{\varepsilon}^{(\rho)} : \mathbb{R} \rightarrow [0, 1]$ by \begin{equation} \label{functionlinear} \Upsilon_{\varepsilon}^{(\rho)} (z) = \begin{cases}\rho + \varepsilon (\frac{1}{2} - z) & \textrm{if }z \in [0, 1],\\ 0 &\textrm{if } z\notin[0,1].\end{cases} \end{equation} The function $\Upsilon_{\varepsilon}^{(\rho)}$ is a suitable translation and scaling of the function $\Upsilon^{(\rho; \lambda)}$ from \Cref{lambdarhofunctions}, where we additionally set it to $0$ outside of the interval $[0, 1]$. The function $\Upsilon_{\varepsilon}^{(\rho)}$ is linear on its non-zero support.
It will also be useful to consider versions of this function that (continuously) transition from being linear to constant. To that end, for any $\varepsilon, \beta \in \big( 0, \frac{1}{2} \big)$ and $\rho \in [\varepsilon, 1 - \varepsilon]$, define $\Phi_{\varepsilon; \beta}^{(\rho)}: \mathbb{R} \rightarrow [0, 1]$ by \begin{equation*} \Phi_{\varepsilon; \beta}^{(\rho)} (z) =\begin{cases} \rho + \varepsilon (\frac{1}{2} - z )& \textrm{if }z \in [ 0, \frac{1}{2} - \beta ],\\ \rho + \varepsilon \beta & \textrm{if }z \in [\frac{1}{2} - \beta, 1 ],\\0&\textrm{if }z\notin[0,1].\end{cases} \end{equation*} \end{definition} The following proposition provides effective hydrodynamic concentration estimates for the ASEP under either $\Upsilon_{\varepsilon}^{(\rho)}$-distributed or $\Phi_{\varepsilon; \beta}^{(\rho)}$-distributed initial data. \begin{prop} \label{hetalinear} For any fixed $\delta \in \big( 0, \frac{1}{16R} \big)$, there exists $c = c(\delta) > 0$ such that the following holds. For any $S, T \in \mathbb{R}_{\ge 1}$ with $S \ge \delta^{-2} T$, $\beta \in \big( 0, \frac{1}{4} \big)$, $\varepsilon \in \big( 4 \delta, \frac{1}{2} \big)$, $\rho \in [\varepsilon, 1 - \varepsilon]$, and $\kappa \in [15, T]$: \begin{enumerate}[leftmargin=*] \item\label{hetaxilambdarholinear} ASEP $\boldsymbol{\eta}_t$ with $\Upsilon_{\varepsilon}^{(\rho)}$-distributed initial data on the interval $\llbracket -\varepsilon S,\varepsilon S\rrbracket$ satisfies \begin{flalign*} \begin{aligned} \mathbb{P} \bigg[ \displaystyle\max_{\substack{|X/S| \le \varepsilon / 4 \\ |Y/S| \le \varepsilon / 4}} \Big|\mathfrak{h}_T(\llbracket X,Y\rrbracket; \boldsymbol{\eta}) - T\!\! \displaystyle\int\limits_{X/T}^{Y/T} \Big( \rho + \displaystyle\frac{(1 - 2 \rho - z) T}{2 (S + T)} \Big) dz \Big| \ge \kappa S^{2/3} \bigg] \le c^{-1} S^3 e^{-c \kappa}; \end{aligned} \end{flalign*} \item\label{hetaxilambdarholinear2} ASEP $\boldsymbol{\eta}_t$ with $\Phi_{\varepsilon; \beta}^{(\rho)}$-distributed initial data on the interval $\llbracket -\varepsilon S,\varepsilon S\rrbracket$ satisfies \begin{flalign*} \begin{aligned} \!\!\!\!\!\!\!\!\mathbb{P} \bigg[ \displaystyle\max_{\substack{|X/S| \le \varepsilon / 4 \\ |Y/S| \le \varepsilon / 4}} \Big| \mathfrak{h}_T(\llbracket X,Y\rrbracket; \boldsymbol{\eta})- T\!\!\! \displaystyle\int\limits_{X/T}^{Y/T}\!\!\! \max \Big\{ \rho + \displaystyle\frac{(1 - 2 \rho - z) T}{2 (S + T)}, \rho + \varepsilon \beta \Big\} dz \Big|\! \ge \kappa S^{2/3} \bigg]\! \le c^{-1} S^3 e^{-c \kappa}. \end{aligned} \end{flalign*} \end{enumerate} \end{prop} \begin{proof} The proofs of \Cref{hetalinear} \eqref{hetaxilambdarholinear} and \eqref{hetaxilambdarholinear2} are very similar, so we only detail that of \eqref{hetaxilambdarholinear}. The idea will be to compare $\boldsymbol{\eta}_t$ on the time interval $t\in [0,T]$ to another version of ASEP $\boldsymbol{\zeta}_t$ that corresponds to step initial data ASEP, with all particles outside a specific window destroyed at time $S$ and then run for time $t\in [S,S+T]$. The window is chosen so that the step initial data hydrodynamic limit replicates the profile of the initial data $\boldsymbol{\eta}_0$. To this end, let $\boldsymbol{\xi}_t$ denote ASEP under step initial data (to establish \eqref{hetaxilambdarholinear2} we would instead let $\boldsymbol{\xi}_t$ denote an ASEP under two-sided $(1; \rho + \varepsilon \beta)$-Bernoulli initial data).
By \Cref{hetaxi2}, there exists $c = c (\delta)>0$ such that for any $U>1$ and $\kappa \in [1, U]$, \begin{flalign} \label{hxizetau} \displaystyle\max_{\substack{|X/U| \le 1 - \delta \\ |Y/U| \le 1 - \delta}} \mathbb{P} \bigg[ \Big| \mathfrak{h}_U (\llbracket X,Y\rrbracket; \boldsymbol{\xi}) - U \displaystyle\int\limits_{X/U}^{Y/U} \Big( \displaystyle\frac{1 - z}{2} \Big) dz \Big| > \kappa U^{2/3} \bigg] < c^{-1} U e^{- c \kappa}. \end{flalign} Now define $\boldsymbol{\zeta}_t$ to be an ASEP started from random initial data $\boldsymbol{\zeta}_0$ given by \begin{flalign} \label{zeta0xis} \boldsymbol{\zeta}_0 (j) = \begin{cases} \boldsymbol{\xi}_S \big( j + \lfloor(1 - 2 \rho) S\rfloor \big) & \textrm{if }j \in \llbracket -\varepsilon S, \varepsilon S\rrbracket, \\ 0 &\textrm{if } j\notin \llbracket -\varepsilon S, \varepsilon S\rrbracket. \end{cases} \end{flalign} By \Cref{xizetaequal}, we may couple the ASEPs $\boldsymbol{\zeta}_t (j)$ and $\boldsymbol{\xi}_{S + t} \big( j + \lfloor(1 - 2 \rho) S \rfloor \big)$ so that for all $t \in [0, T]$ they coincide with high probability on $j \in \llbracket -(\varepsilon S - 4RT), \varepsilon S - 4RT\rrbracket$, namely \begin{flalign} \label{aevent} \mathbb{P} [\mathsf{A}] \ge 1 - 4 e^{-T / 3}, \end{flalign} where the event $\mathsf{A}$ is defined by \begin{equation*} \mathsf{A} = \Big\{ \boldsymbol{\zeta}_t (j) = \boldsymbol{\xi}_{S + t} \big( j + \lfloor(1 - 2 \rho) S\rfloor \big)\textrm{ for all }t\in [0,T] \textrm{ and } j \in \llbracket -(\varepsilon S - 4RT), \varepsilon S - 4RT\rrbracket \Big\}. \end{equation*} By \eqref{eq:heightdiff} it then follows that for $X \in \llbracket -(\varepsilon S - 4RT), \varepsilon S - 4RT\rrbracket$, \begin{flalign} \label{hha} \textbf{1}_{\mathsf{A}} \mathfrak{h}_t(X; \boldsymbol{\zeta}) = \textbf{1}_{\mathsf{A}} \Big( \mathfrak{h}_{S+t} \big(\big\llbracket X+\lfloor (1 - 2 \rho) S\rfloor,\lfloor (1 - 2 \rho) S\rfloor\big\rrbracket ; \boldsymbol{\xi}\big) \Big). \end{flalign} Next, by applying \eqref{hxizetau} with $(U; X, Y)$ equal to $\big( S; X+\lfloor (1 - 2 \rho) S\rfloor, \lfloor (1 - 2 \rho) S\rfloor \big)$, and using the matching from \eqref{zeta0xis}, we see that there exists $c=c(\delta)>0$ such that for $\kappa \in [6, 6S]$, \begin{flalign} \label{bprobability} \mathbb{P} \big[ \mathsf{B} (\kappa) \big] \ge 1 - c^{-1} S^2 e^{- c \kappa}, \end{flalign} where the event $\mathsf{B} (\kappa)$ is defined by \begin{flalign*} \mathsf{B} (\kappa) = \bigg\{ \displaystyle\max_{|X/S| \le \varepsilon} \Big| \mathfrak{h}_0(X; \boldsymbol{\zeta}) - S \displaystyle\int\limits_{X/S}^0 \Big( \rho - \displaystyle\frac{z}{2} \Big) dz \Big| \le \displaystyle\frac{\kappa S^{2/3}}{6} \bigg\}. \end{flalign*} In order to apply \eqref{hxizetau} we used the fact that $\big|\lfloor (1-2\rho)S\rfloor \pm \varepsilon S\big|\leq S(1-\delta)$, as follows from the restrictions we assumed on $\varepsilon$ and $\rho$.
Now, turning to $\boldsymbol{\eta}_0$, recall that it is $\Upsilon_{\varepsilon}^{(\rho)}$-distributed on $\llbracket -\varepsilon S, \varepsilon S\rrbracket$ and hence by \Cref{distributionconcentration} there exists $c>0$ such that for any $\kappa \in [12, 12S]$, \begin{flalign} \label{cprobability} \mathbb{P} \big[ \mathsf{C} (\kappa) \big] \ge 1 - 2(2S+1)^2 e^{ -\kappa^{2}/{144} }\geq 1- c^{-1} S^2 e^{- c\kappa}, \end{flalign} where the event $\mathsf{C} (\kappa)$ is defined by \begin{flalign*} \mathsf{C} (\kappa) = \bigg\{ \displaystyle\max_{|X/S| \le \varepsilon} \Big| \mathfrak{h}_0 (X; \boldsymbol{\eta}) - \displaystyle\int\limits_X^0 \Big( \rho - \displaystyle\frac{z}{2S} \Big) dz \Big| \le \displaystyle\frac{\kappa S^{2/3}}{12} + 1 \bigg\}. \end{flalign*} In applying \Cref{distributionconcentration} we in fact bound the probability of the analogue of $\mathsf{C}(\kappa)$ with the term $S^{2/3}$ replaced by $S^{1/2}$, and then use that $S^{1/2}<S^{2/3}$ for $S>1$. The $1$ on the right-hand side of the inequality in $\mathsf{C}(\kappa)$ takes into account the potential effect of replacing the summation in \eqref{hxy2} by the above integral. In our next deduction, however, we will use the fact that $\frac{\kappa S^{1/2}}{12}+1\leq \frac{\kappa S^{1/2}}{6}$, since we have assumed $\kappa\geq 15$ and $S>1$. By definition, $\boldsymbol{\eta}_0 (x) = 0 = \boldsymbol{\zeta}_0 (x)$ for $x \notin \llbracket-\varepsilon S, \varepsilon S\rrbracket$, thus combining \eqref{bprobability} and \eqref{cprobability} yields that there exists $c=c(\delta)>0$ such that for all $\kappa \in [12, 6S]$ \begin{flalign}\label{eqDbound} \mathbb{P} \big[ \mathsf{D} (\kappa) \big] \ge 1 - c^{-1} S^2 e^{- c\kappa}, \end{flalign} where the event $\mathsf{D} (\kappa)$ is defined by \begin{flalign} \label{probabilityd} \mathsf{D} (\kappa) = \bigg\{ \displaystyle\max_{X\in \mathbb{Z}} \Big| \mathfrak{h}_0 (X; \boldsymbol{\eta}) - \mathfrak{h}_0 (X; \boldsymbol{\zeta}) \Big| \le \displaystyle\frac{\kappa S^{2/3}}{3} \bigg\}. \end{flalign} By the second part of \Cref{xizeta2}, we may couple $\boldsymbol{\eta}_t$ and $\boldsymbol{\zeta}_t$ such that \begin{flalign*} \textbf{1}_{\mathsf{D} (\kappa)} \displaystyle\sup_{t \ge 0} \displaystyle\max_{X \in \mathbb{Z}} \Big| \mathfrak{h}_t (X; \boldsymbol{\eta}) - \mathfrak{h}_t (X; \boldsymbol{\zeta}) \Big| \le \displaystyle\frac{\kappa S^{2/3}}{3} \end{flalign*} holds almost surely. Combining this with \eqref{hha}, along with the fact that $\big[- \frac{\varepsilon S}{4}, \frac{\varepsilon S}{4} \big] \subseteq [-(\varepsilon S - 4RT), \varepsilon S - 4RT]$ (as $T^{-1} S \ge \delta^{-2} \ge 4 \varepsilon^{-1} \delta^{-1} \ge 64 \varepsilon^{-1} R$) yields \begin{flalign}\label{eqADmax} \textbf{1}_{\mathsf{A}} \textbf{1}_{\mathsf{D} (\kappa)} \displaystyle\max_{|X/S| \le \varepsilon / 4} \Big| \mathfrak{h}_T(X; \boldsymbol{\eta}) - \mathfrak{h}_{S+T} \big(\big\llbracket X+\lfloor (1 - 2 \rho) S \rfloor,\lfloor (1 - 2 \rho) S \rfloor\big\rrbracket ; \boldsymbol{\xi}\big) \Big| \le \displaystyle\frac{\kappa S^{2/3}}{3}. \end{flalign} Finally, let us define the event \begin{align}\label{eqEevent} \mathsf{E} (\kappa) = \bigg\{ \displaystyle\max_{|X/S| \le \varepsilon / 4} \Big| & \mathfrak{h}_{S + T} \big( \big\llbracket X+\lfloor (1 - 2 \rho) S \rfloor, \lfloor (1 - 2 \rho) S \rfloor\big\rrbracket ; \boldsymbol{\xi} \big) \\ \nonumber & \quad - T \displaystyle\int\limits_{X/T}^0 \Big( \rho + \displaystyle\frac{T}{2 (S + T)} (1 - 2 \rho - z) \Big) dz \Big| \le \displaystyle\frac{\kappa (S + T)^{2/3}}{12} \bigg\}.
\end{align} From the $(U; X, Y) = \big( S + T, X+\lfloor (1 - 2 \rho) S \rfloor, \lfloor (1 - 2 \rho) S \rfloor\big)$ case of \eqref{hxizetau}, we have that there exists $c=c(\delta)>0$ such that for all $\kappa \in [12, 12(S+T)]$, \begin{flalign} \label{probabilitye} \mathbb{P} \big[ \mathsf{E} (\kappa) \big] \ge 1 - c^{-1} S^2 e^{ -c\kappa}. \end{flalign} In fact, when applying \eqref{hxizetau} we initially have $(S+T)$ on the right-hand side, but since $S\geq \delta^{-2}T$ by assumption, we can replace this by $S$ up to a $\delta$-dependent constant. Furthermore, in applying \eqref{hxizetau} we arrive at a slightly different form for the integral in $\mathsf{E} (\kappa)$, namely \begin{flalign*} (S + T) \displaystyle\int\limits_{(\lfloor (1 - 2 \rho) S \rfloor + X)/(S+T)}^{\lfloor (1 - 2 \rho) S \rfloor/(S+T)} \Big( \displaystyle\frac{1-w}{2}\Big) dw = T \displaystyle\int\limits_{X/T}^0 \Big( \rho + \displaystyle\frac{T}{2(S + T)} (1 - 2 \rho - z) \Big) dz + \textrm{Error} \end{flalign*} where the equality is facilitated through the change of variables $z = T^{-1} (S + T)w - T^{-1} \lfloor (1 - 2 \rho) S \rfloor$, and the error (which comes from replacing $\lfloor (1 - 2 \rho) S \rfloor$ by $(1 - 2 \rho) S$ after the change of variables) is bounded in magnitude by $\frac{T}{2(S+T)}$. That error term can be absorbed, as in the case of $\mathsf{C}(\kappa)$ in \eqref{cprobability}, via the triangle inequality. This yields \eqref{probabilitye}. Combining \eqref{eqADmax} with \eqref{eqEevent} and using Bonferroni's inequality (and the fact that under our assumptions $\kappa (S+T)^{2/3}/12 <\kappa S^{2/3}/6$) we see the first inequality below \begin{align*} \mathbb{P} \bigg[ \displaystyle\max_{|X/S| \le \varepsilon / 4} \Big| \mathfrak{h}_T (X; \boldsymbol{\eta}) -& T \displaystyle\int\limits_{X/T}^0 \Big(\rho + \displaystyle\frac{T}{2 (S + T)} (1 - 2 \rho - z) \Big) dz \Big| \le \displaystyle\frac{\kappa S^{2/3}}{2} \bigg] \\ & \ge \mathbb{P} [\mathsf{A}] + \mathbb{P} \big[ \mathsf{D} (\kappa) \big] + \mathbb{P} \big[ \mathsf{E} (\kappa) \big] - 2\geq 1 - c^{-1} S^2 e^{-c\kappa}, \end{align*} while the second (which holds for some $c=c(\delta)>0$) uses \eqref{aevent}, \eqref{eqDbound}, and \eqref{probabilitye}. \Cref{hetalinear} \eqref{hetaxilambdarholinear} involves a maximum over both $|X/S| \le \varepsilon / 4$ and $|Y/S| \le \varepsilon / 4$. This result follows from the above inequality by the triangle inequality and union bound. \end{proof} \section{Linear Trajectories of Second Class Particles and Proof of \Cref{xtlimit}} \label{Linear} Recall from the beginning of \Cref{sec:intro} that $\boldsymbol{\mathcal{A}}_t=(\boldsymbol{\eta}_t,\boldsymbol{X}_t)$ denotes ASEP started with first class particles at every site of $\mathbb{Z}_{\leq -1}$, a single second class particle started at the origin, and all other sites empty. Let $\mathcal{F}_s$ denote the $\sigma$-algebra generated by $\boldsymbol{\mathcal{A}}_t$ up to and including time $s$, for $s \in \mathbb{R}_{\ge 0}$. For any event $\mathsf{E}$, we will write $\mathbb{P}[\mathsf{E}|\boldsymbol{\mathcal{A}}_s] := \mathbb{E}[\mathbf{1}_{\mathsf{E}}|\mathcal{F}_s]$ for the conditional probability of $\mathsf{E}$ given $\mathcal{F}_s$. In Section \ref{couple} we will prove the following.
\begin{prop} \label{xti} For any $S>2$ let $T=S (\log S)^{-1}$ and define the $\mathcal{F}_{S}$-measurable random variable $\boldsymbol{\rho}_{S} \in \mathbb{R}$ by the relation $1 - 2 \boldsymbol{\rho}_{S} = S^{-1} \boldsymbol{X}_{S}$, the $\varepsilon$-dependent event \begin{equation}\label{eq:hspsbdrho} \mathsf{P}_S := \{\boldsymbol{\rho}_{S} \in (\varepsilon, 1 - \varepsilon)\} \end{equation} and the $\mathcal{F}_{S+T}$-measurable events \begin{flalign*} \mathsf{E}^{\geq}_{S} &:= \Big\{ \boldsymbol{X}_{S+T} - \boldsymbol{X}_{S} \ge (1 - 2 \boldsymbol{\rho}_{S}) T - S^{1 -1/200} \Big\},\\ \mathsf{E}^{\leq}_{S} &:= \Big\{ \boldsymbol{X}_{S+T} - \boldsymbol{X}_{S} \le (1 - 2 \boldsymbol{\rho}_{S}) T + S^{1 - 1/200} \Big\}, \end{flalign*} and $\mathsf{E}_{S}:= \mathsf{E}^{\geq}_{S}\cap \mathsf{E}^{\leq}_{S}$. Then, for any $\varepsilon \in ( 0, 1/4 )$, there exist $c = c (\varepsilon) > 0$ and an $\mathcal{F}_{S}$-measurable event $\mathsf{H}_{S}$ such that for all $S>2$ we have \begin{equation}\label{eq:hspsbd} \mathbb{P}[\mathsf{P}_S\cap (\mathsf{H}_{S})^c] \leq c^{-1} e^{-c S^{1/12}}\qquad \textrm{and} \quad \mathbb{P}[\mathsf{E}_{S} | \mathcal{F}_{S}] \ge (1 - c^{-1} S^{-1/5})\mathbf{1}_{\mathsf{H}_{S}\cap \mathsf{P}_S}. \end{equation} The constants $c=c(\varepsilon)$ can be chosen so as to weakly decrease as $\varepsilon$ decreases to $0$. \end{prop} The following is a corollary of \Cref{xti}. \begin{cor}\label{cor:almostthere} Define $$ U^{\inf} = \liminf_{t\to \infty} \frac{\boldsymbol{X}_t}{t},\quad U^{\sup} = \limsup_{t\to \infty} \frac{\boldsymbol{X}_t}{t}, \quad \mathsf{L}_{\varepsilon} = \big\{ |U^{\inf}-U^{\sup}|<\varepsilon\big\}. $$ Then, there exists $c>0$ such that for all $\varepsilon\in (0,1/4)$, $ \mathbb{P}[\mathsf{L}_{\varepsilon} ]>1-c\varepsilon. $ \end{cor} Before proving this, let us see how this readily implies \Cref{xtlimit}. \begin{proof}[Proof of \Cref{xtlimit}] Observe that for $\varepsilon'<\varepsilon$, $\mathsf{L}_{\varepsilon'}\subseteq \mathsf{L}_{\varepsilon}$. In other words, as $\varepsilon$ decreases to $0$ the events $\mathsf{L}_{\varepsilon}$ decrease. Their intersection $\mathsf{L}=\cap_{\varepsilon\in (0,1/4)} \mathsf{L}_{\varepsilon}$ is equal to the event that $U^{\inf}=U^{\sup}$, which is exactly the event that $\lim_{t\to \infty} \frac{\boldsymbol{X}_t}{t}$ exists. By continuity of measure along decreasing events and the bound $\mathbb{P}[\mathsf{L}_{\varepsilon} ]>1-c\varepsilon$ from \Cref{cor:almostthere}, we see that $\mathbb{P}[\mathsf{L}] = \lim_{\varepsilon\to 0} \mathbb{P}[\mathsf{L}_{\varepsilon}] \geq \lim_{\varepsilon\to 0}(1-c\varepsilon) = 1$, thus proving that the almost sure limit exists, as desired. \end{proof} It remains to show how \Cref{cor:almostthere} follows from \Cref{xti}. The idea is to work with a set of times $S_m$ that grows so that $S_{m+1}= S_m + S_m/\log S_m$. Taking the first time $S_0$ large enough, with probability like $1-2\varepsilon$ we have that $\boldsymbol{\rho}_{S_0}= (1-S_0^{-1} \boldsymbol{X}_{S_0})/2$ lies within $(\varepsilon,1-\varepsilon)$ -- this is the event $\mathsf{P}_{S_0}$. From \Cref{xti} there exists a hydrodynamic event $\mathsf{H}_{S_0}$, which is exponentially likely on the event $\mathsf{P}_{S_0}$, such that on $\mathsf{P}_{S_0}\cap \mathsf{H}_{S_0}$ the event $\mathsf{E}_{S_0}$ holds with probability like $1-c^{-1} S_0^{-1/5}$. On the event $\mathsf{E}_{S_0}$, the difference between $\boldsymbol{\rho}_{S_1}$ and $\boldsymbol{\rho}_{S_0}$ is bounded by $S_0^{-1/200}$. Then, we can iterate on each subsequent time $S_1$, $S_2$ and so on.
Since the $S_m$ grow like $e^{\sqrt{m}}$, the total change in the $\boldsymbol{\rho}_S$ as well as the total probabilistic error built up over each iteration can be made arbitrarily small. This establishes the claim of \Cref{cor:almostthere} along the sequence of times $S_m$. For intermediate times, we use a crude Poissonian bound on the motion of ASEP particles to show that wandering cannot change the velocity much there either. Before proving \Cref{cor:almostthere} we introduce the set of times involved in our multi-scale argument and some properties of functions of those times. \begin{definition}\label{ti} For any $S_0 \in \mathbb{R}_{\ge 2}$ define $T_m, S_m \in \mathbb{R}_{> 0}$ inductively as follows. For each $m\in\mathbb{Z}_{\ge 1}$, set $T_{m - 1} = T(S_{m-1})$ where $T(S):= S (\log S)^{-1}$ and set $S_m = S_{m - 1} + T_{m - 1}$. We will make use of the following two properties of $T(S)$: \begin{itemize} \item[(P$_1$)] The function $S\mapsto S+T(S)$ is increasing for $S\geq 2$. \item[(P$_2$)] $T(S)$ has a unique minimum for $S\geq 2$ at $S=e$, where $T(e) = e$, and $T(S)$ is increasing for $S>e$. \end{itemize} \end{definition} The following lemma provides a lower bound on each $S_m$. It may be helpful to note that the recursion for $S_m$ is a discrete version of solving the differential equation $dS(m)/dm = S(m)/\log S(m)$ with $S(0)=S_0$, whose solution is $S(m)=\exp\big(\sqrt{2m+(\log S_0)^2}\big)$. \begin{lem} \label{rti} For each $m\in\mathbb{Z}_{\geq 1}$, we have that $S_m \ge e^{\sqrt{m}}$. Moreover, for any real $\delta > 0$ and $\vartheta > 0$, there exists $D = D(\delta, \vartheta) > 1$ such that if $S_0>D$, then \begin{flalign*} \textrm{{\bf(a)}}\, \sum_{m = 0}^{\infty} S_m^{-\vartheta} < \delta,\qquad \textrm{{\bf(b)}}\, \sum_{m = 0}^{\infty} e^{-\vartheta S_m} < \delta,\qquad \textrm{{\bf(c)}}\, \sum_{m = 0}^{\infty} e^{-\vartheta T_m} < \delta. \end{flalign*} \end{lem} \begin{proof} We establish the first statement of the lemma (that $S_m \ge e^{\sqrt{m}}$) by induction on $m$. The base case $m=1$ is verified by using (P$_1$) and (P$_2$) to see that $S+T(S)$ is minimized over $S\geq 2$ at $S=2$, where it is at least $2+e\geq e^{\sqrt{1}}$. To show the induction in $m$, assume that $S_m \ge e^{\sqrt{m}}$ holds for $m = k$, for some $k\in\mathbb{Z}_{\geq 1}$. Then the induction follows from the inequalities \begin{flalign*} S_{k + 1} = S_k + T_k \ge e^{\sqrt{k}} ( 1 + k^{-1 / 2}) \ge e^{\sqrt{k + 1}}. \end{flalign*} The first equality is by definition; the next inequality uses (P$_1$) and the induction hypothesis for $m=k$; the final inequality follows since $ \exp \big( (k + 1)^{1 / 2} - k^{1 / 2} \big) \le \exp \big(\frac{1}{2 k^{1 / 2}}\big) \le 1 + k^{-1 / 2}. $ Here, the first inequality relies upon writing $(k+1)^{1/2}-k^{1/2} = k^{1/2}(1+k^{-1})^{1/2}-k^{1/2}$ and the inequalities $(1+x)^{1/2} <1+x/2$ and $x<e^x$ (both for $x>0$); the second inequality is equivalent to $(2 k^{1/2})^{-1} \leq \log(1+k^{-1/2})$, which follows from $x/2 <\log (1+x)$ for $x\in (0,1]$. Turning to {\bf (a)} and {\bf (b)}, observe that we now know that $S_m\geq \max(S_0,e^{\sqrt{m}})$. Thus $$ \sum_{m = 0}^{\infty} S_m^{-\vartheta} \leq \sum_{m = 0}^{\infty} \min\big(S_0^{-\vartheta},e^{-\sqrt{m}\vartheta}\big),\qquad \sum_{m = 0}^{\infty} e^{-\vartheta S_m} \leq \sum_{m = 0}^{\infty} e^{-\vartheta\min(S_0,e^{\sqrt{m}})}. $$ In both of these expressions it is clear that as $S_0$ goes to infinity, each summand goes to zero. Additionally, if we drop the $S_0$-dependent term from each minimum, the summations are finite.
Hence, by the dominated convergence theorem, each summation goes to zero as $S_0\to\infty$, and thus taking $S_0$ large enough we can upper bound each sum by $\delta$ as desired.

The argument for {\bf (c)} follows similarly. Since $S_0\geq 2$, combining (P$_1$) and (P$_2$) we also see that for $m\in\mathbb{Z}_{\geq 1}$, $T_m \geq S_0(\log S_0)^{-1}$. On the other hand, we also know that the function $S\mapsto T(S)$ monotonically increases for $S\geq e$ and thus, by the first part of the lemma which gives $S_m\geq e^{\sqrt{m}}$, we have that $T_m =T(S_m)\geq T(e^{\sqrt m}) =e^{\sqrt{m}}/\sqrt{m}$. Using $T_m \geq \max\big(S_0(\log S_0)^{-1},e^{\sqrt{m}}/\sqrt{m}\big)$ and the dominated convergence theorem yields {\bf (c)}.
\end{proof}

\begin{proof}[Proof of \Cref{cor:almostthere}]
For the duration of this proof let $\mathsf{P}^{\varepsilon}_S$ and $\mathsf{H}^{\varepsilon}_S$ denote the events $\mathsf{P}_S$ and $\mathsf{H}_S$ coming from a particular value of $\varepsilon$ (this dependence was implicit in the notation used elsewhere). For a given $S_0>2$ and $\varepsilon_0\in (0,1/4)$, define recursively for $m\in \mathbb{Z}_{\geq 1}$
$$
\varepsilon_m = \varepsilon_{m-1} - S_{m-1}^{-1/200}.
$$
For a given $\varepsilon=\varepsilon_0\in (0,1/4)$, it follows from \Cref{rti} that there exists $D=D(\varepsilon)>0$ such that for all $S_0>D$
\begin{equation}\label{eq:threeeqs}
\sum_{m=0}^{\infty} S_m^{-1/200} < \varepsilon/4, \quad \sum_{m=0}^{\infty} c^{-1} S_m^{-1/5} < \varepsilon/2, \quad \sum_{m=0}^{\infty} c^{-1} e^{-c S_m^{1/12}} < \varepsilon/2,\quad\sum_{m=0}^{\infty} e^{-T_m} < \varepsilon/2,
\end{equation}
where $c=c(\varepsilon)$ is given by \Cref{xti}. For $k\in \mathbb{Z}_{\geq 0}$ define the event
\begin{equation}
\widetilde{\mathsf{L}}^{\varepsilon_0}_{S_0}(k)=\bigcap_{m=0}^{k-1} \mathsf{P}_{S_m}^{\varepsilon_m} \cap \mathsf{H}_{S_m}^{\varepsilon_m} \cap \mathsf{E}_{S_m}
\end{equation}
with the convention that $\widetilde{\mathsf{L}}^{\varepsilon_0}_{S_0}(0)=\Omega$, the full sample space, and that $\widetilde{\mathsf{L}}^{\varepsilon_0}_{S_0}:=\widetilde{\mathsf{L}}^{\varepsilon_0}_{S_0}(\infty)$ is the infinite intersection. We make two claims:

\smallskip
\noindent {\bf Claim 1:} For all $\varepsilon\in (0,1/4)$ there exists $D=D(\varepsilon)>0$ so that for all $S_0>D$,
\begin{equation}\label{eq:Lcvarep}
\mathbb{P}[\widetilde{\mathsf{L}}^{\varepsilon}_{S_0}]\geq 1-4 \varepsilon.
\end{equation}

\smallskip
\noindent {\bf Claim 2:} Let $\mathsf{W}^{\varepsilon}_{S,S'} = \big\{\sup_{S\leq s<s'\leq S'}|\boldsymbol{X}_{s}/s-\boldsymbol{X}_{s'}/s'|> \varepsilon/2\big\}$. For all $\varepsilon\in (0,1/4)$ there exists $D=D(\varepsilon)>0$ so that for all $S_0>D$,
\begin{equation}\label{eq:Wvareps}
\sum_{m=0}^{\infty} \mathbb{P}[\mathsf{W}^{\varepsilon}_{S_m,S_{m+1}}]< \varepsilon.
\end{equation}

Assuming these claims, let us complete the proof of \Cref{cor:almostthere}. Assume that $\varepsilon=\varepsilon_0\in (0,1/4)$ is given and $D=D(\varepsilon)>0$ is suitably large so that for all $S_0>D$, \eqref{eq:threeeqs}, \eqref{eq:Lcvarep} and \eqref{eq:Wvareps} hold. This implies that
\begin{equation}\label{eq:LWcap}
\widetilde{\mathsf{L}}^{\varepsilon}_{S_0}\cap \bigcap_{m=0}^{\infty} \left(\mathsf{W}^{\varepsilon}_{S_m,S_{m+1}}\right)^c
\end{equation}
holds with probability at least $1-5\varepsilon$. Assume below that this event \eqref{eq:LWcap} holds.
On the event $\mathsf{E}_{S_m}$, we have that
\begin{equation}\label{eq:rhodiff}
\left|\frac{\boldsymbol{X}_{S_{m+1}}}{S_{m+1}}-\frac{\boldsymbol{X}_{S_{m}}}{S_{m}}\right| \leq S_m^{-1/200}.
\end{equation}
By \eqref{eq:threeeqs}, the right-hand side summed over $m\in \mathbb{Z}_{\geq 0}$ is bounded above by $\varepsilon/4$. Thus, on the event in \eqref{eq:LWcap} it follows that
$$
\sup_{m,m'\in \mathbb{Z}_{\geq 0}} \left|\frac{\boldsymbol{X}_{S_{m}}}{S_{m}}-\frac{\boldsymbol{X}_{S_{m'}}}{S_{m'}}\right| \leq \varepsilon/4.
$$
This controls the maximal change in $\boldsymbol{X}_S/S$ on the set of times $S_0,S_1,\ldots$. This is complemented by the control on intermediate wiggling that is afforded to us by the intersection of the events $\left(\mathsf{W}^{\varepsilon}_{S_m,S_{m+1}}\right)^c$. Combined via the triangle inequality (picking up a term of at most $\varepsilon/2$ from each of the two intermediate stretches), this implies that on the event in \eqref{eq:LWcap}
$$
\sup_{s,s'\geq S_0} \left|\frac{\boldsymbol{X}_{s}}{s}-\frac{\boldsymbol{X}_{s'}}{s'}\right| \leq \frac{\varepsilon}{2}+\frac{\varepsilon}{4}+\frac{\varepsilon}{2} = \frac{5\varepsilon}{4} .
$$
This implies that on the event in \eqref{eq:LWcap}, $U^{\inf}$ and $U^{\sup}$ differ by at most $5\varepsilon/4$. Since the probability of the event in \eqref{eq:LWcap} is at least $1-5\varepsilon$, applying the above argument with $\varepsilon$ replaced by $\varepsilon/2$ (so that $U^{\inf}$ and $U^{\sup}$ differ by at most $5\varepsilon/8<\varepsilon$) shows that $\mathbb{P}[\mathsf{L}_{\varepsilon}]\geq 1-5\varepsilon/2$, and \Cref{cor:almostthere} follows. What remains is to prove the two claims from above.

\noindent{\bf Proof of Claim 1.} Observe that
\begin{flalign*}
\mathbb{P}[\widetilde{\mathsf{L}}^{\varepsilon_0}_{S_0}] =& \mathbb{P}[\mathsf{P}^{\varepsilon_0}_{S_0}] - \mathbb{P}[\mathsf{P}^{\varepsilon_0}_{S_0}\cap(\mathsf{H}^{\varepsilon_0}_{S_0})^c ]- \mathbb{P}[\mathsf{P}^{\varepsilon_0}_{S_0}\cap \mathsf{H}^{\varepsilon_0}_{S_0}\cap (\mathsf{E}_{S_0})^c ]-\sum_{k=1}^{\infty} \mathbb{P}[\widetilde{\mathsf{L}}^{\varepsilon_0}_{S_0}(k)\cap (\mathsf{P}^{\varepsilon_k}_{S_k})^{c}]\\
& -\sum_{k=1}^{\infty} \mathbb{P}[\widetilde{\mathsf{L}}^{\varepsilon_0}_{S_0}(k)\cap \mathsf{P}^{\varepsilon_k}_{S_k}\cap (\mathsf{H}^{\varepsilon_k}_{S_k})^c] -\sum_{k=1}^{\infty} \mathbb{P}[\widetilde{\mathsf{L}}^{\varepsilon_0}_{S_0}(k)\cap \mathsf{P}^{\varepsilon_k}_{S_k}\cap \mathsf{H}^{\varepsilon_k}_{S_k}\cap (\mathsf{E}_{S_k})^c].
\end{flalign*}
Observe that $ \mathbb{P}[\mathsf{P}_{S_0}^{\varepsilon_0}]> 1-3\varepsilon_0 $ provided $S_0$ is large enough (as follows from the weak convergence of $\boldsymbol{\rho}_{S}$ to a $U[0,1]$ random variable via \Cref{xt1}). Observe now that for any $k\geq 1$, $\mathbb{P}[\widetilde{\mathsf{L}}^{\varepsilon_0}_{S_0}(k)\cap (\mathsf{P}^{\varepsilon_k}_{S_k})^{c}]=0$. This is because the combination of the events $\mathsf{P}^{\varepsilon_{k-1}}_{S_{k-1}}$ and $\mathsf{E}_{S_{k-1}}$ implies the event $\mathsf{P}^{\varepsilon_k}_{S_k}$ (this follows from \eqref{eq:rhodiff}, which shows that $|\boldsymbol{\rho}_{S_k}-\boldsymbol{\rho}_{S_{k-1}}|\leq S_{k-1}^{-1/200}= \varepsilon_{k-1}-\varepsilon_{k}$). Observe that for any $k\geq 0$,
$$
\mathbb{P}\big[\widetilde{\mathsf{L}}^{\varepsilon_0}_{S_0}(k)\cap \mathsf{P}^{\varepsilon_k}_{S_k}\cap (\mathsf{H}^{\varepsilon_k}_{S_k})^c\big] \leq \mathbb{P}\big[\mathsf{P}^{\varepsilon_k}_{S_k}\cap (\mathsf{H}^{\varepsilon_k}_{S_k})^c\big]\leq c^{-1} e^{-c S_k^{1/12}}
$$
where the constant $c=c(\varepsilon_0)>0$ can be chosen the same for all $k$ (as follows from the final statement in \Cref{xti}).
Similarly observe that for any $k\geq 0$,
$$
\mathbb{P}\big[\widetilde{\mathsf{L}}^{\varepsilon_0}_{S_0}(k)\cap \mathsf{P}^{\varepsilon_k}_{S_k}\cap \mathsf{H}^{\varepsilon_k}_{S_k}\cap (\mathsf{E}_{S_k})^c\big]= \mathbb{E}\Big[\mathbf{1}_{\widetilde{\mathsf{L}}^{\varepsilon_0}_{S_0}(k)}\mathbf{1}_{ \mathsf{P}^{\varepsilon_k}_{S_k}\cap \mathsf{H}^{\varepsilon_k}_{S_k}} \mathbb{E}\big[\mathbf{1}_{ (\mathsf{E}_{S_k})^c}|\mathcal{F}_{S_k}\big]\Big]\leq c^{-1} S_k^{-1/5}
$$
where, as above, the constant $c=c(\varepsilon_0)>0$ can be chosen the same for all $k$. The first equality follows from the tower property of conditional expectations, while the inequality relies on the identity
$ \mathbf{1}_{ \mathsf{P}^{\varepsilon_k}_{S_k}\cap \mathsf{H}^{\varepsilon_k}_{S_k}} \mathbb{E}\big[\mathbf{1}_{ (\mathsf{E}_{S_k})^c}|\mathcal{F}_{S_k}\big] = \mathbf{1}_{ \mathsf{P}^{\varepsilon_k}_{S_k}\cap \mathsf{H}^{\varepsilon_k}_{S_k}} \big(1 - \mathbb{E}\big[\mathbf{1}_{\mathsf{E}_{S_k}}|\mathcal{F}_{S_k}\big]\big) $
along with the second inequality in \eqref{eq:hspsbd} and the final statement in \Cref{xti}. Putting together the above deductions we see that
$$
\mathbb{P}[\widetilde{\mathsf{L}}^{\varepsilon_0}_{S_0}] \geq 1- 3\varepsilon_0 - \sum_{k=0}^{\infty} c^{-1} e^{-c S_k^{1/12}}-\sum_{k=0}^{\infty} c^{-1} S_k^{-1/5}\geq 1-4\varepsilon_0
$$
by the second and third inequalities in \eqref{eq:threeeqs}. This proves Claim 1.

\smallskip
\noindent{\bf Proof of Claim 2.} We start by noting a brutal Poisson process bound on the second class particle. Recall that this particle moves left into an unoccupied site at rate $L$, and left into a site occupied by a first class particle at rate $R$ (this is the rate at which that first class particle moves right and switches places with the second class particle). Since $R>L$ by assumption, this implies that we can lower-bound the trajectory of $\boldsymbol{X}_S$ by a Poisson random walk that jumps to the left at rate $L+R\leq 2R$. By similar reasoning, we can upper-bound $\boldsymbol{X}_S$ by another Poisson random walk that jumps to the right at rate $L+R\leq 2R$. Recall that for a Poisson$(\lambda)$ random variable $\boldsymbol{Z}$, if $x>\lambda$ then $\mathbb{P}[\boldsymbol{Z}>x]\leq (e\lambda/x)^x e^{-\lambda}$. Now, observe that by the union bound and triangle inequality
$$\mathbb{P}[\mathsf{W}^{\varepsilon}_{S_m,S_{m+1}}] \leq 2\mathbb{P}[\widetilde{\mathsf{W}}^{\varepsilon}_{S_m,S_{m+1}}]\quad\textrm{where}\quad \widetilde{\mathsf{W}}^{\varepsilon}_{S_m,S_{m+1}} = \left\{\sup_{s\in [S_{m},S_{m+1}]}\left|\frac{\boldsymbol{X}_{S_m}}{S_m}-\frac{\boldsymbol{X}_{s}}{s}\right|> \frac{\varepsilon}{4}\right\}.
$$
Noting that
$$
\frac{\boldsymbol{X}_{S_m}}{S_m}-\frac{\boldsymbol{X}_{s}}{s} = \frac{\boldsymbol{X}_{S_m}-\boldsymbol{X}_s}{s}+ \frac{s-S_m}{sS_m}\boldsymbol{X}_{S_m}
$$
we see that
$$
\widetilde{\mathsf{W}}^{\varepsilon}_{S_m,S_{m+1}} \subseteq \bigg\{\sup_{s\in [S_{m},S_{m+1}]} \bigg|\frac{\boldsymbol{X}_{S_m}-\boldsymbol{X}_s}{s}\bigg|>\frac{\varepsilon}{8}\bigg\}\cup \bigg\{\sup_{s\in [S_{m},S_{m+1}]} \bigg|\frac{s-S_m}{s S_m} \boldsymbol{X}_{S_m}\bigg|>\frac{\varepsilon}{8}\bigg\}.
$$
By the brutal Poisson bound above, there exists a $D=D(\varepsilon)>0$ such that for all $S_0>D$,
$$
\mathbb{P}\bigg[\sup_{s\in [S_{m},S_{m+1}]} \bigg|\frac{\boldsymbol{X}_{S_m}-\boldsymbol{X}_s}{s}\bigg|>\frac{\varepsilon}{8}\bigg] \leq \mathbb{P}\bigg[\sup_{s\in [S_{m},S_{m+1}]} \bigg|\frac{\boldsymbol{X}_{S_m}-\boldsymbol{X}_s}{T_m}\bigg|>\frac{\varepsilon}{8}\log S_m \bigg] \leq e^{-T_m}.
$$
Similarly, we see that
$$
\mathbb{P}\bigg[\sup_{s\in [S_{m},S_{m+1}]} \bigg|\frac{s-S_m}{s S_m} \boldsymbol{X}_{S_m}\bigg|>\frac{\varepsilon}{8}\bigg]\leq \mathbb{P}\bigg[\sup_{s\in [S_{m},S_{m+1}]} \bigg|\frac{\boldsymbol{X}_{S_m}}{S_m}\bigg|>\frac{\varepsilon}{8} \log S_m \bigg]\leq e^{-S_m}.
$$
Provided $D$ is large enough, the sums over $m$ of the upper bounds $e^{-T_m}$ and $e^{-S_m}$ are each bounded above by $\varepsilon/4$, which implies Claim 2 and completes the proof of \Cref{cor:almostthere}.
\end{proof}
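For the reader's convenience, we record the standard Chernoff-type computation behind the Poisson tail bound invoked in the proof of Claim 2 above. For a Poisson$(\lambda)$ random variable $\boldsymbol{Z}$, any $\theta>0$ and any $x>\lambda$, Markov's inequality applied to $e^{\theta \boldsymbol{Z}}$ gives
\begin{equation*}
\mathbb{P}[\boldsymbol{Z}>x] \leq e^{-\theta x}\, \mathbb{E}\big[e^{\theta \boldsymbol{Z}}\big] = \exp\big(\lambda(e^{\theta}-1)-\theta x\big),
\end{equation*}
and taking the optimal choice $\theta = \log(x/\lambda)>0$ yields
\begin{equation*}
\mathbb{P}[\boldsymbol{Z}>x] \leq \exp\big(x-\lambda-x\log(x/\lambda)\big) = (e\lambda/x)^x e^{-\lambda}.
\end{equation*}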
\section{Proving \Cref{xti}} \label{couple}

To prove \Cref{xti} (we focus on the $\mathsf{E}^{\geq}_S$ case, as the $\mathsf{E}^{\leq}_{S}$ case follows immediately from particle-hole symmetry as in \Cref{rem:secondclassduality}) we start in \Cref{b} by coupling $\boldsymbol{\mathcal{A}}$ with a slightly different multi-species ASEP $\boldsymbol{\mathcal{B}}$ obtained from $\boldsymbol{\mathcal{A}}$ at time $S$ by adding a number of second class particles to the left of $\boldsymbol{X}_{S}$. Appealing to \Cref{prop:Rez}, we can control the behavior of $\boldsymbol{X}_{S+T}$ in terms of the behavior of the bulk of the new second class particles. That behavior can be controlled by hydrodynamic estimates. All of this, however, requires that the time $S$ density profile in $\boldsymbol{\mathcal{A}}$ is close enough to its hydrodynamic limit. This condition is encapsulated in the event $\mathsf{H}_S$ (see \eqref{eq:Hsdef} in the proof of \Cref{zti}).

\begin{figure}[t]
\includegraphics[width=.7\linewidth]{ABcoupling.eps}
\caption{The coupling between $\boldsymbol{\mathcal{A}}_S$ and $\boldsymbol{\mathcal{B}}_0$ from \Cref{b}. The second class particle (black circle) in $\boldsymbol{\mathcal{A}}_S$ is relocated to the origin and everything is translated accordingly, and additional second class particles (grey circles) are added with the probabilities given in \eqref{probabilityb}.}
\label{fig:ABcoupling}
\end{figure}

\begin{definition} \label{b}
Fix $\gamma = \frac{1}{100}$ (any sufficiently small positive constant would serve). Recall that in $\boldsymbol{\mathcal{A}}_S$ the second class particle is denoted by $\boldsymbol{X}_S$. Given the state of $\boldsymbol{\mathcal{A}}_S$ we define a new process $\boldsymbol{\mathcal{B}}$ which is a multi-species ASEP with left jump rate $L$, right jump rate $R$, and the following initial data. Each site $j \in \mathbb{Z}$ is initially occupied in $\boldsymbol{\mathcal{B}}_0$ by a first class particle if and only if $j + \boldsymbol{X}_S$ is occupied by a first class particle in $\boldsymbol{\mathcal{A}}_S$. Site 0 in $\boldsymbol{\mathcal{B}}_0$ is initially occupied by a second class particle and, furthermore, for each site $j \in \llbracket - 2 S^{1 - \gamma}, - 1\rrbracket$ with $j + \boldsymbol{X}_S$ not occupied by a first class particle in $\boldsymbol{\mathcal{A}}_S$, $\boldsymbol{\mathcal{B}}_0(j)$ contains a second class particle independently and with probability (see \Cref{bparticles} for an explanation of the choice of these probabilities and \Cref{rem:probs} regarding their positivity, and recall that $\boldsymbol{\rho}_S$ is defined in \Cref{xti})
\begin{equation} \label{probabilityb}
\left( S^{-\gamma} + \displaystyle\frac{j}{2S} \right) \left( 1 - \boldsymbol{\rho}_S + \displaystyle\frac{j}{2S} \right)^{-1}.
\end{equation}
Let $\boldsymbol{M}$ equal the number of second class particles in $\boldsymbol{\mathcal{B}}$. Denote their tagged positions at any time $t \ge 0$ by $\boldsymbol{Z}_t(1) > \cdots > \boldsymbol{Z}_t(\boldsymbol{M})$, so that $\boldsymbol{Z}_0(1) = 0$.
Set $\{\!\!\{\boldsymbol{Z}_t\}\!\!\} = \big\{ \boldsymbol{Z}_t(1), \ldots , \boldsymbol{Z}_t(\boldsymbol{M})\big\}$.

Equivalently, we let $\boldsymbol{\mathcal{B}}_t=(\tilde\boldsymbol{\eta}_t,\tilde\boldsymbol{\alpha}_t)$ and assume initial data $\tilde\boldsymbol{\eta}_0(j) = \boldsymbol{\eta}(\boldsymbol{X}_S+j)$ for all $j\in \mathbb{Z}$, while for $\tilde\boldsymbol{\alpha}_0$ we assume that $\tilde\boldsymbol{\alpha}_0(0)=1$, and that for all $j \in \llbracket- 2 S^{1 - \gamma}, - 1\rrbracket $ with $\tilde\boldsymbol{\eta}_0(j)=0$, the $\tilde\boldsymbol{\alpha}_0(j)$ are independent Bernoulli random variables with probability \eqref{probabilityb} of equaling $1$. For all other choices of $j$ set $\tilde\boldsymbol{\alpha}_0(j)=0$. The Markov dynamics for $(\tilde\boldsymbol{\eta}_t,\tilde\boldsymbol{\alpha}_t)$ are those of first and second class particles under the basic coupling. It will be convenient, e.g. in \Cref{LimitProcess}, for us to use $\boldsymbol{\mathcal{B}}_t^{(1)}$ to denote the occupation variables for just the first class particles in $\boldsymbol{\mathcal{B}}_t$ and $\boldsymbol{\mathcal{B}}_t^{(1\cup 2)}$ to denote the occupation variables for the union of first and second class particles in $\boldsymbol{\mathcal{B}}_t$, i.e. $\boldsymbol{\mathcal{B}}_t^{(1)} = \tilde\boldsymbol{\eta}_t$ and $\boldsymbol{\mathcal{B}}_t^{(1\cup 2)} = \tilde\boldsymbol{\eta}_t+\tilde\boldsymbol{\alpha}_t$.

The above definition of $\boldsymbol{\mathcal{B}}$ depends (i.e., is measurable with respect to $\mathcal{F}_S$) on the location $\boldsymbol{X}_S$ of the second class particle in $\boldsymbol{\mathcal{A}}_S$ and the associated hydrodynamic density $\boldsymbol{\rho}_S$ defined by the relation $1-2\boldsymbol{\rho}_S=\boldsymbol{X}_S/S$. We will also need notation where we define a version of $\boldsymbol{\mathcal{B}}$ relative to a specified choice of $\boldsymbol{\rho}_S$ and hence also $\boldsymbol{X}_S$. Let
\begin{equation}\label{eq:IEps}
I^{\varepsilon}_S= \left\{\rho\in (\varepsilon,1-\varepsilon): S(1-2\rho)\in\mathbb{Z}\right\},\qquad \textrm{and for }\rho\in I^{\varepsilon}_S\textrm{ let } X^{\rho}_S=S(1-2\rho).
\end{equation}
These represent the potential values of the random variables $\boldsymbol{\rho}_S$ and $\boldsymbol{X}_S$ respectively. For such a $\rho\in I^{\varepsilon}_S$ and corresponding $X^{\rho}_S$, define $\boldsymbol{\mathcal{B}}^{\rho}$ exactly as above but with $\boldsymbol{\rho}_S$ and $\boldsymbol{X}_S$ replaced by the specified values $\rho$ and $X^{\rho}_S$. Similarly, let $\boldsymbol{\mathcal{B}}^{(1),\rho}$ and $\boldsymbol{\mathcal{B}}^{(1\cup 2),\rho}$ respectively denote the first class particle process and the union of first and second class particle processes. In this notation, $\boldsymbol{\mathcal{B}} = \boldsymbol{\mathcal{B}}^{\boldsymbol{\rho}_S}$, where the variable $\rho$ is replaced by the random variable $\boldsymbol{\rho}_S$. Recall that we are using the convention that bold symbols are random variables while their unbolded counterparts are deterministic variables.
\end{definition}

\begin{rem} \label{bparticles}
Let us briefly explain the choice of the probabilities in \eqref{probabilityb}. In view of the hydrodynamic limit for the ASEP with step initial data (as in \Cref{hetaxi} with $\rho=1$), the probability that a first class particle occupies a site $j \in \llbracket-\varepsilon S, \varepsilon S\rrbracket$ in $\boldsymbol{\mathcal{B}}_0$ is approximately $\boldsymbol{\rho}_S - \frac{j}{2S}$.
Therefore, the probability that site $j$ is empty should approximately be $1 - \boldsymbol{\rho}_S + \frac{j}{2S}$. So, \eqref{probabilityb} essentially ensures that the density of either first or second class particles in the interval $\llbracket -2S^{1 - \gamma}, -1\rrbracket$ in $\boldsymbol{\mathcal{B}}_0$ is approximately constant and equal to $\boldsymbol{\rho}_S + S^{- \gamma}$. In particular, the density of particles in $\boldsymbol{\mathcal{B}}_0$ decreases linearly with slope $\frac{1}{2S}$ on $\llbracket -\varepsilon S, -2S^{1 - \gamma}\rrbracket$ to $\boldsymbol{\rho}_S + S^{-\gamma}$ at $- 2S^{1 - \gamma}$, remains constant at $\boldsymbol{\rho}_S + S^{-\gamma}$ on $\llbracket -2 S^{1 - \gamma}, -1\rrbracket$, discontinuously decreases to $\boldsymbol{\rho}_S$ at site $0$, and then decreases linearly with slope $\frac{1}{2S}$ on $\llbracket 0, \varepsilon S\rrbracket$; see \Cref{fig:ASEPBdensity}.
\end{rem}

\begin{figure}[t]
\includegraphics[width=.8\linewidth]{ASEPBdensity.eps}
\caption{The average particle density versus spatial location for a typical instance of $\boldsymbol{\mathcal{B}}_0$, as explained in \Cref{bparticles}. Since there are only second class particles in $[-2S^{1-\gamma},0]$, the densities only differ therein. The upper line there corresponds to the density of the union of first and second class particles $\boldsymbol{\mathcal{B}}_0^{(1\cup 2)}$ while the lower line is just for first class particles $\boldsymbol{\mathcal{B}}_0^{(1)}$.}
\label{fig:ASEPBdensity}
\end{figure}

\begin{rem} \label{rem:probs}
Depending on the value of $\boldsymbol{\rho}_S$ and $S$, the probabilities in \eqref{probabilityb} may exceed $1$. However, for a given value of $\varepsilon$ we can choose $c(\varepsilon)$ in the statement of \Cref{xti} small enough so that for $\boldsymbol{\rho}_S\in (\varepsilon,1-\varepsilon)$, either the expressions in \eqref{probabilityb} remain in $(\varepsilon/2,1-\varepsilon/2)$ for all relevant $j$, or $1-c^{-1} S^{-1/5}<0$. In the former case, the Bernoulli random variables are well-defined, while in the latter case, the second claimed inequality in \eqref{eq:hspsbd} in \Cref{xti} is trivially true (since its left-hand side is nonnegative while its right-hand side is negative).
\end{rem}

Now observe that \eqref{eq:Rezleq}, from \Cref{prop:Rez}, implies that for any $y\in \mathbb{Z}$,
\begin{equation}\label{eq:XZcompare}
\mathbb{P}\big[\boldsymbol{X}_{S+T} - \boldsymbol{X}_S\le y|\mathcal{F}_S \big] \leq \displaystyle\frac{1}{\boldsymbol{M}} \displaystyle\sum_{j = 1}^{\boldsymbol{M}} \mathbb{P}^{\boldsymbol{\mathcal{B}}_0}\big[ \boldsymbol{Z}_T(j) \le y \big].
\end{equation}
The left-hand side of this inequality is measurable with respect to $\mathcal{F}_S$ while the right-hand side is measurable with respect to the sigma algebra generated by $\mathcal{F}_S$ and the Bernoulli random variables used to form $\boldsymbol{\mathcal{B}}_0$ from $\boldsymbol{\mathcal{A}}_S$. In particular, for any choice of the Bernoulli random variables, the inequality holds. We can rephrase the inequality \eqref{eq:XZcompare} in the following manner: let $\boldsymbol{K}$ be uniformly distributed on $\{1,\ldots, \boldsymbol{M}\}$; then \eqref{eq:XZcompare} is equivalent to
\begin{equation}\label{eq:XZcompare2}
\mathbb{P}\big[\boldsymbol{X}_{S+T} \ge \boldsymbol{X}_S + y|\mathcal{F}_S \big] \geq \mathbb{P}\big[ \boldsymbol{Z}_T(\boldsymbol{K}) \ge y|\mathcal{F}_S \big].
\end{equation}
In light of \eqref{eq:XZcompare2}, we see that in order to establish \Cref{xti}, it suffices to control the locations of most of the second class particles in $\boldsymbol{\mathcal{B}}$ with high probability. The following proposition achieves this aim.

\begin{prop}\label{zti}
For any $\varepsilon \in ( 0,1/4)$, there exist $c=c(\varepsilon) > 0$ and $\mathcal{F}_S$-measurable events $\mathsf{H}_S$ such that for all $S>2$,
\begin{equation}\label{eq:HPbd}
\mathbb{P}[\mathsf{P}_S\cap (\mathsf{H}_{S})^c] \leq c^{-1} e^{-c S^{1/12}}
\end{equation}
where $\mathsf{P}_S$ is defined in \eqref{eq:hspsbdrho} and
\begin{align} \label{ztestimate}
\mathbb{P} \Big[ \big| \{\!\!\{\boldsymbol{Z}_T\}\!\!\} \cap \big[(1-2\boldsymbol{\rho}_S)T-S^{1 - \frac{\gamma}{2}}, \infty\big) \big| \ge \boldsymbol{M}(1 - S^{-\frac{1}{5}}) \Big| \mathcal{F}_S \Big] \ge\big(1 - c^{-1} e^{-cS^{1/12}}\big)\mathbf{1}_{\mathsf{H}_S\cap \mathsf{P}_S}.
\end{align}
The constants $c=c(\varepsilon)$ can be chosen so as to weakly decrease as $\varepsilon$ decreases to 0.
\end{prop}

This will be proved in \Cref{LimitProcess}, but first we prove \Cref{xti} assuming it.

\begin{proof}[Proof of \Cref{xti}]
Let $\mathsf{H}_S$ and $c=c(\varepsilon)>0$ be given as in \Cref{zti}, in which case the first inequality in \eqref{eq:hspsbd} holds on account of \eqref{eq:HPbd}. We argue here that
\begin{equation}\label{eq:Egeqbd}
\mathbb{P}[\mathsf{E}^{\geq}_{S} | \mathcal{F}_{S}] \ge (1 - c^{-1} S^{-1/5})\mathbf{1}_{\mathsf{H}_{S}\cap \mathsf{P}_S}.
\end{equation}
Assuming this, we can deduce the same bound with $\mathsf{E}^{\leq}_{S}$. This is because after applying the particle-hole symmetry (\Cref{rem:secondclassduality}) to our process, the initial data remains unchanged and the events $\mathsf{E}^{\leq}_{S}$ and $\mathsf{E}^{\geq}_{S}$ swap. Combining the two bounds via a union bound (and decreasing $c$) then yields the second inequality in \eqref{eq:hspsbd}.

To show \eqref{eq:Egeqbd}, assume that $\mathsf{H}_S\cap \mathsf{P}_S$ holds and let (recall $ \{\!\!\{\boldsymbol{Z}_T\}\!\!\}$ defined below \eqref{probabilityb})
$$\Lambda = \{\!\!\{\boldsymbol{Z}_T\}\!\!\} \cap \big[(1-2\boldsymbol{\rho}_S)T-S^{1 - \frac{\gamma}{2}}, \infty\big)$$
and define the events
$$
\mathsf{F}_S = \big\{\boldsymbol{Z}_T(\boldsymbol{K})\geq (1-2\boldsymbol{\rho}_S)T-S^{1 - \frac{\gamma}{2}}\big\}, \quad \textrm{and}\quad \mathsf{G}_S = \big\{|\Lambda|\geq \boldsymbol{M}(1-S^{-1/5})\big\}
$$
(recall that $\gamma =1/100$ and $\boldsymbol{K}$ is uniformly chosen on $\{1,\ldots, \boldsymbol{M}\}$). From \eqref{eq:XZcompare2} it follows that $\mathbb{P}\big[\mathsf{E}^{\geq}_S| \mathcal{F}_S \big]\geq \mathbb{P}\big[\mathsf{F}_{S}| \mathcal{F}_S \big]$. Since $\boldsymbol{M}=|\{\!\!\{\boldsymbol{Z}_T\}\!\!\}|$, the event $\mathsf{G}_S$ says that the fraction of particles in $\{\!\!\{\boldsymbol{Z}_T\}\!\!\}$ which lie in $[(1-2\boldsymbol{\rho}_S)T-S^{1 - \frac{\gamma}{2}}, \infty)$ is at least $1-S^{-1/5}$. The event $\mathsf{F}_S$ is that a randomly chosen particle in $\{\!\!\{\boldsymbol{Z}_T\}\!\!\}$ lies in $\big[(1-2\boldsymbol{\rho}_S)T-S^{1 - \frac{\gamma}{2}}, \infty\big)$. Thus, conditioned on $\mathsf{G}_S$, the probability of $\mathsf{F}_S$ is at least $1-S^{-1/5}$. This implies that $ \mathbb{P}\big[\mathsf{F}_S| \mathcal{F}_S \big]\geq \mathbb{P}\big[\mathsf{G}_S|\mathcal{F}_S\big] - S^{-1/5} $ and by \Cref{zti}, $\mathbb{P}\big[\mathsf{G}_S|\mathcal{F}_S\big]\geq 1-c^{-1} e^{-cS^{1/12}}$.
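Let us spell out the first of these two bounds, since it is where the uniform sampling of $\boldsymbol{K}$ enters (recall that $\boldsymbol{K}$ is sampled uniformly from $\{1,\ldots,\boldsymbol{M}\}$, independently of the dynamics). Conditionally on $\{\!\!\{\boldsymbol{Z}_T\}\!\!\}$, the probability that $\boldsymbol{Z}_T(\boldsymbol{K})\in\Lambda$ equals $|\Lambda|/\boldsymbol{M}$, and $|\Lambda|/\boldsymbol{M}\geq 1-S^{-1/5}$ on $\mathsf{G}_S$. Hence
\begin{equation*}
\mathbb{P}\big[\mathsf{F}_S|\mathcal{F}_S\big] = \mathbb{E}\Big[\frac{|\Lambda|}{\boldsymbol{M}}\,\Big|\, \mathcal{F}_S\Big] \geq \mathbb{E}\Big[\mathbf{1}_{\mathsf{G}_S}\frac{|\Lambda|}{\boldsymbol{M}}\,\Big|\, \mathcal{F}_S\Big] \geq \big(1-S^{-1/5}\big)\,\mathbb{P}\big[\mathsf{G}_S|\mathcal{F}_S\big] \geq \mathbb{P}\big[\mathsf{G}_S|\mathcal{F}_S\big] - S^{-1/5}.
\end{equation*}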
Putting this all together shows that, on $\mathsf{H}_S\cap\mathsf{P}_S$, $ \mathbb{P}\big[\mathsf{E}^{\geq}_S| \mathcal{F}_S \big]\geq 1-c^{-1} S^{-1/5} $ (after decreasing $c$), which yields \eqref{eq:Egeqbd} as desired. The final sentence of \Cref{xti} follows from that of \Cref{zti}.
\end{proof}

\section{Proof of \Cref{zti}: reduction to a hydrodynamic limit estimate} \label{LimitProcess}

It remains to establish \Cref{zti}. To this end, we will start by comparing the multi-class ASEP $\boldsymbol{\mathcal{B}}$ from \Cref{b} to two versions of ASEP in \Cref{xizetaprocesses} ($\boldsymbol{\mathcal{B}}^{(1)}$ will be compared to $\boldsymbol{\xi}^{(1)}$ while $\boldsymbol{\mathcal{B}}^{(1\cup 2)}$ will be compared to $\boldsymbol{\xi}^{(1\cup 2)}$). The idea, developed in \Cref{xizetab}, is that the height function for $\boldsymbol{\xi}^{(1)}_0$ will be close (by close, we mean at most order $S^{3/4}$ apart with probability at least $1-c^{-1} e^{-c S^{1/12}}$) to that of $\boldsymbol{\mathcal{B}}_0^{(1)}$ (the first class particles in $\boldsymbol{\mathcal{B}}_0$), while the height function for $\boldsymbol{\xi}^{(1\cup 2)}_0$ will be close to that of $\boldsymbol{\mathcal{B}}_0^{(1\cup 2)}$ (the union of first and second class particles in $\boldsymbol{\mathcal{B}}_0$). This event of height function closeness is part of the hydrodynamic event $\mathsf{H}_S$ which appears in the statement of \Cref{zti}. \Cref{zetaestimate} then shows that the simpler $\boldsymbol{\xi}^{(1)}$ and $\boldsymbol{\xi}^{(1\cup 2)}$ processes evolve over time $T=S/\log S$ to be close to the same hydrodynamic limit in the region $(-\infty , (1-2\boldsymbol{\rho}_S)T - S^{1-\frac{\gamma}{2}})$. Since the number of second class particles is close to $S^{1-2\gamma}$, which is much larger than $S^{3/4}$, this implies that most of the second class particles in $\boldsymbol{\mathcal{B}}^{(1\cup 2)}$ are in the complementary region $[(1-2\boldsymbol{\rho}_S)T - S^{1-\frac{\gamma}{2}},\infty)$, which is exactly what we seek to show in \Cref{zti}.

The processes $\boldsymbol{\mathcal{B}}^{(1)}_t$, $\boldsymbol{\mathcal{B}}^{(1\cup 2)}_t$, $\boldsymbol{\xi}^{(1)}_t$ and $\boldsymbol{\xi}^{(1\cup 2)}_t$ all depend on the random variable $\boldsymbol{\rho}_S$ (recall from the beginning of \Cref{couple}). In order to make the comparisons mentioned above, we will instead consider $\boldsymbol{\mathcal{B}}^{(1),\rho}_t$, $\boldsymbol{\mathcal{B}}^{(1\cup 2),\rho}_t$, $\boldsymbol{\xi}^{(1),\rho}_t$ and $\boldsymbol{\xi}^{(1\cup 2),\rho}_t$ for deterministic values of $\rho\in I^{\varepsilon}_S$ (recall from \eqref{eq:IEps}). Taking a union bound over all potential values of $\rho$, we establish that for the random $\boldsymbol{\rho}_S$ the comparison likewise holds.

\begin{definition} \label{xizetaprocesses}
For $\rho\in (\varepsilon,1-\varepsilon)$, let $\boldsymbol{\xi}^{(1),\rho}_t$ and $\boldsymbol{\xi}^{(1\cup 2),\rho}_t$ denote two versions of ASEP, each with left and right jump rates $L$ and $R$ and initial data given as follows (see also \Cref{fig:ASEPxizetadensity}). For each $j \notin \llbracket-\varepsilon S, \varepsilon S\rrbracket$, we deterministically set $\boldsymbol{\xi}^{(1),\rho}_0 (j) = 0 = \boldsymbol{\xi}^{(1\cup 2),\rho}_0 (j)$.
To define $\boldsymbol{\xi}^{(1),\rho}_0$ elsewhere, for each $j \in \llbracket-\varepsilon S, \varepsilon S\rrbracket$, we define $\boldsymbol{\xi}^{(1),\rho}_0 (j)$ according to independent Bernoulli random variables with probabilities
\begin{equation}\label{eq:xirho}
\mathbb{P} \big[ \boldsymbol{\xi}^{(1),\rho}_0 (j) = 1 \big] = \rho - \frac{j}{2S}, \qquad \mathbb{P} \big[ \boldsymbol{\xi}^{(1),\rho}_0 (j) = 0 \big] = 1 - \rho + \frac{j}{2S}.
\end{equation}
In the language of \Cref{distributedinitial}, this initial data is $\Upsilon^{(\rho)}_{\varepsilon}$-distributed on the interval $\llbracket-\varepsilon S, \varepsilon S\rrbracket$. We define $\boldsymbol{\xi}^{(1\cup2),\rho}_0 (j)$ for $j \in \llbracket-\varepsilon S, \varepsilon S\rrbracket$ as follows: for each $j \in \llbracket-\varepsilon S, \varepsilon S\rrbracket \setminus \llbracket -2 S^{1 - \gamma}, - 1\rrbracket$,
$$
\mathbb{P} \big[ \boldsymbol{\xi}^{(1\cup2),\rho}_0 (j) = 1 \big] = \rho - \frac{j}{2S}; \qquad \mathbb{P} \big[ \boldsymbol{\xi}^{(1\cup2),\rho}_0 (j) = 0 \big] = 1 - \rho + \frac{j}{2S},
$$
while for each $j \in \llbracket -2 S^{1 - \gamma}, - 1\rrbracket$,
$$
\mathbb{P} \big[ \boldsymbol{\xi}^{(1\cup2),\rho}_0 (j) = 1 \big] = \rho + S^{-\gamma}; \qquad \mathbb{P} \big[ \boldsymbol{\xi}^{(1\cup2),\rho}_0 (j) = 0 \big] = 1 - \rho - S^{-\gamma}.
$$
Again, these choices are mutually independent over all $j$. Moreover, we assume that all of these Bernoulli random variables are chosen independently of the state of $\boldsymbol{\mathcal{B}}_0$. Finally, set $\boldsymbol{\xi}^{(1)}_t=\boldsymbol{\xi}^{(1),\boldsymbol{\rho}_S}_t$ and $\boldsymbol{\xi}^{(1\cup2)}_t=\boldsymbol{\xi}^{(1\cup2),\boldsymbol{\rho}_S}_t$, i.e., the processes just defined above but with $\rho$ replaced by $\boldsymbol{\rho}_S$, as determined by the location of the second class particle in $\boldsymbol{\mathcal{A}}_S$.
\end{definition}

\begin{figure}[t]
\includegraphics[width=.8\linewidth]{ASEPxizetadensity.eps}
\caption{Particles in $\boldsymbol{\xi}^{(1),\rho}_0$ and $\boldsymbol{\xi}^{(1\cup2),\rho}_0$ (see \Cref{xizetaprocesses}) are initially present according to independent Bernoulli random variables with probabilities given by the plot shown here. The probabilities coincide for $\boldsymbol{\xi}^{(1),\rho}_0$ and $\boldsymbol{\xi}^{(1\cup2),\rho}_0$, except in the window $[-2S^{1-\gamma},-1]$ where the $\boldsymbol{\xi}^{(1\cup2),\rho}_0$ probability remains flat and the $\boldsymbol{\xi}^{(1),\rho}_0$ probability decreases linearly.}
\label{fig:ASEPxizetadensity}
\end{figure}

Under these choices, we have the following proposition, which essentially states that $\boldsymbol{\xi}^{(1)}_0$ initially approximates $\boldsymbol{\mathcal{B}}_0^{(1)}$ and $\boldsymbol{\xi}^{(1\cup2)}_0$ initially approximates $\boldsymbol{\mathcal{B}}_0^{(1\cup 2)}$ (recall \Cref{b}).
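Before stating it, we record a heuristic consistency check for the centering of $\boldsymbol{M}$ in the event $\mathsf{M}_S$ below (this computation is not needed for the proofs). By \Cref{b} and \Cref{bparticles}, a second class particle is added at a site $j\in\llbracket -2S^{1-\gamma},-1\rrbracket$ with unconditional probability approximately
\begin{equation*}
\Big(1-\boldsymbol{\rho}_S+\frac{j}{2S}\Big) \Big( S^{-\gamma} + \frac{j}{2S} \Big)\Big(1-\boldsymbol{\rho}_S+\frac{j}{2S}\Big)^{-1} = S^{-\gamma}+\frac{j}{2S},
\end{equation*}
so that
\begin{equation*}
\mathbb{E}[\boldsymbol{M}] \approx \int_{-2S^{1-\gamma}}^{0} \Big(S^{-\gamma}+\frac{u}{2S}\Big)\, du = 2S^{1-2\gamma}-S^{1-2\gamma} = S^{1-2\gamma}.
\end{equation*}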
\begin{prop}\label{xizetab}
For all $\varepsilon\in (0,1/4)$, there exists $c=c(\varepsilon)>0$ such that for
\begin{equation*}
\mathsf{D}_S(\boldsymbol{\mathcal{B}},\boldsymbol{\xi}) = \big\{ \displaystyle\max_{|j| \le \varepsilon S} \big| \mathfrak{h}_0 \big(j; \boldsymbol{\mathcal{B}} \big) - \mathfrak{h}_0 (j; \boldsymbol{\xi}) \big| > S^{\frac{3}{ 4}}\big\},\qquad \mathsf{M}_S= \big\{\big|\boldsymbol{M} - S^{1-2\gamma}\big| > S^{\frac{3}{ 4}} \big\}
\end{equation*}
and $\mathsf{P}_S$ as in \eqref{eq:hspsbdrho}, the following holds for any $S>2$:
\begin{align}
\label{eq:hbxi} \mathbb{P} \big[\mathsf{D}_S(\boldsymbol{\mathcal{B}}^{(1)},\boldsymbol{\xi}^{(1)})\cap \mathsf{P}_S \big] &< c^{-1} e^{- c S^{1/12}}, \\
\label{eq:hbzeta} \mathbb{P} \big[\mathsf{D}_S(\boldsymbol{\mathcal{B}}^{(1\cup 2)},\boldsymbol{\xi}^{(1\cup 2)})\cap \mathsf{P}_S \big] &< c^{-1} e^{- c S^{1/12}},\\
\label{eq:Mbdd} \mathbb{P} \big[\mathsf{M}_S\cap \mathsf{P}_S\big] &< c^{-1} e^{- c S^{1/12}}.
\end{align}
The constants $c=c(\varepsilon)$ can be chosen so as to weakly decrease as $\varepsilon$ decreases to 0.
\end{prop}

\begin{rem}
Note that in \Cref{xizetab}, $S$ comes into the definition of $\boldsymbol{\mathcal{B}}_0$ since it determines the time at which we observe and modify the state of $\boldsymbol{\mathcal{A}}$; $S$ comes into the definition of $\boldsymbol{\xi}^{(1)}$ and $\boldsymbol{\xi}^{(1\cup 2)}$ in determining the parameters of the Bernoulli occupation variables; and $S$ comes into the definition of $\boldsymbol{\rho}_S$ since $(1-2\boldsymbol{\rho}_S)S = \boldsymbol{X}_S$. Also, note that for any $S_0>2$, by taking $c$ small enough, we can make the bounds in \Cref{xizetab} trivial for $S<S_0$ (i.e., make the right-hand side exceed $1$); we will use this observation in the proof. Also, note that our proof of \eqref{eq:hbxi} and \eqref{eq:hbzeta} applies with $S^{3/4}$ replaced by any power of $S$ exceeding $2/3$. We choose $3/4$ as it is sufficient for our purposes.
\end{rem}

\begin{proof}
Equation \eqref{eq:hbxi} follows readily from the triangle inequality and a union bound by combining \Cref{hetaxi} (which controls the deviations of the height function for $\boldsymbol{\mathcal{B}}_0^{(1)}$ around its hydrodynamic limit) and \Cref{distributionconcentration} (which controls the deviation of the height function for $ \boldsymbol{\xi}^{(1)}$ around its hydrodynamic limit). The proof of \eqref{eq:hbzeta} is more involved since we need to track the effect of the additional particles added to go from $\boldsymbol{\mathcal{B}}_0^{(1)}$ to $\boldsymbol{\mathcal{B}}_0^{(1\cup 2)}$. We give the details below.

Recall $I^{\varepsilon}_S$ from \eqref{eq:IEps} and observe that the event on the left-hand side of \eqref{eq:hbzeta} satisfies
$$
\mathsf{D}_S(\boldsymbol{\mathcal{B}}^{(1\cup 2)},\boldsymbol{\xi}^{(1\cup 2)})\cap\mathsf{P}_S\subseteq \bigcup_{\rho\in I^{\varepsilon}_S} \mathsf{D}_S(\boldsymbol{\mathcal{B}}^{(1\cup 2),\rho},\boldsymbol{\xi}^{(1\cup 2),\rho}).
$$
Since $|I^{\varepsilon}_S|$ is of order $S$, to establish \eqref{eq:hbzeta} it suffices to show that there exists $c=c(\varepsilon)>0$ such that for all $\rho\in I^{\varepsilon}_S$ and $S>2$
\begin{equation}\label{eq:boundD1}
\mathbb{P}\big[\mathsf{D}_S(\boldsymbol{\mathcal{B}}^{(1\cup 2),\rho},\boldsymbol{\xi}^{(1\cup 2),\rho})\big] \leq c^{-1} e^{- c S^{1/12}}.
\end{equation}
Observe that, by the triangle inequality, for any choice of function $P^{(1\cup 2),\rho}(j)$ we have
$$
\mathsf{D}_S(\boldsymbol{\mathcal{B}}^{(1\cup 2),\rho},\boldsymbol{\xi}^{(1\cup 2),\rho})\subseteq \mathsf{D}_S(\boldsymbol{\mathcal{B}}^{(1\cup 2),\rho})\cup \mathsf{D}_S(\boldsymbol{\xi}^{(1\cup 2),\rho})
$$
where
$$
\mathsf{D}_S(\boldsymbol{\mathcal{B}}^{(1\cup 2),\rho})= \big\{ \displaystyle\max_{|j| \le \varepsilon S} \big| \mathfrak{h}_0 (j; \boldsymbol{\mathcal{B}}^{(1\cup 2),\rho}) -P^{(1\cup 2),\rho}(j) \big| >S^{\frac{3}{4}}/2\big\}
$$
and $\mathsf{D}_S(\boldsymbol{\xi}^{(1\cup 2),\rho})$ is defined likewise with $\boldsymbol{\xi}^{(1\cup 2),\rho}$ replacing $\boldsymbol{\mathcal{B}}^{(1\cup 2),\rho}$. Thus, to prove \eqref{eq:boundD1} it suffices to find $P^{(1\cup 2),\rho}(j)$ such that there exists $c=c(\varepsilon)>0$ so that for all $\rho\in I^{\varepsilon}_S$ and $S>2$
\begin{equation}\label{eq:boundD1diff}
\mathbb{P}\big[\mathsf{D}_S(\boldsymbol{\mathcal{B}}^{(1\cup 2),\rho})\big] \leq c^{-1} e^{- c S^{1/12}}\qquad \textrm{and}\qquad \mathbb{P}\big[\mathsf{D}_S(\boldsymbol{\xi}^{(1\cup 2),\rho})\big] \leq c^{-1} e^{- c S^{1/12}}.
\end{equation}
We make the natural choice (defining $P^{(1\cup 2),\rho}(\llbracket a,b\rrbracket)=P^{(1\cup 2),\rho}(a)-P^{(1\cup 2),\rho}(b)$ for $a,b\in \mathbb{Z}$)
$$
P^{(1\cup 2),\rho}(j) = \mathbb{E}\big[ \mathfrak{h}_0 (j; \boldsymbol{\xi}^{(1\cup 2),\rho})\big],
$$
with which the second inequality in \eqref{eq:boundD1diff} follows immediately from Hoeffding's inequality (in the spirit of \Cref{distributionconcentration}). This in fact yields a stronger bound, though we will not need this here.

It remains to demonstrate the first bound in \eqref{eq:boundD1diff}. This follows from showing that there exists $c=c(\varepsilon)>0$ such that for all $\rho\in I^{\varepsilon}_S$ and $S>2$
\begin{align}\label{eq:threeregions}
\nonumber &\mathbb{P}\bigg[ \displaystyle\max_{j\in \llbracket 0,\varepsilon S\rrbracket} \Big| \mathfrak{h}_0 \big(j; \boldsymbol{\mathcal{B}}^{(1\cup 2),\rho} \big) -P^{(1\cup 2),\rho}(j) \Big| >\frac{ S^{3 / 4}}{6}\bigg] \leq c^{-1} e^{- c S^{1/12}},\\
&\mathbb{P}\bigg[ \displaystyle\max_{j\in \llbracket -2S^{1-\gamma},-1\rrbracket} \Big| \mathfrak{h}_0 \big(j; \boldsymbol{\mathcal{B}}^{(1\cup 2),\rho} \big) -P^{(1\cup 2),\rho}(j) \Big| >\frac{ S^{3 / 4}}{6} \bigg] \leq c^{-1} e^{- c S^{1/12}},\\
\nonumber &\mathbb{P}\bigg[ \displaystyle\max_{j\in \llbracket -\varepsilon S,-2S^{1-\gamma}\rrbracket} \Big| \mathfrak{h}_0 \big(\llbracket j,-2S^{1-\gamma}\rrbracket; \boldsymbol{\mathcal{B}}^{(1\cup 2),\rho} \big) -P^{(1\cup 2),\rho}(\llbracket j,-2S^{1-\gamma}\rrbracket) \Big| >\frac{ S^{3 / 4}}{6} \bigg] \leq c^{-1} e^{- c S^{1/12}},
\end{align}
where in the final inequality we recall the notation from \eqref{eq:heightdiff}. The first and third inequalities above are immediate from \eqref{eq:hbxi}: for $j\in \llbracket 0,\varepsilon S\rrbracket$ we have $\mathfrak{h}_0 \big(j; \boldsymbol{\mathcal{B}}^{(1\cup 2),\rho} \big)=\mathfrak{h}_0 \big(j; \boldsymbol{\mathcal{B}}^{(1),\rho} \big)$ and for $j\in \llbracket -\varepsilon S,-2S^{1-\gamma}\rrbracket$ we have $\mathfrak{h}_0 \big(\llbracket j,-2S^{1-\gamma}\rrbracket; \boldsymbol{\mathcal{B}}^{(1\cup 2),\rho} \big) = \mathfrak{h}_0 \big(\llbracket j,-2S^{1-\gamma}\rrbracket; \boldsymbol{\mathcal{B}}^{(1),\rho} \big)$. Thus, we are left to show the middle inequality in \eqref{eq:threeregions}. To do this we will split the interval $\llbracket -2S^{1-\gamma},-1\rrbracket$ into pieces of size $S^{2/3}$.
On each of these we will control the number of first class particles in $\boldsymbol{\mathcal{B}}^{(1\cup 2),\rho}$ to order $S^{1/3}$ by using the final part of \Cref{hetaxi} (as we are dealing with step initial data), and then control the number of second class particles by bounds on sums of Bernoulli random variables. This will yield an upper and lower bound with error of order $S^{1/3}$ on the number of first and second class particles in $\boldsymbol{\mathcal{B}}^{(1\cup 2),\rho}$ within each interval. Summing over order $S^{1/3}$ such intervals introduces an error of order $S^{2/3}$, which is still much smaller than the $S^{3/4}$ allowed error.

Define $K_S = \lfloor 2S^{1/3-\gamma}\rfloor$ and intervals $I_k = \llbracket -(k+1) S^{2/3}, -kS^{2/3}\rrbracket$ for $k\in \llbracket 0,K_S-1\rrbracket$ and $I_{K_S} = \llbracket -2S^{1-\gamma},-K_S S^{2/3}\rrbracket$. Let $j_0,\ldots, j_{K_S+1}$ denote the endpoints of these intervals, i.e., $I_k = \llbracket j_{k+1},j_{k}\rrbracket$, and notice that the union of these intervals covers $\llbracket -2S^{1-\gamma},-1\rrbracket$. Since $\mathfrak{h}_0 \big(j; \boldsymbol{\mathcal{B}}^{(1\cup 2),\rho} \big)$ and $P^{(1\cup 2),\rho}(j)$ are both 1-Lipschitz functions and since $S^{2/3}\ll S^{3/4}$ it suffices to show the following claim: there exists a constant $c>0$ such that for all $\rho\in I^{\varepsilon}_S$, $k\in \llbracket 0,K_S\rrbracket$ and $S>2$
\begin{equation}\label{eq:hojk}
\mathbb{P}\bigg[\Big| \mathfrak{h}_0 \big(j_k; \boldsymbol{\mathcal{B}}^{(1\cup 2),\rho} \big) - P^{(1\cup 2),\rho}(j_k)\Big| > \frac{S^{3 / 4}}{8}\bigg] \leq c^{-1} e^{- c S^{1/12}}.
\end{equation}
This implies the middle equation in \eqref{eq:threeregions} since the most that $\mathfrak{h}_0 \big(j; \boldsymbol{\mathcal{B}}^{(1\cup 2),\rho} \big) - P^{(1\cup 2),\rho}(j)$ can change over $j\in I_k$ is by $2|I_k| = 2S^{2/3}$. For large $S$, this is much smaller than $S^{3/4}/24$ (while for small $S$, we can just choose $c$ small enough so that the right-hand side of the middle equation in \eqref{eq:threeregions} exceeds 1, and hence the relation there trivially holds).

For each $k\in \llbracket 0,K_S\rrbracket$ define $\mathrm{First}_k = \mathfrak{h}_0 \big(I_k; \boldsymbol{\mathcal{B}}^{(1),\rho} \big)$ and the event
$$
\mathsf{F}_k(\kappa) := \left\{ \rho S^{2/3} +\frac{2k+1}{4} S^{1/3} -\kappa S^{1/3} \leq \mathrm{First}_k \leq \rho S^{2/3} +\frac{2k+1}{4} S^{1/3} +\kappa S^{1/3}\right\}
$$
that the number of first class particles in $I_k$ is within $\kappa S^{1/3}$ of the expected number according to the hydrodynamic limit. Noting that the term $ \rho S^{2/3} +\frac{2k+1}{4} S^{1/3}$ agrees with the hydrodynamic limit profile for step initial data, we see that by the final part of \Cref{hetaxi} there exist $c,\kappa_0>0$ such that for all $k\in \llbracket 0,K_S\rrbracket$ and $\kappa\in [\kappa_0,S^{2/3}/2]$, $ \mathbb{P}\big[\mathsf{F}_k(\kappa)^c\big]\leq c^{-1} e^{-c \kappa}. $ On the event $\mathsf{F}_k(\kappa)$, we can bound the number of empty sites $\mathrm{Empty}_k:=S^{2/3}-\mathrm{First}_k$ for $\boldsymbol{\mathcal{B}}^{(1),\rho}$ at time zero in the interval $I_k$ by
$$
(1-\rho) S^{2/3} - \frac{2k+1}{4}S^{1/3} - \kappa S^{1/3}\leq \mathrm{Empty}_k\leq (1-\rho) S^{2/3} - \frac{2k+1}{4}S^{1/3} + \kappa S^{1/3}.
$$ As explained in \Cref{b}, in order to construct $\boldsymbol{\mathcal{B}}^{(1\cup 2),\rho}$ from $ \boldsymbol{\mathcal{B}}^{(1),\rho}$ on the interval $I_k$, we replace a hole at location $j$ by a second class particle (independently over all $j\in I_k$) with the probability in \eqref{probabilityb}. Let us denote this probability by $Q(j)$. Observe that $Q(j)$ increases as $j$ decreases, and thus we can lower bound the total number of second class particles on $I_k$ by replacing $Q(j)$ with $Q(-kS^{2/3})$ for each $j\in I_k$, and likewise upper bound the number by using $Q(-(k+1)S^{2/3})$. This shows that, given $\mathrm{Empty}_k$, the expected number of second class particles added in the interval $I_k$ is bounded between $\mathrm{Empty}_k Q(-kS^{2/3})$ and $\mathrm{Empty}_{k}Q(-(k+1)S^{2/3})$. Call $\mathrm{Second}_k$ the number of second class particles added in the interval $I_k$ and define the event $$ \mathsf{S}_k(\kappa) := \left\{ \mathrm{Empty}_k \cdot Q(-kS^{2/3}) -\kappa S^{1/3}\leq \mathrm{Second}_k \leq \mathrm{Empty}_{k}\cdot Q(-(k+1)S^{2/3}) + \kappa S^{1/3}\right\}. $$ By Hoeffding's inequality there exist $c,\kappa_0>0$ such that for all $k\in \llbracket 0,K_S\rrbracket$ and $\kappa\geq \kappa_0$, $$ \mathbb{P}\big[\mathsf{S}_k(\kappa)^c\big] \leq c^{-1} e^{-c \kappa}. $$ On the event that both $\mathsf{F}_k(\kappa)$ and $\mathsf{S}_k(\kappa)$ hold, it follows that $$ (\rho + S^{-\gamma}) S^{2/3} - 4\kappa S^{1/3}\leq \mathrm{First}_k +\mathrm{Second}_k \leq (\rho + S^{-\gamma}) S^{2/3} + 4\kappa S^{1/3} $$ where we have expanded the terms $Q(-kS^{2/3})$ and $Q(-(k+1)S^{2/3})$ and absorbed the errors into the $4\kappa S^{1/3}$ term. Recalling that $P^{(1\cup 2),\rho}(I_k) =(\rho + S^{-\gamma}) S^{2/3}$ and $\mathfrak{h}_0 \big(I_k; \boldsymbol{\mathcal{B}}^{(1\cup 2),\rho} \big)= \mathrm{First}_k +\mathrm{Second}_k$, and using the bounds above on $\mathbb{P}\big[\mathsf{F}_k(\kappa)^c\big]$ and $\mathbb{P}\big[\mathsf{S}_k(\kappa)^c\big]$, we conclude that there exist $c,\kappa_0>0$ such that for all $k\in \llbracket 0,K_S\rrbracket$ and $\kappa\in [\kappa_0/4,S^{2/3}/2]$, \begin{equation} \mathbb{P}\bigg[\Big| \mathfrak{h}_0 \big(I_k; \boldsymbol{\mathcal{B}}^{(1\cup 2),\rho} \big) - P^{(1\cup 2),\rho}(I_k)\Big| >\kappa S^{1/3}\bigg] \leq c^{-1} e^{- c \kappa}. \end{equation} Taking $\kappa =S^{1/12}/8$, summing the resulting interval-wise deviations over the intervals $I_0,\ldots,I_{k-1}$ that separate $j_k$ from $0$, and taking a union bound over all $k\in \llbracket 0,K_S\rrbracket$ leads to \eqref{eq:hojk}, as desired. The inequality \eqref{eq:Mbdd} follows from what we have shown in \eqref{eq:threeregions} above upon noting that $$\boldsymbol{M} = \mathfrak{h}_0(\llbracket -2S^{1-\gamma},-1\rrbracket ; \boldsymbol{\mathcal{B}}^{(1\cup 2)}) - \mathfrak{h}_0(\llbracket -2S^{1-\gamma},-1\rrbracket ; \boldsymbol{\mathcal{B}}^{(1)}).$$ Notice that the centering of $\boldsymbol{M}$ by $S^{1-2\gamma}$ is consistent with the hydrodynamic limit: $S^{1-2\gamma}$ equals the area of the triangle bounded between the two profiles in Figure \ref{fig:ASEPxizetadensity}. \end{proof} Having established \Cref{xizetab}, we now know that the initial height functions of $\boldsymbol{\mathcal{B}}^{(1)}$ and $\boldsymbol{\xi}^{(1)}$, as well as those of $\boldsymbol{\mathcal{B}}^{(1\cup 2)}$ and $\boldsymbol{\xi}^{(1\cup 2)}$, are, respectively, close to order $S^{3/4}$.
The next result, \Cref{zetaestimate}, will show that the product form initial height profiles for $\boldsymbol{\xi}^{(1)}$ and $\boldsymbol{\xi}^{(1\cup 2)}$, evolved over a time interval $T$, remain close, to order $S^{3/4}$, to their hydrodynamic limits (at least when restricting attention to the left of the characteristic velocity $1-2\boldsymbol{\rho}_S$). \Cref{zti} will then follow by combining \Cref{zetaestimate} with \Cref{xizetab} and the monotonicity afforded to us by \Cref{xizeta2}. \begin{prop} \label{zetaestimate} For any $\varepsilon \in ( 0, 1/2 )$, there exists $c = c (\varepsilon) > 0$ such that the following holds for any $S>2$ (recall $T=S(\log S)^{-1}$). Define the interval and function \begin{flalign*} \mathcal{J}_{S,T,\rho}\!= \!\Big[\! -\frac{\varepsilon S}{4}, (1 - 2 \rho) T - S^{1 - \frac{\gamma}{2}} \Big],\quad\! \mathcal{H}_{S,T,\rho}(X,Y)\!=\! \bigg(\! \rho + \displaystyle\frac{T (1 - 2 \rho)}{2 (S + T)} \bigg) (Y - X)\! + \displaystyle\frac{Y^2 - X^2}{4 (S + T)}, \end{flalign*} as well as the maximal deviations of the height function from the hydrodynamic limit function \begin{flalign*} \mathrm{Diff}^{\pm}_{S,T}(\xi,\rho)&= \max_{X, Y \in \mathcal{J}_{S,T,\rho}} \pm \big( \mathfrak{h}_T (\llbracket X,Y\rrbracket ; \xi) -\mathcal{H}_{S,T,\rho}(X,Y)\big),\\ \mathrm{Diff}_{S,T}(\xi,\rho) &= \max_{X, Y \in \mathcal{J}_{S,T,\rho}} \big| \mathfrak{h}_T (\llbracket X,Y\rrbracket ; \xi) -\mathcal{H}_{S,T,\rho}(X,Y)\big| =\max\big(\mathrm{Diff}^{+}_{S,T}(\xi,\rho),\mathrm{Diff}^{-}_{S,T}(\xi,\rho)\big). \end{flalign*} Then we have that \begin{flalign} \mathbb{P} \Big[\big\{\mathrm{Diff}_{S,T}(\boldsymbol{\xi}^{(1)},\boldsymbol{\rho}) \ge S^{3/4}\big\} \bigcap \big\{ \boldsymbol{\rho}_S\in (\varepsilon,1-\varepsilon)\big\} \Big] & < c^{-1} e^{-c S^{1/12}},\label{xizetahb}\\ \mathbb{P} \Big[ \big\{\mathrm{Diff}_{S,T}(\boldsymbol{\xi}^{(1\cup 2)},\boldsymbol{\rho}) \ge S^{3/4} \big\} \bigcap \big\{ \boldsymbol{\rho}_S\in (\varepsilon,1-\varepsilon)\big\} \Big]& < c^{-1} e^{-c S^{1/12}}.\label{xizetahb2} \end{flalign} The constants $c=c(\varepsilon)$ can be chosen so as to weakly decrease as $\varepsilon$ decreases to 0. \end{prop} \begin{proof}[Proof of \Cref{zetaestimate}] As in the proof of \Cref{xizetab}, we will demonstrate that there exists $c = c (\varepsilon) > 0$ such that the following holds for all $S>2$ and all $\rho\in I^{\varepsilon}_S$ (recall \eqref{eq:IEps}): \begin{flalign} \mathbb{P}\Big[\mathrm{Diff}_{S,T}(\boldsymbol{\xi}^{(1),\rho},\rho) \ge S^{3/4}\Big]&\leq c^{-1} e^{-c S^{1/12}},\label{xizetahbfixedrho}\\ \mathbb{P}\Big[\mathrm{Diff}_{S,T}(\boldsymbol{\xi}^{(1\cup 2),\rho},\rho) \ge S^{3/4}\Big]&\leq c^{-1} e^{-c S^{1/12}}.\label{xizetahb2fixedrho} \end{flalign} Having shown this, the results in the statement of \Cref{zetaestimate} follow by a union bound (absorbing the resulting prefactor, which is linear in $S$, into the constant $c$ in the bound $c^{-1} e^{-c S^{1/12}}$). By \Cref{xizetaprocesses}, the initial data for $\boldsymbol{\xi}^{(1),\rho}$ is $\Upsilon_{\varepsilon}^{(\rho)}$-distributed (recall \eqref{functionlinear}) on $[-\varepsilon S, \varepsilon S]$. Thus, \eqref{xizetahbfixedrho} follows from the first statement of \Cref{hetalinear} (with $\kappa=S^{1/12}$ there), together with the fact that \begin{flalign*} T \displaystyle\int\limits_{X/T}^{Y/T} \Big( \rho + \displaystyle\frac{T}{2 (S + T)} (1 - 2 \rho - z) \Big) dz = \mathcal{H}_{S,T,\rho}(X,Y).
\end{flalign*} To establish \eqref{xizetahb2fixedrho}, first observe by \Cref{xizeta1} that $\boldsymbol{\xi}^{(1),\rho}$ and $\boldsymbol{\xi}^{(1\cup 2),\rho}$ can be coupled so that $\mathfrak{h}_t (\llbracket X,Y\rrbracket ; \boldsymbol{\xi}^{(1\cup 2),\rho}) \ge \mathfrak{h}_t (\llbracket X,Y\rrbracket ;\boldsymbol{\xi}^{(1),\rho})$, for each $t \ge 0$, whenever $X \le Y$. By this and \eqref{xizetahbfixedrho}, there exists $c = c(\varepsilon) > 0$ such that for all $S>2$ and all $\rho\in I^{\varepsilon}_S$ (recall \eqref{eq:IEps}) $$ \mathbb{P}\Big[\mathrm{Diff}^{-}_{S,T}(\boldsymbol{\xi}^{(1\cup 2),\rho},\rho) \ge S^{3/4}\Big]\leq \mathbb{P}\Big[\mathrm{Diff}^{-}_{S,T}(\boldsymbol{\xi}^{(1),\rho},\rho) \ge S^{3/4}\Big]\leq c^{-1} e^{-c S^{1/12}}. $$ So, it suffices to establish the complementary bound \begin{flalign} \label{xyestimatexy} \mathbb{P}\Big[\mathrm{Diff}^{+}_{S,T}(\boldsymbol{\xi}^{(1\cup 2),\rho},\rho) \ge S^{3/4}\Big]\leq c^{-1} e^{-c S^{1/12}}. \end{flalign} To establish \eqref{xyestimatexy}, observe that $\boldsymbol{\xi}^{(1\cup 2),\rho}_0$ is $\Phi_{\varepsilon; \beta}^{(\rho)}$-distributed (as in \Cref{linearconstant}) with $\beta = \varepsilon^{-1} S^{-\gamma}$. Thus, applying the second part of \Cref{hetalinear} yields \begin{flalign} \nonumber\mathbb{P}\Bigg[ \displaystyle\max_{\substack{|X/S| \le \varepsilon / 4 \\ |Y/S| \le \varepsilon / 4}} \bigg| \mathfrak{h}_T(\llbracket X,Y\rrbracket ; \boldsymbol{\xi}^{(1\cup 2),\rho}) - T \displaystyle\int\limits_{X/T}^{Y/T} \max \Big\{ \rho + \displaystyle\frac{(1 - 2 \rho - z) T}{2 (S + T)},& \rho + S^{-\gamma} \Big\} dz \bigg| > S^{3/4} \Bigg]\\ & < c^{-1} e^{-c S^{1/12}}.\label{eq:htXYzetam} \end{flalign} Recall that $X, Y \in \mathcal{J}_{S,T,\rho}$, so that $X, Y \le (1 - 2 \rho) T - S^{1 - \frac{\gamma}{2}}$. For large enough $S$, we have that $(1 - 2 \rho) T - S^{1 - \frac{\gamma}{2}} \le (1 - 2 \rho) T - 2 S^{-\gamma} (S + T)$. In that case, \begin{flalign*} \displaystyle\int\limits_{X/T}^{Y/T} \max \Big\{ \rho + \displaystyle\frac{(1 - 2 \rho - z) T}{2 (S + T)}, \rho + S^{-\gamma} \Big\} dz & = \displaystyle\int\limits_{X/T}^{Y/T} \Big( \rho + \displaystyle\frac{(1 - 2 \rho - z) T}{2 (S + T)} \Big) dz =\frac{\mathcal{H}_{S,T,\rho}(X,Y)}{T}. \end{flalign*} Combining this with \eqref{eq:htXYzetam} yields \eqref{xyestimatexy}, as desired. \end{proof} \begin{proof}[Proof of \Cref{zti}] We will start by defining the $\mathcal{F}_S$-measurable event $\mathsf{H}_S$ (recall that $\mathsf{E}^c$ is the complement of an event $\mathsf{E}$): \begin{equation}\label{eq:Hsdef} \mathsf{H}_S= \mathsf{D}_S(\boldsymbol{\mathcal{B}}^{(1)},\boldsymbol{\xi}^{(1)})^c\cap \mathsf{D}_S(\boldsymbol{\mathcal{B}}^{(1\cup 2)},\boldsymbol{\xi}^{(1\cup 2)})^c \cap \mathsf{M}_S^{c} \end{equation} where these events (all of which also depend on $S$, though this dependence is not explicit in the notation) are defined in \Cref{xizetab}. Recalling the notation $\mathsf{P}_S$ from \eqref{eq:hspsbdrho}, observe that by the union bound and then \eqref{eq:hbxi}, \eqref{eq:hbzeta} and \eqref{eq:Mbdd}, there exists $c=c(\varepsilon)>0$ such that for all $S>2$ \begin{align*} \mathbb{P}[\mathsf{P}_S\cap (\mathsf{H}_{S})^c] &\leq \mathbb{P} \big[\mathsf{P}_S \cap \mathsf{D}_S(\boldsymbol{\mathcal{B}}^{(1)},\boldsymbol{\xi}^{(1)})\big]+ \mathbb{P} \big[\mathsf{P}_S \cap \mathsf{D}_S(\boldsymbol{\mathcal{B}}^{(1\cup 2)},\boldsymbol{\xi}^{(1\cup 2)})\big]+ \mathbb{P} \big[\mathsf{P}_S \cap \mathsf{M}_S\big]\\ &\leq c^{-1} e^{- c S^{1/12}}.
\end{align*} This shows \eqref{eq:HPbd}. Thus, to prove \Cref{zti} it now suffices to show that, for the choice of $\mathsf{H}_S$ in \eqref{eq:Hsdef}, \eqref{ztestimate} holds, namely that there exists $c=c(\varepsilon)>0$ such that for all $S>2$ $$ \mathbb{P} \Big[ \Big| \{\!\!\{\boldsymbol{Z}_T\}\!\!\} \cap \big[(1-2\boldsymbol{\rho}_S)T-S^{1 - \frac{\gamma}{2}}, \infty\big) \Big| \ge \boldsymbol{M}(1 - c^{-1} S^{-\frac{1}{5}}) \Big| \mathcal{F}_S \Big] \ge\big(1 - c^{-1} e^{-c S^{1/12}}\big)\mathbf{1}_{ \mathsf{H}_S\cap\mathsf{P}_S}. $$ In other words, to prove the above bound we must show that there exists $c=c(\varepsilon)>0$ such that for any $S>2$, assuming the event $\mathsf{P}_S\cap \mathsf{H}_S$ holds, it follows that with probability at least $1 - c^{-1} e^{-c S^{1/12}}$ the number of second class particles in the interval $ \big[(1-2\boldsymbol{\rho}_S)T-S^{1 - \frac{\gamma}{2}}, \infty\big) $ is at least $\boldsymbol{M}(1 - c^{-1} S^{-\frac{1}{5}})$. Observe that on the event $\mathsf{H}_S\cap\mathsf{P}_S$, we have $\big|\boldsymbol{M} - S^{1-2\gamma}\big| \leq S^{\frac{3}{4}}$ and that \begin{align} \mathbb{P}\Big[ \Big| \{\!\!\{\boldsymbol{Z}_T\}\!\!\} \cap \Big(-\infty, - \frac{\varepsilon S}{4}\Big]\Big|= 0\Big] &\geq 1- c^{-1} e^{-c S^{1/12}},\label{eq:llbraczt1}\\ \mathbb{P}\Big[ \Big| \{\!\!\{\boldsymbol{Z}_T\}\!\!\} \cap \Big(- \frac{\varepsilon S}{4}, (1-2\boldsymbol{\rho}_S)T-S^{1 - \frac{\gamma}{2}}\Big]\Big|< 4S^{\frac{3}{4}}\Big] &\geq 1- c^{-1} e^{-c S^{1/12}}.\label{eq:llbraczt2} \end{align} The first of these inequalities follows immediately from \Cref{xizetaequal} (and does not depend on the occurrence of $\mathsf{H}_S$). This is because $\boldsymbol{\mathcal{B}}^{(1)}$ and $ \boldsymbol{\mathcal{B}}^{(1\cup 2)}$ are the same at time 0 on the interval $(-\infty, -2S^{1-\gamma})$ and hence remain the same on the smaller interval $(-\infty, -2S^{1-\gamma}-4RT)$ at time $T=S/\log S$ with probability at least $1-4e^{-T/3}$. We can find $c=c(\varepsilon)>0$ such that for all $S>2$ either $(-\infty, -2S^{1-\gamma}-4RT)\subset (-\infty, - \frac{\varepsilon S}{4}]$ and $1-4e^{-T/3}\geq 1-c^{-1} e^{-cS^{1/12}}$, or $1-c^{-1} e^{-cS^{1/12}}<0$. In the first case (which occurs for large enough $S$) \eqref{eq:llbraczt1} follows, and in the second case (for small $S$) \eqref{eq:llbraczt1} follows trivially, as the right-hand side is negative. The second inequality, \eqref{eq:llbraczt2}, relies on \Cref{zetaestimate}. Observe that by the triangle inequality, on the event that \begin{equation}\label{eq:Diffevs} \big\{\mathrm{Diff}_{S,T}(\boldsymbol{\xi}^{(1)},\boldsymbol{\rho}) < S^{3/4}\big\} \cap \big\{\mathrm{Diff}_{S,T}(\boldsymbol{\xi}^{(1\cup 2)},\boldsymbol{\rho}) < S^{3/4}\big\} \end{equation} holds in addition to $ \mathsf{H}_S\cap\mathsf{P}_S$, it follows that \begin{equation}\label{eq:htcompars} \mathfrak{h}_T\Big(\Big\llbracket - \frac{\varepsilon S}{4}, (1-2\boldsymbol{\rho}_S)T-S^{1 - \frac{\gamma}{2}}\Big\rrbracket; \boldsymbol{\mathcal{B}}^{(1\cup 2)}\Big)- \mathfrak{h}_T\Big(\Big\llbracket - \frac{\varepsilon S}{4}, (1-2\boldsymbol{\rho}_S)T-S^{1 - \frac{\gamma}{2}}\Big\rrbracket; \boldsymbol{\mathcal{B}}^{(1)}\Big) < 4S^{3/4}.
\end{equation} Here we used the monotonicity from \Cref{xizeta2} to show that the $S^{3/4}$-closeness of $\boldsymbol{\mathcal{B}}^{(1)}$ and $\boldsymbol{\xi}^{(1)}$, and of $\boldsymbol{\mathcal{B}}^{(1\cup 2)}$ and $\boldsymbol{\xi}^{(1\cup 2)}$, at time $0$ (which holds on $\mathsf{D}_S(\boldsymbol{\mathcal{B}}^{(1)},\boldsymbol{\xi}^{(1)})^c\cap \mathsf{D}_S(\boldsymbol{\mathcal{B}}^{(1\cup 2)},\boldsymbol{\xi}^{(1\cup 2)})^c$) persists for all time. Then we used the fact that on the event in \eqref{eq:Diffevs} both $\boldsymbol{\xi}^{(1)}$ and $\boldsymbol{\xi}^{(1\cup 2)}$ have height functions that are within $S^{3/4}$ of the same hydrodynamic limit function $\mathcal{H}_{S,T,\rho}(X,Y)$. By \Cref{xizeta1} and equation \eqref{eq:heightdiff}, the left-hand side of \eqref{eq:htcompars} equals the number of second class particles in the interval $\big(- \frac{\varepsilon S}{4}, (1-2\boldsymbol{\rho}_S)T-S^{1 - \frac{\gamma}{2}}\big]$, so \eqref{eq:htcompars} is precisely the event $$ \Big| \{\!\!\{\boldsymbol{Z}_T\}\!\!\} \cap \Big(- \frac{\varepsilon S}{4}, (1-2\boldsymbol{\rho}_S)T-S^{1 - \frac{\gamma}{2}}\Big]\Big|< 4S^{\frac{3}{4}} $$ whose probability we wish to control in \eqref{eq:llbraczt2}. By \Cref{zetaestimate}, the probability of the event in \eqref{eq:Diffevs} (which, in conjunction with $\mathsf{H}_S \cap \mathsf{P}_S$, implies \eqref{eq:htcompars}) is at least $1- c^{-1} e^{-c S^{1/12}}$ for some $c=c(\varepsilon)>0$. This establishes \eqref{eq:llbraczt2}. We can now show that \eqref{ztestimate} holds. By \eqref{eq:llbraczt1} and \eqref{eq:llbraczt2}, on the event $\mathsf{H}_S \cap \mathsf{P}_S$, we have that $$ \Big\{\Big| \{\!\!\{\boldsymbol{Z}_T\}\!\!\} \cap \big[(1-2\boldsymbol{\rho}_S)T-S^{1 - \frac{\gamma}{2}}, \infty\big) \Big| \ge \boldsymbol{M} - 4S^{3/4}\Big\} $$ holds with probability at least $1-2c^{-1} e^{-c S^{1/12}}$. On $\mathsf{H}_S \cap \mathsf{P}_S$ we also have $\boldsymbol{M}>S^{1-2\gamma} - S^{3/4}$, which implies that there exists $S_0>0$ such that $$ \boldsymbol{M} - 4S^{3/4} = \boldsymbol{M} \Big( 1- \frac{4S^{3/4}}{\boldsymbol{M}}\Big) \geq \boldsymbol{M}\Big(1- \frac{4S^{3/4}}{S^{1-2\gamma}-S^{3/4}}\Big) \geq \boldsymbol{M}(1-S^{-1/5}) $$ for all $S>S_0$. This implies \eqref{ztestimate}, provided $S>S_0$ (for smaller $S$ it follows by taking $c$ sufficiently small). All that remains to complete the proof of \Cref{zti} is to show that the constants $c=c(\varepsilon)$ in that statement can be chosen so as to weakly decrease as $\varepsilon$ decreases to 0. However, this is easily seen to be the case, due to the fact that all results upon which we relied in this proof carry a similar qualification on the constants. \end{proof}
\section{Introduction} To join the Bitcoin P2P network, new peers need to learn the addresses of peers that are already part of the network. A peer has one or multiple addresses that can be used to find the peer and to initiate connections to it. Bitcoin uses a decentralized way to disseminate addresses to peers: a peer announces its own addresses by sending \textsc{addr} messages to its neighbors, and the neighbors forward the addresses to other peers. In July and August 2021, a huge wave of addresses was flooded into the Bitcoin P2P network, which caused the number of unique addresses distributed per day to increase from about 40,000 to about 6,000,000 \cite{web-dsn-bitcoin}. These spam addresses did not belong to actual peers and were sent by an unknown party. While we do not know the purpose of sending the spam addresses, we look at the effects that the spamming had and at what information about the topology of the Bitcoin P2P network can be extracted from observing these effects. We estimate the degree (number of neighbors) of reachable peers. While previous work has shown that the peer degree distribution of other cryptocurrencies' P2P networks resembles a power law distribution \cite{delgado-segura_txprobe_2019,cao_exploring_2020,wang_ethna_2021-1}, our observations indicate that in the Bitcoin P2P network about half of the peers have a degree of around 125. Because 125 is the default maximum number of connections in Bitcoin Core, the most commonly used client, this finding means that many peers do not have slots available for new incoming connections. As the ability of peers to connect to other peers in the network is important for the health and resilience of the P2P network, we validate this observation by running an experiment that measures how many peers accept incoming connections, and we find that more than 50\,\% of all reachable peers do not accept additional incoming connections or are close to their connection limit. We further show that the majority of peers hosted in the networks of cloud providers have around 125 connections, while the networks of ISPs include peers that tend to have fewer neighbors. Finally, we estimate the number of unreachable peers in the Bitcoin P2P network from the peer degree distribution. We estimate that there are about 32,800 unreachable peers in the network, which aligns with estimations from previous work \cite{neudecker_security_2019,web-lukejr-history,grundmann_announcements_2022}. Additionally, we find sets of addresses that belong to the same reachable peers. This mapping shows that estimating the number of reachable peers by counting reachable addresses overestimates their number by about 13\,\%. {\em Related Work.} While different methods to learn about the topology of the Bitcoin P2P network have been proposed, most of them were impractical or too costly to be run in the real Bitcoin P2P network. A notable exception is AddressProbe \cite{miller_discovering_2015}, which exploited an information leak in the handling of addresses to infer connections between reachable peers. The authors of \cite{miller_discovering_2015} used AddressProbe to infer the topology of the P2P network's subgraph that contains only reachable peers and to calculate the resulting peer degree distribution. The degree distribution showed that the majority of reachable peers had a degree between eight and twelve, which differs strongly from our results because our results also include connections between reachable and unreachable peers.
While other methods to infer parts of the topology have been proposed \cite{neudecker_timing_2016,grundmann_exploiting_2019,delgado-segura_txprobe_2019}, these methods were too costly to be run in the Bitcoin P2P network. However, the peer degree distribution of other cryptocurrencies' P2P networks has been analyzed, e.g., the P2P networks of the Bitcoin testnet \cite{delgado-segura_txprobe_2019}, Monero \cite{cao_exploring_2020}, and Ethereum \cite{wang_ethna_2021}. The Bitcoin transaction network, sometimes simply referred to as the `Bitcoin network', is the graph defined by the transactions of the Bitcoin blockchain. The topology of this network has been analyzed previously \cite{ron_quantitative_2013,reid_analysis_2013,lischke_analyzing_2016,filtz_evolution_2017,di_francesco_maesa_data-driven_2018,tao_complex_2021-1}, but the transaction network is completely different from the Bitcoin P2P network, which is the focus of this work. \section{Observations and Monitoring Setup} In July 2021, user piotr\_n reported in the BitcoinTalk forum \cite{web-bitcointalk} that spam addresses were being distributed in the Bitcoin P2P network. piotr\_n found that the behavior of the spamming peers is to connect to reachable peers, send them 500 \textsc{addr} messages with ten spam addresses each, and then disconnect. We observed the behavior described by piotr\_n at a reachable peer: during July and August 2021, on about 400 occasions, one of this peer's neighbors sent, within a few seconds, a batch of 5,000 unique IPv4 addresses. Over the observed time, the spam originated from 243 different IP addresses. All spam addresses in a batch had the same associated timestamp, which was set to a value up to nine minutes into the future. We analyzed the distribution of the received spam addresses and found that they were distributed uniformly over the IPv4 address space and included IP addresses from reserved IPv4 address blocks like 127.0.0.0/8. We take this finding as evidence that the spam addresses were randomly chosen and did not belong to actual peers. Our monitoring setup consists of three monitor nodes that connect to all reachable peers but do not accept incoming connections. Two of these monitor nodes are located in the network of our university (AS 34878) and the third monitor node is located in a different autonomous system (AS 680). All monitor nodes log received \textsc{addr} messages as well as connections to other peers that are opened or closed. \section{Estimating the Degree of Reachable Peers} \begin{figure}[tbp] \centering \includegraphics[width=\linewidth]{img/Degree-Estimation.pdf} \caption{Overview of the peer degree estimation. A reachable peer is connected to a spamming peer, our monitor and $n-2$ other peers. The spamming peer sends $500 \cdot 10$ addresses to the reachable peer (I). The peer propagates the addresses to all neighbors except the spamming peer (II). From the number of propagated addresses, the monitor can estimate the number of neighbors. } \label{fig-degree-estimation} \end{figure} Bitcoin Core, the Bitcoin reference client that is used by most peers \cite{web-bitnodes}, accepts addresses with an associated timestamp of up to ten minutes into the future and propagates addresses until their associated timestamp is older than ten minutes. Additionally, an address is only propagated if it was received in an \textsc{addr} message with a size of at most ten. Because both conditions are fulfilled for the spam addresses, a peer that runs Bitcoin Core considers these addresses for propagation to its neighbors.
However, Bitcoin Core forwards only routable addresses, and about 1.3\,\% of the IPv4 address space is considered unroutable \cite{web-github-is-routable}. Therefore, on average 4,935 of the 5,000 received addresses are forwarded. Because each routable address is forwarded to two peers, but not to the peer that the address was received from, a peer $p$ with $n_p$ neighbors forwards each address to two out of $n_p-1$ neighbors and sends on average $c_p = 4,935 \cdot \frac{2}{n_p-1}$ addresses to each neighbor. Consequently, our monitor nodes receive on average $c_p$ addresses from each peer that receives 5,000 spam addresses, and we can estimate the number of neighbors of each reachable peer based on these observations (see \cref{fig-degree-estimation}). While the main idea of this estimation approach was proposed in 2014 by Biryukov et al. \cite[Section 10.1]{biryukov_deanonymisation_2014-1}, to the best of our knowledge, results of this method applied to the Bitcoin P2P network have not been published so far. \subsection{Estimation and Validation} Our monitor nodes are connected to each reachable peer and receive the propagated spam addresses (see \cref{fig-degree-estimation}, II). However, our monitor nodes also receive spam addresses that are not directly forwarded from a spamming peer (\cref{fig-degree-estimation}, III). To filter out these messages and keep only directly forwarded ones, we (1) analyze only \textsc{addr} messages received at the monitor that contain at least four entries, (2) select only those addresses whose timestamp is three to ten minutes into the future relative to the point when the \textsc{addr} message was received, and (3) analyze addresses only if $c_{p,t}$, the number of addresses we received with the same timestamp $t$ from peer $p$, is greater than ten. For each batch of spam address messages with size $c_{p,t}$, we calculate $n_{p,t} = 1 + \frac{2 \cdot 4,935}{c_{p,t}}$ as an intermediate estimate for the number of neighbors of peer $p$. As the intermediate estimates contain outliers, we calculate the estimate $n_p$ for the number of neighbors of peer $p$ as the median of all intermediate estimates $n_{p,t}$ during a time window of one day. The length of this time window is chosen as a trade-off between a short window, during which the number of a peer's neighbors remains roughly constant, and a longer window, which yields more observations and thus a more precise estimate. To validate the estimation approach, we logged the number of neighbors at three reachable validation peers and compared the logs to our estimates. Two of the three validation peers received the spam addresses on both their IPv4 and IPv6 addresses; the other peer received them only on its IPv4 address. For each of these five addresses of our validation peers and each day with observed spam, we estimate the peer's degree using the above method and compare it to the peer's connection count logs. As ground truth, we take for each peer the average connection count of this peer during this day. We compute the deviation of each estimate from this ground truth in percent and average the absolute percentage values. This calculation leads to an average deviation of 4.1\,\%, which means that the estimation is reasonably reliable.
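To make the estimation procedure concrete, the following Python sketch illustrates the computation of the intermediate estimates $n_{p,t}$ and of the daily median $n_p$. The input format (one record per observed spam batch on a given day) is a hypothetical simplification of our monitors' actual logs:

\begin{verbatim}
from collections import defaultdict
from statistics import median

ROUTABLE_SPAM = 4935  # routable addresses out of each batch of 5,000

def estimate_degrees(batches):
    """Estimate peer degrees from one day of observed spam batches.

    `batches` is an iterable of (peer, timestamp, count) tuples, where
    count is c_{p,t}: the number of spam addresses with identical
    timestamp t received from peer p (after the filtering steps
    (1)-(3) above, so count > 10 holds). Returns a dict mapping each
    peer to its estimated degree n_p.
    """
    intermediate = defaultdict(list)
    for peer, _ts, count in batches:
        # Invert c_p = 4935 * 2 / (n_p - 1):  n_p = 1 + 2 * 4935 / c_p.
        intermediate[peer].append(1 + 2 * ROUTABLE_SPAM / count)
    # The median over one day suppresses outliers among the estimates.
    return {peer: median(ests) for peer, ests in intermediate.items()}

# A peer forwarding about 79 addresses per batch has about 125 neighbors.
print(estimate_degrees([("peer-a", 1627000000, 79),
                        ("peer-a", 1627000600, 82)]))
\end{verbatim}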
\subsection{Resulting Degree Distribution} \begin{figure}[tbp] \centering \includegraphics[width=\linewidth]{charts/degree-scatter-all} \caption{Relative frequencies of estimated peer degrees.} \label{fig-degree-scatter-all} \end{figure} \begin{figure}[tbp] \centering \includegraphics[width=\linewidth]{charts/degree-histogram-stacked} \caption{Normalized histogram (with a bin width of 5) of the estimated peer degree of all reachable peers. The colors indicate our categorization of the autonomous system that peers are located in.} \label{fig-degree-histogram-stacked} \end{figure} One important topological characteristic of a network is the distribution of peer degrees, which can be estimated based on our observations. Using the method described above, we obtain one estimate per peer per day. We show the resulting distribution of peer degrees in \cref{fig-degree-scatter-all}. The distribution shows that the majority of reachable peers have an estimated degree of around 125, which is the default maximum number of connections in Bitcoin Core. These results suggest that about 50\,\% of the reachable peers use this default configuration and that all of their connection slots are filled. The distribution of estimated peer degrees has a long tail of a few peers that have more than 140 connections. We suppose that there are reachable peers with even more connections; however, we can only estimate the degree of peers with up to 1,000 connections.\footnote{This restriction is due to the fact that we use the condition $c_{p,t} > 10$ to distinguish between addresses that were directly forwarded after being received from a spamming peer and addresses that were received from another peer. With knowledge of which spam addresses were sent to which peer, this restriction would not be necessary and higher degrees could be estimated, too. } We also looked up the autonomous system (AS) of each peer's address using \cite{web-asn-mapping} and categorized each AS into the four categories `ISP', `Cloud Provider', `Both', and `Uncategorized'. We manually classified ASes that contain a large percentage of peers and retrieved the category for the remaining ASes from ASdb \cite{ziv_asdb_2021}. \Cref{fig-degree-histogram-stacked} shows the distribution of peer degrees separated by the category of a peer's AS. While the median of the estimated degrees for peers hosted at cloud providers is 125, the median for peers located in networks of ISPs is 97. This result shows that most peers with high degrees are hosted by cloud providers, while the majority of peers with low degrees are located in the networks of ISPs. One reason might be that peers running in data centers accumulate more incoming connections: they are restarted less often, and their addresses are better distributed in the network because they change less often than the addresses of peers running outside of data centers. \subsection{Measurement of Available Slots for Incoming Connections} The above results show that many reachable peers maintain the default maximum number of connections and do not have slots for incoming connections available. We validate this result by running the following experiment. A reachable peer running Bitcoin Core always accepts a new incoming connection but, if the new connection fills the last remaining connection slot, a connection is evicted. The evicted connection might be the connection that was just accepted, but it might also be a previously existing connection.
To harden the resilience against Eclipse attacks, some connections are protected from eviction. The remaining connections are grouped based on their AS, and the youngest connection from the AS with the most connections is evicted. In our experiment, we run a test peer that walks through a list of all reachable peers and opens a TCP connection to each peer. If a connection was established, the test peer waits for three seconds and checks if the connection is still open. If it is, the test peer opens four additional TCP connections to this peer, waits for three seconds, and checks whether all five connections are still open. Based on the behavior of Bitcoin Core described above, we expect the following results: \begin{itemize} \item If a peer $p$ has more than five incoming connection slots available, the peer $p$ accepts all five tested incoming connections. \item If a peer $p$ has no incoming connection slots available and the test peer's AS is the AS with the most incoming connections to peer $p$, the peer $p$ evicts the first incoming connection. \item If a peer $p$ has no incoming connection slots available and there is an AS from which more peers are connected to $p$ than from the test peer's AS, the peer $p$ accepts the first new connection and evicts another connection. When the four additional connections are opened, the test peer's AS might become the AS with the most connections, and peer $p$ evicts our test peer's connections, which means that we would see the first connection accepted but some of the additional connections evicted. \end{itemize} We run the experiment in November 2021 from three test peers located in two different ASes. To create the list of reachable peers, we collected all addresses that we received in unsolicited \textsc{addr} messages at one of our monitors on the day before. Our test peers were able to connect to on average 9,461 peers, of which 4,493 (47\,\%) accepted all five incoming connections. On average, 2,360 (25\,\%) accepted the first connection but not all five connections, and 2,608 (28\,\%) evicted the first connection already. We conclude that for 28\,\% of the reachable peers the slots for incoming connections are all taken, while 25\,\% of the reachable peers are close to their capacity. Only 47\,\% of the reachable peers seem to freely accept incoming connections. This result confirms our interpretation of the peer degree distribution and shows that slots for incoming connections are a limited resource.
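To make the probe procedure concrete, the following Python sketch shows one way the test can be implemented. It is a simplification under the assumption that an evicted TCP connection is actually closed by the remote peer (so that the socket becomes readable and a peek returns end-of-file); the probed address and timings are illustrative:

\begin{verbatim}
import select
import socket
import time

WAIT = 3.0  # seconds to wait before checking whether a connection survived

def still_open(sock):
    # A connection closed by the remote side becomes readable and a peek
    # returns b''; an open but silent connection is simply not readable.
    readable, _, _ = select.select([sock], [], [], 0)
    if not readable:
        return True
    try:
        return sock.recv(1, socket.MSG_PEEK) != b""
    except OSError:
        return False

def probe(host, port=8333):
    """Return (first_accepted, all_five_accepted) for one reachable peer."""
    first = socket.create_connection((host, port), timeout=5)
    time.sleep(WAIT)
    if not still_open(first):
        return False, False  # even the first connection was evicted
    extra = [socket.create_connection((host, port), timeout=5)
             for _ in range(4)]
    time.sleep(WAIT)
    result = all(still_open(s) for s in [first] + extra)
    for s in [first] + extra:
        s.close()
    return True, result

print(probe("203.0.113.7"))  # placeholder address from a documentation range
\end{verbatim}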
\section{Finding Peers with Multiple Addresses} As far as we observed, the 5,000 spam addresses that are sent to one peer with one timestamp are not sent to another peer with the same timestamp. Thus, a batch of spam addresses with the same timestamp marks a specific peer, and we can use these markers to find multiple IP addresses that belong to the same peer (see \cref{fig-multiple-addresses}). A reachable peer with multiple IP addresses that received spam addresses from a spamming peer forwards the spam addresses on all of its IP addresses (\cref{fig-multiple-addresses}, II). If tuples of spam address and timestamp were received by the monitor from two different IP addresses, we can match these IP addresses to the same peer (III). False positives can occur if spam addresses are propagated over multiple hops (IV). To filter out these indirectly received spam addresses, we ignore spam addresses whose timestamp is less than five minutes into the future or that were received from an IP address that sent us fewer than ten spam addresses with the same timestamp. Further, we only match two IP addresses to the same peer if at least five identical tuples of spam address and timestamp were received by the monitor from both IP addresses. \begin{figure}[tbp] \centering \includegraphics[width=\linewidth]{img/Multiple-Addresses.pdf} \caption{A peer with multiple reachable IP addresses is connected to the monitor with each IP address. By observing the propagated spam, the monitor can find IP addresses that belong to the same peer. } \label{fig-multiple-addresses} \end{figure} We run this analysis on data collected by our monitor nodes and obtain sets of addresses that belong to the same peers. After merging all intersecting sets of addresses that belong to the same peer, we get a mapping of 3,478 addresses to 1,536 peers. While there seems to be one peer with 286 IPv6 addresses from the same /118 subnet, the majority of peers have only two addresses. Most of these pairs consist of an IPv4 and an IPv6 address; however, some pairs consist of two IPv4 or two IPv6 addresses. We validate the method using three of our peers that use both an IPv4 and an IPv6 address and find that their IP addresses were correctly matched. While one can estimate the number of reachable peers by counting reachable addresses, we can improve such estimations using the mapping from addresses to actual peers: in August 2021, our monitor nodes were connected on average to 8,800 reachable IP addresses per day, which map to 7,650 unique reachable peers per day. To cross-check with our estimation of the peer degree, we compare the degree estimates within each set of addresses matched to the same peer and find that they are very similar: we estimate the average peer degree over the whole observed time span for each address; for each set of addresses that we matched to the same peer $p$, we calculate the average degree $\tilde{n}_p$ of the estimates for the addresses of $p$; and we then calculate the relative deviation of the estimate for each address of $p$ from the mean $\tilde{n}_p$. Averaging these deviations over all peers yields an average deviation of only 0.2\,\%.
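The matching logic itself amounts to intersecting sets of (spam address, timestamp) tuples and merging the resulting pairs into groups. A minimal Python sketch, assuming the filtered observations are available as a mapping from each sender IP address to the set of tuples received from it:

\begin{verbatim}
from itertools import combinations

def match_addresses(tuples_by_ip, min_shared=5):
    """Group sender IPs that appear to belong to the same peer.

    tuples_by_ip maps each sender IP to the set of (spam_address,
    timestamp) tuples observed from it (after the filtering described
    above). Two IPs are matched if they share at least min_shared
    tuples; intersecting matches are merged via union-find.
    """
    parent = {ip: ip for ip in tuples_by_ip}

    def find(ip):
        while parent[ip] != ip:
            parent[ip] = parent[parent[ip]]  # path halving
            ip = parent[ip]
        return ip

    for a, b in combinations(tuples_by_ip, 2):
        if len(tuples_by_ip[a] & tuples_by_ip[b]) >= min_shared:
            parent[find(a)] = find(b)  # union the two groups

    groups = {}
    for ip in tuples_by_ip:
        groups.setdefault(find(ip), []).append(ip)
    return [g for g in groups.values() if len(g) > 1]

obs = {"192.0.2.1":   {("10.0.0.%d" % i, 1627000000) for i in range(8)},
       "2001:db8::1": {("10.0.0.%d" % i, 1627000000) for i in range(8)},
       "192.0.2.9":   {("10.9.9.%d" % i, 1627000300) for i in range(8)}}
print(match_addresses(obs))  # -> [['192.0.2.1', '2001:db8::1']]
\end{verbatim}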
\section{Estimating the Number of (Unreachable) Peers} \begin{figure}[tbp] \centering \includegraphics[width=\linewidth]{img/Unreachable-Distribution.pdf} \caption{Usage of connection slots of all reachable peers. } \label{fig-unreachable-distribution} \end{figure} Whether they are reachable or unreachable, peers in the Bitcoin P2P network create outgoing connections to reachable peers. If each peer created exactly ten outgoing connections and we knew the number of incoming connections at reachable peers, we could infer the total number of peers by dividing the number of incoming connections at reachable peers by ten. While we can estimate the number of existing (incoming and outgoing) connections at reachable peers from the peer degree distribution, not every peer in the real network creates exactly ten outgoing connections. For instance, there are peers, such as our monitor nodes, that create outgoing connections to all reachable peers and thus have several thousand outgoing connections. We call these peers super peers. We further assume a class of semi-super peers that open connections to half of the reachable peers, to model peers that open connections to many but not all of the reachable peers. We have estimated above that there are about 7,650 reachable peers. Assuming that each reachable peer runs Bitcoin Core with the default configuration, each reachable peer opens ten connections, and thus we estimate that there are $7,650 \cdot 10$ outgoing connections of reachable peers. As every outgoing connection is an incoming connection at another peer, there are also $7,650 \cdot 10$ incoming connections from reachable peers. To find the number of super peers, we analyze the logs of three reachable peers that have been running for several months. In October and November 2021, on average 18 peers were connected to all three of these reachable peers. Therefore, we assume that there are 18 super peers that are connected to all reachable peers in the network. To estimate the number of semi-super peers, we count the number of peers that were connected to two reachable peers that do not have a connection limit. On average, 44 peers were connected to both of these two reachable peers. Therefore, we assume a number of $44 - 18 = 26$ semi-super peers. The super peers take up $18 \cdot 7,650$ connection slots and the semi-super peers take up $26 \cdot 7,650/2$ connection slots in the network. To estimate how many connections exist in the network, we calculate the sum of all estimated peer degrees of peers that have an estimated degree not higher than $130$, i.e., the default maximum of $125$ plus an error margin of 4\,\%. We ignore peers with a higher degree because such peers are not using the default configuration and we do not know how many of their connections are outgoing or incoming connections. Using our estimated peer degree distribution, we get an estimate of $712,840$ filled connection slots. Subtracting the number of incoming connections that we ascribe to reachable peers, super peers, and semi-super peers, we get a remaining number of $322,690$ connection slots that are probably filled by unreachable peers (\cref{fig-unreachable-distribution}). To estimate the number of unreachable peers from the number of connections of unreachable peers, we need to know the number of outgoing connections of unreachable peers. We determine the distribution of clients used by unreachable peers by calculating the distribution of user agents that are announced to our reachable peers. Based on this distribution and the default number of outgoing connections created by each client\footnote{Bitcoin Core: 10 (8 for full relay and 2 block relay only) / 78.4\,\%, BitcoinJ: 12 / 6.5\,\%, Bread: 3 / 3.3\,\%, bcoin: 8 / 2.8\,\%}, we calculate that unreachable peers open on average 9.8 outgoing connections. This result leads to an estimated number of $32,800$ unreachable peers. This estimate for the number of unreachable peers at one point in time is plausible, given that previous work has estimated the existence of 27,000 to 35,000 unreachable peers per day \cite{grundmann_announcements_2022} or 155,000 peers in each six-hour interval with high churn \cite{wang_towards_2017}.
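The final arithmetic can be reproduced in a few lines. The sketch below uses the user agent shares and per-client connection defaults from the footnote above, renormalizing over the listed clients only (ignoring the unlisted remainder, which is an approximation):

\begin{verbatim}
# (default outgoing connections, share of announced user agents) per client
clients = {"Bitcoin Core": (10, 0.784),
           "BitcoinJ":     (12, 0.065),
           "Bread":        (3,  0.033),
           "bcoin":        (8,  0.028)}

total_share = sum(share for _, share in clients.values())
avg_outgoing = sum(conns * share
                   for conns, share in clients.values()) / total_share
print("average outgoing connections: %.1f" % avg_outgoing)  # ~9.8

remaining_slots = 322690  # slots ascribed to unreachable peers (see text)
# Roughly 32,800 unreachable peers, consistent with the estimate above.
print("estimated unreachable peers: %.0f" % (remaining_slots / avg_outgoing))
\end{verbatim}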
\section{Conclusion} We have shown the current peer degree distribution of the Bitcoin P2P network. As the openness of the P2P network depends on reachable peers accepting incoming connections, it is a notable result that more than a quarter of the reachable peers do not accept incoming connections. Our analysis is based on the observation of a spam wave of addresses in July and August 2021. The same spam wave is not possible anymore in the Bitcoin P2P network because, right before the spam wave started, a change that could reduce the impact of such spam by rate-limiting \textsc{addr} propagation was implemented \cite{web-bitcoin-pr} and released with Bitcoin Core 22.0 \cite{web-bitcoin-version-history} in September 2021. However, the insights into the Bitcoin P2P network gained from our observations of the spam wave can be helpful for the future development of Bitcoin and other P2P networks. \bibliographystyle{splncs04}
\section{Introduction} \label{sec:introduction} Stability and performance are two crucial factors to be taken into consideration while designing any controller. One of the most promising optimization based controllers is Model Predictive Control (MPC). MPC finds applications in every field of science, engineering and technology \cite{Mayne1990, Qin2003, Camacho2007}, and various researchers have presented overviews of MPC schemes \cite{Allgower1999, Mayne2000}. The primary concept for ensuring nominal stability involves the inclusion of stabilizing constraints \cite{Rawlings1993, Michalska1993}. Commonly used stabilizing constraints include a) a terminal equality constraint, b) a terminal penalty term, and c) a terminal inequality constraint \cite{Allgower1999, Fontes2001, Rawlings2009}. Significant progress has been made in the area of nominal stability of linear MPC \cite{Muske1993} and Nonlinear MPC (NMPC) \cite{Oliveira1994, Sistu1996, Mayne2014, Rawlings2017}. Grimm et al. have presented examples where a significantly small change in any of the model parameters can alter the stability characteristics of NMPC \cite{Grimm2004}. Hence, formally establishing asymptotic stability becomes important, necessary and challenging. Enforcing a terminal equality constraint is convenient \cite{Keerthi1988}; however, its main limitation is that it is highly conservative and often leads to infeasibility, specifically when using constrained formulations. Michalska and Mayne conceptualized the dual mode MPC scheme, wherein the idea of a terminal region was introduced \cite{Michalska1993}. The NMPC controller is expected to drive the plant trajectory into a region around the set point, termed the terminal region, in a finite time using feasible inputs. Subsequently, a local linear controller takes the system trajectory to the set point. This idea was extended by Chen and Allg\"ower, wherein the NMPC controller is used inside the terminal region instead of a linear controller, which has resulted in the concept of the Quasi Infinite Horizon - Nonlinear Model Predictive Control (QIH-NMPC) scheme \cite{Chen1998}. The region of attraction for NMPC is the set of initial conditions from which all the constraints can be satisfied with feasible inputs within a specified finite time. It may be noted that the size of the terminal region is directly correlated with the size of the feasible region, i.e., the region of attraction. For a given finite horizon formulation with constant prediction horizon time, a larger terminal region results in a larger region of attraction. A larger region of attraction indicates the ability of the controller to converge to the desired operating point from an initial condition that is far away from the set point \cite{Mayne2000}. Alternately, for identical initial conditions, the controller requires a smaller prediction horizon time to satisfy the terminal inequality constraint. Mhaskar et al. presented an asymptotically stable NMPC design for continuous time switched systems \cite{Mhaskar2005}. Its major drawbacks are the explicit characterization of the feasible initial conditions and the applicability only to switched systems. Limon et al. presented a design of NMPC without a terminal inequality constraint. The concept involves appropriately scaling the terminal penalty term to compensate for the difference due to the absence of the terminal inequality constraint \cite{Limon2006}. It may be noted that there is a limit to the extent to which the designer can increase the terminal penalty term, and this results in a smaller region of attraction.
Pannocchia et al. presented an algorithm to convert the infinite horizon constrained linear quadratic regulator formulation into a finite dimensional quadratic programming problem after assuming piece-wise linear inputs \cite{Pannocchia2010}. However, the issues of solution convergence and sub-optimality need to be addressed. Esterhuizen et al. presented NMPC asymptotic stability results without stabilizing terminal ingredients. However, two key assumptions, a sufficiently long prediction horizon and cost controllability, limit the applicability of the algorithm to a restricted class of systems \cite{Esterhuizen2021}. Jadbabaie et al. present unconstrained NMPC stability results without terminal ingredients. The approach makes use of the gradual reduction of a Lyapunov function, eventually resulting in asymptotic stability \cite{Jadbabaie2001}. However, the amount of time required to reach the desired operating points may be very large, and the design is suitable only for unconstrained systems. The approach proposed in this work is suitable for any kind of nonlinear continuous time system with input constraints. Chen and Allg\"ower presented an approach for the computation of the terminal penalty term and for the characterization of the terminal region for the continuous time NMPC formulation \cite{Chen1998}. Their approach involves local linearization at the set point followed by solving a modified Lyapunov equation; subsequently, they provide a way to numerically characterize the terminal region using inequality based conditions. The first major drawback of their approach is a tuning parameter that is nearly independent of the NMPC formulation stage weighting matrices. The second limitation is that it provides a single scalar tuning parameter, which restricts the design to one degree of freedom for shaping the terminal region, hence resulting in a very conservative terminal region. Chen and Allg\"ower's approach makes use of a Linear Quadratic Regulator (LQR) controller, which is designed using the stage cost weighting matrices and in turn does not provide any additional degrees of freedom to the controller designer. Several researchers have developed approaches for the terminal region characterization of NMPC formulations for the discrete time case \cite{Limon2002, Johansen2004, Rajhans2017, Yu2017, Rajhans2019}. It may be noted that discrete time formulations require separate consideration because the sampling time vastly affects the shape and size of the terminal region \cite{Astrom1997, Grune2011, Rajhans2019}. Although the approaches developed for discrete time QIH-NMPC formulations provide large degrees of freedom for enlarging the terminal region, their applicability to continuous time QIH-NMPC formulations is very limited. Hence, there is a need to develop approaches for the terminal region characterization of continuous time NMPC formulations that provide large degrees of freedom. Chen and Allg\"ower \cite{Chen1998a} established that the terminal inequality constraint can be avoided when the terminal penalty term and the prediction horizon are chosen sufficiently large for the continuous time NMPC formulation. However, the result is applicable only to stable set points or stable continuous time nonlinear systems. In general, for any kind of system, the nominal stability of the NMPC controller is not guaranteed without the terminal ingredients.
In addition, when the terminal inequality constraint is avoided, the designer is typically required to use a relatively larger prediction horizon time, which increases the computational burden significantly. This limitation can be overcome by using the terminal inequality constraint, which assists in reducing the prediction horizon time \cite{Chen1998}. The approach by Chen and Allg\"ower \cite{Chen1998} is based on a linear controller designed at the origin and is applicable to any continuous time nonlinear system governed by Ordinary Differential Equations (ODEs). Lucia et al. \cite{Lucia2015} have extended this work by making use of a nonlinear controller for the design of the terminal ingredients. Their approach is based on Taylor series expansions of the system dynamics, considering higher order terms of the stage weighting matrices. However, this approach is applicable only to a special class of continuous time systems wherein the time derivatives of the system dynamics are polynomial functions. In this work, two approaches are presented which are applicable to any type of nonlinear continuous time system governed by ODEs. Rajhans et al. presented an alternate arbitrary controller based approach for the computation of the terminal penalty and for the characterization of the terminal region for continuous time NMPC formulations \cite{Rajhans2016}. The arbitrary controller based approach makes use of a single additive matrix as the tuning parameter for shaping the terminal region. The current work converts the norm based method into an inequality based method, which assists in enlarging the terminal region. In the approach proposed in the current work, two tuning matrices are provided, which further increase the degrees of freedom available to the controller designer. The proposed approach provides three degrees of freedom, namely, a) the linear stabilizing controller, b) an additive state weighting matrix, and c) an additive input weighting matrix. The current work also proposes a novel LQR based approach for the terminal region characterization, which provides two additive weighting matrices for enlarging the terminal region. The efficacy of the proposed approaches with three tuning parameters is demonstrated using simulations on a benchmark chemical engineering system, the Continuous Stirred Tank Reactor (CSTR) \cite{Hicks1971}. Various researchers have used the two state CSTR system for demonstrating their controller performance \cite{Tenny2004, Ghaffari2013, Ellis2014, Narasingam2019, Ramesh2021}. However, the application of continuous time quasi infinite horizon NMPC with guaranteed stability is very limited, and it constitutes one additional novelty of the current work. In the demonstration example, it can be observed that the proposed approaches result in significantly larger terminal regions when compared to the approaches available in the literature. The work also presents closed loop simulations of the system under the continuous time NMPC controller to validate the applicability of the controller in practical scenarios. Results pertaining to the reduction of the prediction horizon time are presented in detail. The second section presents the continuous time NMPC formulation in detail; in addition, the approach by Chen and Allg\"ower \cite{Chen1998} is stated formally along with its limitations. The third section presents the proposed arbitrary controller based approach, using the inequality method, for the computation of the terminal penalty and for the characterization of the terminal region.
In addition, the third section presents the novel LQR based approach for the terminal region characterization. Subsequently, the asymptotic stability result is presented. The fourth section presents the numerical characterization of the terminal region using the approaches presented in the third section. The fifth section presents the terminal region characterization for the demonstration case study. The sixth section details the continuous time CSTR simulation and the results obtained using the CSTR case study. The seventh section gives the conclusions from the theory and the case study. \section{Continuous Time NMPC Formulation} Consider a continuous time nonlinear system given by \begin{eqnarray} \frac{d{\mathbf{X}(t)}}{dt} = {\mathbf{f}_c} ({\mathbf{X}}(t),{\mathbf{U}}(t)) \label{csystem1} \end{eqnarray} where $\mathbf{X}(t) \in \mathbb{R}^{n_x}$ denotes the state vector in absolute terms and $\mathbf{U}(t) \in \mathbb{R}^{n_u}$ denotes the input vector in absolute terms. Let $(\mathbf{X}_s, \mathbf{U}_s)$ be a constant steady state of the system (\ref{csystem1}), i.e., $\mathbf{0} = {\mathbf{f}_c} (\mathbf{X}_s, \mathbf{U}_s)$. Define the shift of origin as follows: \begin{eqnarray} \mathbf{x}(t) = \mathbf{X}(t) - \mathbf{X}_s \\ \mathbf{u}(t) = \mathbf{U}(t) - \mathbf{U}_s \end{eqnarray} After the shift of origin, the continuous time nonlinear system is given as \begin{eqnarray} \frac{d({\mathbf{X}(t)-\mathbf{X}_s})}{dt} = {\mathbf{f}_c} ({\mathbf{x}(t)+\mathbf{X}_s},{\mathbf{u}(t)+\mathbf{U}_s}) \label{csystem2} \end{eqnarray} Rewriting using simpler notation, with $\mathbf{f}(\mathbf{x},\mathbf{u}) := \mathbf{f}_c(\mathbf{x}+\mathbf{X}_s, \mathbf{u}+\mathbf{U}_s)$, gives \begin{align} \frac{d{\mathbf{x}(t)}}{dt} &= {\mathbf{f}} ({\mathbf{x}}(t),{\mathbf{u}}(t)) \label{csystem} \\ {\bf{x}}(0) &= {\bf{x}}_0 \end{align} where $\mathbf{x}(t) \in \mathcal{X} \subset \mathbb{R}^{n_x}$ denotes the state vector and $\mathbf{u}(t) \in \mathcal{U} \subset \mathbb{R}^{n_u}$ denotes the input vector. The assumptions are stated as follows: \begin{description} \item[C1] The system dynamics function $\mathbf{f}: \mathbb{R}^{n_x}\times \mathbb{R}^{n_u} \to \mathbb{R}^{n_x}$ is twice continuously differentiable. \item[C2] The origin $\mathbf{0} \in \mathbb{R}^{n_x}$ is an equilibrium point of the system (\ref{csystem}), i.e., $\mathbf{f}\left( \mathbf{0}, \mathbf{0} \right) =\mathbf{0}$. \item[C3] The inputs $\mathbf{u}(t)$ are constrained inside a closed and convex set $\mathcal{U} \subset \mathbb{R}^{n_u}$ that contains the origin in its interior. \item[C4] The system (\ref{csystem}) has a unique solution for any initial condition $\mathbf{x}_{0} \in \mathcal{X}$ and any piecewise right continuous input $\mathbf{u}(\cdot):[0,\infty) \to \mathcal{U}$. \item[C5] The state $\mathbf{x}(t)$ is perfectly known at any time $t$, i.e., all the states are measured. \item[C6] External disturbances do not affect the system dynamics.
\end{description} \subsection{NMPC Formulation} For the continuous time system given by (\ref{csystem}), the NMPC formulation is stated as follows: \begin{equation} \min_{\overline{\mathbf{u}}_{[t, t+T_{p}]}} J\left( \mathbf{x}(t),\overline{\mathbf{u}}_{[t, t+T_{p}]}\right) \label{COptimal} \end{equation} with \begin{eqnarray} J\left( \mathbf{x}(t),\overline{\mathbf{u}}_{[t, t+T_{p}]}\right) &=&\int_{t}^{t+T_{p}}\left\{ \mathbf{z}(\tau)^T \mathbf{W}_{x} \mathbf{z}(\tau) + \overline{\mathbf{u}}(\tau)^T \mathbf{W}_{u} \overline{\mathbf{u}}(\tau) \right\} d\tau +\mathbf{z}(t+T_p)^T \mathbf{P} \mathbf{z}(t+T_p) \label{StageCost} \end{eqnarray} \begin{equation} \overline{\mathbf{u}}_{[t, t+T_{p}]}=\left\{ \mathbf{u}(\tau )\in \mathcal{U}:\tau \in \left[t, t+T_{p}\right] \right\} \label{InputSet} \end{equation} subject to \begin{eqnarray} \frac{d\mathbf{z}(\tau)}{d\tau }&=&\mathbf{f}\left( \mathbf{z}(\tau ), \overline{\mathbf{u}}(\tau )\right) \text{ for } \tau \in \left[ t, t+T_{p}\right] \label{PredictedState} \\ \mathbf{z}(t)&=&\mathbf{x}(t) \label{InitialCondition} \\ \mathbf{z}\left( t+T_{p}\right) &\in& \Omega \label{TerminalRegion} \end{eqnarray} where $\mathbf{W}_{x}$ and $\mathbf{W}_{u}$ are state and input weighting matrices of dimensions $\left( n_x \times n_x \right)$ and $\left( n_u \times n_u \right)$, respectively, and $\mathbf{P}$ is the terminal penalty matrix of dimension $\left( n_x \times n_x \right)$. $\mathbf{W}_{x}, \mathbf{W}_{u}, \mathbf{P}$ are symmetric positive definite matrices. $T_{p}$ is a finite prediction horizon time and is identical to the control horizon time. $\mathbf{z}(\tau )$ denotes the predicted state in the NMPC formulation and $\overline{\mathbf{u}}(\tau )$ denotes the future control input moves. The set $\Omega$ is termed the \emph{terminal region}; it is a neighborhood of the origin. The set $\mathcal{X}_{T_p} \subset \mathcal{X} \subset \mathbb{R}^{n_x}$, termed the \emph{region of attraction}, is the set of all feasible initial conditions, i.e., the set of all initial conditions $\mathbf{x}_0$ for which the terminal inequality constraint (\ref{TerminalRegion}) can be satisfied while the input constraints given by equation (\ref{InputSet}) are satisfied as well. \subsection{Design and Implementation of NMPC Formulation} The terminal region $\Omega$ is chosen as an invariant set for the nonlinear system (\ref{csystem}) controlled by the local linear controller with gain matrix $\mathbf{K}$. The terminal penalty term is chosen such that, for all trajectories starting from any point inside the terminal region $\Omega$ under this local linear controller, the terminal cost upper bounds the sum of all the predicted stage cost terms from the end of the horizon to infinity, i.e., \begin{equation} \mathbf{z}(t+T_p)^T \mathbf{P} \mathbf{z}(t+T_p) \geq \int_{t+T_{p}}^{\infty }\left\{ \mathbf{z}(\tau)^T \mathbf{W}_{x} \mathbf{z}(\tau) + \overline{\mathbf{u}}(\tau)^T \mathbf{W}_{u} \overline{\mathbf{u}}(\tau) \right\} d\tau \label{TRCondition} \end{equation} with $\overline{\mathbf{u}}(\tau) = -\mathbf{K} \mathbf{z}(\tau) \in \mathcal{U}$ for all $\tau \geq (t+T_p)$ and for all $\mathbf{z}(t+T_p) \in \Omega$. It is assumed that the solution to the optimal control problem (\ref{COptimal}) with stage cost defined by (\ref{StageCost}), input set given by (\ref{InputSet}), predicted state dynamics (\ref{PredictedState}), initial condition (\ref{InitialCondition}) and terminal constraint (\ref{TerminalRegion}), i.e.,
The controller is implemented in a moving horizon framework. Accordingly, only the first control move \begin{eqnarray} \mathbf{u}(t) = \overline{\mathbf{u}}^*(t) \label{InputMove} \end{eqnarray} is implemented in the plant. The entire process is repeated at the next time instant $t+\delta$, with $\delta$ being a sufficiently small sampling period. The term \emph{Quasi Infinite} stems from the fact that the NMPC formulation inherits the stability properties of the infinite horizon formulation while the actual implementation uses a finite horizon. This is achieved with the help of inequality (\ref{TRCondition}). However, a terminal penalty term satisfying condition (\ref{TRCondition}) alone is not sufficient to guarantee nominal asymptotic stability of the NMPC controller; hence the terminal constraint given by (\ref{TerminalRegion}) becomes inevitable. It may be noted that the local linear controller with gain matrix $\mathbf{K}$ is not used for the implementation of the NMPC controller; it is only a mathematical construct used to characterize the terminal region $\Omega$. \subsection{Chen and Allg\"ower's Approach} Before proceeding to the proposed arbitrary controller based approach, a brief look at Chen and Allg\"ower's approach is required. Consider the Jacobian linearization of the nonlinear system (\ref{csystem}) in the neighborhood of the origin, \begin{equation} \frac{d\mathbf{x}(t)}{dt}=\mathbf{Ax}(t)+\mathbf{Bu}(t) \label{CLinSys} \end{equation}% where \begin{equation*} \mathbf{A=}\left[ \frac{\partial \mathbf{f}}{\partial \mathbf{x}}\right] _{\left( \mathbf{0},\mathbf{0}\right) }\text{ and \ }% \mathbf{B=}\left[ \frac{\partial \mathbf{f}}{\partial \mathbf{u}}\right] _{\left( \mathbf{0},\mathbf{0}\right) } \end{equation*} One additional assumption is required at this stage. \begin{description} \item[C7] The linearized system (\ref{CLinSys}) is stabilizable. \end{description} Chen and Allg\"ower characterize the terminal region as \begin{equation} \Omega \equiv \left\{ \mathbf{x} \in \mathbb{R}^{n_x} | \mathbf{x}^{T}% \mathbf{P} \mathbf{x} \leq \alpha, -\mathbf{Kx} \in \mathcal{U}% \right\} \end{equation}% where $\mathbf{K}$ is a stabilizing linear feedback gain and the terminal penalty matrix $\mathbf{P}$ is the steady state solution of the modified Lyapunov equation given as follows: \begin{equation} \left( \mathbf{A}_{K} + \kappa \mathbf{I}\right) ^{T} \mathbf{P} +% \mathbf{P} \left( \mathbf{A}_{K}+\kappa \mathbf{I}\right) =-\mathbf{Q% }^* \label{ChenLyapunov} \end{equation}% \begin{equation} \mathbf{Q}^* = \mathbf{W}_{x} + \mathbf{K}^T \mathbf{W}_{u} \mathbf{K} \label{Qstar} \end{equation} where $\mathbf{A}_{K} = \mathbf{A-BK}$ and the parameter $\kappa > 0$ is chosen such that $\kappa < -Re \left[ \lambda _{\max }\left( \mathbf{A}_{K} \right) \right]$. Note that $Re \left[ \lambda _{\max }\left( \mathbf{A}_{K} \right) \right]$ is the real part of the rightmost eigenvalue of $\mathbf{A}_{K}$, i.e. the eigenvalue having the largest real part; it is negative because the matrix $\mathbf{A}_{K}$ is stable by design. It can be noted that once the stage cost weighting matrices $\mathbf{W}_{x}, \mathbf{W}_{u}$ are chosen, there is barely any degree of freedom left to the designer for shaping the terminal region. This results in very conservative terminal regions.
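For illustration, the pair $(\kappa, \mathbf{P})$ in Chen and Allg\"ower's construction can be computed with SciPy as sketched below, assuming the matrices $\mathbf{A}$, $\mathbf{B}$, $\mathbf{K}$, $\mathbf{W}_x$, $\mathbf{W}_u$ are available; the factor $0.95$ placing $\kappa$ inside the admissible interval and the function name are illustrative choices.
\begin{verbatim}
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def chen_allgower_P(A, B, K, Wx, Wu, kappa_frac=0.95):
    """Terminal penalty P from the modified Lyapunov equation
    (A_K + kappa*I)^T P + P (A_K + kappa*I) = -(Wx + K^T Wu K)."""
    Ak = A - B @ K
    lam_max = np.max(np.real(np.linalg.eigvals(Ak)))  # negative (A_K stable)
    kappa = -kappa_frac * lam_max                     # 0 < kappa < -Re[lam_max]
    Qstar = Wx + K.T @ Wu @ K
    M = Ak + kappa * np.eye(A.shape[0])
    # solve_continuous_lyapunov(a, q) solves a X + X a^T = q
    P = solve_continuous_lyapunov(M.T, -Qstar)
    return P, kappa
\end{verbatim}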
This limitation is overcome by the arbitrary controller based approach, wherein additive tuning matrices are introduced that provide a large number of degrees of freedom for enlarging the terminal region; this approach is presented in the subsequent section. \section{Alternate Approaches for the Terminal Region Characterization} In the arbitrary controller based approach, an arbitrary stabilizing linear controller is designed using any of the methods available in the literature, such as pole placement \cite{Kailath1980, Albertos2006}, linear quadratic Gaussian control \cite{Kirk1970} and so on. We prove the following lemma for the arbitrary controller based approach: \begin{lemma} \label{lemma1} Suppose that assumptions C1 to C7 are satisfied and a stabilizing feedback control law is designed, i.e. $\mathbf{A}_{K}=(\mathbf{A-BK})$ is stable, meaning all its eigenvalues have negative real part. Let $\Delta \mathbf{Q}$ be any positive definite matrix. Let the matrix $\mathbf{P}$ denote the solution of the following modified Lyapunov equation: \begin{equation} \mathbf{A}_{K}^T \mathbf{P} + \mathbf{P} \mathbf{A}_{K}=-(% \mathbf{Q}^* + \Delta \mathbf{Q)} \label{ACLyap} \end{equation}% where $\mathbf{Q}^*$ is defined by equation (\ref{Qstar}). Then there exists a constant $\alpha > 0$ which defines an ellipsoid of the form \begin{equation} \Omega \equiv \left\{ \mathbf{x} \in \mathbb{R}^{n_x} | \mathbf{x}^{T} \mathbf{% P} \mathbf{x} \leq \alpha , -\mathbf{Kx} \in \mathcal{U} \right\} \label{ACTR} \end{equation}% such that $\Omega$ is an invariant set for the nonlinear system given by (\ref{csystem}) with the linear controller $\mathbf{u}(t) = - \mathbf{Kx}(t)$. Additionally, for any $\mathbf{x}(t+T_p) \in \Omega$ the inequality given by (\ref{TRCondition1}) holds true. \begin{equation} \mathbf{z}(t+T_p)^T \mathbf{P} \mathbf{z}(t+T_p) \geq \int_{t+T_{p}}^{\infty }\left\{ \begin{array}{c} \mathbf{z}(\tau)^T \mathbf{W}_{x} \mathbf{z}(\tau) + \overline{\mathbf{u}}(\tau)^T \mathbf{W}_{u} \overline{\mathbf{u}}(\tau) \end{array} \right\} d\tau \label{TRCondition1} \end{equation} \end{lemma} \begin{proof} Since $\mathbf{A}_{K}=(\mathbf{A-BK})$ is stable, the eigenvalues of $\mathbf{A}_{K}$ have negative real part. Using the solvability condition of the modified Lyapunov equation, a unique $\mathbf{P} > 0$ can be computed which solves equation (\ref{ACLyap}). By Assumption C3, the origin $\mathbf{0} \in \mathbb{R}^{n_u}$ lies in the interior of the input constraint set $\mathcal{U}$. Accordingly, we can compute a constant $\gamma > 0$ which defines a set $\Omega_{\gamma}$ such that \begin{equation} \Omega_\gamma \equiv \left\{ \mathbf{x} \in \mathbb{R}^{n_x} | \mathbf{x}^{T}% \mathbf{P} \mathbf{x} \leq \gamma, - \mathbf{Kx} \in \mathcal{U}% \right\} \label{Omegagamma} \end{equation}% Now, let $0 < \alpha \leq \gamma$ specify a region of the form given by equation (\ref{ACTR1}). \begin{equation} \Omega \equiv \left\{ \mathbf{x} \in \mathbb{R}^{n_x} | \mathbf{x}^{T} \mathbf{% P} \mathbf{x} \leq \alpha \right\} \label{ACTR1} \end{equation}% As the input constraints are satisfied in $\Omega _\gamma$ and $\Omega \subseteq \Omega_\gamma$ (by virtue of $0 < \alpha \leq \gamma$), the system dynamics can be equivalently viewed as an input unconstrained system in the set $\Omega$.
Consider a vector $\mathbf{\Phi}_{K}(\mathbf{x})$ representing the nonlinearity in the system dynamics, defined as \begin{equation} \mathbf{\Phi }_{K}(\mathbf{x})=\mathbf{f}(\mathbf{x, -Kx}) - \mathbf{A}_{K}% \mathbf{x} \label{PhyK} \end{equation}% Note that for a linear system $\mathbf{\Phi}_{K}(\mathbf{x}) = \mathbf{0}$. Consider a Lyapunov candidate defined as \begin{equation} V(\mathbf{x}) = \mathbf{x}^{T} \mathbf{P} \mathbf{x} \label{Vx} \end{equation}% The time derivative of $V(\mathbf{x})$ can be expressed as follows: \begin{align} \frac{dV(\mathbf{x})}{dt}& = \frac{d \mathbf{x}^{T}}{dt} \mathbf{P} \mathbf{x} + \mathbf{x}^{T} \mathbf{P} \frac{d \mathbf{x}}{dt} \label{Vdot1} \end{align}% Substituting from (\ref{PhyK}) into (\ref{Vdot1}), \begin{align} \frac{dV(\mathbf{x})}{dt} = \mathbf{x}^{T} \left( \mathbf{A}_{K}^{T} \mathbf{P} + \mathbf{P} \mathbf{A}_{K} \right) \mathbf{x} + 2 \mathbf{x}^{T} \mathbf{P} \mathbf{\Phi }_{K} \mathbf{(x)} \label{Vdot2} \end{align}% Using equation (\ref{ACLyap}) in (\ref{Vdot2}), \begin{align} \frac{dV(\mathbf{x})}{dt} = -\mathbf{x}^{T} \left( \mathbf{Q}^*+ \Delta \mathbf{Q} \right) \mathbf{x} +2\mathbf{x}^{T} \mathbf{P} \mathbf{\Phi }_{K} \mathbf{(x)} \label{Vdot3} \end{align}% Rearranging results in the following equation: \begin{align} \frac{dV(\mathbf{x})}{dt} = -\mathbf{x}^{T} \mathbf{Q}^* \mathbf{x} + \left( -\mathbf{x}^{T} \Delta \mathbf{Q} \mathbf{x} +2\mathbf{x}^{T} \mathbf{P} \mathbf{\Phi }_{K} \mathbf{(x)} \right) \label{Vdot3b} \end{align}% There are two possibilities to characterize the terminal region: the first is a norm based method and the second is an inequality based method. \\ Method A - Norm based method: Bounding the last term in equation (\ref{Vdot3b}) using $|\mathbf{\Phi}_K(\mathbf{x})| \leq L_\Phi |\mathbf{x}|$ for all $\mathbf{x} \in \Omega$ (see (\ref{TRCompute2}) below), \begin{align} \mathbf{x}^{T} \mathbf{P} \mathbf{\Phi }_{K} \mathbf{(x)} \leq |\mathbf{P}| L_\Phi |\mathbf{x}|^2 \label{Vdot4} \end{align}% Since $\mathbf{x}^T \Delta \mathbf{Q} \mathbf{x} \ge \lambda_{min}(\Delta \mathbf{Q}) |\mathbf{x}|^2$, combining (\ref{Vdot4}) with (\ref{Vdot3b}), \begin{align} \frac{dV(\mathbf{x})}{dt} \le -\mathbf{x}^{T} \mathbf{Q}^* \mathbf{x} - \left[ \lambda_{min}(\Delta \mathbf{Q}) - 2 |\mathbf{P}| L_\Phi \right] |\mathbf{x}|^2 \label{Vdot5} \end{align}% If $\Omega$ is chosen such that \begin{align} \left[ \lambda_{min}(\Delta \mathbf{Q}) - 2 |\mathbf{P}| L_\Phi \right] \geq 0 \label{Vdot6} \end{align}% then \begin{align} \frac{dV(\mathbf{x})}{dt} \le -\mathbf{x}^{T} \mathbf{Q}^* \mathbf{x} \label{Vdot7} \end{align}% Method B - Inequality based method: Rearranging terms from equation (\ref{Vdot3}), \begin{align} \frac{dV(\mathbf{x})}{dt} = -\mathbf{x}^{T} \mathbf{Q}^* \mathbf{x} + \left( -\mathbf{x}^{T} \Delta \mathbf{Q} \mathbf{x} +2\mathbf{x}^{T} \mathbf{P} \mathbf{\Phi }_{K} \mathbf{(x)} \right) \label{Vdot11} \end{align}% Consider the second term of expression (\ref{Vdot11}) and define \begin{equation} \mathbf{\Psi} (\mathbf{x}) := \left( \mathbf{x}^{T} \Delta \mathbf{Q} \mathbf{x} -2 \mathbf{x}^{T} \mathbf{P} \mathbf{\Phi }_{K} \mathbf{(x)} \right) \label{PsiDef} \end{equation} Using (\ref{PsiDef}) in (\ref{Vdot11}), \begin{align} \frac{dV(\mathbf{x})}{dt} = -\mathbf{x}^{T} \mathbf{Q}^* \mathbf{x} - \mathbf{\Psi} (\mathbf{x}) \label{Vdot12} \end{align}% If $\Omega$ is chosen such that \begin{align} \mathbf{\Psi} (\mathbf{x}) = \left( \mathbf{x}^{T} \Delta \mathbf{Q} \mathbf{x} -2 \mathbf{x}^{T} \mathbf{P} \mathbf{\Phi }_{K} \mathbf{(x)} \right) \geq 0 \label{Vdot13} \end{align}% then \begin{align} \frac{dV(\mathbf{x})}{dt} \le -\mathbf{x}^{T} \mathbf{Q}^* \mathbf{x} \label{Vdot14} \end{align}%
Equation (\ref{Vdot14}) for the inequality based method is identical to equation (\ref{Vdot7}) for the norm based method. Integrating inequality (\ref{Vdot7}) or (\ref{Vdot14}) over the interval $[t+T_{p}, \infty)$, and using the fact that $V(\mathbf{x}(\tau)) \to 0$ as $\tau \to \infty$ since the controlled trajectory converges to the origin, it follows that \begin{equation} V(\mathbf{x(}t+T_{p}))\geq \int_{t+T_{p}}^{\infty} \mathbf{x}(\tau)^{T}% \mathbf{Q}^* \mathbf{x}(\tau) d\tau \end{equation}% Since, with $\overline{\mathbf{u}}(\tau) = -\mathbf{K}\mathbf{x}(\tau)$, one has $\mathbf{x}^{T}\mathbf{Q}^{*}\mathbf{x} = \mathbf{x}^{T}\mathbf{W}_{x}\mathbf{x} + \overline{\mathbf{u}}^{T}\mathbf{W}_{u}\overline{\mathbf{u}}$, inequality (\ref{TRCondition1}) holds true for any $\mathbf{x(}t+T_{p}) \in \Omega$. \end{proof} \begin{lemma} \label{lemma2} Suppose that assumptions C1 to C7 are satisfied. Let $\widetilde{\mathbf{W}}_{x} > \mathbf{W}_x$ and $\widetilde{\mathbf{W}}_{u} > \mathbf{W}_u$ be positive definite matrices. Let the matrix $\mathbf{P}_{LQ}$ denote the solution of the following coupled equations: \begin{equation} \mathbf{A}_{{K}_{LQ}}^{T}\mathbf{P}_{LQ}+\mathbf{P}_{LQ}\mathbf{A}_{{K}_{LQ}} =-\left( \widetilde{\mathbf{W}}_{x}+\mathbf{K}% _{LQ}^{T}\widetilde{\mathbf{W}}_{u}\mathbf{K}_{LQ}\right) \label{CARE} \end{equation} \begin{equation} \mathbf{K}_{LQ}\mathbf{=}\left( \widetilde{\mathbf{W}}_{u}\right) ^{-1}% \mathbf{B}^{T}\mathbf{P}_{LQ} \label{Kgain} \end{equation} where $\mathbf{A}_{{K}_{LQ}} = \mathbf{A}-\mathbf{B} \mathbf{K}_{LQ}$. Then there exists a constant $\alpha > 0$ which defines an ellipsoid of the form \begin{equation} \Omega \equiv \left\{ \mathbf{x} \in \mathbb{R}^{n_x} | \mathbf{x}^{T} \mathbf{% P}_{LQ} \mathbf{x} \leq \alpha , -\mathbf{K}_{LQ} \mathbf{x} \in \mathcal{U} \right\} \label{LQRTR} \end{equation}% such that $\Omega$ is an invariant set for the nonlinear system given by (\ref{csystem}) with the linear controller $\mathbf{u}(t) = - \mathbf{K}_{LQ} \mathbf{x}(t)$. Additionally, for any $\mathbf{x}(t+T_p) \in \Omega$ the inequality given by (\ref{TRCondition1}) holds true.
\end{lemma} \begin{proof} The proof is similar to the proof of Lemma \ref{lemma1}, with $\mathbf{K}$ replaced by $\mathbf{K}_{LQ}$ and $\mathbf{P}$ replaced by $\mathbf{P}_{LQ}$; the remaining changes are shown below. Consider a candidate Lyapunov function defined as \begin{equation*} V(\mathbf{x})=\mathbf{x}^{T}\mathbf{P}_{LQ}\mathbf{x} \end{equation*} The time derivative of $V(\mathbf{x})$ can be expressed as follows \begin{equation} \frac{dV(\mathbf{x})}{dt}=\mathbf{x}^{T}\left( \mathbf{A}_{K_{LQ}}^{T}\mathbf{P}_{LQ}% \mathbf{+P}_{LQ}\mathbf{A}_{K_{LQ}}\right) \mathbf{x}+2\mathbf{x}^{T}\mathbf{P}_{LQ}% \mathbf{\Phi}_{K_{LQ}}(\mathbf{x}) \label{Vder} \end{equation}% Defining the matrices \begin{equation} \mathbf{\Delta W}_{x}\equiv \widetilde{\mathbf{W}}_{x}-\mathbf{W}_{x}>0\text{ and }\mathbf{\Delta W}_{u}\equiv \widetilde{\mathbf{W}}_{u}-\mathbf{W}_{u}>0 \label{deltaW} \end{equation}% one can write \begin{equation} \widetilde{\mathbf{W}}_{x}+\mathbf{K}_{LQ}^{T}\widetilde{\mathbf{W}}_{u}% \mathbf{K}_{LQ}\mathbf{=Q}^{\ast }+\mathbf{\Delta Q} \end{equation}% \begin{eqnarray} \mathbf{Q}^{\ast } &=&\mathbf{W}_{x}+\mathbf{K}_{LQ}^{T}\mathbf{W}_{u}% \mathbf{K}_{LQ} \\ \mathbf{\Delta Q} &\mathbf{=}&\mathbf{\Delta W}_{x}+\mathbf{K}% _{LQ}^{T}\Delta \mathbf{W}_{u}\mathbf{K}_{LQ} \end{eqnarray}% and equation (\ref{CARE}) can be re-written as follows \begin{equation} \mathbf{A}_{K_{LQ}}^{T}\mathbf{P}_{LQ}+\mathbf{P}_{LQ}\mathbf{A}_{K_{LQ}}=-\left( \mathbf{Q}^{\ast }+\Delta \mathbf{Q}\right) \label{CLyap1} \end{equation}% Combining equation (\ref{Vder}) and equation (\ref{CLyap1}): \begin{equation} \frac{dV(\mathbf{x})}{dt}=-\mathbf{x}^{T}\mathbf{(\mathbf{Q}^{\ast }+\Delta \mathbf{Q})x}+2\mathbf{x}^{T}\mathbf{P}_{LQ}\mathbf{\Phi}_{K_{LQ}}(\mathbf{x}) \label{Teq} \end{equation}% Rearranging results in the following equation: \begin{equation} \frac{dV(\mathbf{x})}{dt}=-\mathbf{x}^{T}\mathbf{Q}^{\ast} \mathbf{x} + \left( -\mathbf{x}^{T} \Delta \mathbf{Q} \mathbf{x}+2\mathbf{x}^{T}\mathbf{P}_{LQ}\mathbf{\Phi}_{K_{LQ}}(\mathbf{x}) \right) \label{Teq2} \end{equation}% Equation (\ref{Teq2}) is identical to equation (\ref{Vdot3b}). The rest of the proof is similar to the proof of Lemma \ref{lemma1}. Both methods, i.e. the norm based method and the inequality based method, are applicable to the LQR based approach as well. \end{proof} Consider the feasibility lemma as follows: \begin{lemma} \label{lemma3} Let assumptions C1-C7 hold true. For the nominal continuous time system, feasibility of the continuous time QIH-NMPC problem (\ref{COptimal}) at time $t=0$ implies its feasibility for all $t > 0$. \end{lemma} \begin{proof} The proof is identical to the proof of Lemma 2 in \cite{Chen1998}. \end{proof} Consider the asymptotic stability result as follows: \begin{theorem} \label{theorem1} Let (a) assumptions C1-C7 hold true and (b) the continuous time NMPC problem be feasible at $t = 0$. Then the nominal nonlinear system (\ref{csystem}) controlled with the NMPC controller is asymptotically stable at the origin.
\end{theorem} \begin{proof} From equation (\ref{Vx}) in Lemma \ref{lemma1} or Lemma \ref{lemma2}, consider the Lyapunov candidate function \begin{equation} V(\mathbf{x}) = \mathbf{x}^{T} \mathbf{P} \mathbf{x} \label{Vx1} \end{equation}% Consider the following three properties \cite{Khalil2002}: \begin{itemize} \item $V(\mathbf{0}) = \mathbf{0}^{T} \mathbf{P}\, \mathbf{0} = 0$. \item Since $\mathbf{P}$ is a positive definite matrix, $V(\mathbf{x}) = \mathbf{x}^{T} \mathbf{P} \mathbf{x} > 0$ for all $\mathbf{x} \neq \mathbf{0}$. \item Using (\ref{Vdot7}) or (\ref{Vdot14}) together with $\mathbf{Q}^* > 0$ implies, for all $\mathbf{x} \neq \mathbf{0}$, \begin{align} \frac{dV(\mathbf{x})}{dt} \le -\mathbf{x}^{T} \mathbf{Q}^* \mathbf{x} < 0 \label{Vx2} \end{align}% \end{itemize} Thus, the candidate function $V(\mathbf{x})$ is a Lyapunov function for the nonlinear system for $\mathbf{x} \in \Omega$ under the NMPC controller. Hence, the closed loop system is asymptotically stable at the origin. \end{proof} Note that $\mathbf{K}$ is to be read as $\mathbf{K}_{LQ}$ for the linear gain matrix and $\mathbf{P}$ as $\mathbf{P}_{LQ}$ for the terminal penalty matrix in the subsequent sections whenever the LQR based approach is applied; the notation is simplified for readability. \section{Terminal Region Characterization} Lemma \ref{lemma1} and Lemma \ref{lemma2} give conditions for the explicit characterization of the terminal region. It is thus possible to numerically compute the terminal region and subsequently implement the QIH-NMPC controller. \subsection{Steps for the Characterization of the Terminal Region} The steps for the characterization of the terminal region using the arbitrary controller based approach are given below: \begin{description} \item[S1] Computation of the upper bound set: \\ Compute the largest value of $\gamma$ such that the input constraints are satisfied in the set $\Omega_\gamma$. \begin{equation} \Omega_\gamma \equiv \left\{ \mathbf{x} \in \mathbb{R}^{n_x} | \mathbf{x}^{T}% \mathbf{P} \mathbf{x} \leq \gamma, - \mathbf{Kx} \in \mathcal{U}% \right\} \label{Omegagamma1} \end{equation}% This can be formulated as a simple Quadratic Programming (QP) problem if the constraints are defined by upper and lower bounds on each input signal; a closed-form alternative is sketched after this list. Typically, the set $\Omega_\gamma$ is tangential to at least one of the input constraints. \item[S2a] Computation of the terminal region using the norm based method: \\ Compute the largest $\alpha \in (0, \gamma]$ such that \begin{align} L_\Phi \leq L_\Phi^* = \frac{\lambda_{min}(\Delta \mathbf{Q})}{2 |\mathbf{P}|} \label{TRCompute1} \end{align}% where \begin{align} L_\Phi = \begin{array}{c} \max \\ \mathbf{x} \in \Omega% \end{array}% \frac{|\mathbf{\Phi}_K (\mathbf{x})|}{|\mathbf{x}|} \label{TRCompute2} \end{align} This is identical to the method given by Rajhans et al. in \cite{Rajhans2016} for the arbitrary controller based approach. \item[S2b] Computation of the terminal region using the inequality based method: \\ Compute the largest $\alpha \in (0, \gamma]$ such that \begin{align} \left[ \begin{array}{c} \min \\ \mathbf{x} \in \Omega% \end{array}% \mathbf{\Psi}(\mathbf{x}) \right] = 0 \label{TRCompute3} \end{align} The condition given by (\ref{TRCompute3}) ensures that $\mathbf{\Psi}(\mathbf{x}) \geq 0$ for all $\mathbf{x} \in \Omega$, which is the condition required to further establish nominal asymptotic stability. \end{description} It may be noted that steps S1 and S2a result in a conservative terminal region, while steps S1 and S2b result in a larger terminal region.
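For box input constraints $\mathbf{u}_{lo} \leq \mathbf{u} \leq \mathbf{u}_{hi}$ with $\mathbf{u}_{lo} < \mathbf{0} < \mathbf{u}_{hi}$, step S1 admits a closed-form solution, since the extremum of $-\mathbf{k}_i^T \mathbf{x}$ over the ellipsoid $\{\mathbf{x} : \mathbf{x}^T \mathbf{P} \mathbf{x} \leq \gamma\}$ is $\pm\sqrt{\gamma\, \mathbf{k}_i^T \mathbf{P}^{-1} \mathbf{k}_i}$ for the $i$-th row $\mathbf{k}_i$ of $\mathbf{K}$. A minimal NumPy sketch (the function name is hypothetical):
\begin{verbatim}
import numpy as np

def largest_gamma(P, K, u_lo, u_hi):
    """Largest gamma such that -K x lies in the box [u_lo, u_hi]
    for all x with x^T P x <= gamma (assumes u_lo < 0 < u_hi)."""
    Pinv = np.linalg.inv(P)
    gam = np.inf
    for i in range(K.shape[0]):
        k = K[i]                          # i-th row of the gain matrix
        reach = k @ Pinv @ k              # (max |k^T x|)^2 equals gamma*reach
        bound = min(u_hi[i], -u_lo[i])    # tightest symmetric bound
        gam = min(gam, bound**2 / reach)
    return gam
\end{verbatim}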
Step S2b is implemented as follows: \\ Initially, $\alpha = \gamma$ and condition (\ref{Vdot13}), i.e. $\mathbf{\Psi}(\mathbf{x}) \geq 0$ for all $\mathbf{x} \in \Omega$, is checked. If (\ref{Vdot13}) holds, then $\alpha = \gamma$. If (\ref{Vdot13}) is violated, i.e. $\mathbf{\Psi}(\mathbf{x}) < 0$ for at least one $\mathbf{x} \in \Omega$, then the value of $\alpha$ is reduced by a multiplicative factor $\beta < 1$ (with $\beta \approx 1$). The process continues until condition (\ref{Vdot13}) is satisfied; a sketch of this search is given at the end of this subsection. The shape of the terminal region is governed by the computed $\mathbf{P}$ matrix and its size by the value of $\alpha$. In order to compare the sizes of the terminal regions, the area is computed for a state dimension of $2$ as \begin{align} A_2 = \frac{\pi \alpha}{\sqrt{det(\mathbf{P})}} \label{Area} \end{align} \section{CSTR Case Study} The effectiveness of the proposed approaches for the terminal region characterization and their applicability to continuous time NMPC simulations are demonstrated using the benchmark CSTR case study. \subsection{Choice of Tuning Matrices} According to the design of the arbitrary controller based approach, the gain matrix $\mathbf{K}$ can be any arbitrary stabilizing linear controller. However, in order to simplify the computations, the simulation results are presented with the following choice: the controller gain $\mathbf{K}$ is obtained from the steady state solution of the simultaneous equations (\ref{LQRP}) and (\ref{LQRK}). \begin{equation} \mathbf{A}^T \mathbf{P} + \mathbf{P} \mathbf{A} = - \mathbf{W}_x + \mathbf{P} \mathbf{B} \left( \mathbf{W}_u \right)^{-1} \mathbf{B}^T \mathbf{P} \label{LQRP} \end{equation}% \begin{equation} \mathbf{K} = \left( \mathbf{W}_{u}\right) ^{-1} \mathbf{B}^{T}\mathbf{P} \label{LQRK} \end{equation}% The tuning matrix $\Delta \mathbf{Q}$ can be any positive definite matrix. However, in order to simplify and structure the computation of the terminal region, the following parameterization is carried out: \begin{equation} \Delta \mathbf{Q} = \widetilde{\mathbf{W}}_{x} + \mathbf{K}^T \widetilde{\mathbf{W}}_{u} \mathbf{K} \label{ACDQ} \end{equation}% In order to further simplify the numerical computation of the terminal region, an additional parameterization is carried out as follows: \begin{equation} \widetilde{\mathbf{W}}_{x} = \rho_x \mathbf{W}_{x} \text{ and } \widetilde{\mathbf{W}}_{u} = \rho_u \mathbf{W}_{u} \label{TuningM} \end{equation} Note that $\widetilde{\mathbf{W}}_{x} > 0$ alone is sufficient to ensure $\Delta \mathbf{Q} > 0$ in (\ref{ACDQ}); nevertheless, choosing both $\widetilde{\mathbf{W}}_{x}$ and $\widetilde{\mathbf{W}}_{u}$ positive definite is usually preferred in practice. Using the matrices (\ref{TuningM}) in (\ref{ACDQ}), \begin{equation} \Delta \mathbf{Q} = \rho_x \mathbf{W}_{x} + \rho_u \mathbf{K}^T \mathbf{W}_{u} \mathbf{K} \label{ACDQ1} \end{equation} where $\rho_x > 0$ and $\rho_u > 0$ are the tuning scalars. Rajhans et al. presented the terminal region characterization with only a single tuning parameter $\rho_x > 0$ \cite{Rajhans2016}. However, in the current work, two parameters $\rho_x > 0$ and $\rho_u > 0$ are varied for obtaining the terminal region. Chen and Allg\"ower present both the norm based method and the inequality based method \cite{Chen1998}. It is reported that the inequality based method results in a larger terminal region when compared to the norm based method.
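Since the text does not prescribe how the condition $\mathbf{\Psi}(\mathbf{x}) \geq 0$ is verified over $\Omega$, the following sketch of the backtracking search in step S2b uses a Monte Carlo check over the ellipsoid purely for illustration; the function name, the sampling scheme and the default factor $\beta = 0.98$ are assumptions.
\begin{verbatim}
import numpy as np

def shrink_alpha(P, Psi, gamma, beta=0.98, n_samples=5000, seed=0):
    """Backtracking search for the largest alpha in (0, gamma] with
    Psi(x) >= 0 for all x in {x : x^T P x <= alpha} (sampled check)."""
    rng = np.random.default_rng(seed)
    nx = P.shape[0]
    L = np.linalg.cholesky(np.linalg.inv(P))  # maps unit ball to ellipsoid
    alpha = gamma
    while alpha > 1e-12:
        # sample points uniformly inside the ellipsoid x^T P x <= alpha
        v = rng.standard_normal((n_samples, nx))
        v /= np.linalg.norm(v, axis=1, keepdims=True)
        r = rng.uniform(size=(n_samples, 1)) ** (1.0 / nx)
        X = np.sqrt(alpha) * (v * r) @ L.T
        if all(Psi(x) >= 0 for x in X):
            return alpha                      # condition satisfied
        alpha *= beta                         # shrink and retry
    return alpha
\end{verbatim}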
The approach by Rajhans et al. in \cite{Rajhans2016} makes use of the norm based method, whereas the proposed approach in this work makes use of the inequality based method, which is inherently less conservative. The efficacy of having two tuning parameters is demonstrated using the case study in the next subsections. For the case study, Table \ref{CSTR_Iter} summarizes the steps in which the parameters are varied in order to obtain significantly larger terminal regions. For the approach by Chen and Allg\"ower \cite{Chen1998}, there is a single scalar tuning parameter $\kappa$. In the case of the arbitrary controller based approach, there are two iterations. In the first iteration, the tuning parameter $\rho_x$ is varied keeping $\rho_u$ constant. In the second iteration, $\rho_x$ is fixed at $\rho_x^*$, the value of $\rho_x$ resulting in the maximum terminal region area in the first iteration, and $\rho_u$ is varied. The arbitrary controller based approach with the single tuning parameter $\rho_x$ corresponds to the method given by Rajhans et al. in \cite{Rajhans2016}. \begin{table}[tbph] \caption{Terminal Region Computation Iteration Steps} \label{CSTR_Iter}\centering% \begin{tabular}{|l|c|c|c|c|c|} \hline $\text{Approach}$ & Iteration & $\begin{array}{c} \text{Tuning} \\ \text{parameters} \end{array}$ & $\begin{array}{c} \text{Constant} \\ \text{parameters} \end{array}$ & $\begin{array}{c} \text{Initial} \\ \text{value} \end{array}$ & $\begin{array}{c} \text{Increasing} \\ \text{parameter} \end{array}$ \\ \hline Chen and Allg\"ower's \cite{Chen1998} & 1 & $\kappa$ & $-$ & $\kappa = -0.95\, Re\left[ \lambda _{\max }\left( \mathbf{A-BK} \right) \right]$ & $\kappa$ \\ \hline Arbitrary controller based \cite{Rajhans2016} & 1 & $\rho_x$ & $\rho_{u} = 0$ & $\rho_x = 0.1$ & $\rho_x$ \\ \hline Arbitrary controller based & 2 & $\rho_x, \rho_u$ & $\rho_{x}^*$ & $\rho_u = 0.1$ & $\rho_u$ \\ \hline LQR based & 1 & $\rho_x$ & $\rho_{u} = 1$ & $\rho_x = 1.1$ & $\rho_x$ \\ \hline LQR based & 2 & $\rho_x, \rho_u$ & $\rho_{x}^*$ & $\rho_u = 1.1$ & $\rho_u$ \\ \hline \end{tabular}% \end{table} \subsection{CSTR System Details} Consider the Continuous Stirred Tank Reactor (CSTR) system originally given by Hicks and Ray \cite{Hicks1971} and later used by Huang et al. \cite{Huang2012}. The system dynamics equations are: \begin{align} \frac{dz_{c}}{dt}& =\frac{(1-z_{c})}{m_{2}}-k_{0}z_{c} e^{(-E_{a}/z_{T})} \label{cstrmodel1} \\ \frac{dz_{T}}{dt}& = \frac{(z_{T}^{f}-z_{T})}{m_{2}}% +k_{0}z_{c} e^{(-E_{a}/z_{T})}-\alpha _{0}m_{1}(z_{T}-z_{T}^{CW}) \label{cstrmodel2} \end{align}% where $z_{c}$ and $z_{T}$ represent the dimensionless concentration and the dimensionless temperature, respectively. The control inputs are the cooling water flow rate $m_{1}$ and the inverse of the dilution rate $m_{2}$. \subsection{Nominal Parameters and Linearization} Nominal values of the parameters are given in Table \ref{CSTR_parameters}. \begin{table}[tbph] \caption{CSTR System: Nominal Parameters} \label{CSTR_parameters}\centering% \begin{tabular}{|c|c|} \hline Variable & Nominal Value \\ \hline\hline $z_{T}^{CW}$ & $0.38$ \\ \hline $z_{T}^{f}$ & $0.395$ \\ \hline $E_{a}$ & $5$ \\ \hline $\alpha _{0}$ & $1.95\times 10^{-4}$ \\ \hline $k_{0}$ & $300$ \\ \hline \end{tabular}% \end{table} To improve the numerical stability of the optimization routine, the inputs $(m_{1},m_{2})$ appearing in the system dynamics are scaled as $u_{1}=m_{1}/600$ and $u_{2}=m_{2}/40$.
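The scaled CSTR dynamics (\ref{cstrmodel1})--(\ref{cstrmodel2}) can be sketched in Python as follows, with the parameter values taken from Table \ref{CSTR_parameters}; the helper name is illustrative.
\begin{verbatim}
import numpy as np

# Nominal parameters (Table: CSTR System Nominal Parameters)
ZT_CW, ZT_F, EA, ALPHA0, K0 = 0.38, 0.395, 5.0, 1.95e-4, 300.0

def cstr_rhs(z, u):
    """CSTR dynamics with scaled inputs u1 = m1/600, u2 = m2/40."""
    zc, zT = z
    m1, m2 = 600.0 * u[0], 40.0 * u[1]     # undo the input scaling
    rate = K0 * zc * np.exp(-EA / zT)      # reaction term
    dzc = (1.0 - zc) / m2 - rate
    dzT = (ZT_F - zT) / m2 + rate - ALPHA0 * m1 * (zT - ZT_CW)
    return np.array([dzc, dzT])
\end{verbatim}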
The operating point is given as \begin{equation} \pmb{X}_{s}=\left[\begin{array}{c} 0.6416 \\ 0.5387 \end{array} \right] \end{equation} \begin{equation} \pmb{U}% _{s}=\left[ \begin{array}{c} 0.5833 \\ 0.5000 \end{array} \right] \label{cstrdiscss} \end{equation}% The input constraints are given as follows: \begin{align} \mathcal{U}=\left\{u_{1},u_{2}\in \mathbb{R} | -0.4167\leq u_{1}\leq 0.4167, -0.4750\leq u_{2}\leq 0.5 \right\} \end{align}% Jacobian linearization of the continuous time nonlinear system at $(\mathbf{X}_{s},\mathbf{U}_{s})$ yields: \begin{equation} \mathbf{A}=% \begin{bmatrix} -0.0779 & -0.3088 \\ 0.0279 & 0.1905 \end{bmatrix}% ~\text{and } \mathbf{B}=% \begin{bmatrix} 0 & -0.0358 \\ -0.0184 & 0.0144 \end{bmatrix}% \end{equation}% The eigenvalues of the open loop continuous time dynamics are $(-0.0406, ~0.1532)$; the system is unstable since one eigenvalue has a positive real part. \subsection{NMPC Controller Design} The stage cost matrices for the NMPC formulation are given as follows: \begin{equation} \pmb{W}_{x}=\left[ \begin{array}{cc} 10 & 0 \\ 0 & 2% \end{array}% \right] \end{equation} \begin{equation} \pmb{W}_{u}=\left[ \begin{array}{cc} 1 & 0 \\ 0 & 0.5% \end{array}% \right] \end{equation}% Since the concentration of the mixture is more crucial than the temperature of the reactor, the weight on the first state (concentration) is chosen $5$ times larger than the weight on the second state (temperature). A sampling interval of $T=1~unit$ is used. \subsection{Comparison of the Terminal Regions for the CSTR System} The linear gain matrix and terminal penalty matrix obtained using Chen and Allg\"ower's approach \cite{Chen1998} ($\kappa = 0.1059$) are given as follows: \begin{align} \mathbf{K}_{CA}=% \begin{bmatrix} -1.6118 & -10.7187 \\ -2.1094 & 10.5029 \end{bmatrix}% ,~ \mathbf{P}_{CA}=10^{3}\times \begin{bmatrix} 8.4569 & 5.8384 \\ 5.8384 & 4.8968 \end{bmatrix} \end{align} The linear gain matrix and terminal penalty matrix obtained using the arbitrary controller based approach ($\rho _{x}=50, \rho _{u}=20$) are given as follows: \begin{align} \mathbf{K}=% \begin{bmatrix} -1.6118 & -10.7187 \\ -2.1094 & 10.5029 \end{bmatrix}% ,~ \mathbf{P}=10^{4}\times \begin{bmatrix} 0.3492 & 0.3406 \\ 0.3406 & 1.2265 \end{bmatrix} \label{Kgain6} \end{align} The linear gain matrix and terminal penalty matrix obtained using the LQR based approach ($\rho _{x}=50,\rho _{u}=1500$) are given as follows: \begin{align} \mathbf{K}_{LQ}=% \begin{bmatrix} -1.2963 & -10.4475 \\ 1.1335 & 11.3084 \end{bmatrix}% ,~ \mathbf{P}_{LQ}=10^{5}\times \begin{bmatrix} 0.1877 & 1.0578 \\ 1.0578 & 8.5254 \end{bmatrix} \label{Kgain5} \end{align}% Table \ref{CSTR_TR_CA_AC_LQR} compares the areas of the largest terminal regions obtained using Chen and Allg\"ower's approach \cite{Chen1998} (CA), the arbitrary controller (AC) based approach and the LQR based approach (LQ). It can be observed that the terminal region obtained using the arbitrary controller based approach is approximately 45 times larger in area than the terminal region obtained using the approach by Chen and Allg\"ower \cite{Chen1998}. Additionally, the terminal region obtained using the LQR based approach is approximately 412 times and 9 times larger in area than the terminal regions obtained using Chen and Allg\"ower's approach \cite{Chen1998} and the arbitrary controller based approach, respectively.
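Since the coupled equations (\ref{CARE})--(\ref{Kgain}) are equivalent to a standard continuous time algebraic Riccati equation, the LQR based ingredients above can be reproduced with a short script, sketched below under the parameterization (\ref{TuningM}); the function and variable names are illustrative, and the matrices $\mathbf{A}$, $\mathbf{B}$, $\mathbf{W}_x$, $\mathbf{W}_u$ are taken from the text.
\begin{verbatim}
import numpy as np
from scipy.linalg import solve_continuous_are

def lqr_terminal_ingredients(A, B, Wx, Wu, rho_x, rho_u):
    """LQR based approach: solve the Riccati equation with the
    inflated weights Wx_t = rho_x*Wx and Wu_t = rho_u*Wu."""
    Wx_t, Wu_t = rho_x * Wx, rho_u * Wu
    P_lq = solve_continuous_are(A, B, Wx_t, Wu_t)
    K_lq = np.linalg.solve(Wu_t, B.T @ P_lq)   # K_LQ = Wu_t^{-1} B^T P_LQ
    return K_lq, P_lq

A = np.array([[-0.0779, -0.3088], [0.0279, 0.1905]])
B = np.array([[0.0, -0.0358], [-0.0184, 0.0144]])
Wx = np.diag([10.0, 2.0]); Wu = np.diag([1.0, 0.5])
K_lq, P_lq = lqr_terminal_ingredients(A, B, Wx, Wu, rho_x=50.0, rho_u=1500.0)
\end{verbatim}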
It can be observed that the arbitrary controller based approach with two tuning parameters $\rho_x, \rho_u$ results in an approximately $4.1$-fold increase in the area of the terminal region when compared to the arbitrary controller based approach with the single tuning parameter $\rho_x$ given in \cite{Rajhans2016}. \begin{table}[tbph] \caption{CSTR system: comparison of maximum terminal regions } \label{CSTR_TR_CA_AC_LQR}\centering% \begin{tabular}{|l|c|c|c|c|} \hline $\text{Approach}$ & $% \text{Degrees of freedom} $ & $\gamma$ & $\alpha $ & $\text{Area of }\Omega $ \\ \hline Chen and Allg\"ower's \cite{Chen1998} & $\kappa = 0.1059$ & $1.3620$ & $0.1282$ & $% 1.4880 \times 10^{-4}$ \\ \hline Arbitrary controller based \cite{Rajhans2016} & $\rho _{x}=50, \rho _{u}=0$ & $1.3563$ & $0.6467$ & $0.0016$ \\ \hline Arbitrary controller based & $\rho _{x}^*=50, \rho _{u}=20$ & $11.9270$ & $11.9270$ & $0.0067$ \\ \hline LQR based & $\rho _{x}=50,\rho _{u}=1$ & $0.0940$ & $0.0435$ & $2.225 \times 10^{-4}$ \\ \hline LQR based & $\rho _{x}^*=50,\rho _{u}=1500$ & $1.3560 \times 10^{3}$ & $1.3560 \times 10^{3}$ & 0.0614 \\ \hline \end{tabular}% \end{table} \section{NMPC Demonstration Results} In order to demonstrate the efficacy of the larger terminal regions on the NMPC formulation, continuous time simulations are carried out using the largest terminal region, i.e. the one obtained using the novel LQR based approach with two tuning parameters $\rho_x, \rho_u$. Three initial conditions, given in the deviation variables and chosen along different directions to confirm that the results are not coincidental, are as follows: \begin{equation} \mathbf{x}% _{P_{1}}(0)= \left[ \begin{array}{c} -0.001 \\ -0.050 \end{array} \right] \text{,~} \mathbf{x}% _{P_{2}}(0)= \left[ \begin{array}{c} -0.625 \\ 0.380 \end{array} \right] \text{,~} \mathbf{x}% _{P_{3}}(0)= \left[ \begin{array}{c} 0.400 \\ 0.230 \end{array} \right] \label{Hicks2S_IC1} \end{equation} Note that, in terms of the actual variables, the initial conditions for the system become \begin{equation} \mathbf{X}_{Pi}(0) = \mathbf{X}_s + \mathbf{x}_{Pi}(0) \text{ for } i = 1, 2, 3 \label{Hicks2S_IC2} \end{equation} Figure \ref{Hicks2S_Cont_NMPC_result1_actual_states} displays the plot of the states in the actual variables for the NMPC simulation. It can be observed that all the states converge to the steady state operating point. \begin{figure}[!ht] \centerline{\includegraphics[width=\columnwidth]{Hicks2S_Cont_NMPC_result1_actual_states.eps}} \caption{CSTR System: Plot of states in actual variables} \label{Hicks2S_Cont_NMPC_result1_actual_states} \end{figure} Figure \ref{Hicks2S_Cont_NMPC_result1_deviation_states} shows the trajectories of the states in the deviation variables for the NMPC simulation. It can be observed that all the states converge to the origin. \begin{figure}[!ht] \centerline{\includegraphics[width=\columnwidth]{Hicks2S_Cont_NMPC_result1_deviation_states.eps}} \caption{CSTR System: Plot of states in deviation variables} \label{Hicks2S_Cont_NMPC_result1_deviation_states} \end{figure} Figure \ref{Hicks2S_Cont_NMPC_result1_inputs} shows the plot of the scaled control inputs. It can be seen that both control inputs remain inside the limits, indicating feasibility. Both control inputs converge to their steady state values after sufficient time has elapsed.
\begin{figure}[!ht] \centerline{\includegraphics[width=\columnwidth]{Hicks2S_Cont_NMPC_result1_inputs.eps}} \caption{CSTR System: Plot of control inputs} \label{Hicks2S_Cont_NMPC_result1_inputs} \end{figure} Figure \ref{Hicks2S_Cont_NMPC_result1_xPx} depicts the value of $\log_{10} \left[\mathbf{x}(t)^T \mathbf{P} \mathbf{x}(t) \right]$ along with the limit $\log_{10} \alpha$, which represents the terminal set boundary. Initially, the values are larger than $\log_{10}\alpha$, which indicates that the initial condition is outside the terminal region. Subsequently, the value (on the log scale) keeps decreasing, indicating that the states converge to the origin, i.e. $\mathbf{x}(t) \to \mathbf{0}$ as $t \to \infty$. A logarithmic scale is used because the values span several orders of magnitude. \begin{figure}[!ht] \centerline{\includegraphics[width=\columnwidth]{Hicks2S_Cont_NMPC_result1_xPx.eps}} \caption{CSTR System: Evolution of $\mathbf{x}(t)^T \mathbf{P} \mathbf{x}(t)$ on a logarithmic scale} \label{Hicks2S_Cont_NMPC_result1_xPx} \end{figure} Figure \ref{Hicks2S_Cont_NMPC_result1_TCV} shows the terminal constraint value, i.e. the value of $\mathbf{z}(t+T_p)^{T}\mathbf{P}\mathbf{z}(t+T_p)$, along with its limit $\alpha$. The value of $\alpha$ corresponds to the terminal region boundary. The value always remains below $\alpha$, indicating that the predicted state at the end of the horizon, i.e. $\mathbf{z}(t+T_p)$, is always inside the terminal region, i.e. the terminal inequality constraint is satisfied at every time instant. \begin{figure}[!ht] \centerline{\includegraphics[width=\columnwidth]{Hicks2S_Cont_NMPC_result1_TCV.eps}} \caption{CSTR System: Terminal constraint value $\mathbf{z}(t+T_p)^T \mathbf{P} \mathbf{z}(t+T_p)$} \label{Hicks2S_Cont_NMPC_result1_TCV} \end{figure} Figure \ref{Hicks2S_Cont_NMPC_result1_xx_log} depicts the value of $\log_{10} \left[\mathbf{x}(t)^T \mathbf{x}(t) \right] = \log_{10} |\mathbf{x}(t)|^2$. For the trajectory starting from initial condition $P_3$, the value increases slightly at $t=4$, which shows that the Euclidean norm of the state need not decrease monotonically and motivates the use of a weighted Lyapunov function. However, it can be noted that during the entire trajectory the value of the Lyapunov function $\mathbf{x}(t)^T \mathbf{P} \mathbf{x}(t)$, as shown in Figure \ref{Hicks2S_Cont_NMPC_result1_xPx}, is monotonically decreasing. This effectively illustrates the role of the matrix $\mathbf{P}$ in the Lyapunov function. \begin{figure}[!ht] \centerline{\includegraphics[width=\columnwidth]{Hicks2S_Cont_NMPC_result1_xx_log.eps}} \caption{CSTR System: Evolution of $|\mathbf{x}(t)|^2$ on a logarithmic scale} \label{Hicks2S_Cont_NMPC_result1_xx_log} \end{figure} Table \ref{Hicks2S_N} presents the approximate minimum prediction horizon time required for the NMPC formulation to be feasible for the chosen initial conditions. It can be noticed that there is a significant reduction in the minimum prediction horizon time, primarily because the terminal regions obtained using the arbitrary controller based approach and the LQR based approach are larger than the one obtained using the literature approach. It is well established that the computation time required for the NMPC optimization to converge reduces exponentially as the prediction horizon time is reduced \cite{Rawlings2017}. Hence, the efficacy of the proposed approaches in significantly reducing the prediction horizon time is effectively demonstrated using the CSTR case study. Since the states and inputs in the CSTR case study are converted to dimensionless quantities by scaling, the time variable is also scaled.
Hence, it would not be legitimate to directly compare the NMPC optimization convergence loop time with the sampling time for this case. However, it is observed that the time taken for the NMPC optimization to converge using the literature approach is significantly larger than the time taken in the case of the proposed approaches, which is primarily due to the significantly smaller prediction and control horizon time requirements. \begin{table}[htbp] \caption{Minimum prediction horizon time required for feasibility} \label{Hicks2S_N} \begin{center} \begin{tabular}{|l||c|c|c|} \hline Approach $\downarrow$ / Point $\rightarrow$ & $P_1$ & $P_2$ & $P_3$ \\ \hline\hline Chen and Allg\"ower's approach ($\kappa$) \cite{Chen1998} & 15 & 5 & 28 \\ \hline Arbitrary controller based approach ($\rho_x, \rho_u$) & 6 & 3 & 11 \\ \hline LQR based approach ($\rho_x, \rho_u$) & 4 & 3 & 3 \\ \hline \end{tabular}% \end{center} \end{table} \section{Conclusions} The approaches available in the literature for the terminal region characterization for continuous time NMPC formulations provide limited degrees of freedom and often result in a conservative terminal region, and thereby in a conservative region of attraction; the larger the terminal region, the larger the region of attraction. An arbitrary stabilizing controller based approach and a novel LQR based approach, which provide a large number of degrees of freedom for shaping the terminal region for continuous time systems, are presented in this work. The terminal penalty term is computed using the modified Lyapunov equation and subsequently the nominal asymptotic stability of continuous time NMPC with the updated terminal ingredients is established. The proposed approaches provide the linear controller gain and two additive matrices as tuning parameters for the enlargement of the terminal region and also make use of the inequality based method. The efficacy of both terminal region characterization approaches is demonstrated using the benchmark CSTR system. It is observed that the terminal region areas obtained using the arbitrary controller based approach and the novel LQR based approach are approximately 45 and 412 times larger, respectively, than the largest terminal region obtained using Chen and Allg\"ower's inequality based approach from \cite{Chen1998}. The continuous time NMPC simulations validate the asymptotic stability property of the designed controller. It is observed that the minimum prediction horizon required for the feasibility of the NMPC formulation using the proposed approaches is significantly smaller than the one required using the literature approach. In the simulations, for simplicity, the tuning parameter matrices are chosen to be multiples of the stage weighting matrices. Future research would involve choosing completely arbitrary tuning matrices for shaping the terminal regions. In addition, choosing a control horizon time smaller than the prediction horizon time and establishing asymptotic stability is another research direction to explore.
\section{Introduction} Generic object tracking, aiming to infer the location and scale of an arbitrary object in a video sequence, is one of the fundamental problems in computer vision \cite{intro1,intro2,SURVEY,intro3}. The recent prevailing Siamese methods \cite{siamFC, SiamBAN, SiamCAR, siamRPNpp, SiamFC++, SiamAtt, Ocean} decompose the tracking problem into a \emph{relation learning} task and a \emph{state estimation} task. In the former, the goal is to measure the similarity between exemplar and candidate (search) images. The latter, normally comprising foreground classification and scale regression \cite{ATOM, siamRPNpp, Ocean}, then estimates the target state. \begin{figure} \begin{center} \includegraphics[width=1\linewidth]{figs/vis2.pdf} \end{center} \vspace{-1em} \caption{Comparisons of our approach with the depthwise cross-correlation based trackers SiamRPN++ \cite{siamRPNpp} and Ocean \cite{Ocean}. Our model, employing the automatically searched matching networks, can better handle different challenging factors, \emph{e.g.,} the distractor in the first video, the occlusion and scale change in the second one, and the background clutter and fast motion in the third sequence.} \label{fig:vis} \end{figure} Fuelled by the emergence of object detection methods that facilitate bounding box regression, the network design for state estimation has substantially advanced in recent years \cite{SiamBAN, ATOM, siamRPN, SiamFC++, Ocean}. However, the advancements in relation learning have been limited. Previous works generally perform relation learning with heuristically designed matching operators. Concretely, the seminal work SiamFC \cite{siamFC} employs cross-correlation to model the relation between exemplar and candidate images. The follow-ups propose up-channel cross-correlation \cite{siamRPN} and depthwise cross-correlation \cite{siamRPNpp} to learn fine-grained feature similarities. Despite their great success, it is important to note that the heuristic matching network design requires substantial effort from human experts, and it is extremely difficult to guarantee robustness in all challenging environments, as experimentally verified in Fig.~\ref{fig:vis} and Tab.~\ref{tab:operators}. One straightforward solution is to find the optimal matching operator for each circumstance, which is however obviously tedious and impractical. Hence, it is natural to raise a question: \emph{can we search for a general matching network for Siamese tracking?} In this work, we show the answer is affirmative by proposing a search algorithm for automatic matching network design. Instead of adopting the conventional cross-correlation and its variants, we explore more feasible choices of matching operators. Specifically, besides cross-correlation, we introduce six novel matching operators to Siamese tracking, namely Concatenation, Pointwise-Addition, Pairwise-Relation, FiLM, Simple-Transformer and Transductive-Guidance. We shed light on the intrinsic differences of these operators by comparing their performances under different environment degradation types. Surprisingly, by simply replacing the cross-correlation with concatenation, the strong baseline tracker Ocean \cite{Ocean} achieves 1.2 points of gain on the success score of OTB100 \cite{OTB-2015} (see Tab.~\ref{tab:operators}). Moreover, we observe that the matching operators show different resilience to various challenging factors and image contents. This inspires us to combine them to exploit complementary informative features.
To this end, we propose a search algorithm, namely Binary Channel Manipulation (BCM), to automatically select and combine matching operators. Firstly, we construct a search space with the aforementioned seven operators. The exemplar and candidate images pass through all matching operators to generate the corresponding response maps. For each response channel, we assign a learnable manipulator to indicate its contribution to the subsequent tracking steps. Gumbel-Softmax \cite{gumbel-softmax} is applied to discretize the manipulators into binary decisions, while guaranteeing differentiable training. Then, we aggregate the manipulators of all channels to identify each operator's potential for adapting to the baseline tracker. Our search algorithm aims to find matching networks with better generalization to different tracking environments. Thus, the performance on the validation set is treated as the reward or fitness. Concretely, we solve the search problem using bilevel optimization, which finds the optimal manipulators on the validation set with the weights of the other layers (\emph{e.g.,} convolution kernels) learned on the training data. Notably, we simultaneously predict matching networks for both the classification and regression branches in state estimation. \textbf{The different search results for classification and regression demonstrate that our method is capable of finding task-dependent matching networks.} Finally, we integrate the learned matching networks into the baseline tracker \cite{Ocean} and train it following the standard Siamese procedure. The effectiveness of the proposed framework is verified on OTB100 \cite{OTB-2015}, LaSOT \cite{LASOT}, GOT10K \cite{GOT10K}, TrackingNet \cite{TrackingNet} and TNL2K \cite{TNL2K}. Our approach surpasses the baseline tracker \cite{Ocean} on all five benchmarks. It is worth noting that the proposed tracker also outperforms the recent online updating methods DiMP \cite{DiMP} and KYS \cite{KYS} on all criteria of the evaluated datasets. The main contributions of this work are twofold. \begin{itemize}[leftmargin=0.55cm] \item{ We introduce six novel matching operators for Siamese tracking. A systematic analysis reveals that the commonly-used (depthwise) cross-correlation is not a requisite, and an appropriate matching operator can bring remarkable performance gains. } \item{ A conceptually simple algorithm, namely Binary Channel Manipulation (BCM), is proposed for automatic matching network design with the introduced operators. By integrating the learned matching networks into the baseline tracker, it achieves remarkable performance gains with negligible overhead on tracking speed. } \end{itemize} \section{Related Work} In this section, we review related work on matching based tracking, and briefly describe the recent thriving Siamese trackers, to which the baseline tracker belongs. \subsection{Tracking via Heuristic Matching} \label{SEC2-1} In the context of visual tracking, predicting the foreground probability is usually cast as a one-shot matching problem. SINT \cite{SINT} proposes to learn a matching function to identify candidate image locations that match the initial object appearance. The matching function is simply defined as the \emph{dot product} operation. Held et al. introduce GOTURN \cite{GOTURN}, which predicts the target location by directly regressing the \emph{concatenation} of the exemplar and candidate image features.
GlobalTrack \cite{GlobalTrack} and ATOM \cite{ATOM} inject the target information into the region proposal network by applying the \emph{hadamard product} to exemplar and candidate embeddings. Recent prominent Siamese trackers \cite{siamFC, siamRPN, siamRPNpp} achieve groundbreaking results on all benchmarks, which is mostly attributed to the effective \emph{cross-correlation} module and its variants. We observe that when choosing matching functions for a tracking method, expertise and massive experiments are inevitably required. Moreover, a heuristically designed matching network may not be an optimal architecture. In this work, we propose a differentiable search algorithm to automatically determine which matching functions to use and how to combine them in visual tracking. Since the proposed search algorithm is applied to the Siamese framework, in the following, we briefly retrospect the development of Siamese tracking. \begin{figure*}[!t] \vspace{-0.5em} \begin{center} \includegraphics[width=1\linewidth]{figs/matchingFuns.pdf} \end{center} \vspace{-2em} \caption{Matching operators: (a) Concatenation (b) Pointwise-Addition (c) Pairwise-Relation (d) FiLM (see Sec.~\ref{sec:Operators}).} \vspace{-0.5em} \label{fig:operators1} \end{figure*} \subsection{Siamese Tracking} \label{SEC2-2} Siamese tracking has drawn attention because of its balanced accuracy and speed. The pioneering work among Siamese trackers, \emph{i.e.}, SiamFC \cite{siamFC}, introduces the \emph{cross-correlation} layer as a similarity metric for target matching, which significantly boosts tracking efficiency. SiamRPN \cite{siamRPN} ensues to improve SiamFC by advocating a region proposal network for scale estimation. The follow-up works unleash the capability of deeper backbone networks in Siamese tracking by alleviating position bias \cite{siamRPNpp} and perceptual inconsistency \cite{SiamDW}. The estimation network has recently evolved from anchor-based to anchor-free mechanisms \cite{SiamBAN,SiamCAR,Ocean,SiamFC++}. Whilst deeper backbones and advanced estimation networks significantly enhance the transferability of tracking models, the feasibility of matching network design remains less investigated. In this work, we narrow this gap by introducing new matching operators and searching for their optimal combination for Siamese tracking. \section{Analysis of Matching Operators} \subsection{Instantiations} \label{sec:Operators} The standard Siamese tracker takes an exemplar image $\bm{z}$ and a candidate image $\bm{x}$ as input. The image $\bm{z}$ represents the object of interest in the first frame, while $\bm{x}$ is typically larger and represents the search area in subsequent video frames. The two images are first fed into a shared backbone network to generate two corresponding feature maps $\bm{F}_{z} \in \mathbb{R}^{H_z \times W_z \times C}$ and $\bm{F}_{x} \in \mathbb{R}^{H_x \times W_x \times C}$. Then a matching network $\varphi$ is applied to inject the information of the exemplar $\bm{F}_{z}$ into $\bm{F}_{x}$, which outputs a correlation feature $\bm{R}$, \begin{equation} \bm{R} = \varphi(\bm{F}_{z}, \bm{F}_{x}). \label{eq:matching} \end{equation} Recent top-ranked Siamese trackers define $\varphi$ as \emph{depthwise cross-correlation} \cite{siamRPNpp, SiamAtt, SiamBAN, SiamCAR, SiamFC++, Ocean}. Notably, when the spatial size of $\bm{F}_z$ is $1\times 1$ ($\bm{f}_z$), the depthwise cross-correlation resembles the hadamard product \cite{GlobalTrack}.
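For concreteness, depthwise cross-correlation can be sketched in PyTorch as below; this is a minimal illustration of the operator, not the released implementation of any of the cited trackers.
\begin{verbatim}
import torch
import torch.nn.functional as F

def depthwise_xcorr(Fz: torch.Tensor, Fx: torch.Tensor) -> torch.Tensor:
    """Slide the exemplar feature Fz over the candidate feature Fx,
    channel by channel.
    Fz: (B, C, Hz, Wz), Fx: (B, C, Hx, Wx) -> (B, C, Hx-Hz+1, Wx-Wz+1)."""
    B, C, Hz, Wz = Fz.shape
    # fold the batch into the channel dimension, use grouped convolution
    x = Fx.reshape(1, B * C, *Fx.shape[2:])
    kernel = Fz.reshape(B * C, 1, Hz, Wz)
    R = F.conv2d(x, kernel, groups=B * C)
    return R.reshape(B, C, *R.shape[2:])
\end{verbatim}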
Besides depthwise cross-correlation, in this work we explore other matching operators, namely \emph{Concatenation}, \emph{Pointwise-Addition}, \emph{Pairwise-Relation}, \emph{FiLM}, \emph{Simple-Transformer} and \emph{Transductive-Guidance}. The concatenation operator has been exploited in previous work \cite{GOTURN}, while the others have not, to the best of our knowledge. We detail each of them in the following. \textbf{Concatenation} is used by the pairwise function in Relation Networks \cite{RelationNetwork} for visual reasoning. We also explore a concatenation form of $\varphi$, as shown in Fig.~\ref{fig:operators1} (a): \begin{equation} \bm{R} = \operatorname{Conv}([\bm{f}_{z}, \bm{F}_{x}]), \label{eq:concatanation} \end{equation} where $\bm{f}_{z} \in \mathbb{R}^{1 \times 1 \times C}$ is the feature pooled from $\bm{F}_{z}$ (inside the bounding box) and broadcast over the spatial dimensions of $\bm{F}_x$, $[\cdot, \cdot]$ denotes concatenation, and $\operatorname{Conv}$ is a $1\times 1$ convolution layer with output channel number $C$. \textbf{Pointwise-Addition} is similar to the hadamard product, but replaces ``multiplication'' with ``addition'' (see Fig.~\ref{fig:operators1} (b)): \begin{equation} \bm{R} = \bm{f}_{z} + \bm{F}_{x}, \label{eq:addition} \end{equation} where $+$ denotes elementwise addition. \input{tables/operators} \textbf{Pairwise-Relation} is widely used in video object segmentation \cite{TVOS}. It is a variant of non-local attention \cite{NonLocal}, and is defined as, \begin{equation} \bm{R} = \operatorname{matmul}(S(\bm{F}_x), S(\bm{F}_z)), \label{eq:pairwise-relation} \end{equation} where $S$ reshapes $\bm{F}_x$ and $\bm{F}_z$ to the sizes of $H_x W_x \times C$ and $C \times H_z W_z$, respectively (see Fig.~\ref{fig:operators1} (c)). Here, $\operatorname{matmul}$ denotes matrix multiplication. The pairwise-relation measures the affinity of each cell in the candidate feature to all cells in the exemplar feature. \textbf{FiLM} is firstly introduced in visual reasoning \cite{FiLM}. It learns to adaptively influence the output of a neural network by applying an affine transformation to the network's ``intermediate features'', based on some ``input''. For visual tracking, we consider the exemplar feature $\bm{f}_z$ as the ``input'', and the candidate feature $\bm{F}_x$ as the ``intermediate features''. More formally, \begin{equation} \begin{split} \gamma = \operatorname{Conv}(\bm{f_z}), \\ \beta = \operatorname{Conv}(\bm{f_z}), \\ \bm{R} = \gamma \bm{F}_x + \beta, \end{split} \label{eq:FiLM} \end{equation} where the coefficient $\gamma$ and bias $\beta$ are two tensors of size $1 \times 1 \times C$, as shown in Fig.~\ref{fig:operators1} (d). \begin{figure}[!t] \begin{center} \includegraphics[width=1\linewidth]{figs/matchingFuns2.pdf} \end{center} \vspace{-1.5em} \caption{Matching operators: (a) Simple-Transformer (b) Transductive-Guidance. Details are described in Sec.~\ref{sec:Operators}.} \vspace{-0.5em} \label{fig:operators2} \end{figure} \textbf{Simple-Transformer} is motivated by the recent booming vision transformers \cite{ViTsurvey}, \begin{equation} \bm{R} = \operatorname{Att}(query, key, value), \label{eq:STrans} \end{equation} where $query=\operatorname{Conv}(\bm{F}_x), key=\operatorname{Conv}(\bm{F}_z), value=\operatorname{Conv}(\bm{F}_z)$. $\operatorname{Att}$ is a multi-head attention layer as in vision transformers \cite{ViTsurvey}, implemented by ``nn.MultiheadAttention'' in PyTorch \cite{PYTORCH}. More details are presented in Fig.~\ref{fig:operators2} (a).
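A few of the operators above admit very short PyTorch sketches, given an exemplar feature already pooled to $\bm{f}_z$ of shape $(B, C, 1, 1)$; the module names are illustrative and the sketches are meant only to make the definitions concrete.
\begin{verbatim}
import torch
import torch.nn as nn

class ConcatMatch(nn.Module):
    """Concatenation operator: Conv1x1 over [f_z, F_x]."""
    def __init__(self, C: int):
        super().__init__()
        self.conv = nn.Conv2d(2 * C, C, kernel_size=1)

    def forward(self, fz, Fx):                 # fz: (B,C,1,1), Fx: (B,C,H,W)
        fz = fz.expand(-1, -1, *Fx.shape[2:])  # broadcast fz over space
        return self.conv(torch.cat([fz, Fx], dim=1))

def pointwise_add(fz, Fx):
    """Pointwise-Addition operator: R = f_z + F_x (broadcast over H, W)."""
    return fz + Fx

class FiLMMatch(nn.Module):
    """FiLM operator: R = gamma * F_x + beta, with gamma, beta from f_z."""
    def __init__(self, C: int):
        super().__init__()
        self.gamma = nn.Conv2d(C, C, kernel_size=1)
        self.beta = nn.Conv2d(C, C, kernel_size=1)

    def forward(self, fz, Fx):
        return self.gamma(fz) * Fx + self.beta(fz)
\end{verbatim}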
\textbf{Transductive-Guidance} originates from the mask propagation mechanism in video object segmentation \cite{TVOS, OceanPlus}, where the segmentation masks of previous frames guide the prediction of the current frame. In our work, we specifically modify it for Siamese tracking. First, the affinity between the exemplar and candidate features is predicted by, \begin{equation} \bm{A} = \operatorname{matmul}(S(\bm{F}_x), S(\bm{F}_z)). \\ \end{equation} This step is the same as the computation of the pairwise-relation. With the affinity, the spatial guidance is learned by propagating the pseudo mask of the first frame, \begin{equation} \bm{G} = \operatorname{matmul}(\bm{A}, S(\bm{M}_z)), \end{equation} where $\bm{M}_z$ is the pseudo mask of the first frame; specifically, the pixels inside and outside the bounding box are set to 1 and 0, respectively, as shown in Fig.~\ref{fig:operators2} (b). $\bm{G}$ serves as the spatial guidance for target localization, in which each pixel indicates the foreground probability of a location. Then the spatial guidance is fused with the visual feature by, \begin{equation} \bm{R} = \bm{G} + \bm{F}_x. \end{equation} \begin{figure}[!t] \begin{center} \includegraphics[width=1\linewidth]{figs/OpsActivation.pdf} \end{center} \vspace{-1em} \caption{Activation maps of different matching operators. (a) Depthwise Cross-correlation (b) Concatenation (c) Pointwise-Addition (d) Pairwise-Relation (e) FiLM (f) Simple-Transformer (g) Transductive-Guidance.} \vspace{-1em} \label{fig:quantitative} \end{figure} \begin{figure*}[!t] \begin{center} \hspace{-0.5em} \includegraphics[width=1\linewidth]{figs/framework.pdf} \end{center} \vspace{-1.5em} \caption{Overview of the proposed framework AutoMatch. The \textcolor[RGB]{146,205,220}{matching operators} in the search space explore the relation between exemplar and candidate features. The \textcolor[RGB]{192,0,0}{crosses} and dashed arrows indicate the operators discarded after searching with binary channel manipulation. The operators linked with the \textcolor{green}{green arrows} construct the searched matching network. The search algorithm is applied to both classification and regression; only one of them is illustrated here for simplicity.} \label{fig:framework} \end{figure*} \subsection{Analysis} \label{sec:CC} In Sec.~\ref{sec:Operators}, we introduced six novel matching operators for Siamese tracking, besides the conventional depthwise cross-correlation. It is natural to ask: \emph{How do these new operators perform, and can the conventional depthwise cross-correlation be replaced by the proposed operators?} We answer these questions in this section. \textbf{Performance of Individual Operators.} To investigate the impact of each operator on Siamese tracking, we apply them to the recent tracker Ocean \cite{Ocean} and evaluate the performance on OTB100 \cite{OTB-2015}. As shown in Tab.~\ref{tab:operators}, the vanilla Ocean \cite{Ocean} with depthwise cross-correlation (\ding{172}) achieves an overall success of 67.2. When replacing the depthwise cross-correlation with Simple-Transformer (\ding{177}) or Transductive-Guidance (\ding{178}), the overall score drops to 65.8 and 65.0, respectively. The performance degradation illustrates that randomly choosing a matching operator may have a negative impact on a tracking framework. But surprisingly, the results of all four other operators (\ding{173}$\sim$\ding{176}) are favorably comparable to or even better than depthwise cross-correlation.
The comparisons suggest that the classical depthwise cross-correlation is not the optimal choice for Siamese tracking, and that an appropriate matching operator can lead to better tracking accuracy. \textbf{Potential of Complementarity.} Although one well-designed matching operator may surpass the classical depthwise cross-correlation under certain circumstances, the improvements cannot be guaranteed for all challenging cases. As shown in Tab.~\ref{tab:operators}, although the concatenation operator (\ding{173}) exhibits superiority on most challenging factors, it is inferior to Transductive-Guidance (\ding{178}) on Scale Variation (SV), Pairwise-Relation (\ding{175}) on Out-of-Plane Rotation (OPR), Depthwise Cross-correlation (\ding{172}) on Out-of-View (OV) and Pointwise-Addition (\ding{174}) on Low Resolution (LR). We further visualize the activation maps of the matching outputs in Fig.~\ref{fig:quantitative}. It shows that the depthwise cross-correlation (a), Pairwise-Relation (d), and Transductive-Guidance (g) tend to filter out the context features and focus on the target itself. Conversely, the concatenation (b), Pointwise-Addition (c), FiLM (e), and Simple-Transformer (f) exploit more context information. The possible reason is that the hard negative examples introduced by the context help prevent overfitting to the easy background. In a nutshell, the quantitative comparison in Tab.~\ref{tab:operators} and the qualitative analysis in Fig.~\ref{fig:quantitative} demonstrate that different matching operators show different resilience to various challenging factors and image contents. This inspires us to combine them to exploit complementary informative features. Instead of searching for the best matching operators under various circumstances, which is obviously impractical, we propose an automatic method that can adaptively learn to choose and combine the matching functions. \section{Methodology} \label{sec:method} \subsection{Overview of AutoMatch} \label{sec:method-overview} The proposed framework AutoMatch is illustrated in Fig.~\ref{fig:framework}. A typical Siamese tracking framework contains three main steps, \emph{i.e.}, feature extraction, matching, and target localization. Given an exemplar image $\bm{z}$ and a candidate image $\bm{x}$, a backbone network is first applied to extract the visual features $\bm{F}_z$ and $\bm{F}_x$. $\bm{F}_z$ and $\bm{F}_x$ then pass through a matching network $\varphi$ to learn their relation. $\varphi$ is generally defined as depthwise cross-correlation in recent works \cite{siamRPNpp, Ocean}. In our study, the matching network design evolves from heuristic selection to automatic search. Concretely, $\bm{F}_z$ and $\bm{F}_x$ are fed to the matching operators in the search space (see Sec.~\ref{sec:Operators}), which yields $m$ multi-channel response features $\{\bm{r}_1, \bm{r}_2, ..., \bm{r}_m\}$. Each channel of a response feature is assigned a learnable manipulator $w_i^{j}$, indicating the feature channel's contribution to the subsequent tracking steps. We introduce the binary Gumbel-Softmax \cite{gumbel-softmax} to discretize the manipulators into binary decisions, while guaranteeing differentiable training. The learning of the manipulators is formulated as bilevel optimization (see Sec.~\ref{sec:method-BO}). Two operators are finally retained based on the guidance of the learned manipulators, and their response maps are concatenated as the input of the following steps.
With the learned matching networks, classification and regression networks follow to predict the target state (see Sec.~\ref{sec:ex-training}). \subsection{Binary Channel Manipulation} \label{sec:method-BCM} Let $\mathcal{O}=\{o_1,o_2, ..., o_m\}$ be the search space consisting of the optional matching operators $o_i(\cdot)$ to be applied to the exemplar and candidate features. The response set $\mathcal{R}$ is obtained by, \begin{equation} \mathcal{R} = \{o_1(\bm{z}, \bm{x}), ..., o_m(\bm{z}, \bm{x})\}. \label{eq:rpset} \end{equation} The search algorithm aims to find the optimal combination of operators based on the response set $\mathcal{R}$. We propose binary channel manipulation (BCM) to determine the contribution of each operator to target state prediction. Each element $\bm{r}_i$ in $\mathcal{R}$ is a tensor of size $H_x \times W_x \times C$. We assign each feature channel a learnable manipulator $w_i^{j}$ and then aggregate the weighted maps in $\mathcal{R}$ by concatenation, \begin{equation} \bm{E} = [\sigma(w_1^{1})\bm{r}_1^{1},..., \sigma(w_i^{j})\bm{r}_i^{j}, ..., \sigma(w_m^{C})\bm{r}_m^{C}], \label{eq:aggre} \end{equation} where $\bm{r}_i^{j}$ indicates the $j_{th}$ channel of the $i_{th}$ response feature and $\sigma$ is the sigmoid function. $\bm{E} \in \mathbb{R}^{H_x \times W_x \times C|\mathcal{O}|}$ denotes the aggregated feature, which is used as the input of the subsequent target estimation network. The manipulator defines a channel's contribution to target localization. For each operator, we define the sum of its channel manipulators as the potential $p_i$ of the operator for adapting to the baseline tracker, \begin{equation} p_i = \sum_{j=1}^{C} \sigma(w_i^{j}). \label{eq:adaption} \end{equation} Inspired by channel pruning \cite{GATEDNETWORKS} and differentiable network architecture search \cite{FairDARTS, DARTS}, we translate the continuous solution $w_i^{j}$ into a discrete one for the final decision. These discrete decisions are trained end-to-end using the Gumbel-Softmax \cite{gumbel-softmax}. Concretely, given a distribution with two class probabilities $\pi=\{\pi_1=\sigma(w_i^{j}), \pi_2=1-\sigma(w_i^{j})\}$, discrete samples $d$ can be drawn using, \begin{equation} d = \operatorname{onehot}(\mathop{\arg\max}_{k}[\log(\pi _{k}) + g_k]), \end{equation} where $g_k$ is a noise sample drawn from the Gumbel distribution and $k \in \{1,2\}$ indexes the two classes. The Gumbel-Softmax defines a continuous, differentiable approximation by replacing the argmax with a softmax, \begin{equation} y_k = \frac{\exp((\log(\pi _k)+g_k) / \tau)}{\sum_{c=1}^{2}\exp((\log(\pi _c)+g_c) / \tau)}. \label{eq:GumbelCon} \end{equation} Substituting $\pi _{1}=\sigma(w_i^{j})$ and $\pi _{2}=1-\sigma(w_i^{j})$, Eq.~\ref{eq:GumbelCon} simplifies (for $k=1$ in the binary case) to, \begin{equation} y_1 = \sigma(\frac{w_i^{j}+g_1-g_2}{\tau}). \label{eq:GumbelFinal} \end{equation} We provide the derivation in the supplementary material due to space limits. Following \cite{BengioConditional, gumbel-softmax}, $\tau$ is set to 1 and $g_k$ to 0. For the discrete sample $d$, the hard value is used during the forward pass and gradients are obtained from the soft value during the backward pass: \begin{equation} d=\left\{ \begin{array}{ll} \left[\, y_1 > 0.5 \,\right] \left(\equiv \left[\, w_i^{j} > 0 \,\right] \text{ for } \tau=1,\ g_k=0\right), & \text{forward,}\\ y_1, & \text{backward,} \end{array} \right.
\label{GumbelDis} \end{equation} where $[\,\cdot\,]$ denotes the Iverson bracket. \input{tables/results} \subsection{Bilevel Optimization} \label{sec:method-BO} With binary channel manipulation, our goal is to jointly learn the manipulators $w$ and the weights $\theta$ of the other layers (\emph{e.g.}, the convolution layers in the operators). Analogous to differentiable architecture search \cite{DARTS}, where the validation set performance is treated as the reward or fitness, we aim to optimize the validation loss. Let $\mathcal{L}_{train}$ and $\mathcal{L}_{val}$ denote the training and validation loss, respectively. The goal of the matching network search is to find $w^{*}$ that minimizes the validation loss $\mathcal{L}_{val}(\theta^{*}; w^{*})$, where the network parameters $\theta^{*}$ associated with the architecture are obtained by minimizing the training loss, $\theta^{*} = \operatorname{argmin}_{\theta} \ \mathcal{L}_{train}(\theta, w^{*})$. This implies a bilevel optimization problem \cite{DARTS, FairDARTS} with $w$ as the upper-level variable and $\theta$ as the lower-level variable, \begin{gather} \operatorname{min}_{w} \ \mathcal{L}_{val}(\theta^{*}(w); w), \\ s.t. \quad \theta^{*}(w)=\operatorname{argmin}_{\theta} \ \mathcal{L}_{train}(\theta, w). \end{gather} To speed up the bilevel optimization during training, Liu et al.\ propose a simple approximation in \cite{DARTS}, \begin{gather} \nabla_w \mathcal{L}_{val}(\theta^{*}(w); w) \\ \approx \nabla_w \mathcal{L}_{val}(\theta - \epsilon \nabla_\theta \mathcal{L}_{train}(\theta, w), w), \label{eq:biapp} \end{gather} where $\epsilon$ is the learning rate for one step of the inner optimization. The derivation is beyond the scope of this work; we refer the reader to \cite{DARTS} for more details about the approximation. In summary, we propose binary channel manipulation to identify the contribution of each matching operator, and we learn the manipulators by bilevel optimization. We simultaneously apply the search algorithm to the classification and regression branches in state estimation to learn task-dependent matching networks. After training, the two operators with the highest potential $p_i$ are retained (see the \textcolor{green}{green arrows} in Fig.~\ref{fig:framework}). Finally, we follow the procedure of the baseline tracker \cite{Ocean} to train the searched architecture. \section{Experiments} \subsection{Implementation Details}\label{sec:ex-training} \noindent \textbf{Network Architecture.} We adopt the recent Siamese tracker Ocean \cite{Ocean} as the baseline model. The backbone network is the modified ResNet50 \cite{MCF}. The target localization network consists of a classification branch and a regression branch. Though the updating branch of Ocean \cite{Ocean} is not used in our work, our tracker remarkably outperforms its online updating version. We refer the readers to \cite{Ocean} for more details about the baseline tracker. In this work, we simultaneously search for task-dependent matching networks for the classification and regression branches. \noindent \textbf{Training Procedure.} The training procedure consists of two stages, \emph{i.e.}, matching network search and new tracker training. In the first stage, we search for the matching networks using the methods in Sec.~\ref{sec:method} and determine the best combination based on validation performance. In the second stage, we use the optimized matching networks to construct a new tracker on top of the baseline approach Ocean \cite{Ocean}.
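As a recap of the decision rule used in this first stage, the binary Gumbel-Softmax with the straight-through trick (Sec.~\ref{sec:method-BCM}) can be sketched as follows. This is a simplified, hypothetical snippet: the option of actually sampling Gumbel noise is our addition, since $g_k$ is set to 0 above.
\begin{verbatim}
import torch

def binary_gumbel_softmax(w, tau=1.0, sample_noise=False):
    if sample_noise:  # our assumption; the paper fixes g_k = 0
        g1 = -torch.log(-torch.log(torch.rand_like(w)))
        g2 = -torch.log(-torch.log(torch.rand_like(w)))
    else:
        g1 = g2 = torch.zeros_like(w)
    y1 = torch.sigmoid((w + g1 - g2) / tau)  # soft value (Eq. above)
    d_hard = (y1 > 0.5).float()              # hard 0/1 decision
    # straight-through: forward uses d_hard, gradients flow through y1
    return (d_hard - y1).detach() + y1
\end{verbatim}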
Both stages are trained with Youtube-BB \cite{YTB}, ImageNet-VID \cite{VID}, ImageNet-DET \cite{VID}, GOT10K \cite{GOT10K} and COCO \cite{COCO} (including training and validation sets). The search stage is trained for 5 epochs, each containing $6 \times 10^{5}$ image pairs. The learning rate exponentially decays from $10^{-3}$ to $10^{-4}$. The training of the new tracker follows the baseline model \cite{Ocean}. \textbf{Notably, we simplify Ocean \cite{Ocean} by reducing the training epochs from 50 to 20 to expedite the learning process.} For the first 5 epochs, we start with a warmup learning rate of $10^{-3}$. For the remaining epochs, the learning rate exponentially decays from $5 \times 10^{-3}$ to $5 \times 10^{-5}$. Both stages are trained with synchronized SGD \cite{SGD} on 4 GTX 2080 Ti GPUs, each hosting 32 images. \subsection{State-of-the-art Comparison} The search algorithm determines different matching networks for the classification and regression branches. \textbf{After the first-stage training, Simple-Transformer and FiLM are retained for the classification branch, while FiLM and Pairwise-Relation are preserved for the regression branch.} We compare the new tracker with state-of-the-art models on five benchmarks. Our tracker achieves compelling performance while running at over 50 FPS. Notably, the second-stage training takes less than 24 hours (with 4 GTX 2080 Ti GPUs), which provides a strong but efficient baseline for further research. \noindent \textbf{OTB100 \cite{OTB-2015}.} OTB100 is a classical tracking benchmark consisting of 100 sequences. Methods are ranked by the area under the success curve (AUC) and precision (Prec.). As shown in Tab.~\ref{tab:results}, our model achieves the top-ranked AUC score, outperforming the previous best result of SiamAttn \cite{SiamAtt}, \emph{i.e.}, 71.4 \emph{vs.} 71.2. Equipping the baseline tracker Ocean \cite{Ocean} with our searched matching networks brings a favorable gain of 4.2 points, \emph{i.e.}, 71.4 \emph{vs.} 67.2. The proposed model also surpasses the online updating models ATOM \cite{ATOM}/DiMP \cite{DiMP} by 4.5/2.6 points, respectively. \noindent \textbf{LaSOT \cite{LASOT}.} LaSOT is a tracking benchmark designed for long-term tracking. Tab.~\ref{tab:results} shows the comparison results on the 280 testing videos. Our method achieves the best AUC and precision scores, outperforming Ocean \cite{Ocean} by 5.7 and 7.3 points, respectively. Compared with DiMP \cite{DiMP}, our method achieves an improvement of 1.4 points in success score. Notably, the proposed tracker runs at 50 FPS, which is comparable to the 58 FPS of Ocean and faster than the 43 FPS of DiMP. These comparisons demonstrate that the proposed method brings significant performance gains with small overhead. \begin{figure} \begin{center} \vspace{-1em} \includegraphics[width=1\linewidth]{figs/LASOT.pdf} \end{center} \vspace{-2em} \caption{Visualization of results comparison on LaSOT.} \vspace{-1em} \label{fig:lasot} \end{figure} \noindent \textbf{TrackingNet \cite{TrackingNet}.} TrackingNet is a large-scale tracking dataset consisting of 511 sequences for testing. The evaluation is performed on the online server. We report the results in Tab.~\ref{tab:results}. Compared with the baseline tracker Ocean \cite{Ocean}, our model achieves a gain of 5.7 points in success score. It also surpasses the meta-learning based MAMLTrack \cite{MAMLTrack} on TrackingNet, \emph{i.e.}, a success score of 76.0 \emph{vs.} 75.7.
\noindent \textbf{GOT10K \cite{GOT10K}.} The evaluation of GOT10K is performed on the online server. We report the average overlap (AO) and success rates (SR$_{0.5}$, SR$_{0.75}$) in Tab.~\ref{tab:results}. Comparing the proposed model with the baseline Ocean \cite{Ocean}, we achieve gains of 6 points, 7.1 points, and 7.8 points on AO, SR$_{0.5}$, and SR$_{0.75}$, respectively. Notably, our model outperforms SiamBAN \cite{SiamBAN} by 1.6 points on AO, while running faster (50 FPS \emph{vs.} 40 FPS). \noindent \textbf{TNL2K \cite{TNL2K}.} TNL2K is a new dataset consisting of 2000 highly diverse videos for natural-language-guided tracking. Adversarial samples and thermal images are introduced to improve the generality of tracking evaluation. Besides tracking by natural language, it also provides results for tracking by bounding box. In Tab.~\ref{tab:results}, we present the results on the 700 testing sequences. Our model achieves the best success and precision scores among the compared trackers. \subsection{Ablation and Analysis} \noindent \textbf{One or Many Manipulators.} We link each channel in an operator with a manipulator. In contrast, in differentiable neural architecture search \cite{DARTS}, an operator is identified by a single scalar. We also tried this strategy, \emph{i.e.}, assigning each matching operator a single scalar during the search. It achieves a final success score of 69.5 on OTB100 \cite{OTB-2015} and 54.7 on LaSOT \cite{LASOT}. These results are inferior to our model, which demonstrates the superiority of our search algorithm. We conjecture that aggregating channel information provides finer guidance for operator selection. \noindent \textbf{Random Search.} To demonstrate the efficacy of the search algorithm, we evaluate the performance of random search. Two operators are randomly retained for the classification and regression branches, respectively. We report the average performance over three rounds of random search and training. The average success scores on OTB100 and LaSOT are 69.1 and 53.2. These results show that the introduced search method is effective in finding better operator combinations. \begin{figure}[t] \centering \vspace{-0.5em} \subfloat { \begin{minipage}[t]{1\textwidth} \includegraphics[width=0.48\textwidth]{figs/clsNAS.pdf} \end{minipage}% } \vspace{-1.5em} \subfloat { \begin{minipage}[t]{1\textwidth} \includegraphics[width=0.48\textwidth]{figs/regNAS.pdf} \end{minipage} }% \vspace{-0.5em} \caption{\textbf{top:} NAS-like Matching Network for classification. \textbf{bottom:} NAS-like Matching Network for regression.} \label{fig:nas} \vspace{-1.5em} \end{figure} \noindent \textbf{NAS-like Matching Cell.} Differentiable neural architecture search \cite{DARTS} represents the basic operating cell as a directed acyclic graph (DAG). Each cell contains multiple nodes, and each node aggregates the outputs of multiple basic operators (\emph{e.g.}, a $3\times 3$ convolution layer). One intuitive idea is to directly replace the operators in NAS with our designed matching functions and then search for a matching network. As shown in Fig.~\ref{fig:nas}, we use DARTS \cite{DARTS} to search for a matching cell resembling those in NAS. Surprisingly, though the searched cell is much more complex than ours, it does not show superiority. Concretely, the NAS-like cell achieves a success score of 55.7 on LaSOT and runs at 35 FPS. Both the performance and the inference speed are inferior to those of the proposed model.
The comparison indicates that directly borrowing NAS for matching network search may not be an optimal choice. We present more details about the DARTS-like structure search and the related work in the supplementary material, due to space limits. \section{Conclusion} In this work, we introduce six novel operators to explore more possibilities in matching operator selection for Siamese tracking. Quantitative and qualitative analyses demonstrate that the classical (depthwise) cross-correlation is not the optimal choice for Siamese tracking. With the proposed binary channel manipulation (BCM), we simultaneously find the optimal matching networks for both the classification and regression branches in state estimation. The learned matching networks are applied to a baseline tracker, and the experimental results show the robustness of our approach on both short-term and long-term benchmarks. In future work, we will apply our method to other matching-based frameworks, \emph{e.g.}, ATOM. \noindent \textbf{Acknowledgements.} We thank Heng Fan for his help during the ICCV 2021 rebuttal. This work was supported by the National Key Research and Development Program of China (Grant No. 2020AAA0106800), the Natural Science Foundation of China (Grant No. 61902401, No. 61972071, No. 61906052, No. 62036011, No. 61721004, No. 61972394, and No. U2033210), the CAS Key Research Program of Frontier Sciences (Grant No. QYZDJ-SSWJSC040), the Postdoctoral Innovative Talent Support Program BX20200174, and the China Postdoctoral Science Foundation Funded Project 2020M682828. The work of Bing Li was also supported by the Youth Innovation Promotion Association, CAS. {\small \bibliographystyle{ieee_fullname}
\section{Introduction} For security protocols, the study of properties such as authentication and secrecy has been intensive for years~\cite{ryan00}, but interest in other properties such as non-repudiation and fairness arose only in the 1990s, with the explosion of Internet services and electronic transactions.\footnote{See \url{http://www.lsv.ens-cachan.fr/~kremer/FXbib/references.php} for a detailed list of publications related to the analysis of non-repudiation protocols.} Non-repudiation protocols are designed for verifying that, when two parties exchange information over a network, neither one nor the other can deny having participated in this communication. Such a protocol must therefore generate evidences of participation to be used in case of a dispute. The basic tools for non-repudiation services have been digital signatures and public key cryptography. Indeed, when one receives a signed message, he has an evidence of the participation and the identity of the other party~\cite{kremer02}.\\ Most of the non-repudiation analysis efforts in the literature, though, are manually driven. One of the first efforts to apply formal methods to the verification of non-repudiation protocols was presented by Zhou et al. in~\cite{zhou98towards}, where they used SVO logic. In~\cite{schneider98}, Schneider used the process algebra CSP to prove the correctness of a non-repudiation protocol, the well-known Fair Zhou-Gollmann protocol. With the same goal, Bella et al. have used the theorem prover Isabelle~\cite{bella01}. Schneider used a rank function to encode that, in an execution trace, one event happens before another. The verification is done by analyzing traces in the stable failures model of CSP. Among the automatic analysis attempts, we can cite Shmatikov and Mitchell~\cite{shmatikov00analysis}, who used Mur$\varphi$, a finite-state model-checker, to analyze a fair exchange protocol and two contract signing protocols; Kremer and Raskin~\cite{kremer01gamebased}, who used a game-based model; Armando et al.~\cite{ArmandoCC-CSF07}, who used LTL, in particular for encoding resilient channels; and the very nice work of Gurgens and Rudolph~\cite{GurgensR-FAC05}, who used asynchronous product automata (APA) and the simple homomorphism verification tool (SHVT)~\cite{SHVT-98}, raising flaws in three variants of the Fair Zhou-Gollmann protocol and in two fair non-repudiation protocols~\cite{KremerK-ITB00,ZhouDB-ACISP99}. Wei and Heather~\cite{WeiH-FAST05} have used FDR, with an approach similar to Schneider's, for a variant of the Fair Zhou-Gollmann protocol with timestamps. The common point of all those works is that they use rich logics, with a classical bad consequence for model checkers: the difficulty of considering large protocols. To avoid this problem, Wei and Heather~\cite{WeiH-FAST06} used PVS~\cite{Roscoe}, but some of the proofs are still manual.\\ Fairness is more difficult to achieve: no party should be able to reach a point where he has the evidence or the message he requires without the other party also having his required evidence. Fairness is not always required for non-repudiation protocols, but it is usually desirable.\\ A variety of protocols has been proposed in the literature to solve the problem of fair message exchange with non-repudiation. The first solutions were based on a gradual exchange of the expected information~\cite{kremer02}.
However, this gradual, simultaneous secret exchange is troublesome for actual implementations, because fairness rests on the assumption of equal computational power for both parties, which is very unlikely in a real-world scenario. A possible solution to this problem is the use of a trusted third party (TTP), and in fact it has been shown that it is impossible to achieve fair exchange without a TTP~\cite{pagnia99,markowitch99}. The TTP can be used as a delivery agent to provide simultaneous sharing of evidences. The Fair Zhou-Gollmann protocol~\cite{zhou96fair} is a well-known example using a TTP as a delivery agent; a significant amount of work has been done on this protocol and its derivations~\cite{bella01,gurgens03,schneider98,zhou98towards}. However, instead of passing the complete message through the TTP, and thus creating a possible bottleneck, protocols have recently evolved into efficient, \emph{optimistic} versions, in which the TTP is involved only in case anything goes wrong. Resolve and abort sub-protocols must then guarantee that every party can complete the protocol in a fair manner, without waiting for actions of the other party.\\ One of these recent protocols is the optimistic Cederquist-Corin-Dashti (CCD) non-repudiation protocol~\cite{cederquist05}. The CCD protocol has the advantage of not using session labels, in contrast to many others in the literature~\cite{kremer02,markowitch01,zhou96fair,schneider98}. A session label typically consists of a hash of all message components. G{\"u}rgens et al.~\cite{gurgens03} have shown a number of vulnerabilities associated with the use of session labels and, to our knowledge, the CCD protocol is the only optimistic non-repudiation protocol that altogether avoids the use of session labels. This paper presents a method for automatically verifying non-repudiation protocols in the presence of an active intruder. Our method has been implemented in the AVISPA Tool~\cite{AvispaCAV05}\footnote{\url{http://www.avispa-project.org}}, and we illustrate it with examples. This tool, intensively used for defining Internet security protocols and automatically analyzing their authentication and secrecy properties, previously did not provide any help for considering non-repudiation properties.\\ We first consider non-repudiation analysis as a combination of authentication problems, applied to the Fair Zhou-Gollmann protocol. We show the limits of this representation and the difficulties of proving non-repudiation properties using only authentications. Then, we define a method based on the analysis of agents' knowledge, permitting non-repudiation and fairness properties to be handled in the same framework. Our approach is very natural for the user, and writing the logical properties remains simple: they correspond to state invariants, which are convincing properties for the user. This method is easy to integrate in lazy verification systems, such as the AVISPA Tool, and can also be integrated in any system able to handle agents' (or the intruder's) knowledge. Contrary to more complex logics like LTL, this should permit abstractions to be set up more easily for considering unbounded cases, and should also yield more efficient verification in bounded cases. We illustrate this with the optimistic Cederquist-Corin-Dashti protocol. \section{Non-Repudiation Properties}\label{sec-prop-nr} Non-repudiation (NR) is a general property that may not be clearly defined. It is usually described as a set of required services, depending on the protocol and the required level of security.
In particular, non-repudiation properties may differ depending on whether a trusted third party (TTP) is used in the protocol or not. Considering a message sent by an originator agent to a recipient agent (possibly via a delivery agent, a TTP), we define below some of the most important non-repudiation services required by most existing security applications (for e-commerce, for example). \begin{definition} The service of \textbf{non-repudiation of origin}, denoted ${\cal NRO}_B(A)$, provides the recipient $B$ with a set of evidences which ensures that the originator $A$ has sent the message. The evidence of origin is generated by the originator and held by the recipient. This property protects the recipient against a dishonest originator. \end{definition} \begin{definition} The service of \textbf{non-repudiation of receipt}, denoted ${\cal NRR}_A(B)$, provides the originator $A$ with a set of evidences which ensures that the recipient $B$ has received the message. The evidence of receipt is generated by the recipient and held by the originator. This property protects the originator against a dishonest recipient. \end{definition} \begin{definition} The service of \textbf{non-repudiation of submission}, denoted ${\cal NRS}_A(B)$, provides the originator $A$ with a set of evidences which ensures that he has submitted the message for delivery to $B$. This service only applies when the protocol uses a TTP. Evidence of submission is generated by the delivery agent and is held by the originator. This property protects the originator against a dishonest recipient. \end{definition} \begin{definition} The service of \textbf{non-repudiation of delivery}, denoted ${\cal NRD}_A(B)$, provides the originator $A$ with a set of evidences which ensures that the recipient $B$ has received the message. This service only applies when the protocol uses a TTP. Evidence of delivery is generated by the delivery agent and is held by the originator. This property protects the originator against a dishonest recipient. \end{definition} \begin{definition} A service of \textbf{fairness} (also called \textsl{strong fairness}) for a non-repudiation protocol guarantees that, at the end of the protocol execution, either the originator has the evidence of receipt of the message and the recipient has the evidence of origin of the corresponding message, or none of them has any valuable information. This property protects both the originator and the recipient. \end{definition} \begin{definition} A service of \textbf{timeliness} for a non-repudiation protocol guarantees that, whatever happens during the protocol run, all participants can reach, in a finite time, a state that preserves fairness. \end{definition} Note that, in general, sets of evidences such as $\cal NRO$, $\cal NRR$, $\cal NRS$ and $\cal NRD$ are composed of messages signed by an agent. For the sequel of this paper, we will consider the following definition of an evidence. \begin{definition} An \textbf{evidence} for an agent $A$ for a non-repudiation property $P$ is a message, a part of a message, or a combination of both, received by $A$, that is necessary for guaranteeing property $P$. \end{definition} Note that in this paper we consider the evidences given by the protocol designer as valid: without the intervention of an intruder, those evidences are sufficient to guarantee the non-repudiation service; and in case of a dispute, a judge analyzing them will always be able to protect honest agents.
\section{Non-Repudiation as Authentication} It is well known that non-repudiation is a form of authentication~\cite{ryan00}. In this section we demonstrate that properties like $\cal NRO$, $\cal NRR$,\ldots\ can be at least partially represented by authentication properties. We illustrate this idea with the Fair Zhou-Gollmann protocol. At the end of this section we show strong limitations of this approach, in order to motivate the introduction of a new approach in the next section. \subsection{Running Example: the FairZG Protocol} In this section we describe the Fair Zhou-Gollmann protocol (FairZG)~\cite{zhou98towards}, a fair non-repudiation protocol that uses a TTP. We have chosen this protocol as a case study to demonstrate our analysis approach because of the existence of significant related work~\cite{bella01,gurgens03,schneider98}. The protocol is presented below in Alice\&Bob notation, where \textsf{fNRO}, \textsf{fNRR}, \textsf{fSUB} and \textsf{fCON} are labels used to identify the purpose of messages. \begin{tabbing} xxxx \= xxxxxxxxx \= \kill ~~1. \> {\sf A $\rightarrow$ B:} \> {\sf fNRO.B.L.C.NRO}\\ ~~2. \> {\sf B $\rightarrow$ A:} \> {\sf fNRR.A.L.NRR}\\ ~~3. \> {\sf A $\rightarrow$ TTP:} \> {\sf fSUB.B.L.K.SubK}\\ ~~4. \> {\sf B $\leftrightarrow$ TTP:} \> {\sf fCON.A.B.L.K.ConK}\\ ~~5. \> {\sf A $\leftrightarrow$ TTP:} \> {\sf fCON.A.B.L.K.ConK}\\[1mm] and \> ${\cal NRO}_B(A) = \{ \mathsf{NRO}, \mathsf{ConK} \}$\\ \> ${\cal NRR}_A(B) = \{ \mathsf{NRR}, \mathsf{ConK} \}$ \end{tabbing} where \textsf{A} (for Alice) is the originator of the message \textsf{M}, \textsf{B} (for Bob) is the recipient of the message \textsf{M}, \textsf{TTP} is the trusted third party, \textsf{M} is the message to be sent from Alice to Bob, \textsf{C} is a commitment (the message \textsf{M} encrypted by a key \textsf{K}), \textsf{L} is a unique session identifier (also called label), \textsf{K} is a symmetric key defined by Alice, \textsf{NRO} is a message used for non-repudiation of origin (the message \textsf{fNRO.B.L.C} signed by Alice), \textsf{NRR} is a message used for non-repudiation of receipt (the message \textsf{fNRR.A.L.C} signed by Bob), \textsf{SubK} is a proof of submission of \textsf{K} (the message \textsf{fSUB.B.L.K} signed by Alice), and \textsf{ConK} is a confirmation of \textsf{K} (the message \textsf{fCON.A.B.L.K} signed by the TTP). The main idea of the FairZG protocol is to split the delivery of a message into two parts. First a commitment \textsf{C}, containing the message \textsf{M} encrypted by a key \textsf{K}, is exchanged between Alice and Bob (message \textsf{fNRO}). Once Alice has an evidence of commitment from Bob (message \textsf{fNRR}), the key \textsf{K} is sent to a trusted third party (message \textsf{fSUB}). Once the TTP has received the key, both Alice and Bob can retrieve the evidence \textsf{ConK} and the key \textsf{K} from the TTP (messages \textsf{fCON}). This last step is represented by a double-direction arrow in the Alice\&Bob notation because it is implementation-specific and may consist of several message exchanges between the agents and the TTP. In this scenario we assume that the network will not be down forever and that both Alice and Bob have access to the TTP's shared repository, where it stores the evidences and the key. This means that the agents will eventually be able to retrieve the key and the evidences from the TTP, even in case of network failures.
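Before turning non-repudiation into authentication goals, it may help to see the exchanged terms concretely. The following Python sketch is a schematic trace of the five FairZG exchanges, with signatures and encryption abstracted as tagged tuples; the helper names are ours and no real cryptography (or security) is modelled.
\begin{verbatim}
def sign(agent, payload):
    return ("sig", agent, payload)      # stands for Sig_agent(payload)

def run_fairzg(A="Alice", B="Bob", TTP="TTP", M="msg", K="k", L="L1"):
    C = ("enc", K, M)                          # commitment {M}_K
    NRO  = sign(A,   ("fNRO", B, L, C))        # 1. A -> B
    NRR  = sign(B,   ("fNRR", A, L, C))        # 2. B -> A
    SubK = sign(A,   ("fSUB", B, L, K))        # 3. A -> TTP
    ConK = sign(TTP, ("fCON", A, B, L, K))     # 4./5. fetched from TTP
    return {"NRO_B(A)": {NRO, ConK},           # evidences held by B
            "NRR_A(B)": {NRR, ConK},           # evidences held by A
            "SubK": SubK}
\end{verbatim}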
\subsection{Non-Repudiation of Origin as Authentication}\label{ssec:nro_as_auth} In our example, the FairZG protocol, non-repudiation of origin should provide the guarantee that if Bob owns ${\cal NRO}$ then Alice has sent \textsf{M} to Bob. Proposition~\ref{prop:auth_nro} shows how this can be partially done with a set of authentications. \begin{definition} \textsf{auth(X,Y,D)} denotes non-injective authentication, meaning that \textsf{X} authenticates \textsf{Y} on data \textsf{D}. \end{definition} The semantics of such a predicate is standard and can be found in~\cite{lowe-csfw97}. \begin{proposition}\label{prop:auth_nro} Given the FairZG protocol, let \textsf{B} be an honest agent.\\ If \textsf{auth(B,A,NRO)}, \textsf{auth(B,TTP,ConK)} and \textsf{auth(TTP,A,SubK)} are satisfied, then the non-repudiation service of origin ${\cal NRO}_B(A)$ is satisfied. \end{proposition} \paragraph{\textsl{Proof: }}{ For the two evidences of ${\cal NRO}_B(A) = \{ \mathsf{NRO}, \mathsf{ConK} \}$, we have: \begin{itemize} \item $\mathsf{NRO = Sig_A(fNRO.B.L.\{M\}_K)}$: since \textsf{auth(B,A,NRO)} is satisfied, there is an agreement on $\mathsf{Sig_A(fNRO.B.L.C)}$ between \textsf{B} and \textsf{A}. By the properties of signatures, this also means an agreement on $\mathsf{\{M\}_K}$; thus \textsf{A} has sent $\mathsf{\{M\}_K}$. \item $\mathsf{ConK = Sig_{TTP}(fCON.A.B.L.K)}$: as above, \textsf{auth(B,TTP,ConK)} implies an agreement on \textsf{K} between \textsf{B} and \textsf{TTP}. Furthermore, $\mathsf{SubK = Sig_A(fSUB.B.L.K)}$, so \textsf{auth(TTP,A,SubK)} implies an agreement on \textsf{K} between \textsf{TTP} and \textsf{A}. By transitivity, we have an agreement on \textsf{K} between \textsf{B} and \textsf{A}, which means that \textsf{A} has sent \textsf{K}. \end{itemize} As \textsf{A} has sent $\mathsf{\{M\}_K}$ and \textsf{K}, he has sent \textsf{M}. Non-injective authentication is only required for \textsf{auth(B,TTP,ConK)}, because \textsf{B} may ask for \textsf{ConK} several times. However, since all authentications imply an agreement on the unique session identifier \textsf{L}, authentication across different sessions is excluded. \hfill$\Box$} \subsection{Non-Repudiation of Receipt as Authentication}\label{ssec:nrr_as_auth} In our example, the FairZG protocol, non-repudiation of receipt should provide the guarantee that if Alice owns ${\cal NRR}$ then Bob has received \textsf{M} from Alice. Proposition~\ref{prop:auth_nrr} shows how this can be partially done with a set of authentications. \begin{proposition}\label{prop:auth_nrr} Given the FairZG protocol, let \textsf{A} be an honest agent.\\ If \textsf{auth(A,B,NRR)}, \textsf{auth(A,TTP,ConK)} and \textsf{auth(B,TTP,ConK)} are satisfied, then the non-repudiation service of receipt ${\cal NRR}_A(B)$ is satisfied. \end{proposition} \paragraph{\textsl{Proof: }}{ For the two evidences of ${\cal NRR}_A(B) = \{ \mathsf{NRR}, \mathsf{ConK} \}$, we have: \begin{itemize} \item $\mathsf{NRR = Sig_B(fNRR.A.L.\{M\}_K)}$: reasoning as for \textsf{NRO} in Proposition~\ref{prop:auth_nro} ensures that \textsf{B} has received $\mathsf{\{M\}_K}$. \item $\mathsf{ConK = Sig_{TTP}(fCON.A.B.L.K)}$: \textsf{auth(A,TTP,ConK)} implies an agreement on \textsf{K} between \textsf{A} and \textsf{TTP}. Furthermore, \textsf{auth(B,TTP,ConK)} implies an agreement on \textsf{K} between \textsf{B} and \textsf{TTP}.
This means that there is an agreement on \textsf{K} between \textsf{A} and \textsf{B}; thus, when \textsf{A} holds \textsf{ConK}, \textsf{B} has received or will be able to receive \textsf{K}. \end{itemize} The end of the proof is similar to that of Proposition~\ref{prop:auth_nro}. \hfill$\Box$} \subsection{Limitations and Difficulties} At this point there are some problems that motivate the introduction of a new approach in the next section. \begin{enumerate} \item If, contrary to the hypothesis of the previous propositions, the owner of the evidences is dishonest, he may be able to forge a fake set of evidences. For example, for Bob and ${\cal NRO}$, we need to prove that Bob could only own ${\cal NRO}$ if Alice has actually sent the correct protocol messages. This may be done as, for example, in~\cite{schneider98}, \cite{WeiH-FAST05} or \cite{gurgens03}, but it is not trivial. \item Handling non-repudiation as authentications seems very hard, and may not be possible in general. In particular, this task seems difficult for optimistic non-repudiation protocols that include sub-protocols like \textsl{abort} and \textsl{resolve}, as presented in the next section. \item In general, verifying fairness is a delicate task, and the above remarks make it even more difficult. \end{enumerate} In conclusion, proving non-repudiation with the help of authentications does not seem to us to be the right way; this is why, in the next section, we propose a much simpler approach for handling non-repudiation. \section{Non-Repudiation based on Agent Knowledge} In this section, we present a new method for considering non-repudiation services and fairness in the same framework: we introduce a logic permitting the description of state invariants. This logic is a very classical one, except that we define two new predicates, \texttt{deduce} and \texttt{aknows}, that permit agents' knowledge to be considered in the description of goals. The \texttt{aknows} predicate is also used as a protocol annotation, with the semantics \textit{agent $X$ knows (or can deduce) term $t$}. \subsection{Description of Non-Repudiation Properties} \label{sec-descrNR} The main role of a non-repudiation protocol is to give evidences of non-repudiation to the parties involved in the protocol. To analyze this kind of protocol, one must verify which participants have their non-repudiation evidences at the end of the protocol execution. For example, if the originator has all its evidences for non-repudiation of receipt, then the service of non-repudiation of receipt is guaranteed. If the recipient has all its evidences for non-repudiation of origin, then the service of non-repudiation of origin is guaranteed. If both parties (or none of them) have their evidences, fairness is guaranteed. In other words, to analyze non-repudiation, we need to verify whether a set of terms is known by an agent at the end of the protocol execution. And to consider a large class of non-repudiation protocols, we shall not restrict evidences to sets of terms, but consider them as combinations of terms using standard logical connectives (conjunction, disjunction, negation). For considering non-repudiation and fairness properties involving honest and dishonest agents, we have defined a new predicate that gives access to the knowledge of the protocol participants. This predicate, named \texttt{aknows}, is used in the specification of protocol transitions and of properties.
\begin{definition}[$\mathcal{NR\_}_{X}(Y)$] Let $\mathcal{A}$ be a set of agents playing a finite number of sessions $\mathcal{S}$ of a protocol, $\mathcal{T}$ a set of terms sent in the messages of this protocol, and $\mathcal{E}$ the subset of terms in $\mathcal{T}$ that are part of the evidences of non-repudiation in the protocol. For an agent $X \in \mathcal{A}$, $\mathcal{NR\_}_{X}(Y)$ is a logical combination of terms $t \in \mathcal{E}$ that constitute the evidence for a service of non-repudiation $\cal NR\_$ for agent $X$ wrt.\ agent $Y$. \end{definition} \begin{definition}[\texttt{aknows}] Let $\mathcal{A}$ be a set of agents playing a finite number of sessions $\mathcal{S}$ of a protocol, and $\mathcal{T}$ a set of terms. The annotation $\mathtt{aknows}(X,s,t)$ is a predicate with $X\in\mathcal{A}$, $s \in \mathcal{S}$ and $t \in \mathcal{T}$, expressing that agent $X$, playing in session $s$ of the protocol, knows (or can deduce) the term $t$. \end{definition} The semantics of the predicate $\mathtt{aknows}(X,s,t)$ is that the term $t$ can be composed by agent $X$, according to its current knowledge in session $s$ of the protocol, whether this agent is honest or not. This composability test can easily be done by any tool that is able to manage agents' knowledge or the intruder's knowledge. By abuse of notation, we may write $\mathtt{aknows}(X,s,L)$ for a logical formula $L$ combining evidences ($\mathcal{NR\_}_{X}(Y)$ for example), considering that the predicate $\mathtt{aknows}$ is a homomorphism: \begin{eqnarray*} \mathtt{aknows}(X,s,L_1 \wedge L_2) & = & \mathtt{aknows}(X,s,L_1) \wedge \mathtt{aknows}(X,s,L_2)\\ \mathtt{aknows}(X,s,L_1 \vee L_2) & = & \mathtt{aknows}(X,s,L_1) \vee \mathtt{aknows}(X,s,L_2)\\ \mathtt{aknows}(X,s,\neg L) & = & \neg \mathtt{aknows}(X,s,L) \end{eqnarray*} \begin{definition}[\texttt{deduce}] Let $\mathcal{A}$ be a set of agents playing a finite number of sessions of a protocol and $\mathcal{T}$ a set of terms. We define $\mathtt{deduce}(X,t)$, with $X\in\mathcal{A}$ and $t \in \mathcal{T}$, as the predicate meaning that \texttt{X} can deduce \texttt{t} from its knowledge. \end{definition} We will use the same abuse of notation for $\mathtt{deduce}$ as for $\mathtt{aknows}$.\\ In the following, we assume that each \texttt{aknows} annotation corresponds to a valid \texttt{deduce} predicate on the same information, in order to avoid bad annotations. \begin{definition} The evidence ${\cal NR\_}_X(Y)$ is \textbf{well-formed} if it contains information that uniquely identifies the session, and if it contains an injective function of the message $M$ for which ${\cal NR\_}$ acts as a protection against a dishonest agent. \end{definition} We now give the results obtained by this representation. \begin{proposition} Consider a non-repudiation service for $B$ against $A$ about a message $M$, with the well-formed evidence ${\cal NR\_}_B(A)$, in session $s$ of a protocol. If the following formulae are true at the end of the session, then the non-repudiation service is valid. \[ \begin{array}{lcl} \mathtt{aknows}(B,s,{\cal NR\_}_B(A)) &\Rightarrow &\mathtt{aknows}(A,s,M) \\ \mathtt{deduce}(B,{\cal NR\_}_B(A)) &\Rightarrow &\mathtt{aknows}(B,s,{\cal NR\_}_B(A))\\ \end{array} \] \end{proposition} \paragraph{\textsl{Proof: }}{A sketch of the proof is as follows: by the second implication, if $B$ is able to deduce ${\cal NR\_}_B(A)$, then $\mathtt{aknows}(B,s,{\cal NR\_}_B(A))$ holds.
Furthermore, since ${\cal NR\_}_B(A)$ is well-formed, ${\cal NR\_}_B(A)$ and $\mathtt{aknows}(B,s,{\cal NR\_}_B(A))$ are related to the same session. Now, since ${\cal NR\_}_B(A)$ is well-formed, it includes all the information in $M$; thus the first implication yields an agreement on $M$ between $B$ and $A$. Finally, as $\mathtt{aknows}(A,s,M)$ is an annotation, this means that $A$ has followed the protocol, and thus he has done what he must do with $M$. \hfill$\Box$} \noindent \textit{Remark: } verifying the formulas given in the above proposition is not a problem, because a priori any theorem prover can compute whatever can be deduced by an agent at a given step of the protocol, especially concerning the \texttt{deduce} predicate. \begin{corollary}\label{cor-nro} Consider a non-repudiation service of origin for $B$ against $A$ about a message $M$, in session $s$ of a protocol. If ${\cal NRO}_B(A)$ is well-formed and the following formulae are true at the end of the session, then the service is valid. \[ \begin{array}{lcl} \mathtt{aknows}(B,s,{\cal NRO}_B(A)) &\Rightarrow &\mathtt{aknows}(A,s,M) \\ \mathtt{deduce}(B,{\cal NRO}_B(A)) &\Rightarrow &\mathtt{aknows}(B,s,{\cal NRO}_B(A))\\ \end{array} \] \end{corollary} \begin{corollary}\label{cor-nrr} Consider a non-repudiation service of receipt for $A$ against $B$ about a message $M$, in session $s$ of a protocol. If ${\cal NRR}_A(B)$ is well-formed and the following formulae are true at the end of the session, then the service is valid. \[ \begin{array}{lcl} \mathtt{aknows}(A,s,{\cal NRR}_A(B)) &\Rightarrow &\mathtt{aknows}(B,s,M) \\ \mathtt{deduce}(A,{\cal NRR}_A(B)) &\Rightarrow &\mathtt{aknows}(A,s,{\cal NRR}_A(B))\\ \end{array} \] \end{corollary} \subsection{Description of Fairness} In the literature, authors often give different definitions of fairness for non-repudiation protocols. In some definitions, none of the parties should have more evidences than the others at any given point in time. Others have a more flexible definition, in which none of them should have more evidences than the others at the end of the protocol run. In many works it is also not very clear whether only successful protocol runs are taken into account, or whether partial protocol runs are valid as well. In this paper the latter definition of fairness will be used, and we take into account complete protocol runs. By a complete protocol run we mean a run where, even though the protocol may not have reached its last transition for all agents, there is no executable transition left, i.e.\ all possible protocol steps have been executed; this does not mean that all agents are in a final state. We define this standard fairness as a function of non-repudiation of origin and non-repudiation of receipt. If both properties, $\cal NRO$ and $\cal NRR$, are ensured, or both are not satisfied, for a given message $M$, then we have fairness. \begin{proposition}\label{prop-f-auth} Given a protocol whose purpose is to send a message from Alice to Bob, we have the following equivalence concerning the standard definition of fairness for a given session $s$. If the non-repudiation is valid for the $\mathtt{{\cal NRO}}$ and $\mathtt{{\cal NRR}}$ services, then: \[ \mbox{Fairness} ~\equiv~ \mathtt{aknows}(Bob,s,{\cal NRO}_{\mbox{Bob}}(\mbox{Alice})) \mbox{ iff } \mathtt{aknows}(Alice,s,{\cal NRR}_{\mbox{Alice}}(\mbox{Bob})) \] \end{proposition} This result can be generalized to fairness wrt.\ a set of non-repudiation services as follows.
\begin{theorem}\label{th-f-auth} Given a protocol involving a finite number of agents and a finite set of valid non-repudiation services $\cal NR$, the protocol is fair wrt.\ $\cal NR$ iff \[\begin{array}{l} \forall {{\cal NR}S_1}_{X_1}(Y_1), {{\cal NR}S_2}_{X_2}(Y_2) \in {\cal NR},~\\ \hspace*{2cm} \mathtt{aknows}(X_1,s,{{\cal NR}S_1}_{X_1}(Y_1)) ~\mbox{ iff }~ \mathtt{aknows}(X_2,s,{{\cal NR}S_2}_{X_2}(Y_2)) \end{array}\] \end{theorem} \subsection{Running Example: CCD} To illustrate the analysis method described above, we will use a recent protocol, the optimistic Cederquist-Corin-Dashti (CCD) non-repudiation protocol~\cite{cederquist05}. The CCD protocol has been created to permit an agent $A$ to send a message $M$ to an agent $B$ in a fair manner. This means that agent $A$ should get an evidence of receipt of $M$ by $B$ ($EOR$) if and only if $B$ has really received $M$ and the evidence of origin from $A$ ($EOO$). $EOR$ permits $A$ to prove that $B$ has received $M$, while $EOO$ permits $B$ to prove that $M$ has been sent by $A$. The protocol is divided into three sub-protocols: the main protocol, an \textsl{abort} sub-protocol and a \textsl{resolve} sub-protocol. \paragraph{The Main Protocol.} It describes the sending of $M$ by $A$ to $B$ and the exchange of evidences in the case where both agents can complete the entire protocol. If a problem happens to one of the agents, in order to finish the protocol properly, the agents execute the \textsl{abort} or the \textsl{resolve} sub-protocol with a trusted third party ($TTP$). The main protocol is therefore composed of the following message exchanges, described in Alice\&Bob notation:\\[2mm] \begin{tabular}{l@{\hspace{0.2cm}}l@{\hspace{0.2cm}}l@{\hspace{0.5cm}}l} \small{1.} & $A \rightarrow B:$ & ${\{M\}}_{K}$.${EOO}_{M}$ & where ${EOO}_{M} = {\{B.TTP.H({\{M\}}_{K}).{\{K.A\}}_{Kttp}\}}_{inv(Ka)}$\\ \small{2.} & $B \rightarrow A:$ & ${EOR}_{M}$ & where ${EOR}_{M} = {\{{EOO}_{M}\}}_{inv(Kb)}$\\ \small{3.} & $A \rightarrow B:$ & $K$\\ \small{4.} & $B \rightarrow A:$ & ${EOR}_{K}$ & where ${EOR}_{K} = {\{A.H({\{M\}}_{K}).K\}}_{inv(Kb)}$\\ \end{tabular}\\[2mm] where $K$ is a symmetric key freshly generated by $A$, $H$ is a one-way hash function, $Kg$ is the public key of agent $g$, and $inv(Kg)$ is the private key of agent $g$ (used for signing messages). Note that we assume that all public keys are known by all agents (including dishonest agents). In the first message, $A$ sends the message $M$ encrypted by $K$ and the evidence of origin for $B$ (a message signed by $A$, so decryptable by $B$). In this evidence, $B$ can check his identity, learn the name of the TTP, and check that the hash code is the result of hashing the first part of the message, but he cannot decrypt the last part of the evidence; this last part may be useful if any of the other sub-protocols is used.\\ $B$ answers by sending the evidence of receipt for $A$, $A$ checking that $EOR_M$ is $EOO_M$ signed by $B$.\\ In the third message, $A$ sends the key $K$, permitting $B$ to discover the message $M$.\\ Finally, $B$ sends to $A$ another evidence of receipt, permitting $A$ to check that the symmetric key has been received by $B$. \paragraph{The \textsl{Abort} Sub-Protocol.} The \textsl{abort} sub-protocol is executed by agent $A$ in case he does not receive the message ${\mbox{EOR}}_{M}$ at step 2 of the main protocol. The purpose of this sub-protocol is to cancel the message exchange.
\begin{center} \begin{tabular}{l@{\hspace{0.2cm}}l@{\hspace{0.2cm}}l@{\hspace{0.5cm}}l} \small{1.} & $A \rightarrow TTP:$ & ${\{\texttt{abort}.H({\{M\}}_{K}).B.{\{K.A\}}_{Kttp}\}}_{inv(Ka)}$\\ \small{2.} & $TTP \rightarrow A:$ & $\left\{ \begin{array}{ll} {E}_{TTP} & \mbox{ where } {E}_{TTP} = {\{A.B.K.H(\{M\}_{K})\}}_{inv(Kttp)}\\ & \mbox{ if } \texttt{resolved}(A.B.K.H(\{M\}_{K}))\\ {AB}_{TTP} & \mbox{ where } {AB}_{TTP} = {\{A.B.H(\{M\}_{K}).{\{K.A\}}_{Kttp}\}}_{inv(Kttp)}\\ & \mbox{ otherwise}\\ \end{array} \right.$\\ \end{tabular} \end{center} In this sub-protocol, $A$ sends to the TTP an abort request, containing the \texttt{abort} label and some information about the protocol session to be aborted.\\ Depending on what happened before, the TTP has two possible answers: if this is the first problem received by the TTP for this protocol session, the TTP sends a confirmation of abortion and stores in its database that this protocol session has been aborted; but if the TTP has already received a request for resolving this protocol session, it sends to $A$ the information needed to complete his evidence of receipt by $B$. \paragraph{The \textsl{Resolve} Sub-Protocol.} The role of this second sub-protocol is to permit agents $A$ and $B$ to finish the protocol in a fair manner when the main protocol cannot be run to completion by one of the parties. For example, if $B$ does not get $K$ or if $A$ does not get $EOR_K$, they can invoke the \textsl{resolve} sub-protocol. \begin{center} \begin{tabular}{l@{\hspace{0.2cm}}l@{\hspace{0.2cm}}l@{\hspace{0.5cm}}l} \small{1.} & $G \rightarrow TTP:$ & ${EOR}_{M}$\\ \small{2.} & $TTP \rightarrow G:$ & $\left\{ \begin{array}{ll} {AB}_{TTP} & \mbox{ if } \texttt{aborted}(A.B.K.H(\{M\}_{K}))\\ {E}_{TTP} & \mbox{ otherwise}\\ \end{array} \right.$ \end{tabular} \end{center} where $G$ stands for $A$ or $B$. A resolve request is made by sending ${EOR}_{M}$ to the TTP. If the protocol session has already been aborted, the TTP answers with the abortion confirmation. If this is not the case, the TTP sends $E_{TTP}$ so that the requester can complete his evidence of receipt (if $G$ is $A$) or of origin (if $G$ is $B$). The TTP then stores in its database that this protocol session has been resolved. \paragraph{Agents' Evidences.} For this protocol, according to~\cite{cederquist05}, the logical formulas of the evidences are: \[\begin{array}{l} {\cal NRO}_B(A) = \{M\}_K \wedge EOO_M \wedge K\\ {\cal NRR}_A(B) = \{M\}_K \wedge EOR_M \wedge (EOR_K \vee E_{TTP}) \end{array}\] Note that there are two possible sets of evidences for non-repudiation of receipt, depending on the way the protocol is run.\\ According to our method, we simply have to annotate the protocol steps with \texttt{aknows} predicates, and then write the logical formulas to verify. The following table shows where those annotations take place in the three CCD sub-protocols, for non-repudiation of origin and of receipt.
\begin{center} \begin{tabular}[t]{|c|c|}\hline ${\cal NRO}_B(A)$ & Protocol - step\\\hline\hline $\mathtt{aknows}(B,s,\{M\}_K)$ & Main - 1.\\\hline $\mathtt{aknows}(B,s,EOO_M)$ & Main - 1.\\\hline $\mathtt{aknows}(B,s,K)$ & Main - 3.\\\hline $\mathtt{aknows}(B,s,K)$ & Resolve - 2.\\\hline \end{tabular} \hspace*{5mm} \begin{tabular}[t]{|c|c|}\hline ${\cal NRR}_A(B)$ & Protocol - step\\\hline\hline $\mathtt{aknows}(A,s,\{M\}_K)$ & Main - 1.\\\hline $\mathtt{aknows}(A,s,EOR_M)$ & Main - 2.\\\hline $\mathtt{aknows}(A,s,EOR_K)$ & Main - 4.\\\hline $\mathtt{aknows}(A,s,E_{TTP})$ & Abort - 2.\\\hline $\mathtt{aknows}(A,s,E_{TTP})$ & Resolve - 2.\\\hline \end{tabular} \end{center} According to Corollary~\ref{cor-nro}, \textbf{non-repudiation of origin} for the CCD protocol is represented by the following invariant formulas: \[\begin{array}{l} \mathtt{aknows}(B,s,\{M\}_K \wedge EOO_M \wedge K) \Rightarrow \mathtt{aknows}(A,s,M)\\ \mathtt{deduce}(B,\{M\}_K \wedge EOO_M \wedge K) \Rightarrow \mathtt{aknows}(B,s,\{M\}_K \wedge EOO_M \wedge K) \end{array}\] According to Corollary~\ref{cor-nrr}, \textbf{non-repudiation of receipt} for the CCD protocol is represented by the following invariant formulas: \[\begin{array}{l} \mathtt{aknows}(A,s,\{M\}_K \wedge EOR_M \wedge (EOR_K \vee E_{TTP})) \Rightarrow \mathtt{aknows}(B,s,M)\\ \mathtt{deduce}(A,\{M\}_K \wedge EOR_M \wedge (EOR_K \vee E_{TTP}))\\ \hspace*{3cm}\Rightarrow \mathtt{aknows}(A,s,\{M\}_K \wedge EOR_M \wedge (EOR_K \vee E_{TTP})) \end{array}\] For analyzing \textbf{fairness}, this protocol requires timeliness, that is, each participant should reach a final state before fairness is tested. Fairness for the CCD protocol is described by the following logical formula, a very simple application of Theorem~\ref{th-f-auth}: \[ \mathtt{aknows}(A,s,{\cal NRR}_A(B)) \Leftrightarrow \mathtt{aknows}(B,s,{\cal NRO}_B(A)) \] Basically, the property states that if $A$ knows the EOR evidence (${\{M\}}_{K}$, ${EOR}_{M}$, and ${EOR}_{K}$ or ${E}_{TTP}$), then $B$ must know the EOO evidence; and symmetrically for $B$: if $B$ knows the EOO evidence (${\{M\}}_{K}$, ${EOO}_{M}$, and $K$ or ${E}_{TTP}$), then $A$ must know the EOR evidence.\\ The CCD protocol has been specified in the AVISPA Tool, with the description of the fairness property given above. The detailed formulas used in the AVISPA Tool, in an LTL syntax, are: \begin{small}\[ \Box \left( \left( \begin{array}{lll} \mathtt{aknows}(A,s,{\{M\}}_{K}) \; \wedge\\ \mathtt{aknows}(A,s,{EOR}_{M}) \; \wedge\\ (\mathtt{aknows}(A,s,{EOR}_{K}) \vee \mathtt{aknows}(A,s,{E}_{TTP}))\\ \end{array} \right) \Rightarrow \left( \begin{array}{ll} \mathtt{aknows}(B,s,{\{M\}}_{K}) \; \wedge\\ \mathtt{aknows}(B,s,{EOO}_{M}) \; \wedge\\ \mathtt{aknows}(B,s,K)\\ \end{array} \right) \right) \]\end{small}% \begin{small}\[ \Box \left( \left( \begin{array}{lll} \mathtt{aknows}(B,s,{\{M\}}_{K}) \; \wedge\\ \mathtt{aknows}(B,s,{EOO}_{M}) \; \wedge\\ \mathtt{aknows}(B,s,K)\\ \end{array} \right) \Rightarrow \left( \begin{array}{ll} \mathtt{aknows}(A,s,{\{M\}}_{K}) \; \wedge\\ \mathtt{aknows}(A,s,{EOR}_{M}) \; \wedge\\ (\mathtt{aknows}(A,s,{EOR}_{K}) \vee \mathtt{aknows}(A,s,{E}_{TTP}))\\ \end{array} \right) \right) \]\end{small}% Several scenarios have been run, and two of them have raised an attack, showing that the CCD protocol does not provide the fairness property for which it has been designed. The first attack was found in a scenario where only one session of the protocol is run, between honest agents.
The problem arises when some messages of the main protocol are delayed, either by slow network traffic or by the action of an intruder. The consequence of this delay is that $A$ will invoke the \textsl{abort} sub-protocol and $B$ will invoke the \textsl{resolve} sub-protocol. If the resolve request reaches the TTP before the abort request, $B$ will get all his necessary evidences from the TTP, while $A$ is unable to get all his evidences, even with the help of the TTP.\\ The originality of this attack is that, at the end: \begin{itemize} \item $A$ will guess (according to the answer received to his abort request) that the protocol has been resolved by $B$, so he will assume that $B$ knows $M$ and can build the proof that $A$ has sent it; but $A$ cannot prove this; \item $B$ has resolved the protocol and has received from the TTP the information for getting $M$ and building the proof that $A$ has sent $M$; but he does not know that $A$ does not have his proof; \item the TTP will think that $B$ has asked for the protocol to be resolved, followed by $A$; so for him, both $A$ and $B$ can build their evidences. \end{itemize} So, this trace shows that the CCD protocol is not fair, even if both agents $A$ and $B$ are honest. The attack is due to a malicious intruder or a network problem, and the TTP is of no help for detecting it. The second attack is a variant: it happens when agent $A$ plays the protocol with a dishonest agent $B$ (named $i$, for \textsl{intruder}). As soon as $i$ has received the first message from $A$, he builds $EOR_{M}$ and sends it to the TTP as a resolve request. When $A$ decides to abort the protocol, it is too late: the protocol has already been resolved, the intruder can get $M$ and build the proof that $A$ has sent $M$, and $A$ cannot build the evidence of receipt. We have corrected the protocol, and the numerous scenarios tried on the new version have not raised any attack. This experiment on the CCD protocol is detailed in~\cite{SantiagoV-WISTP07}. \section{Conclusion} Non-repudiation protocols have an important role in many areas where secured transactions with proofs of participation are necessary. The evidences of origin and receipt of a message are two examples of elements that the parties should have at the end of the communication. We have given two very different examples of such protocols. The FairZG protocol is an intensively studied protocol in which the role of the trusted third party is essential. The CCD protocol is a more recent non-repudiation protocol that avoids the use of session labels and distinguishes itself by the use of an optimistic approach, the trusted third party being used only in case of a problem in the execution of the main protocol. The fairness of a non-repudiation protocol is a property that is difficult to analyze, and there are very few tools that can handle its automatic analysis. The contribution of this work is twofold. First, we have illustrated with the FairZG protocol how difficult it is to consider full non-repudiation properties using only a combination of authentications. Second, we have defined a new method that permits non-repudiation properties and fairness to be handled very easily in the same framework. This method is based on the handling of agents' knowledge and can be used to automatically analyze non-repudiation protocols as well as contract signing protocols~\cite{shmatikov00analysis}.
We have implemented it in the AVISPA Tool and have successfully applied it to the CCD protocol, proving that it is not fair. We have also tested other specifications of the CCD protocol, for example with secure communication channels between the agents and the TTP, and with the original definition of the \textsl{abort} sub-protocol: no attack was found; but using such channels is not considered acceptable, because it requires too much work from the TTP. Our method, based on the writing of simple state invariants, is easy to use and can be implemented in any tool handling agents' (or the intruder's) knowledge. It should be very helpful for setting up abstractions for handling unbounded scenarios, and it should be very efficient for bounded verification, as has been the case in our implementation. We hope that this work will open a highway to the specification of many other properties, without further changes to the specification languages and the analysis engines. \bibliographystyle{abbrv}
\section{Introduction} Most pulsar surveys have been carried out with single dish telescopes, where there is a trade-off between the collecting area and the beam-width, and consequently the survey rate. In a multi-element telescope such as the Giant Metrewave Radio Telescope (GMRT), a large number of smaller antennas can be combined to provide high sensitivity and yet retain a relatively large beam-width. In this paper, we report on the discovery of three new pulsars in the first blind survey of the north Galactic plane (45$^\circ$ < l < 135$^\circ$ ; |b| < 1$^\circ$) with the GMRT at an intermediate frequency of 610 MHz, which represents the best trade-off between the increased flux density of pulsars at low frequencies, interstellar scattering and dispersion, and beam-width. The GMRT's multi-element nature was also exploited to determine the positions of the pulsars to an accuracy of 5 arcminutes, and this technique is also described. \section{Observations} The survey consists of 300 fields. The observations were conducted using typically 20 to 25 45-m GMRT antennas combined in an incoherent array mode at a frequency of 610 MHz. Each 43' by 43' field in this mode was observed for 35 minutes with a bandwidth of 16 MHz and 256 spectral channels across the band. The data in each channel were acquired with 16-bit precision every 256 $\mu$s after summing the two polarizations and recorded to SDLT tapes for off-line processing. The 8$\sigma$ threshold for detecting a pulsed signal with a duty cycle of 10 percent for the configuration used is 0.5 mJy, which is comparable to the sensitivity of the Parkes multibeam survey (Manchester et al. 2001). \begin{figure} \resizebox{0.7\textwidth}{!} {\includegraphics{newpsr.eps}} \caption{Discovery plots for the three new pulsars, PSRs J0026+6320, J2208+5500 and J2218+5729 (left to right). The top plot in each panel shows the root mean square power as a function of time. The second and third plots in each panel show intensity as a function of subband and pulse phase, and of sub-integration and pulse phase, respectively. The bottom two plots show the average profile at 610 MHz observed with the GMRT and at 1420 MHz observed with the Lovell telescope, respectively.} \end{figure} \section{Candidate Localization} The pulsar candidates were confirmed in follow-up observations with the GMRT using the same observing configuration as used for the survey. The pulsar position was then localized by exploiting the multi-element nature of the GMRT. The range of baselines available between the GMRT antennas allows beams with a range of beam-widths to be formed when appropriate antennas are combined as a phased array, forming an equivalent single dish with a sensitivity similar to that of a 20-antenna incoherent array. Three combinations of the nearest 3, 5 and 6 antennas, respectively, were used in this mode to observe the candidate field and four fields offset in Right Ascension and Declination by half of the Full Width at Half Maximum (FWHM) of the respective array. The respective FWHM in the above configurations were 20, 10 and 5 arcminutes. The detected signal-to-noise ratio of each new pulsar in these gridding observations was used to refine the position successively to 5 arcminutes accuracy. The refined position was used for timing observations at 1420 MHz with the Lovell Telescope at Jodrell Bank Observatory. This allowed follow-up confirmation and timing studies with high sensitivity with the Lovell Telescope, and a rapid determination of the pulsar parameters.
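As an illustration of the principle behind the gridding localization (and not of the actual GMRT reduction pipeline), the following sketch estimates the source offset along one axis from the signal-to-noise ratios of three pointings, assuming a Gaussian beam; the numerical values are hypothetical.
\begin{verbatim}
# Principle of the gridding localization (illustrative only): for a
# Gaussian beam, ln(S/N) is quadratic in the angular offset, so the
# S/N measured at three pointings along one axis fixes the parabola
# and hence the offset of the pulsar from the central pointing.
import numpy as np

def offset_1d(snr_minus, snr_center, snr_plus, step):
    """Offset estimate from pointings at -step, 0, +step (same units
    as `step`, e.g. arcminutes)."""
    y = np.log([snr_minus, snr_center, snr_plus])
    # Vertex of the parabola through (-step, y[0]), (0, y[1]), (step, y[2]).
    return 0.5 * step * (y[0] - y[2]) / (y[0] - 2.0 * y[1] + y[2])

# Hypothetical S/N values for the 5-arcmin array (offsets of FWHM/2 = 2.5'):
print(offset_1d(6.0, 10.0, 8.0, step=2.5))   # ~0.49' toward the "+" pointing
\end{verbatim}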
\begin{table} \resizebox{0.8\textwidth}{!} {\begin{tabular}{l|c|c|c|c|c|c|c} \hline NAME & l$_{field}$ & b$_{field}$ & l & b & DM & $P$ & $\dot{P}$ \\ & (deg) & (deg) &(deg)&(deg)&(pc cm$^{-3}$)&(s)&(10$^{-15}$) \\ \hline J0026+6320&120.15&0.78&120.18&0.59&230.31&0.318357728337(2)&0.1500(2) \\ J2208+5500&101.25&-0.78&100.94&-0.75&101.03&0.93316093521(1)&6.988(5) \\ J2218+5729&103.95&0.78&103.52&0.49&162.75&1.056844& \\ \hline \end{tabular}} \caption{Timing parameters of the new pulsars} \end{table} \section{Analysis} The data were analyzed using the pulsar searching package SIGPROC (\url{http://sigproc.sourceforge.net}). The data were dedispersed using 145 trial dispersion measures (DMs) ranging from 0 to 2000 pc cm$^{-3}$, with a spacing determined by the dispersion smearing across each individual frequency channel. Periodicities were searched for using both a Fast Fourier Transform and a Fast Folding Algorithm. Known interference frequencies were eliminated and new pulsar candidates were identified through inspection of diagnostic plots. A single-pulse search was also performed; owing to the large amount of impulsive interference in our data, the results of this search are still under analysis. \section{Results} Out of the 300 fields observed so far, we have processed 214 fields, covering about 100 square degrees of sky, and redetected 11 known pulsars. Three new pulsars, PSRs J0026+6320, J2208+5500 and J2218+5729, have been discovered so far. The discovery plots of the new pulsars, along with their average profiles observed with the GMRT and the Lovell Telescope, are shown in Figure 1. The observed parameters of the new pulsars are given in Table 1. The entire data set is being reprocessed with better radio frequency interference excision to look for sources similar to the recently reported Rotating Radio Transients (McLaughlin et al. 2006), for which the parameters of this survey are particularly suitable. We are also extending the survey area to (45$^\circ$ < l < 165$^\circ$ ; |b| < 3$^\circ$) and plan to complete these observations in the coming months. \bibliographystyle{aipproc}
\section{Introduction} Let $I$ be a zero-dimensional ideal in a polynomial ring $P=K[x_1,\dots,x_n]$ over a field~$K$, and let~$\mathcal{O} =\{t_1,\dots,t_\mu\}$ be an order ideal, i.e.\ a finite set of power products in~$P$ which is closed under taking divisors. An {\it $\mathcal{O}$-border basis} of~$I$ is a set of polynomials $G=\{g_1,\dots,g_\nu\}$ of the form $g_j=b_j -\sum_{i=1}^\mu c_{ji}t_i$, where $\{b_1,\dots,b_\nu\}$ is the border $\partial\mathcal{O}= (x_1\mathcal{O}\cup \cdots\cup x_n\mathcal{O})\setminus \mathcal{O}$ of~$\mathcal{O}$ and $c_{ji}\in K$, such that~$I$ is generated by~$G$ and $\mathcal{O}$ is a $K$-vector space basis of~$P/I$. In recent years border bases have received considerable attention (see for instance~\cite{KK1}, \cite{KK2}, \cite{KKR}, \cite{M}, and~\cite{S}). This is due to several reasons. \begin{items} \item[(1)] Border bases generalize Gr\"obner bases: if one takes for $\mathcal{O}$ the complement of a leading term ideal of~$I$ with respect to some term ordering~$\sigma$, the corresponding border basis contains the reduced $\sigma$-Gr\"obner basis of~$I$. \item[(2)] Border bases are more suitable for dealing with computations arising from real world problems. They are more stable with respect to small variations in the coefficients of the polynomials generating~$I$ and permit symbolic computations with polynomial systems having approximate coefficients (see for instance~\cite{AFT}, \cite{HKPP}, and~\cite{S}). \item[(3)] Border bases are in general much more numerous than reduced Gr\"obner bases. For instance, if the given ideal~$I$ is invariant under the action of a group of symmetries, it is sometimes possible to find a border basis having these symmetries, but not a Gr\"obner basis. \end{items} The starting point for this paper is our attempt to generalize one of the fundamental results of Gr\"obner basis theory to the border basis setting, namely the fact that there exists a flat deformation from~$I$ to its leading term ideal~$\mathop{\rm LT}\nolimits_\sigma(I)$. More precisely, we are looking at the following result. (Here and in the following we use the notation introduced in~\cite{KR1} and~\cite{KR2}.) Given a term ordering~$\sigma$, the ring~$P$ can be graded by a row of positive integers $W=(w_1\;\cdots\;w_n)$, i.e.\ by letting $\deg_W(x_i)=w_i$, such that the leading term ideal $\mathop{\rm LT}\nolimits_\sigma(I)$ equals the degree form ideal $\mathop{\rm DF}\nolimits_W(I)$. Using a homogenizing indeterminate $x_0$ and the grading of $\overline{P}=K[x_0,\dots,x_n]$ given by $\overline{W}= (1\;w_1\;\cdots\;w_n)$, the canonical $K$-algebra homomorphism $\Phi: K[x_0] \;\To\; \overline{P}/I^{\rm hom}$ satisfies \begin{items} \item[(1)] The ring $\overline{P}/I^{\rm hom}$ is a free $K[x_0]$-module. \item[(2)] There are isomorphisms of $K$-algebras $\overline{P}/(I^{\rm hom} +(x_0)) \cong P/\mathop{\rm DF}\nolimits_W(I)$ and $\overline{P}/(I^{\rm hom} + (x_0-c)) \cong P/I$ for every $c\in K\setminus \{0\}$. \end{items} We express this by saying that there is a flat deformation from~$I$ to~$\mathop{\rm DF}\nolimits_W(I)$, and thus to~$\mathop{\rm LT}\nolimits_\sigma(I)$. In geometric jargon, we can say that, in the Hilbert scheme parametrizing affine schemes of length $\dim_K(P/I)$, the affine scheme defined by~$I$ is connected to the scheme defined by~$\mathop{\rm DF}\nolimits_W(I)$ via a rational curve parametrized by~$x_0$. 
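As a small computational illustration of this grading machinery, the following sketch (in Python, with polynomials encoded as dictionaries mapping exponent tuples to coefficients) computes the degree form $\mathop{\rm DF}\nolimits_W(f)$ of a polynomial~$f$ with respect to a weight vector~$W$; the encoding is an illustrative choice, not part of the theory.
\begin{verbatim}
# Degree form DF_W(f) of a polynomial encoded as {exponent tuple: coeff}.

def deg_W(term, W):
    # W-degree of a power product given by its exponent tuple.
    return sum(w * a for w, a in zip(W, term))

def degree_form(f, W):
    # Keep exactly the terms of maximal W-degree.
    d = max(deg_W(t, W) for t in f)
    return {t: c for t, c in f.items() if deg_W(t, W) == d}

# f = -2x^2 + xy - y^2 - 1 with the standard grading W = (1, 1):
f = {(2, 0): -2, (1, 1): 1, (0, 2): -1, (0, 0): -1}
print(degree_form(f, (1, 1)))   # the terms -2x^2 + xy - y^2
\end{verbatim}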
Thus the starting point for this paper is the question whether there exists a flat deformation from a zero-dimensional ideal~$I$ given by an $\mathcal{O}$-border basis $G=\{g_1,\dots,g_\nu\}$ as above to its border term ideal $\mathop{\rm BT}\nolimits_\mathcal{O}=(b_1,\dots,b_\nu)$. The direct approach taken in Section~\ref{Deformation to the Border Form Ideal} is to try to imitate Gr\"obner basis theory and to use the flat deformation to the degree form ideal we just recalled. Unfortunately, this approach does not succeed in all cases, but only under the additional assumption that~$\mathcal{O}$ has a $\mathop{\rm maxdeg}\nolimits_W$ border, i.e.\ that no term in~$\mathcal{O}$ has a larger degree than a term in the border~$\partial\mathcal{O}$. Therefore it is necessary to dig deeper into the problem and find other ways of constructing the desired flat deformations. In Section~\ref{The Border Basis Scheme} we take a step back and view the task from a more global perspective. All zero-dimensional ideals having an $\mathcal{O}$-border basis can be parametrized by a scheme~$\mathbb{B}_{\mathcal{O}}$ which we call the {\it $\mathcal{O}$-border basis scheme}. Using the condition that the generic multiplication matrices have to commute, we give explicit equations defining~$\mathbb{B}_{\mathcal{O}}$ in a suitable affine space (see Definition~\ref{defBBS}). A moduli space such as the border basis scheme usually comes together with a universal family: this is a morphism from~$\mathbb{B}_{\mathcal{O}}$ to another scheme whose fibers are precisely the schemes defined by the ideals having an $\mathcal{O}$-border basis. The fundamental result about this {\it universal border basis family} is that it is flat. In fact, in Theorem~\ref{universal} we give an elementary, explicit proof that $\mathcal{O}$ is a basis for the entire family, viewed as a module over the coordinate ring of the border basis scheme. Hence the construction of the desired flat deformation of an ideal to its border term ideal is equivalent to finding suitable rational curves on the border basis scheme (see Corollary~\ref{ratcurve}). To examine the border basis scheme further, we have a more detailed look at the system of generators of its vanishing ideal in Section~\ref{Defining Equations for the Border Basis Scheme}. The technique of lifting neighbor syzygies (introduced in~\cite{KK1} and~\cite{S}, and independently in~\cite{Hu2}) provides us with a different way of constructing a system of generators of~$I(\mathbb{B}_{\mathcal{O}})$ (see Proposition~\ref{altgenBBS}). Using suitable examples, including the well-known Example~\ref{exa1xyz} of a singularity on a Hilbert scheme, we disprove several claims in~\cite{S} with respect to the possibility of removing redundant generators from this system. On~the positive side, in Proposition~\ref{removing} we provide a criterion for eliminating some unnecessary generators. The final Section~\ref{The Homogeneous Border Basis Scheme} introduces the homogeneous border basis scheme $\mathbb{B}_{\mathcal{O}}^{\rm hom}$. It parametrizes all homogeneous zero-dimensional ideals having an $\mathcal{O}$-border basis and is obtained from the border basis scheme by intersecting it with a suitable linear space. Our main result about $\mathbb{B}_{\mathcal{O}}^{\rm hom}$ is that it is an affine space (and not only isomorphic to an affine space) if~$\mathcal{O}$ has a $\mathop{\rm maxdeg}\nolimits_W$ border (see Theorem~\ref{homcommute}).
This theorem is a nice tool which can be employed to produce good deformations (see Example~\ref{exdefcontinued}) and to recreate the construction of reducible Hilbert schemes (see Example~\ref{exIarrobino}). We close this introduction by pointing out that all computations were done using the computer algebra system~\cocoa (see~\cite{CoCoA}) and that even great artists can be too pessimistic at times. \begin{flushright} \small\it Deformations simply do not exist.\\ \rm (Pablo Picasso) \end{flushright} \bigskip \section{Deformation to the Border Form Ideal} \label{Deformation to the Border Form Ideal} One of the fundamental results of Gr\"obner basis theory is that there exists a flat deformation of a polynomial ideal to its leading term ideal. This deformation is achieved by taking a Gr\"obner basis of the ideal, viewing it as a Macaulay basis with respect to a suitably chosen $\mathbb N$-grading, homogenizing it, and letting the homogenizing indeterminate tend to zero. An analogous fact for border bases of zero-dimensional polynomial ideals is not known in general. In this section we shall prove some partial results in this direction. In the following we let $K$ be a field, $P=K[x_1,\dots,x_n]$ a polynomial ring, and $I\subset P$ a zero-dimensional ideal. Recall that an {\em order ideal}~$\mathcal{O}$ is a finite set of terms in $\mathbb T^n=\{ x_1^{\alpha_1}\cdots x_n^{\alpha_n} \mid \alpha_i\ge 0\}$ such that all divisors of a term in~$\mathcal{O}$ are also contained in~$\mathcal{O}$. The set $\partial\mathcal{O}=(x_1\mathcal{O} \cup\cdots\cup x_n\mathcal{O}) \setminus \mathcal{O}$ is called the {\em border} of~$\mathcal{O}$. By repeating this construction, we define the {\it higher borders} $\partial^i\mathcal{O}$ for $i\ge 1$ and we let $\partial^0\mathcal{O}=\mathcal{O}$. The number ${\mathop{\rm ind}\nolimits}_{\mathcal{O}}(t)=\min\{i\ge 0 \mid t\in \partial^i \mathcal{O}\}$ is called the {\em $\mathcal{O}$-index} of a term $t\in\mathbb{T}^n$. \begin{definition} Let $\mathcal{O}=\{t_1,\dots,t_\mu\}$ be an order ideal and $\partial\mathcal{O} =\{b_1,\dots,b_\nu\}$ its border. \begin{items} \item A set of polynomials $G=\{g_1,\dots,g_\nu\}\subseteq I$ is called an {\em $\mathcal{O}$-border prebasis} of~$I$ if it is of the form $g_j=b_j-\sum_{i=1}^\mu a_{ij}t_i$ with $a_{ij}\in K$. \item An $\mathcal{O}$-border prebasis of~$I$ is called an {\em $\mathcal{O}$-border basis} of~$I$ if $P=I\oplus \langle \mathcal{O}\rangle_K$. \item For a polynomial $f=c_1 u_1+\cdots+c_s u_s\ne 0$ with $c_i\in K\setminus \{0\}$ and $u_i\in\mathbb T^n$, the polynomial $\mathop{\rm BF}\nolimits_{\mathcal{O}}(f)=\sum_{\{i\mid {\mathop{\rm ind}\nolimits}_{\mathcal{O}}(u_i)\, \hbox{\scriptsize max.}\}}c_i u_i$ is called the {\em border form} of~$f$. \item The ideal $\mathop{\rm BF}\nolimits_{\mathcal{O}}(I)= ( \mathop{\rm BF}\nolimits_{\mathcal{O}}(f)\mid f \in I\setminus \{0\} )$ is called the {\em border form ideal} of~$I$. \item The monomial ideal generated by~$\partial\mathcal{O}$ is called the {\em border term ideal}\/ of~$\mathcal{O}$ and is denoted by~$\mathop{\rm BT}\nolimits_{\mathcal{O}}$. \end{items} \end{definition} Notice that if~$I$ has an $\mathcal{O}$-border basis, its border form ideal is $\mathop{\rm BF}\nolimits_{\mathcal{O}}(I)=\mathop{\rm BT}\nolimits_{\mathcal{O}}$. Thus our goal is to use a border basis of~$I$ to deform the ideal to its border form ideal.
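These notions are easy to experiment with on a computer. The following small sketch (in Python, with power products encoded as exponent tuples) computes the border of an order ideal and the $\mathcal{O}$-index of a term by accumulating higher borders; it is a direct transcription of the definition, not an optimized implementation, and the encoding is only one of many possibilities.
\begin{verbatim}
# Border and O-index of terms, with power products encoded as exponent
# tuples in n indeterminates.

def border(terms, n):
    terms = set(terms)
    shifted = {tuple(t[i] + (i == k) for i in range(n))
               for t in terms for k in range(n)}
    return shifted - terms

def index_O(term, O, n):
    # Accumulate O, its border, the next border, ... until `term` appears.
    layer, seen, i = set(O), set(O), 0
    while term not in layer:
        layer = border(seen, n)
        seen |= layer
        i += 1
    return i

O = {(0, 0), (1, 0), (0, 1)}      # the order ideal {1, x, y}
print(sorted(border(O, 2)))       # [(0, 2), (1, 1), (2, 0)]: {y^2, xy, x^2}
print(index_O((2, 1), O, 2))      # x^2*y has O-index 2
\end{verbatim}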
If the order ideal is of the form $\mathcal{O}_\sigma(I)=\mathbb T^n\setminus \mathop{\rm LT}\nolimits_\sigma(I)$ for some term ordering~$\sigma$, the Gr\"obner deformation can be used as follows. \begin{proposition}\label{TOdeform} Let $\sigma$ be a term ordering, let $G=\{g_1,\dots,g_\nu\}$ be the $\mathcal{O}_\sigma(I)$-border basis of~$I$, and let $b_i$ be the border term in the support of~$g_i$ for $i=1,\dots,\nu$. \begin{items} \item There exist weights $W=(w_1,\dots,w_n)\in (\mathbb N_+)^n$ such that $b_j=\mathop{\rm DF}\nolimits_W(g_j)$ and~$G$ is a Macaulay basis of~$I$ with respect to the grading given by~$W$. \item Let $\overline{P}=K[x_0,\dots,x_n]$ be graded by $\overline{W}=(1,w_1,\dots,w_n)$. Then the ring $\overline{P}/I^{\rm hom}= \overline{P}/ ( g_1^{\rm hom},\dots, g_\nu^{\rm hom} )$ is a graded free $K[x_0]$-module. \end{items} \noindent In particular, we have a {\em flat family} $K[x_0] \To \overline{P}/I^{\rm hom}$ whose {\em general fiber} is isomorphic to $P/I\cong \overline{P}/(I^{\rm hom}+( x_0-1))$, where $I= ( g_1,\dots,g_\nu)$, and whose {\em special fiber} is isomorphic to~$P/\mathop{\rm BT}\nolimits_{\mathcal{O}_\sigma(I)} \cong \overline{P}/(I^{\rm hom}+( x_0))$. \end{proposition} \begin{proof} The first claim in~a) follows from~\cite{E}, Prop.\ 15.16. The second claim in~a) is then a consequence of~\cite{KR2}, Props.\ 6.4.18 and~4.2.15. The remaining claims follow from~a) and~\cite{KR2}, Thm.\ 4.3.22 and Prop.\ 4.3.23. \end{proof} For more general order ideals~$\mathcal{O}$, i.e.\ for order ideals which are not necessarily of the form $\mathcal{O}_\sigma(I)$, one strategy is to deform a given $\mathcal{O}$-border basis of~$I$ first to a border basis of the degree form ideal~$\mathop{\rm DF}\nolimits_W(I)$ of~$I$ with respect to a suitably chosen grading. A border basis of~$\mathop{\rm DF}\nolimits_W(I)$ is always homogeneous, as the following lemma shows. \begin{lemma} Let $P$ be graded by a matrix $W\in\mathop{\rm Mat}\nolimits_{m,n}(\mathbb{Z})$, let~$\mathcal{O}$ be an order ideal, and let $I\subset P$ be a homogeneous ideal which has an $\mathcal{O}$-border basis. Then this $\mathcal{O}$-border basis of~$I$ consists of homogeneous polynomials. \end{lemma} \begin{proof} Let $\mathcal{O}=\{t_1,\dots,t_\mu\}$, let $b_j\in \partial\mathcal{O}$, and let $g_j=b_j -\sum_{i=1}^\mu c_{ij}t_i$ be the corresponding border basis element, where $c_{ij}\in K$. If we restrict the sum to those indices~$i$ for which $\deg_W(t_i)=\deg_W(b_j)$, we obtain a homogeneous element of~$I$ of the form $\tilde g_j=b_j -\sum_k c_{kj}t_k$. Now the uniqueness of the $\mathcal{O}$-border basis of~$I$ (cf.~\cite{KR2}, 6.4.17) implies $g_j=\tilde g_j$. \end{proof} As for our idea to deform a border basis of~$I$ to a homogeneous border basis of~$\mathop{\rm DF}\nolimits_W(I)$, we have the following result. \begin{theorem}{\bf (Deformation to the Degree Form Ideal)}\label{DFdeform}\\ Let $W=(w_1,\dots,w_n)\in\mathop{\rm Mat}\nolimits_{1,n}(\mathbb N_+)$ be a row of positive integers, let~$P$ be graded by~$W$, and let $I\subset P$ be a zero-dimensional ideal. Then the following conditions are equivalent. \begin{items} \item The ideal~$I$ has an $\mathcal{O}$-border basis, say $G=\{g_1,\dots,g_\nu\}$, and we have $b_j \in \mathop{\rm Supp}\nolimits(\mathop{\rm DF}\nolimits_W(g_j))$ for $j=1,\dots,\nu$. \item The degree form ideal $\mathop{\rm DF}\nolimits_W(I)$ has an $\mathcal{O}$-border basis.
\end{items} If these conditions are satisfied, the $\mathcal{O}$-border basis of $\mathop{\rm DF}\nolimits_W(I)$ is $\mathop{\rm DF}\nolimits_W(G)=\{\mathop{\rm DF}\nolimits_W(g_1),\dots,\mathop{\rm DF}\nolimits_W(g_\nu)\}$ and there is a flat family $K[x_0] \To \overline{P}/I^{\rm hom}$ whose general fiber is isomorphic to~$P/I$, where $I= (g_1,\dots,g_\nu)$, and whose special fiber is isomorphic to $P/\mathop{\rm DF}\nolimits_W(I)$, where $\mathop{\rm DF}\nolimits_W(I)= (\mathop{\rm DF}\nolimits_W(g_1),\dots,\mathop{\rm DF}\nolimits_W(g_\nu))$. \end{theorem} \begin{proof} First we show that a) implies b). Since $G$ is an $\mathcal{O}$-border basis of~$I$ and since $b_j \in \mathop{\rm Supp}\nolimits(\mathop{\rm DF}\nolimits_W(g_j))$ for $j=1,\dots,\nu$, the set $\mathop{\rm DF}\nolimits_W(G)= \{\mathop{\rm DF}\nolimits_W(g_1),\dots,\mathop{\rm DF}\nolimits_W(g_\nu)\}$ is an $\mathcal{O}$-border prebasis of the ideal $J=(\mathop{\rm DF}\nolimits_W(g_1),\dots,\mathop{\rm DF}\nolimits_W(g_\nu))$. By the Border Division Algorithm (see \cite{KR2}, Prop.~6.4.11), the residue classes of the elements of~$\mathcal{O}$ generate the $K$-vector space $P/J$. Together with $J\subseteq \mathop{\rm DF}\nolimits_W(I)$, this shows $$ \#\mathcal{O}=\dim_K(P/I)= \dim_K(P/\mathop{\rm DF}\nolimits_W(I)) \le \dim_K(P/J) \le \#\mathcal{O}. $$ Therefore we get $J=\mathop{\rm DF}\nolimits_W(I)$ and the residue classes of the elements of~$\mathcal{O}$ are a $K$-basis of~$P/\mathop{\rm DF}\nolimits_W(I)$. From this the claim follows immediately. Now we prove that b) implies a). Let $\sigma$ be a term ordering on~$\mathbb T^n$ which is compatible with the grading defined by~$W$, and let $H=\{h_1,\dots,h_\nu\}$ be the $\mathcal{O}_\sigma(I)$-border basis of~$I$. For the purposes of this proof, we may consider~$\mathcal{O}$ and~$\mathcal{O}_\sigma(I)$ as deg-ordered tuples (see~\cite{KR2}, 4.5.4). The fact that~$H$ is a $\sigma$-Gr\"obner basis of~$I$ implies by~\cite{KR2}, 4.2.15 that~$H$ is a Macaulay basis of~$I$ with respect to the grading given by~$W$. Then~\cite{KR2}, 4.3.19 shows that $I^{\rm hom}$ is generated by~$\{h_1^{\rm hom},\dots, h_\nu^{\rm hom}\}$, and by~\cite{KR2}, 4.3.22 the ring $\overline{P}/I^{\rm hom}$ is a graded free $K[x_0]$-module, where $K[x_0]$ is graded by $\deg(x_0)=1$ and $\overline{P}=K[x_0,\dots,x_n]$ is graded by $\overline{W}=(1,w_1,\dots,w_n)$. More precisely, the proof of~\cite{KR2}, 4.3.22 shows that the residue classes $\overline{\mathcal{O}_\sigma(I)}$ form a homogeneous $K[x_0]$-basis of this graded free module. Since the residue classes $\overline{\mathcal{O}}$ are homogeneous elements in $\overline{P}/I^{\rm hom}$, we can write $\overline{\mathcal{O}} = \mathcal{A} \cdot \overline{\mathcal{O}_\sigma(I)}$ with a homogeneous matrix $\mathcal{A}\in \mathop{\rm Mat}\nolimits_{\mu}(K[x_0])$ (see~\cite{KR2}, 4.7.1 and~4.7.3). By the hypothesis, $\mathop{\rm DF}\nolimits_W(I)$ has an $\mathcal{O}$-border basis. Thus the residue classes of the elements of~$\mathcal{O}$ are a homogeneous $K$-basis of $P/\mathop{\rm DF}\nolimits_W(I)$. Since also the residue classes of the elements of $\mathcal{O}_\sigma(I)$ are a homogeneous $K$-basis of this ring, the degree tuples of~$\mathcal{O}$ and of~$\mathcal{O}_\sigma(I)$ are identical.
Therefore the matrix~$\mathcal{A}$ is a block matrix of the form $$ \mathcal{A}=\begin{pmatrix} \mathcal{A}_{11} & \mathcal{A}_{12} & \cdots & \mathcal{A}_{1q}\\ 0 & \ddots & \ddots & \vdots \\ \vdots & \ddots & \ddots & \mathcal{A}_{q{-}1\,q} \\ 0 & \cdots & 0 & \mathcal{A}_{qq} \end{pmatrix} $$ with square matrices $\mathcal{A}_{ii}$ having constant entries. Hence we have $\det(\mathcal{A})\in K$, and the fact that the transformation matrix $\mathcal{A}\vert_{x_0\mapsto 0}$ between the two homogeneous bases of $P/\mathop{\rm DF}\nolimits_W(I)$ is invertible implies $\det(\mathcal{A})\ne 0$. Altogether, it follows that~$\overline{\mathcal{O}}$ is a homogeneous $K[x_0]$-basis of~$\overline{P}/I^{\rm hom}$, too. In particular, the residue classes of~$\mathcal{O}$ form a $K$-basis of $P/I\cong \overline{P}/(I^{\rm hom} +(x_0-1))$, i.e.\ the ideal~$I$ has an $\mathcal{O}$-border basis. For every $j\in\{1,\dots,\nu\}$, we have a representation $b_j=\sum_{i=1}^\mu f_{ij} t_i + h_j$ with homogeneous polynomials $f_{ij}\in K[x_0]$ of degree $\deg_W(b_j)-\deg_W(t_i)$ and with a homogeneous polynomial $h_j\in I^{\rm hom}$ of degree $\deg_W(b_j)$. Setting $x_0\mapsto 1$ in this representation, we find $g_j=b_j -\sum_{i=1}^\mu f_{ij}(1)\, t_i \in I$. It follows that these polynomials form the $\mathcal{O}$-border basis of~$I$. By construction, we have $b_j\in\mathop{\rm Supp}\nolimits(\mathop{\rm DF}\nolimits_W(g_j))$. The first additional claim is a consequence of the observation that $\mathop{\rm DF}\nolimits_W(G)$ is an $\mathcal{O}$-border prebasis of~$\mathop{\rm DF}\nolimits_W(I)$ and of~\cite{KR2}, Prop.~6.4.17. To construct the desired flat family, we use the fact that~$G$ is a Macaulay basis of~$I$ by what we have just shown and conclude from~\cite{KR2}, 4.3.19 that $I^{\rm hom}=(g_1^{\rm hom},\dots, g_\nu^{\rm hom})$. From this the claim follows. \end{proof} Let us look at an example for this theorem. \begin{example} Consider the ideal $I=(-2x^2+xy-y^2-1,\, 8y^3+10x+9y)$ in the polynomial ring $P=\mathbb{Q}[x,y]$. The degree form ideal of~$I$ with respect to the standard grading, i.e.\ the grading defined by $W=(1\;1)$, is $\mathop{\rm DF}\nolimits_W(I)=(-2x^2 +xy-y^2,\, y^3)$. We want to use the order ideal $\mathcal{O}=\{1,x,x^2,x^3,y,y^2\}$ whose border is given by $\partial\mathcal{O} = \{xy, y^3,xy^2,x^2y,x^3y,x^4\}$. \medskip \makebox[11 true cm]{ \beginpicture \setcoordinatesystem units <0.4cm,0.4cm> \setplotarea x from 0 to 5, y from 0 to 4.1 \axis left / \axis bottom / \arrow <2mm> [.2,.67] from 4.5 0 to 5 0 \arrow <2mm> [.2,.67] from 0 3.6 to 0 4.1 \put {$\scriptstyle x^i$} [lt] <0.5mm,0.8mm> at 5.1 0 \put {$\scriptstyle y^j$} [rb] <1.7mm,0.7mm> at 0 4.1 \put {$\bullet$} at 0 0 \put {$\bullet$} at 1 0 \put {$\bullet$} at 0 1 \put {$\bullet$} at 2 0 \put {$\bullet$} at 0 2 \put {$\bullet$} at 3 0 \put {$\scriptstyle 1$} [lt] <-0.6mm,-1mm> at 0 0 \put {$\circ$} at 0 3 \put {$\circ$} at 1 2 \put {$\circ$} at 1 1 \put {$\circ$} at 2 1 \put {$\circ$} at 3 1 \put {$\circ$} at 4 0 \endpicture} \medskip It is easy to check that $\mathop{\rm DF}\nolimits_W(I)$ has an $\mathcal{O}$-border basis, namely $H=\{h_1,\dots,h_6\}$ with $h_1=xy-2x^2-y^2$, $h_2=y^3$, $h_3=xy^2+4x^3$, $h_4=x^2y+2x^3$, $h_5=x^3y$, and $h_6=x^4$. Therefore the theorem says that~$I$ has an $\mathcal{O}$-border basis $G=\{g_1,\dots,g_6\}$, and that $h_i=\mathop{\rm DF}\nolimits_W(g_i)$ for $i=1,\dots,6$.
Indeed, if we compute this border basis we find that it is given by $g_1=xy-2x^2-y^2-1$, $g_2=y^3+\frac{5}{4}x+\frac{9}{8}y$, $g_3=xy^2+4x^3+\frac{3}{4}x -\frac{1}{8}y$, $g_4=x^2y+2x^3 -\frac{1}{4}x -\frac{1}{8}y$, $g_5=x^3y-\frac{1}{2}x^2-\frac{1}{8}y^2-\frac{3}{32}$, and $g_6=x^4-\frac{1}{64}$. \end{example} An easy modification of this example shows that the converse implication is not true without the hypothesis $b_j \in \mathop{\rm Supp}\nolimits(\mathop{\rm DF}\nolimits_W(g_j))$, i.e.\ that an $\mathcal{O}$-border basis of~$I$ does not necessarily deform to an $\mathcal{O}$-border basis of~$\mathop{\rm DF}\nolimits_W(I)$. \begin{example}\label{noDFdeform} Consider the ideal $I=(x^2y,\, x^3 - \frac{1}{2}xy,\, xy^2,\, y^3)$ in $P=\mathbb{Q}[x,y]$. With respect to the standard grading, we have $\mathop{\rm DF}\nolimits_W(I)=( x^3,x^2y,xy^2,y^3)$. The ideal $\mathop{\rm DF}\nolimits_W(I)$ does not have an $\mathcal{O}$-border basis for $\mathcal{O}= \{1,x,x^2,x^3,y,y^2\}$. However, the ideal~$I$ has the $\mathcal{O}$-border basis $G=\{g_1,\dots,g_6\}$, where $g_1=xy-2x^3$, $g_2=y^3$, $g_3=xy^2$, $g_4=x^2y$, $g_5=x^3y$, and $g_6=x^4$. \end{example} The main reason why the last example exists is that one of the terms in~$\mathcal{O}$ has a larger degree than the term~$xy$ in the border of~$\mathcal{O}$. This suggests the following notion. \begin{definition}\label{DefMaxdeg} Let~$P$ be graded by a matrix $W\in\mathop{\rm Mat}\nolimits_{1,n}(\mathbb{N}_+)$. The order ideal~$\mathcal{O}$ is said to have a {\it $\mathop{\rm maxdeg}\nolimits_W$ border} if $\deg_W(b_j)\ge \deg_W(t_i)$ for $i=1,\dots,\mu$ and $j=1,\dots,\nu$. In other words, no term in~$\mathcal{O}$ is allowed to have a larger degree than any term in the border. \end{definition} Note that this condition is violated in Example~\ref{noDFdeform}. By choosing suitable weights, many order ideals can be seen to have a $\mathop{\rm maxdeg}\nolimits_W$ border. \begin{example} Let $a\ge 1$, and let~$\mathcal{O}=\{1,x_1,x_1^2,\dots,x_1^a\} \subset \mathbb{T}^n$. Then~$\mathcal{O}$ has a $\mathop{\rm maxdeg}\nolimits_W$ border with respect to the grading given by $W=(1\;a\;\cdots\;a)$. \end{example} One consequence of an order ideal having a $\mathop{\rm maxdeg}\nolimits_W$ border is that $b_j \in \mathop{\rm Supp}\nolimits(\mathop{\rm DF}\nolimits_W(g_j))$ for $j=1,\dots,\nu$ and every $\mathcal{O}$-border prebasis $G=\{g_1,\dots,g_\nu\}$. Thus the theorem applies in particular to order ideals having a $\mathop{\rm maxdeg}\nolimits_W$ border. Let us end this section with an example for this part of the theorem. \begin{example}\label{deftoDFex} Let $\mathcal{O}=\{1,x,x^2,y,y^2\} \subset \mathbb T^2$. Then we have $\mathcal{O}=\mathbb{T}^2_{\le 2}\setminus \{ xy\}$, i.e.\ the order ideal~$\mathcal{O}$ has a $\mathop{\rm maxdeg}\nolimits_W$ border with respect to the standard grading. Consider the ideal $I=(x^2+xy -\frac{1}{2}y^2-x-\frac{1}{2}y,\, y^3-y,\, xy^2-xy)$ which is the vanishing ideal of the point set $\mathbb X=\{(0,0),\, (0,-1),\, (1,0),\, (1,1),\, (-1,1)\}$ if ${\rm char}(K)\ne 2$. We have $\partial\mathcal{O}= \{b_1,b_2,b_3,b_4,b_5\}$ with $b_1=x^3$, $b_2=x^2y$, $b_3=xy$, $b_4=xy^2$, and $b_5=y^3$. The ideal~$I$ has an $\mathcal{O}$-border basis, namely $G=\{g_1,g_2,g_3,g_4,g_5\}$ with $g_1=x^3-x$, $g_2=x^2y-\frac{1}{2}y^2-\frac{1}{2}y$, $g_3=xy+x^2-\frac{1}{2}y^2-x-\frac{1}{2}y$, $g_4=xy^2+x^2-\frac{1}{2}y^2-x-\frac{1}{2}y$, and $g_5=y^3-y$.
The order ideal~$\mathcal{O}$ is not of the form $\mathcal{O}=\mathcal{O}_\sigma(I)$ for any term ordering~$\sigma$. Using the theorem, we deform the border basis elements in~$G$ to their degree forms. Thus the ideal $\mathop{\rm DF}\nolimits_W(I)=(x^3,\, x^2y,\, xy+x^2-\frac{1}{2}y^2,\, xy^2,\, y^3)$ is a flat deformation of~$I$ and these five polynomials are an $\mathcal{O}$-border basis of $\mathop{\rm DF}\nolimits_W(I)$. The task of deforming the homogeneous ideal $\mathop{\rm DF}\nolimits_W(I)$ further to the border term ideal $\mathop{\rm BT}\nolimits_{\mathcal{O}}=(x^3,\, x^2y,\, xy,\, xy^2,\, y^3)$ will be considered in Example~\ref{exdefcontinued}. \end{example} \section{The Border Basis Scheme} \label{The Border Basis Scheme} Let $\mathcal{O}=\{t_1,\dots,t_\mu\}$ be an order ideal in~$\mathbb T^n$, and let $\partial\mathcal{O}=\{b_1,\dots,b_\nu\}$ be its border. In this section we define a moduli space for {\it all}\/ zero-dimensional ideals having an $\mathcal{O}$-border basis, and we use rational curves on this scheme to construct flat deformations of border bases. \begin{definition}\label{defBBS} Let $\{c_{ij} \mid 1\le i\le \mu,\; 1\le j\le\nu\}$ be a set of further indeterminates. \begin{items} \item The {\em generic $\mathcal{O}$-border prebasis} is the set of polynomials $G=\{g_1,\dots,g_\nu\}$ in~$K[x_1,\dots,x_n,c_{11},\dots,c_{\mu\nu}]$ given by $$ g_j = b_j -\sum_{i=1}^\mu c_{ij}t_i $$ \item For $k=1,\dots,n$, let $\mathcal{A}_k \in\mathop{\rm Mat}\nolimits_{\mu}(K[c_{ij}])$ be the $k^{\rm th}$ formal multiplication matrix associated to~$G$ (cf.~\cite{KR2}, Def.\ 6.4.29). It is also called the $k^{\rm th}$ {\em generic multiplication matrix}\/ with respect to~$\mathcal{O}$. \item The affine scheme $\mathbb{B}_{\mathcal{O}} \subseteq \mathbb{A}^{\mu\nu}$ defined by the ideal $I(\mathbb{B}_{\mathcal{O}})$ generated by the entries of the matrices $\mathcal{A}_k \mathcal{A}_\ell -\mathcal{A}_\ell \mathcal{A}_k$ with $1\le k<\ell\le n$ is called the {\em $\mathcal{O}$-border basis scheme}. \item The coordinate ring $K[c_{11},\dots,c_{\mu\nu}]/I(\mathbb{B}_\mathcal{O})$ of the scheme $\mathbb{B}_{\mathcal{O}}$ will be denoted by~$B_{\mathcal{O}}$. \end{items} \end{definition} By~\cite{KR2}, Thm.\ 6.4.30, a point $(\alpha_{ij})\in K^{\mu\nu}$ yields a border basis $\sigma(G)$ when we apply the substitution $\sigma(c_{ij})=\alpha_{ij}$ to~$G$ if and only if $\sigma(\mathcal{A}_k)\,\sigma(\mathcal{A}_\ell)= \sigma(\mathcal{A}_\ell)\,\sigma(\mathcal{A}_k)$ for $1\le k<\ell\le n$. Therefore the $K$-rational points of~$\mathbb B_{\mathcal{O}}$ are in 1--1 correspondence with the $\mathcal{O}$-border bases of zero-dimensional ideals in~$P$, and thus with all zero-dimensional ideals having an $\mathcal{O}$-border basis. \begin{remark}{\bf (Properties of Border Basis Schemes)}% \label{BBSprops}\\ Currently, not much seems to be known about border basis schemes. For instance, it is not clear which of them are connected, reduced or irreducible. Here we collect some basic observations. \begin{items} \item By definition, the ideal $I(\mathbb{B}_{\mathcal{O}})$ is generated by polynomials of degree two. \item The scheme $\mathbb{B}_{\mathcal{O}}$ can be embedded as an open affine subscheme of the Hilbert scheme parametrizing subschemes of~$\mathbb{A}^n$ of length~$\mu$ (see~\cite{MS}, Section 18.4). \item There is an irreducible component of~$\mathbb{B}_{\mathcal{O}}$ of dimension $n\mu$ which is the closure of the set of radical ideals having an $\mathcal{O}$-border basis.
\item The dimension of~$\mathbb{B}_{\mathcal{O}}$ is claimed to be $n\mu$ in~\cite{S}, Prop.\ 8.13. Example~\ref{exIarrobino} shows that A.~Iarrobino's example of a high-dimensional component of the Hilbert scheme yields a counterexample to this claim. It follows that the border basis scheme is in general not irreducible. \item For every term ordering~$\sigma$, there is a subset of~$\mathbb{B}_{\mathcal{O}}$ which parametrizes all ideals~$I$ such that $\mathcal{O} = \mathcal{O}_\sigma(I)$. These subsets have turned out to be useful for studying the Hilbert scheme parametrizing subschemes of~$\mathbb{A}^n$ of length~$\mu$ (see for instance~\cite{CV} and~\cite{NS}). \item In the case $n=2$ more precise information is available: for instance, it is known that $\mathbb{B}_{\mathcal{O}}$ is reduced, irreducible and smooth of dimension $2\mu$ (see~\cite{Ha}, \cite{Hu1} and~\cite{MS}, Ch.\ 18). \end{items} \end{remark} As usual, a moduli space such as the border basis scheme comes together with a universal family. In the present setting it is defined as follows. \begin{definition} Let $G=\{g_1,\dots,g_\nu\} \subset K[x_1,\dots,x_n,c_{11},\dots,c_{\mu\nu}]$ with $g_j = b_j -\sum_{i=1}^\mu c_{ij}t_i$ for $j=1,\dots,\nu$ be the generic $\mathcal{O}$-border prebasis. The ring $K[x_1,\dots,x_n,c_{11},\dots,c_{\mu\nu}]/(I(\mathbb{B}_{\mathcal{O}})+(g_1, \dots, g_\nu ))$ will be denoted by~$U_{\mathcal{O}}$. Then the natural homomorphism of $K$-algebras $$ \Phi:\; B_{\mathcal{O}} \;\longrightarrow\; U_{\mathcal{O}} \cong B_{\mathcal{O}}[x_1,\dots,x_n]/ (g_1,\dots,g_\nu) $$ is called the {\em universal $\mathcal{O}$-border basis family}. \end{definition} The fibers of the universal $\mathcal{O}$-border basis family are precisely the quotient rings $P/I$ for which~$I$ is a zero-dimensional ideal which has an $\mathcal{O}$-border basis. The special fiber, i.e.\ the fiber corresponding to the maximal ideal $(c_{11},\dots,c_{\mu\nu})$, is the ring $P/\mathop{\rm BT}\nolimits_{\mathcal{O}}$. It is the only fiber in the family which is defined by a monomial ideal. Although it is known that the universal family is free with basis~$\mathcal{O}$ (see~\cite{GLS} or~\cite{Hu2}), we believe that the following proof, which generalizes the method in~\cite{M}, is very elementary and conceptually simple. \begin{theorem}{\bf (The Universal Border Basis Family)}\label{universal}\\ Let $\Phi: B_{\mathcal{O}} \longrightarrow U_{\mathcal{O}}$ be the universal $\mathcal{O}$-border basis family. Then the residue classes of the elements of~$\mathcal{O}$ are a $B_{\mathcal{O}}$-module basis of~$U_{\mathcal{O}}$. In particular, the map~$\Phi$ is a flat homomorphism. \end{theorem} \begin{proof} First we prove that the residue classes $\overline{\mathcal{O}}$ are a system of generators of the $B_{\mathcal{O}}$-module $U_{\mathcal{O}}\cong B_{\mathcal{O}}[x_1,\dots,x_n]/(G)$ where $G=\{g_1, \dots, g_\nu\}$ is the generic $\mathcal{O}$-border prebasis. In order to show that the map $\omega: B_{\mathcal{O}}^{\mu} \To U_{\mathcal{O}}$ defined by $e_i\mapsto \bar t_i$ is surjective, we may extend the base field and hence assume that~$K$ is algebraically closed.
By the local-global principle and the lemma of Nakayama, it suffices to show that the induced map $$ \bar\omega:\; \big((B_{\mathcal{O}})_{\mathfrak{m}}/\mathfrak{m}(B_{\mathcal{O}})_{\mathfrak{m}}\big)^{\mu} \To (B_{\mathcal{O}})_{\mathfrak{m}}[x_1,\dots,x_n]/((G) + \mathfrak{m}(B_{\mathcal{O}})_{\mathfrak{m}}[x_1,\dots,x_n]) $$ is surjective for every maximal ideal $\mathfrak{m}= (c_{ij}-\alpha_{ij})_{i,j}$ in~$B_{\mathcal{O}}$. In other words, we need to show that the map~$\omega$ becomes surjective if we substitute values~$\alpha_{ij}\in K$ for the indeterminates~$c_{ij}$ and if these values have the property that the maximal ideal $(c_{ij}-\alpha_{ij})_{i,j}$ contains $I(\mathbb{B}_{\mathcal{O}})$. Thus the claim follows from the fact that~$G$ becomes an $\mathcal{O}$-border basis after such a substitution, since its associated formal multiplication matrices commute. Now we show that~$\overline{\mathcal{O}}$ is $B_{\mathcal{O}}$-linearly independent. We consider the free $B_{\mathcal{O}}$-submodule $M=\bigoplus_{i=1}^{\mu} B_{\mathcal{O}}\,t_i$ of $B_{\mathcal{O}}[x_1,\dots,x_n]$ and proceed in the following manner. \begin{enumerate} \item We equip~$M$ with a suitable $B_{\mathcal{O}}[x_1,\dots,x_n]$-module structure. \item We show that this $B_{\mathcal{O}}[x_1,\dots,x_n]$-module is cyclic and construct a surjective $B_{\mathcal{O}}[x_1,\dots,x_n]$-linear map $\Theta: B_{\mathcal{O}} [x_1,\dots,x_n] \To M$ which maps~$t_i$ to~$t_i$. \item We prove that the kernel of~$\Theta$ is precisely $(G)$. \end{enumerate} Altogether, it follows that~$\Theta$ induces a map $\overline{\Theta}: B_{\mathcal{O}}[x_1,\dots,x_n]/(G) \To M$ which is an isomorphism of $B_{\mathcal{O}}$-modules and maps~$\bar t_i$ to~$t_i$. Thus $\overline{\mathcal{O}}=\{\bar t_1, \dots,\bar t_{\mu}\}$ is a $B_{\mathcal{O}}$-basis of~$U_{\mathcal{O}}$, as claimed. To do Step~1, we let $\overline{\mathcal{A}}_j$ be the image of the $j$-th generic multiplication matrix $\mathcal{A}_j$ in $\mathop{\rm Mat}\nolimits_{\mu}(B_{\mathcal{O}})$. Then we define \begin{eqnarray} a \ast \sum_{i=1}^\mu a_it_i & = & (t_1,\dots,t_{\mu}) \cdot a\,\overline{\mathcal{I}}_\mu \cdot (a_1,\dots,a_\mu)^{\rm tr} = \sum_{i=1}^\mu a\, a_i\, t_i \\ x_j\ast \sum_{i=1}^\mu a_it_i & = & (t_1,\dots,t_\mu) \cdot \overline{\mathcal{A}}_j \cdot (a_1,\dots, a_\mu)^{\rm tr} \end{eqnarray} for $a, a_1,\dots,a_\mu\in B_{\mathcal{O}}$ and $j=1,\dots,n$. Using this definition, the equalities $$ x_k x_j\ast \sum_{i=1}^\mu a_it_i = x_k\ast (x_j\ast \sum_{i=1}^\mu a_it_i )= (t_1,\dots,t_{\mu}) \cdot \overline{\mathcal{A}}_k\overline{\mathcal{A}}_j \cdot (a_1,\dots,a_\mu)^{\rm tr} \leqno{(3)} $$ and the fact that the matrices~$\overline{\mathcal{A}}_j$ commute show that this definition equips~$M$ with the structure of a $B_{\mathcal{O}}[x_1,\dots,x_n]$-module. By using induction, we get $$ f \ast \sum_{i=1}^\mu a_it_i = (t_1,\dots,t_\mu) \cdot f(\overline{\mathcal{A}}_1,\dots,\overline{\mathcal{A}}_n) \cdot (a_1,\dots, a_\mu)^{\rm tr} \leqno{(4)} $$ for every $f\in B_{\mathcal{O}}[x_1,\dots,x_n]$ and all $a_1,\dots,a_\mu \in B_{\mathcal{O}}$. For Step~2, we assume w.l.o.g.\ that $t_1=1$. Using induction on $\deg(t_i)$, we want to show that $t_i\ast t_1=t_i$ for $i=1,\dots,\mu$. The case $t_i=1$ follows from $(1)$.
For the induction step, we write $t_i=x_k t_\ell$ and using $(2)$, $(3)$ and $(4)$ we calculate $$ t_i \ast t_1= x_k\ast (t_\ell\ast t_1) =x_k \ast t_\ell = (t_1,\dots,t_\mu) \cdot \overline{\mathcal{A}}_k \cdot e_\ell^{\rm tr} = (t_1,\dots,t_\mu)\cdot e_i^{\rm tr} = t_i $$ It follows that $M$ is a cyclic $B_{\mathcal{O}}[x_1,\dots,x_n]$-module generated by~$t_1$. Thus we obtain a surjective $B_{\mathcal{O}}[x_1,\dots,x_n]$-linear map $\Theta: B_{\mathcal{O}}[x_1,\dots,x_n] \To M$ which is defined by~$f\mapsto f\ast t_1$. We have just shown that~$\Theta$ satisfies $\Theta(t_i)=t_i$ for $i=1,\dots,\mu$. Finally, to prove Step~3, we want to show that $\Theta(g_j)=0$ for $j=1,\dots,\nu$. We write $b_j=x_kt_\ell$ and calculate $\Theta(g_j) = g_j\ast t_1 = (t_1,\dots, t_\mu) \cdot g_j(\overline{\mathcal{A}}_1,\dots,\overline{\mathcal{A}}_n) \cdot e_1^{\rm tr}$. In particular, we get \begin{eqnarray*} g_j(\overline{\mathcal{A}}_1,\dots,\overline{\mathcal{A}}_n) \cdot e_1^{\rm tr} &=& b_j(\overline{\mathcal{A}}_1,\dots,\overline{\mathcal{A}}_n)\cdot e_1^{\rm tr} - {\textstyle\sum\limits_{i=1}^\mu} c_{ij}\; t_i(\overline{\mathcal{A}}_1,\dots, \overline{\mathcal{A}}_n) \cdot e_1^{\rm tr}\\ &=& \overline{\mathcal{A}}_k \cdot t_\ell(\overline{\mathcal{A}}_1,\dots, \overline{\mathcal{A}}_n) \cdot e_1^{\rm tr} - {\textstyle\sum\limits_{i=1}^\mu} c_{ij}\;e_i^{\rm tr} = \overline{\mathcal{A}}_k \cdot e_\ell^{\rm tr} - {\textstyle\sum\limits_{i=1}^\mu} c_{ij}\;e_i^{\rm tr}\\ &=& {\textstyle\sum\limits_{i=1}^\mu} c_{ij}\; e_i^{\rm tr} - {\textstyle\sum\limits_{i=1}^\mu} c_{ij}\; e_i^{\rm tr} =0 \end{eqnarray*} We have checked that $\Theta(g_j)=0$ for $j=1,\dots,\nu$. Consequently, the map~$\Theta$ induces a $B_{\mathcal{O}}$-linear map $\overline{\Theta}: B_{\mathcal{O}}[x_1,\dots,x_n]/( G) \To M$. We know already that~$\overline{\mathcal{O}}$ generates the left-hand side and $\mathcal{O}$ is a $B_{\mathcal{O}}$-basis of the right-hand side. Hence the surjective map $\overline{\Theta}$ is also injective. \end{proof} In the remainder of this section we recall the connection between flat deformations over~$K[z]$ of border bases and rational curves on the border basis scheme. A rational curve on the $\mathcal{O}$-border basis scheme corresponds to a $K$-algebra homomorphism $\Psi: B_{\mathcal{O}} \To K[z]$ of the corresponding affine coordinate rings. If we restrict the universal family of $\mathcal{O}$-border bases to this rational curve, we obtain the following flat deformation of border bases. \begin{corollary}\label{ratcurve} Let~$z$ be a new indeterminate, and let $\Psi: B_{\mathcal{O}}\To K[z]$ be a homomorphism of $K$-algebras. By applying the base change~$\Psi$ to the universal family~$\Phi$, we get a homomorphism of $K[z]$-algebras $$ \Phi_{K[z]}=\Phi\otimes_{B_{\mathcal{O}}} K[z]:\; K[z] \To U_{\mathcal{O}} \otimes_{B_{\mathcal{O}}} K[z] $$ Then the residue classes of the elements of~$\mathcal{O}$ form a $K[z]$-module basis of the right-hand side. In particular, the map $\Phi_{K[z]}$ defines a flat family. \end{corollary} This corollary can be used to construct flat deformations over~$K[z]$ of border bases. Suppose the maximal ideal $\Psi^{-1}(z-1)$ corresponds to a given $\mathcal{O}$-border basis and the maximal ideal $\Psi^{-1}(z)$ is the ideal $( c_{11},\dots,c_{\mu\nu})$ which corresponds to the border term ideal $( b_1,\dots,b_\nu )$. In other words, suppose that the rational curve connects a given point to the point $(0,\dots,0)$ which corresponds to the border term ideal.
Then the map $\Phi_{K[z]}$ defines a flat family over~$K[z]$ whose generic fiber $P/I$ is defined by the ideal~$I$ generated by the given $\mathcal{O}$-border basis and whose special fiber $P/( b_1,\dots,b_\nu )$ is defined by the border term ideal. Another application of the theorem is the following criterion for checking the flatness of a family of border bases. \begin{corollary}{\bf (Flatness Criterion for Families of Border Bases)}\label{FlatCrit}\\ Let~$z$ be a new indeterminate, let $\widetilde{P}=K[z][x_1,\dots,x_n]$, and let $g_j=b_j -\sum_{i=1}^\mu a_{ij}(z)t_i\in\widetilde{P}$ be polynomials with coefficients $a_{ij}(z)\in K[z]$. Let $\widetilde{I}$ be the ideal in $\widetilde{P}$ generated by $G=\{g_1,\dots,g_\nu\}$ and assume that the formal multiplication matrices $\mathcal{A}_k \in\mathop{\rm Mat}\nolimits_\mu(K[z])$ of~$G$ are pairwise commuting. \begin{items} \item For every $c\in K$, the set $\{g_1\vert_{z\mapsto c},\dots, g_\nu\vert_{z\mapsto c}\}$ is an $\mathcal{O}$-border basis of the ideal $I_c=\widetilde{I}\vert_{z\mapsto c}$. \item The canonical $K$-algebra homomorphism $$ \phi:\quad K[z] \;\To\; K[z][x_1,\dots,x_n]/\widetilde{I} $$ defines a flat family. More precisely, the residue classes of the elements of~$\mathcal{O}$ are a $K[z]$-basis of $K[z][x_1,\dots,x_n]/\widetilde{I}$. \end{items} \end{corollary} \begin{proof} First we show~a). For every $c\in K$, the matrices $\mathcal{A}_k\vert_{z\mapsto c}$ are the multiplication matrices of $G\vert_{z\mapsto c}$. Thus the claim follows from~\cite{KR2}, 6.4.30. Next we prove~b). Since the matrices $\mathcal{A}_k$ commute, the map $B_{\mathcal{O}}\To K[z]$ defined by $c_{ij} \mapsto a_{ij}(z)$ is a well-defined homomorphism of $K$-algebras. Hence it suffices to apply the preceding corollary. \end{proof} \begin{remark} If~$K$ is infinite, the hypothesis that the formal multiplication matrices $\mathcal{A}_k$ commute can be replaced by the assumption that the matrices $\mathcal{A}_k \vert_{z\mapsto c}$ commute for every $c\in K$. This follows from the fact that a polynomial $f\in K[z]$ is zero if and only if $f(c)=0$ for all $c\in K$. \end{remark} Let us have a look at one particular border basis scheme in detail. \begin{example}\label{affinecell} Consider the case $n=2$ and $\mathcal{O}=\{1,x,y,xy\}$. The border of~$\mathcal{O}$ is $\partial\mathcal{O} = \{y^2, x^2, xy^2, x^2y\}$, so that in our terminology we have $\mu=4$, $\nu = 4$, $t_1 = 1$, $t_2 = x$, $t_3 = y$, $t_4 = xy$, $b_1 = y^2$, $b_2 = x^2$, $b_3 = xy^2$, and $b_4 = x^2y$. 
The generic multiplication matrices are $$ \mathcal{A}_x = \left( \begin{array}{cccc} 0 & c_{1\, 2\, } & 0 & c_{1\, 4\, } \\ 1 & c_{2\, 2\, } & 0 & c_{2\, 4\, } \\ 0 & c_{3\, 2\, } & 0 & c_{3\, 4\, } \\ 0 & c_{4\, 2\, } & 1 & c_{4\, 4\, } \end{array}\right) \hbox{\quad and \quad } \mathcal{A}_y= \left( \begin{array}{cccc} 0 & 0 & c_{1\, 1\, } & c_{1\, 3\, } \\ 0 & 0 & c_{2\, 1\, } & c_{2\, 3\, } \\ 1 & 0 & c_{3\, 1\, } & c_{3\, 3\, } \\ 0 & 1 & c_{4\, 1\, } & c_{4\, 3\, } \end{array}\right) $$ When we compute the ideal generated by the entries of $\mathcal{A}_x \mathcal{A}_y -\mathcal{A}_y \mathcal{A}_x$ and simplify its system of generators, we see that the ideal $I(\mathbb{B}_{\mathcal{O}})$ is generated by $$ \left.\begin{array}{l} \{ c_{23}c_{41}c_{42} - c_{21}c_{42}c_{43} + c_{21}c_{44}+ c_{11} - c_{23},\;\; -c_{21}c_{32} - c_{34}c_{41} + c_{33},\\ \;c_{34}c_{41}c_{42}- c_{32}c_{41}c_{44}+ c_{32}c_{43}+ c_{12}- c_{34},\;\; -c_{21}c_{32}- c_{23}c_{42}+ c_{24}, \\ \; -c_{23}c_{32}c_{41}+ c_{21}c_{32}c_{43} - c_{21}c_{34}+ c_{13},\;\; c_{21}c_{42} + c_{41}c_{44} + c_{31}- c_{43}, \\ \;-c_{21}c_{34}c_{42}+ c_{21}c_{32}c_{44} - c_{23}c_{32} + c_{14},\;\; c_{32}c_{41}+ c_{42}c_{43} + c_{22} - c_{44} \} \end{array}\right. $$ Thus there are eight free indeterminates, namely $c_{21}$, $c_{23}$, $c_{32}$, $c_{34}$, $c_{41}$, $c_{42}$, $c_{43}$, and~$c_{44}$, while the remaining indeterminates depend on the free ones by the polynomial expressions above. From this we conclude that the border basis scheme $\mathbb{B}_{\mathcal{O}}$ is an {\it affine cell} of the corresponding Hilbert scheme, i.e.\ an open subset which is isomorphic to an affine space. (This result is in agreement with~\cite{Hu1}, Thm.\ 7.4.1, but not with~\cite{MS}, Example 18.6.) Its coordinate ring is explicitly represented by the isomorphism $$B_{\mathcal{O}} \;\,\smash{\TTo{\lower 7pt\hbox{$\scriptstyle\sim$}}}\,\; K[c_{21}, c_{23}, c_{32}, c_{34}, c_{41}, c_{42}, c_{43}, c_{44}] $$ given by $$ \left.\begin{array}{l} c_{11} \;\longmapsto\; -c_{23}c_{41}c_{42} + c_{21}c_{42}c_{43} - c_{21}c_{44}+ c_{23}\\ c_{12} \;\longmapsto\; -c_{34}c_{41}c_{42} + c_{32}c_{41}c_{44} - c_{32}c_{43}+ c_{34} \\ c_{13} \;\longmapsto\; c_{23}c_{32}c_{41} - c_{21}c_{32}c_{43} + c_{21}c_{34}\\ c_{14} \;\longmapsto\; c_{21}c_{34}c_{42} - c_{21}c_{32}c_{44} + c_{23}c_{32}\\ c_{22} \;\longmapsto\; -c_{32}c_{41} - c_{42}c_{43} + c_{44}\\ c_{24} \;\longmapsto\; c_{21}c_{32} + c_{23}c_{42}\\ c_{31} \;\longmapsto\; -c_{21}c_{42} - c_{41}c_{44} + c_{43}\\ c_{33} \;\longmapsto\; c_{21}c_{32} + c_{34}c_{41} \end{array}\right. 
$$ Hence we have $U_{\mathcal{O}} \cong K[x,y, c_{21}, c_{23}, c_{32}, c_{34}, c_{41}, c_{42}, c_{43}, c_{44}]/ (\widetilde{g}_1, \widetilde{g}_2, \widetilde{g}_3, \widetilde{g}_4)$ where \begin{eqnarray*} \widetilde{g}_1 &=& y^2 - (-c_{23}c_{41}c_{42} + c_{21}c_{42}c_{43} - c_{21}c_{44}+ c_{23}) \\ && - c_{21}x - (-c_{21}c_{42} - c_{41}c_{44} + c_{43})y - c_{41}xy,\\ \widetilde{g}_2 &=& x^2 - (-c_{34}c_{41}c_{42} + c_{32}c_{41}c_{44} - c_{32}c_{43}+ c_{34}) \\ && - (-c_{32}c_{41} - c_{42}c_{43} + c_{44})x - c_{32}y - c_{42}xy,\\ \widetilde{g}_3 &=& xy^2 - (c_{23}c_{32}c_{41} - c_{21}c_{32}c_{43} + c_{21}c_{34}) \\ && - c_{23}x - (c_{21}c_{32} + c_{34}c_{41})y - c_{43}xy,\\ \widetilde{g}_4 &=& x^2y -(c_{21}c_{34}c_{42} - c_{21}c_{32}c_{44} + c_{23}c_{32}) \\ && - (c_{21}c_{32} + c_{23}c_{42})x - c_{34}y - c_{44}xy,\\ \end{eqnarray*} The ideal $(\widetilde{g}_1, \widetilde{g}_2, \widetilde{g}_3, \widetilde{g}_4)$ is the defining ideal of the family of all subschemes of length four of the affine plane which have the property that their coordinate ring admits $\overline{\mathcal{O}}$ as a vector space basis. Since the border basis scheme is isomorphic to an affine space in this case, we can connect every point to the point corresponding to $(x^2,y^2)$ by a rational curve. Therefore every ideal in the family can be deformed by a flat deformation to the monomial ideal $(x^2, y^2)$. Algebraically, it suffices to substitute each free indeterminate $c_{ij}$ with $z c_{ij}$ where~$z$ is a new indeterminate. We get the $K$-algebra homomorphism $$ \Phi_{K[z]}: K[z] \To K[x,y, z, c_{21}, c_{23}, c_{32}, c_{34}, c_{41}, c_{42}, c_{43}, c_{44}]/ (\overline{g}_1, \overline{g}_2, \overline{g}_3, \overline{g}_4) $$ where \begin{eqnarray*} \overline{g}_1 &=& y^2 - (-z^3c_{23}c_{41}c_{42} + z^3c_{21}c_{42}c_{43} - z^2c_{21}c_{44}+ zc_{23}) \\ && - zc_{21}x - (-z^2c_{21}c_{42} - z^2c_{41}c_{44} +z c_{43})y - zc_{41}xy,\\ \overline{g}_2 &=& x^2 - (-z^3c_{34}c_{41}c_{42} + z^3c_{32}c_{41}c_{44} - z^2c_{32}c_{43}+ zc_{34}) \\ && - (-z^2c_{32}c_{41} - z^2c_{42}c_{43} + z c_{44})x - zc_{32}y - zc_{42}xy,\\ \overline{g}_3 &=& xy^2 - (z^3c_{23}c_{32}c_{41} - z^3c_{21}c_{32}c_{43} + z^2c_{21}c_{34}) \\ && - zc_{23}x - (z^2c_{21}c_{32} +z^2 c_{34}c_{41})y - zc_{43}xy,\\ \overline{g}_4 &=& x^2y -(z^3c_{21}c_{34}c_{42} - z^3c_{21}c_{32}c_{44} + z^2c_{23}c_{32}) \\ && - (z^2c_{21}c_{32} + z^2c_{23}c_{42})x - zc_{34}y - zc_{44}xy,\\ \end{eqnarray*} By Corollary~\ref{ratcurve}, this homomorphism is flat. For every point on the border basis scheme, it connects the corresponding ideal to $\mathop{\rm BT}\nolimits_{\mathcal{O}}=(y^2, x^2, xy^2, x^2y) = (x^2,y^2)$. \end{example} The next example shows that natural families of ideals can lead us out of the affine open subset~$\mathbb{B}_{\mathcal{O}}$ of the Hilbert scheme. \begin{example} Using $K=\mathbb{R}$ and $P=\mathbb{R}[x,y]$, we consider the family of reduced zero-dimensional schemes $\mathbb{X}_a = \{(a,2),\, (0,1),\, (0,0),\, (1,0)\} \subset \mathbb{R}^2$ with $a\in\mathbb{R}$. 
\medskip \makebox[11 true cm]{ \beginpicture \setcoordinatesystem units <0.4cm,0.4cm> \setplotarea x from 0 to 4, y from 0 to 3.1 \axis left / \axis bottom / \arrow <2mm> [.2,.67] from 3.5 0 to 4 0 \arrow <2mm> [.2,.67] from 0 2.6 to 0 3.1 \put {$\scriptstyle x$} [lt] <0.5mm,0.8mm> at 4.1 0 \put {$\scriptstyle y$} [rb] <1.7mm,0.7mm> at 0 3.1 \put {$\bullet$} at 0 0 \put {$\bullet$} at 1 0 \put {$\bullet$} at 0 1 \put {$\bullet$} at 1.5 2 \put {$\scriptstyle (a,2)$} at 2.6 2 \put {$\cdots$} at 0.8 2 \endpicture} \medskip For $\sigma={\tt DegRevLex}$, the reduced $\sigma$-Gr\"obner basis of the vanishing ideal $I_a\subset P$ of~$\mathbb{X}_a$ is $$ G'_a=\{ x^2+\tfrac{1}{2}\,a(1-a)y^2-x -\tfrac{1}{2}\,a(1-a)\,y,\; xy-ay^2+ay,\; y^3-3y^2+2y \} $$ and thus we have $\mathcal{O}_\sigma(I_a)=\{1,x,y,y^2\}$. We may extend~$G'_a$ to an $\mathcal{O}_\sigma(I_a)$-border basis of~$I_a$ and get $$ G_a=G'_a \;\cup\; \{ xy^2-2ay^2 +2ay \} $$ The residue classes of the elements of~$\mathcal{O}_\sigma(I_a)$ are a vector space basis of $P/I_a$ for every $a\in\mathbb{R}$. We let $I=( x^2+\tfrac{1}{2}\,z(1-z)y^2-x -\tfrac{1}{2}\,z(1-z)y,\, xy-z y^2+zy,\, y^3-3y^2+2y,\, xy^2-2zy^2 +2zy ) \subset P[z]$. Then the natural map $\mathbb{R}[z]\To P[z]/I$ is a flat homomorphism whose fibers are the rings $P/I_a$. Thus the point corresponding to~$G_a$ on the border basis scheme $\mathbb{B}_{\mathcal{O}_\sigma(I_a)}$ is connected to the point representing~$G_0$ via a rational curve. Now we consider the order ideal $\mathcal{O}=\{1,x,y,xy\}$. For $a\ne 0$, the set~$\mathbb X_a$ is a complete intersection of type $(2,2)$. Its vanishing ideal~$I_a$ has an $\mathcal{O}$-border basis, namely $$ H_a= \{ y^2-\tfrac{1}{a}\,xy-y,\; xy^2-2xy,\; x^2y-axy,\; x^2+\tfrac{1}{2}\,(1-a)xy-x \} $$ However, for $a=0$, the ideal $I_0$ has no $\mathcal{O}$-border basis because $xy\in I_0$. One of the coefficients in~$H_a$ tends to~$\infty$ as $a\To 0$. This happens since the scheme~$\mathbb{B}_{\mathcal{O}}$ is not complete. \end{example} \bigbreak \section{Defining Equations for the Border Basis Scheme} \label{Defining Equations for the Border Basis Scheme} The defining equations for the border basis scheme can be constructed in different ways. One construction is given by imposing the commutativity law on the multiplication matrices, as we have seen in the preceding section. Another construction was given in~\cite{Hu2}, and a different but related one in~\cite{KK1} and~\cite{S}. After describing this alternative construction, we use it to get rid of as many generators of the vanishing ideal of~$\mathbb{B}_{\mathcal{O}}$ as possible and examine some claims in~\cite{S} in this regard. Let $\mathcal{O}=\{t_1,\dots,t_\mu\}$ be an order ideal and $\partial\mathcal{O}=\{b_1,\dots,b_\nu\}$ its border. In~\cite{KK1}, Def.~17, two terms $b_i,b_j\in\partial\mathcal{O}$ are called {\it next-door neighbors} if $b_i=x_k b_j$ for some $k\in\{1,\dots,n\}$ and {\it across-the-street neighbors} if $x_k b_i = x_\ell b_j$ for some $k,\ell\in\{1,\dots,n\}$. In addition to these notions we shall say that across-the-street neighbors $b_i,b_j$ with $x_k b_i= x_\ell b_j$ are {\it across-the-corner neighbors} if there exists a term $b_m\in\partial\mathcal{O}$ such that $b_i=x_\ell b_m$ and $b_j=x_k b_m$. In~\cite{S}, Def.~8.5, the graph whose vertices are the border terms and whose edges are given by the neighbor relation is called the {\it border web} of~$\mathcal{O}$.
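Since these neighbor relations are purely combinatorial, they can be computed directly from the exponent tuples of the border terms. The following sketch (in Python; the encoding is again only illustrative) determines the next-door and across-the-street pairs for the border of $\mathcal{O}=\{1,x,y,xy\}$ from Example~\ref{affinecell}.
\begin{verbatim}
# Neighbor pairs in the border web, with border terms encoded as
# exponent tuples in n indeterminates.

def neighbors(border_terms, n):
    B = sorted(border_terms)
    shift = lambda t, k: tuple(t[i] + (i == k) for i in range(n))
    next_door, across = [], []
    for a in B:
        for b in B:
            if a >= b:
                continue
            if any(shift(a, k) == b or shift(b, k) == a for k in range(n)):
                next_door.append((a, b))          # b_i = x_k b_j
            elif any(shift(a, k) == shift(b, l)
                     for k in range(n) for l in range(n)):
                across.append((a, b))             # x_k b_i = x_l b_j
    return next_door, across

# Border of O = {1, x, y, xy}: {y^2, x^2, xy^2, x^2y}.
B = {(0, 2), (2, 0), (1, 2), (2, 1)}
nd, ac = neighbors(B, 2)
print(nd)   # (y^2, xy^2) and (x^2, x^2y): the next-door pairs
print(ac)   # (xy^2, x^2y): the across-the-street pair
\end{verbatim}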
The Buchberger criterion for border bases (see~\cite{KK1}, Prop.~18 and~\cite{S}, Thm.~8.11) says that an $\mathcal{O}$-border prebasis $\{g_1,\dots,g_\nu\}$ with $g_j=b_j-\sum_{i=1}^\mu a_{ij}t_i$ and $a_{ij}\in K$ is an $\mathcal{O}$-border basis if and only if the S-polynomials $S(g_i,g_j)$ reduce to zero using~$G$ for all $(i,j)$ such that~$b_i$ and~$b_j$ are neighbors. This characterization can be used to construct the equations defining the border basis scheme in an alternative way. \begin{proposition}{\bf (Lifting Neighbor Syzygies)}% \label{altgenBBS}\\ Let $G=\{g_1,\dots,g_\nu\}$ be the generic $\mathcal{O}$-border prebasis, where $g_j=b_j -\sum_{i=1}^\mu c_{ij}t_i \in K[x_1,\dots,x_n, c_{11},\dots, c_{\mu\nu}]$, let $\mathcal{A}_1,\dots,\mathcal{A}_n \in\mathop{\rm Mat}\nolimits_{\mu}(K[c_{ij}])$ be the generic multiplication matrices with respect to~$\mathcal{O}$, and let $c_j=(c_{1j},\dots,c_{\mu j})^{\rm tr} \in\mathop{\rm Mat}\nolimits_{\mu,1}(K[c_{ij}])$ for $j=1,\dots,\nu$. Consider the following sets of polynomials in $K[c_{11},\dots,c_{\mu\nu}]$: \begin{enumerate} \item If $b_i,b_j\in\partial\mathcal{O}$ are next-door neighbors with $b_i=x_k b_j$, let $\mathop{\rm ND}\nolimits(i,j)$ be the set of polynomial entries of $c_i - \mathcal{A}_k c_j$. \item If $b_i,b_j\in\partial\mathcal{O}$ are across-the-street neighbors with $x_k b_i=x_\ell b_j$, let $\mathop{\rm AS}\nolimits(i,j)$ be the set of polynomial entries of $\mathcal{A}_k c_i - \mathcal{A}_\ell c_j$. \end{enumerate} Then the following claims hold true. \begin{items} \item The union of all sets $\mathop{\rm ND}\nolimits(i,j)$ and all sets $\mathop{\rm AS}\nolimits(i,j)$ contains the set of the nontrivial entries of the commutators $\mathcal{A}_k \mathcal{A}_\ell -\mathcal{A}_\ell \mathcal{A}_k$ with $1\le k<\ell \le n$. \item If one removes from this union all sets $\mathop{\rm AS}\nolimits(i,j)$ such that $b_i,b_j$ are across-the-corner neighbors, one gets precisely the set of the nontrivial entries of the commutators $\mathcal{A}_k \mathcal{A}_\ell -\mathcal{A}_\ell \mathcal{A}_k$ with $1\le k<\ell \le n$. In particular, the remaining union generates the vanishing ideal $I(\mathbb{B}_{\mathcal{O}})$ of the $\mathcal{O}$-border basis scheme. \item The polynomials in the sets $\mathop{\rm AS}\nolimits(i,j)$ corresponding to across-the-corner neighbors $b_i,b_j$ are contained in~$I(\mathbb{B}_{\mathcal{O}})$. \end{items} \end{proposition} \begin{proof} First we prove~a) and~b). The S-polynomials $g_i - x_k g_j$ resp.\ $x_k g_i - x_\ell g_j$ are $K[c_{ij}]$-linear combinations of terms in $\mathcal{O}\cup \partial\mathcal{O}$. We want to find representations of these polynomials as $K[c_{ij}]$-linear combinations of elements of~$\mathcal{O}$ only. Since we have $b_i - x_k b_j=0$ resp.\ $x_k b_i -x_\ell b_j=0$, we have to represent $(-\sum_{m=1}^\mu c_{mi}t_m) - x_k \, (-\sum_{m=1}^\mu c_{mj}t_m)$ resp.\ $x_k \, (-\sum_{m=1}^\mu c_{mi}t_m) - x_\ell \, (-\sum_{m=1}^\mu c_{mj}t_m)$ using~$\mathcal{O}$. By the definition of the generic multiplication matrices, these representations are given by $(t_1,\dots,t_\mu) \cdot (c_i - \mathcal{A}_k c_j)$ resp.\ $(t_1,\dots,t_\mu) \cdot (\mathcal{A}_k c_i - \mathcal{A}_\ell c_j)$. The coefficients of the terms~$t_i$ in these representations are precisely the polynomials in $\mathop{\rm ND}\nolimits(i,j)$ resp.\ in $\mathop{\rm AS}\nolimits(i,j)$. 
Now we consider the polynomials in the sets $\mathop{\rm ND}\nolimits(i,j)$ and in the sets $\mathop{\rm AS}\nolimits(i,j)$ for which $b_i,b_j$ are not across-the-corner neighbors. The fact that these polynomials are exactly the nontrivial entries of the commutators $\mathcal{A}_k \mathcal{A}_\ell -\mathcal{A}_\ell \mathcal{A}_k$ was checked in~\cite{KK1}, Section~4 resp.~\cite{S}, Prop.~8.10. It remains to show~c). Let $b_i=x_\ell b_m$ and $b_j=x_k b_m$. By what we have shown so far, the polynomials which are the components of $c_i -\mathcal{A}_\ell c_m$ and $c_j -\mathcal{A}_k c_m$ are contained in~$I(\mathbb{B}_{\mathcal{O}})$. Moreover, the polynomial entries of $\mathcal{A}_k \mathcal{A}_\ell -\mathcal{A}_\ell\mathcal{A}_k$ are in~$I(\mathbb{B}_{\mathcal{O}})$. Therefore also the components of $$ \mathcal{A}_k c_i -\mathcal{A}_\ell c_j = \mathcal{A}_k (c_i - \mathcal{A}_\ell c_m) + (\mathcal{A}_k \mathcal{A}_\ell - \mathcal{A}_\ell\mathcal{A}_k) c_m -\mathcal{A}_\ell( c_j - \mathcal{A}_k c_m) $$ are contained in~$I(\mathbb{B}_{\mathcal{O}})$. These components are exactly the polynomials in~$\mathop{\rm AS}\nolimits(i,j)$. \end{proof} Another way of phrasing this proposition is to say that, for~$G$ to be a border basis, the neighbor syzygies $e_i -x_k e_j$ resp.\ $x_k e_i -x_\ell e_j$ of the border tuple $(b_1,\dots,b_\nu)$ have to lift to syzygies of $(g_1,\dots,g_\nu)$ and that the defining equations of~$\mathbb{B}_{\mathcal{O}}$ are precisely the equations expressing the existence of these liftings (see~\cite{KK1}, Ex.~23). Now it is a well-known phenomenon in Gr\"obner basis theory that it suffices to lift a minimal set of generators of the syzygy module of the leading terms (see for instance~\cite{KR1}, Prop.~2.3.10). In~\cite{S}, Props.~8.14 and~8.15, an attempt was made to use a similar idea for removing unnecessary generators of~$I(\mathbb{B}_{\mathcal{O}})$. However, the claims made there are not correct in general, as the following examples show. The first example has surfaced in a number of different contexts, see the papers~\cite{K}, \cite{L} and the references therein. \begin{example}\label{exa1xyz} Let us consider $P=K[x,y,z]$ and $\mathcal{O}=\{1,x,y,z\}$. The border $\partial\mathcal{O}=\{b_1,\dots,b_6\}$ with $b_1=x^2$, $b_2=xy$, $b_3=xz$, $b_4=y^2$, $b_5=yz$, and $b_6=z^2$ has a very simple border web consisting of nine across-the-street neighbors: \medskip \makebox[11 true cm]{ \beginpicture \setcoordinatesystem units <0.4cm,0.4cm> \setplotarea x from -0.5 to 4.5, y from -0.9 to 3.5 \put {$\bullet$} at 0 0 \put {$\bullet$} at 2 0 \put {$\bullet$} at 4 0 \put {$\bullet$} at 1 1.5 \put {$\bullet$} at 3 1.5 \put {$\bullet$} at 2 3 \put {$\scriptstyle x^2$} at -0.3 -0.6 \put {$\scriptstyle y^2$} at 4.4 -0.6 \put {$\scriptstyle z^2$} at 2.1 3.7 \put {$\scriptstyle xy$} at 2 -0.9 \put {$\scriptstyle xz$} at 0.3 1.5 \put {$\scriptstyle yz$} at 3.8 1.5 \setlinear \putrule from 0 0 to 4 0 \putrule from 1 1.5 to 2.8 1.5 \plot 0 0 % 2 3 % 4 0 % / \plot 1 1.5 % 2 0 % 3 1.5 % / \endpicture} \medskip These across-the-street neighbors yield $9\cdot 4 = 36$ quadratic equations for $I(\mathbb{B}_{\mathcal{O}})$ in $K[c_{11},\dots,c_{46}]$. Contrary to the claim in~\cite{S}, Prop.\ 8.15, the equations for the neighbor pair $(x^2,xy)$ are not contained in the ideal generated by the remaining 32 equations.
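
These equations are easy to generate by machine. The following {\tt sympy} sketch (our own illustration, independent of the \cocoa\ computations quoted in this paper) builds the generic multiplication matrices for $\mathcal{O}=\{1,x,y,z\}$ and confirms that the three commutators contribute exactly $36$ nontrivial entries; the non-containment statement above then becomes an ideal membership test for these quadrics.

\begin{verbatim}
# Generic multiplication matrices for O = {1, x, y, z} (sketch).
from sympy import symbols, Matrix

mu, nu = 4, 6
c = [[symbols('c%d%d' % (i+1, j+1)) for j in range(nu)] for i in range(mu)]
col = lambda j: Matrix([c[i][j] for i in range(mu)])   # the column c_j
e = lambda k: Matrix([1 if i == k else 0 for i in range(mu)])

# Border b_1..b_6 = (x^2, xy, xz, y^2, yz, z^2); a column of A_k holds the
# expansion of x_k * t_m in O, with generic coefficients for border products.
Ax = Matrix.hstack(e(1), col(0), col(1), col(2))
Ay = Matrix.hstack(e(2), col(1), col(3), col(4))
Az = Matrix.hstack(e(3), col(2), col(4), col(5))

comms = [Ax*Ay - Ay*Ax, Ax*Az - Az*Ax, Ay*Az - Az*Ay]
eqs = [p for C in comms for p in C if p != 0]
print(len(eqs))   # 36, matching the count in the text
\end{verbatim}
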
In fact, in agreement with Proposition~\ref{removing}, it turns out that the four equations corresponding to the pair $(xy,xz)$ are contained in the ideal generated by the eight equations corresponding to the two pairs $(xy,yz)$ and $(xz,yz)$ (see Example~\ref{exa1xyzcont}). In order to see whether the ideal $I(\mathbb{B}_{\mathcal{O}})$ is a complete intersection (as claimed in~\cite{S}, p.\ 297), we examine its generators more closely. If we define a grading by letting $\deg_W(c_{1j})=2$ for $j=1,\dots,6$ and $\deg_W(c_{ij})=1$ for $i>1$, the 36 generators are homogeneous with respect to the grading given by~$W$. Every minimal system of generators of the ideal $I(\mathbb{B}_{\mathcal{O}})$ consists of~21 polynomials, while its height is~12. Hence it is very far from being a complete intersection. The indeterminates $c_{11},\dots,c_{16}$ corresponding to the constant coefficients of the generic border basis form the linear parts of six of the 21 minimal generators and do not divide any of the other terms. We may eliminate them and obtain an ideal~$J$ in~$Q=K[c_{21},\dots,c_{46}]$ which has (after interreduction) 15 homogeneous quadratic generators. Geometrically speaking, there is a projection to an 18-dimensional affine space which maps the border basis scheme isomorphically to a homogeneous subscheme of~$\mathbb{A}^{18}$. In fact, it is known that this scheme is an affine cone with 3-dimensional vertex over the Grassmannian ${\rm Grass}(2,6)\subset \mathbb{P}^{14}$. The ideal~$J$ is prime and the ring~$Q/J$ is Gorenstein with Hilbert series $(1+6z+6z^2+z^3)/(1-z)^{12}$. The minimal number of generators of~$J$ is~15. The border basis scheme is irreducible and has the expected dimension, namely~12. \end{example} The lifting of trivial syzygies also fails in the border basis scheme setting, as our next example shows (see also Example~\ref{affinecell}). \begin{example}\label{liftfails} Let $P=K[x,y]$ and $\mathcal{O}=\{1,x,y,xy\}$. Then the border of~$\mathcal{O}$ is $\partial\mathcal{O}=\{x^2,y^2,x^2y,xy^2\}$. It has two next-door neighbors $(x^2,x^2y),\ (y^2,xy^2)$ and one across-the-street neighbor $(x^2y,xy^2)$. If one includes the ``trivial syzygy pair'' $(x^2,y^2)$, there is one loop in the border web: \medskip \makebox[11 true cm]{ \beginpicture \setcoordinatesystem units <0.6cm,0.6cm> \setplotarea x from -0.5 to 2.5, y from -0.5 to 2.5 \put {$\bullet$} at 2 0 \put {$\bullet$} at 2 1 \put {$\bullet$} at 1 2 \put {$\bullet$} at 0 2 \put {$\scriptstyle x^2$} at 2 -0.5 \put {$\scriptstyle y^2$} at -0.6 2 \put {$\scriptstyle x^2y$} at 2.7 1.1 \put {$\scriptstyle xy^2$} at 1.7 2 \setlinear \plot 0 2 % 1 2 % 2 1 % 2 0 % / \setdashes \plot 0 2 % 2 0 % / \endpicture} \medskip The neighbor pairs yield four equations each for the defining ideal of~$\mathbb{B}_{\mathcal{O}}$. Contrary to a claim in~\cite{S}, p.\ 297, one cannot drop one of these sets of four polynomials without changing the ideal. Thus the lifting of a ``trivial'' syzygy cannot be used to remove defining equations for the border basis scheme. Interestingly, in the case at hand, the ideal $I(\mathbb{B}_{\mathcal{O}})$ is indeed a complete intersection: there exists a subset of~8 of the 12~equations which generates~$I(\mathbb{B}_{\mathcal{O}})$ minimally and $\dim(K[c_{11},\dots,c_{44}]/I(\mathbb{B}_{\mathcal{O}}))=8$. But the unnecessary generators are spread around the blocks coming from the neighbor pairs.
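
The structure of these blocks is easy to inspect by machine. In the following {\tt sympy} sketch (again our own illustration) the nontrivial columns of the commutator $\mathcal{A}_x\mathcal{A}_y-\mathcal{A}_y\mathcal{A}_x$ reproduce, up to sign, the sets $\mathop{\rm ND}\nolimits(3,1)$, $\mathop{\rm ND}\nolimits(4,2)$ and $\mathop{\rm AS}\nolimits(4,3)$ of Proposition~\ref{altgenBBS}:

\begin{verbatim}
# Proposition (Lifting Neighbor Syzygies) for O = {1, x, y, xy} (sketch).
from sympy import symbols, Matrix

mu, nu = 4, 4
c = [[symbols('c%d%d' % (i+1, j+1)) for j in range(nu)] for i in range(mu)]
col = lambda j: Matrix([c[i][j] for i in range(mu)])
e = lambda k: Matrix([1 if i == k else 0 for i in range(mu)])

# O = (1, x, y, xy); border b_1..b_4 = (x^2, y^2, x^2y, xy^2)
Ax = Matrix.hstack(e(1), col(0), e(3), col(2))
Ay = Matrix.hstack(e(2), e(3), col(1), col(3))

ND_31 = col(2) - Ay*col(0)      # x^2y = y * x^2
ND_42 = col(3) - Ax*col(1)      # xy^2 = x * y^2
AS_43 = Ax*col(3) - Ay*col(2)   # x * xy^2 = y * x^2y

C = Ax*Ay - Ay*Ax
print((C[:, 1] - ND_31).expand())   # zero column
print((C[:, 2] + ND_42).expand())   # zero column
print((C[:, 3] - AS_43).expand())   # zero column
\end{verbatim}

In particular, the first column of the commutator vanishes identically, so the three neighbor pairs contribute exactly the $12$ equations mentioned above.
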
\end{example} Our next example shows how one can sometimes get rid of some generators of~$I(\mathbb{B}_{\mathcal{O}})$ using part~c) of the proposition. \begin{example} Consider $P=K[x,y]$ and $\mathcal{O}=\{1,x,y,x^2,y^2\}$. Then we have $\partial\mathcal{O}=\{b_1,\dots,b_5\}$ with $b_1=y^3$, $b_2=xy^2$, $b_3=xy$, $b_4=x^2y$, and $b_5=x^3$, two next-door neighbors $(xy,xy^2)$ and $(xy,x^2y)$, two proper across-the-street neighbors $(y^3,xy^2)$ and $(x^2y,x^3)$, and one pair of across-the-corner neighbors $(xy^2,x^2y)$. Thus the border web of~$\mathcal{O}$ looks as follows. \medskip \makebox[11 true cm]{ \beginpicture \setcoordinatesystem units <0.6cm,0.6cm> \setplotarea x from -0.5 to 3.5, y from -0.5 to 3.5 \put {$\bullet$} at 3 0 \put {$\bullet$} at 2 1 \put {$\bullet$} at 1 1 \put {$\bullet$} at 1 2 \put {$\bullet$} at 0 3 \put {$\scriptstyle x^3$} at 3 -0.5 \put {$\scriptstyle y^3$} at -0.6 3 \put {$\scriptstyle x^2y$} at 2.7 1.1 \put {$\scriptstyle xy^2$} at 1.7 2 \put {$\scriptstyle xy$} at 1 0.5 \setlinear \plot 0 3 % 1 2 % 1 1 % 2 1 % 3 0 % / \plot 1 2 % 1.5 1.5 % 2 1 % / \endpicture} \medskip Using part~c) of the proposition, we know that~$I(\mathbb{B}_{\mathcal{O}})$ is generated by $\mathop{\rm AS}\nolimits(1,2)$, $\mathop{\rm AS}\nolimits(4,5)$, $\mathop{\rm ND}\nolimits(2,3)$, and $\mathop{\rm ND}\nolimits(3,4)$. In fact, using \cocoa, we may check that none of these sets can be removed without changing the ideal. \end{example} On the positive side, the following proposition allows us to remove at least a few polynomials from the system of generators of~$I(\mathbb{B}_{\mathcal{O}})$ given in Proposition~\ref{altgenBBS}. \begin{proposition}{\bf (Removing Redundant Generators of $I(\mathbb{B}_{\mathcal{O}})$)}\label{removing}\\ Let $\mathcal{O}=\{t_1,\dots,t_\mu\}$ be an order ideal with border $\partial\mathcal{O}= \{b_1,\dots,b_\nu\}$, and let~$H$ be a system of generators of~$I(\mathbb{B}_{\mathcal{O}})$. \begin{items} \item Suppose that there exist $i,j,k \in\{1,\dots,\nu\}$ and $\ell,m\in\{1,\dots,n\}$ such that $b_k=x_\ell b_i = x_m b_j$. If the sets $\mathop{\rm AS}\nolimits(i,j)$, $\mathop{\rm ND}\nolimits(i,k)$ and $\mathop{\rm ND}\nolimits(j,k)$ are contained in~$H$ and one removes one of these sets, the remaining polynomials still generate~$I(\mathbb{B}_{\mathcal{O}})$. \item Suppose that there exist $i,j,k \in\{1,\dots,\nu\}$ and $\alpha,\beta,\gamma\in \{1,\dots,n\}$ such that $x_\alpha b_i = x_\beta b_j = x_\gamma b_k$. If the sets $\mathop{\rm AS}\nolimits(i,j)$, $\mathop{\rm AS}\nolimits(i,k)$ and $\mathop{\rm AS}\nolimits(j,k)$ are contained in~$H$ and one removes one of these sets, the remaining polynomials still generate~$I(\mathbb{B}_{\mathcal{O}})$. \end{items} \end{proposition} \begin{proof} Let $\mathcal{A}_1,\dots,\mathcal{A}_n$ be the generic multiplication matrices with respect to~$\mathcal{O}$, and let $c_j=(c_{1j},\dots,c_{\mu j})^{\rm tr} \in\mathop{\rm Mat}\nolimits_{\mu,1}(K[c_{ij}])$ for $j=1,\dots,\nu$. First we show~a). The polynomials in $\mathop{\rm AS}\nolimits(i,j)$ are the components of $\mathcal{A}_\ell \cdot c_i-\mathcal{A}_m \cdot c_j$, the polynomials in $\mathop{\rm ND}\nolimits(i,k)$ are the components of $c_k-\mathcal{A}_\ell \cdot c_i$, and the polynomials in $\mathop{\rm ND}\nolimits(j,k)$ are the components of $c_k-\mathcal{A}_m \cdot c_j$. Thus the claim follows from $$ (\mathcal{A}_\ell \cdot c_i-\mathcal{A}_m \cdot c_j) + (c_k-\mathcal{A}_\ell \cdot c_i) - (c_k-\mathcal{A}_m \cdot c_j) =0 $$ To show~b), we argue similarly.
The polynomials in $\mathop{\rm AS}\nolimits(i,j)$ are the components of $\mathcal{A}_\alpha \cdot c_i - \mathcal{A}_\beta \cdot c_j$, the polynomials in $\mathop{\rm AS}\nolimits(i,k)$ are the components of $\mathcal{A}_\alpha \cdot c_i - \mathcal{A}_\gamma \cdot c_k$, and the polynomials in $\mathop{\rm AS}\nolimits(j,k)$ are the components of $\mathcal{A}_\beta \cdot c_j - \mathcal{A}_\gamma \cdot c_k$. Thus the claim follows from $$ (\mathcal{A}_\alpha \cdot c_i-\mathcal{A}_\beta \cdot c_j) - (\mathcal{A}_\alpha \cdot c_i-\mathcal{A}_\gamma \cdot c_k) + (\mathcal{A}_\beta \cdot c_j-\mathcal{A}_\gamma \cdot c_k) =0 $$ \end{proof} Let us illustrate the application of this proposition with a couple of examples. \begin{example} Let $P=K[x,y,z]$ and $\mathcal{O}=\{1,x,y,z,xy\}$. Then we have $\partial\mathcal{O}=\{b_1,\dots,b_8\}$ with $b_1=z^2$, $b_2=yz$, $b_3=xz$, $b_4=y^2$, $b_5=x^2$, $b_6=xyz$, $b_7=xy^2$, and $b_8=x^2y$. There are four next-door neighbors $(yz,xyz)$, $(xz,xyz)$, $(y^2,xy^2)$, $(x^2,x^2y)$ and eight across-the-street neighbors $(yz,z^2)$, $(xz,z^2)$, $(xz,yz)$, $(y^2,yz)$, $(x^2,xz)$, $(xy^2,xyz)$, $(x^2y,xyz)$, and $(x^2y,xy^2)$. This yields the border web \medskip \makebox[11 true cm]{ \beginpicture \setcoordinatesystem units <0.6cm,0.6cm> \setplotarea x from -0.5 to 6.5, y from -0.9 to 5 \put {$\bullet$} at 0 0 \put {$\bullet$} at 2 0 \put {$\bullet$} at 4 0 \put {$\bullet$} at 6 0 \put {$\bullet$} at 1 1.5 \put {$\bullet$} at 3 1.5 \put {$\bullet$} at 5 1.5 \put {$\bullet$} at 3 4.5 \put {$\scriptstyle x^2$} at -0.3 -0.6 \put {$\scriptstyle x^2y$} at 2 -0.6 \put {$\scriptstyle xy^2$} at 4 -0.6 \put {$\scriptstyle y^2$} at 6.4 -0.6 \put {$\scriptstyle xz$} at 0.3 1.5 \put {$\scriptstyle yz$} at 5.8 1.5 \put {$\scriptstyle xyz$} at 3 1.8 \put {$\scriptstyle z^2$} at 3.4 4.7 \arrow <2mm> [.2,.67] from 1.8 1.5 to 2 1.5 \arrow <2mm> [.2,.67] from 4 1.5 to 3.8 1.5 \arrow <2mm> [.2,.67] from 0.8 0 to 1 0 \arrow <2mm> [.2,.67] from 5 0 to 4.8 0 \setlinear \putrule from 0 0 to 6 0 \putrule from 1 1.5 to 4.9 1.5 \plot 0 0 % 3 4.5 % 6 0 % / \plot 2 0 % 3 1.5 % 4 0 % / \setquadratic \plot 1 1.5 % 3 2.5 % 5 1.5 % / \endpicture} \medskip \noindent where we have marked next-door neighbors by arrows. Since we have $x b_2=y b_3 = b_6$, we can use part~a) of the proposition and remove one of the sets $\mathop{\rm AS}\nolimits(2,3)$, $\mathop{\rm ND}\nolimits(2,6)$, or $\mathop{\rm ND}\nolimits(3,6)$ from the system of generators of~$I(\mathbb{B}_{\mathcal{O}})$. Although there are many further ``loops'' in the remaining part of the border web, we may use \cocoa\ to check that no other set $\mathop{\rm ND}\nolimits(i,j)$ or $\mathop{\rm AS}\nolimits(i,j)$ can be removed without changing the generated ideal. \end{example} Using the second part of the proposition, we can remove some generators of~$I(\mathbb{B}_{\mathcal{O}})$ in Example~\ref{exa1xyz}. \begin{example}\label{exa1xyzcont} Consider $P=K[x,y,z]$ and $\mathcal{O}=\{1,x,y,z\}$ with the border web explained in Example~\ref{exa1xyz}. Then the border terms $b_2=xy$, $b_3=xz$ and $b_5=yz$ satisfy $zb_2 = y b_3= x b_5$. Therefore one of the sets $\mathop{\rm AS}\nolimits(2,3)$, $\mathop{\rm AS}\nolimits(2,5)$, or $\mathop{\rm AS}\nolimits(3,5)$ can be removed from the system of generators of~$I(\mathbb{B}_{\mathcal{O}})$ without changing the ideal. As already explained in Example~\ref{exa1xyz}, none of the remaining sets $\mathop{\rm AS}\nolimits(i,j)$ can be removed thereafter.
\end{example} \bigbreak \section{The Homogeneous Border Basis Scheme} \label{The Homogeneous Border Basis Scheme} Let $P=K[x_1,\dots,x_n]$ be graded by $W=(w_1\;\cdots\;w_n) \in\mathop{\rm Mat}\nolimits_{1,n}(\mathbb{N}_+)$, let $\mathcal{O}=\{t_1,\dots,t_\mu\}$ be an order ideal, and let $\partial\mathcal{O}=\{b_1,\dots,b_\nu\}$ be its border. If we restrict our attention to zero-dimensional ideals $I\subset P$ which have an $\mathcal{O}$-border basis and are homogeneous with respect to the grading given by~$W\!$, we obtain the following subscheme of the border basis scheme. \begin{definition}\label{defHBB} Let $\{c_{ij} \mid 1\le i\le \mu,\; 1\le j\le\nu\}$ be a set of further indeterminates. \begin{items} \item The {\em generic homogeneous $\mathcal{O}$-border prebasis} is defined to be the set of polynomials $G=\{g_1,\dots,g_\nu\}$ in the ring $K[x_1,\dots,x_n,c_{11},\dots,c_{\mu\nu}]$ where $$ g_j = b_j -\sum_{\{i\in\{1,\dots,\mu\}\mid\deg_W(t_i)=\deg_W(b_j)\}} c_{ij}t_i $$ for $j=1,\dots,\nu$. \item For $k=1,\dots,n$, let $\mathcal{A}_k \in\mathop{\rm Mat}\nolimits_{\mu}(K[c_{ij}])$ be the $k^{\rm th}$ formal multiplication matrix associated to~$G$. It is also called the $k^{\rm th}$ {\em generic homogeneous multiplication matrix} with respect to~$\mathcal{O}$. \item The affine scheme $\mathbb{B}_{\mathcal{O}}^{\rm hom} \subseteq \mathbb{A}^{\mu\nu}$ defined by the ideal $I(\mathbb{B}_{\mathcal{O}}^{\rm hom})$ generated by the entries of the matrices $\mathcal{A}_k \mathcal{A}_\ell -\mathcal{A}_\ell \mathcal{A}_k$ with $1\le k<\ell\le n$ is called the {\em homogeneous $\mathcal{O}$-border basis scheme}. \end{items} \end{definition} Clearly, the homogeneous border basis scheme is the intersection of~$\mathbb{B}_{\mathcal{O}}$ with the linear space $\mathcal{Z}(c_{ij}\mid \deg_W(t_i)\ne\deg_W(b_j))$. \begin{remark} Let us equip $K[x_1,\dots,x_n,c_{11},\dots,c_{\mu\nu}]$ with the grading defined by the matrix~$\overline{W}$ for which $\deg_{\overline{W}}(c_{ij})=0$ and $\deg_{\overline{W}}(x_i)=w_i$. \begin{items} \item The matrix~$\mathcal{A}_k$ is a homogeneous matrix in the sense of~\cite{KR2}, Def.~4.7.1, with respect to the degree pair given by $(\deg_W(t_1),\dots,\deg_W(t_\mu))$ for the rows and $(\deg_W(x_k t_1),\dots,\deg_W(x_k t_\mu))$ for the columns. \item As explained in~\cite{KR2}, p.\ 118, we can add a vector $d\in\mathbb{Z}^\mu$ to a degree pair and still have a degree pair for the same homogeneous matrix. Thus the matrix $\mathcal{A}_\ell$ also has the degree pair given by $(\deg_W(x_k t_1),\dots,\deg_W(x_k t_\mu))$ for the rows and $(\deg_W(x_k x_\ell t_1),\dots,\deg_W(x_k x_\ell t_\mu))$ for the columns. In this way we see that both $\mathcal{A}_k\mathcal{A}_\ell$ and $\mathcal{A}_\ell \mathcal{A}_k$ are homogeneous matrices with respect to the degree pair given by $(\deg_W(t_1),\dots,\deg_W(t_\mu))$ for the rows and $(\deg_W(x_k x_\ell t_1),\dots,\deg_W(x_k x_\ell t_\mu))$ for the columns. Consequently, also the commutator $\mathcal{A}_k\mathcal{A}_\ell -\mathcal{A}_\ell\mathcal{A}_k$ is a homogeneous matrix with respect to this degree pair. \end{items} \end{remark} In order to deform a homogeneous ideal having an $\mathcal{O}$-border basis to its border form ideal, we may try to construct a suitable rational curve inside the homogeneous border basis scheme. If~$\mathcal{O}$ has a $\mathop{\rm maxdeg}\nolimits_W$ border (see Definition~\ref{DefMaxdeg}), this plan can be carried out as follows.
\begin{theorem}{\bf (Homogeneous Maxdeg Border Bases)}\label{homcommute}\\ Suppose that the order ideal~$\mathcal{O}$ has a $\mathop{\rm maxdeg}\nolimits_W$ border. \begin{items} \item The generic homogeneous multiplication matrices commute. \item Let $d=\max\{\deg_W(t_1),\dots,\deg_W(t_\mu)\}$, let $r=\# \{t\in\mathcal{O}\mid \deg_W(t)=d\}$, and let $s=\# \{t\in \partial\mathcal{O}\mid \deg_W(t)=d\}$. Then the homogeneous border basis scheme $\mathbb{B}_{\mathcal{O}}^{\rm hom}$ is an affine space of dimension $r\,s$. \item If $I\subset P$ is a homogeneous ideal which has an $\mathcal{O}$-border basis $G=\{g_1,\dots,g_\nu\}$, then there exists a flat family $K[z]\To K[z][x_1,\dots,x_n]/J$ such that $\mathcal{O}$ is a $K[z]$-basis of the right-hand side, such that $J\vert_{z \mapsto 1}\cong I$, and such that $J\vert_{z\mapsto 0}\cong ( b_1,\dots,b_{\nu})$. In fact, the ideal~$J$ may be defined by writing $g_j=b_j-\sum_{i=1}^\mu c_{ij}t_i$ and replacing $c_{ij}\in K$ by $c_{ij}\,z\in K[z]$ for all $i,j$. \end{items} \end{theorem} \begin{proof} To prove claim~a), we examine the entry at position $(\alpha,\beta)$ of a product $\mathcal{A}_k\mathcal{A}_\ell$. Let $\mathcal{A}_k=(a_{ij})$ and $\mathcal{A}_\ell=(a'_{ij})$. We want to examine the element $\sum_{\gamma=1}^\mu a_{\alpha\gamma}a'_{\gamma\beta}$. If $a'_{\gamma\beta}\ne 0$, the term $t_\gamma$ is contained in the support of the representation of $x_\ell\,t_\beta$ in terms of the basis~$\mathcal{O}$. Since~$(g_1,\dots,g_\nu)$ is a homogeneous ideal with respect to the grading on $K[x_1,\dots,x_n,c_{11},\dots,c_{\mu\nu}]$ defined by the matrix~$\overline{W}$ for which $\deg_{\overline{W}}(c_{ij})=0$ and $\deg_{\overline{W}}(x_i)=\deg_W(x_i)=w_i$, we have the relations $\deg_W(t_\gamma)=\deg_W(x_\ell t_\beta) > \deg_W(t_\beta)$. For the same reason, if $a_{\alpha \gamma}\ne 0$ we have the relations $\deg_W(t_\alpha)=\deg_W(x_k t_\gamma) > \deg_W(t_\gamma)$. We deduce the inequality $\deg_W(t_\alpha) > \deg_W(x_\ell t_\beta)$. Hence the assumption that~$\mathcal{O}$ has a $\mathop{\rm maxdeg}\nolimits_W$ border implies $x_\ell t_\beta \notin \partial\mathcal{O}$. We conclude that $x_\ell t_\beta \in \mathcal{O}$, $t_\gamma = x_\ell t_\beta$, and hence $a'_{\gamma \beta} = 1$. Therefore, in order to get $a_{\alpha \gamma} a'_{\gamma \beta} \ne 0$ in the sum above, we need to have $a'_{\gamma \beta} = 1$ and $t_\gamma=x_\ell t_\beta$. In particular, this condition fixes~$\gamma$. If the surviving summand $a_{\alpha \gamma}$ of $\sum_{\gamma=1}^\mu a_{\alpha\gamma}a'_{\gamma\beta}$ is not zero, there are two possibilities. Either we have $t_\alpha = x_k t_\gamma$ and thus $a_{\alpha \gamma} = 1$, or we have $x_k t_\gamma = b_j$, $t_\alpha \in {\rm Supp}(b_j - g_j)$, and hence $a_{\alpha \gamma} = c_{\alpha j}$. In the first case, we have $t_\alpha = x_k x_\ell t_\beta$. In the second case, we have $x_k x_\ell t_\beta = b_j$ and $t_\alpha \in {\rm Supp}(b_j - g_j)$. Now it is clear that if we examine the product $\mathcal{A}_\ell\mathcal{A}_k$, we get the same conditions. Therefore we conclude that $\mathcal{A}_k\mathcal{A}_\ell = \mathcal{A}_\ell\mathcal{A}_k$. Next we show~b). The entries of the commutators $\mathcal{A}_k\mathcal{A}_\ell -\mathcal{A}_\ell\mathcal{A}_k$ are the defining equations of the scheme~$\mathbb{B}_{\mathcal{O}}^{\rm hom}$ in the affine subspace $\mathcal{Z}(c_{ij}\mid \deg_W(t_i)\ne\deg_W(b_j))$ of~$\mathbb{A}^{\mu\nu}$. By~a), these commutators are all zero. 
The number $r\,s$ is precisely the dimension of this affine subspace. To show~c), it now suffices to connect the given point in this affine space by a line to the origin and to apply Corollary~\ref{ratcurve}. \end{proof} If an ideal~$I$ has an $\mathcal{O}$-border basis and $\mathcal{O}$ has a $\mathop{\rm maxdeg}\nolimits_W$ border for some grading given by a matrix $W\in\mathop{\rm Mat}\nolimits_{1,n}(\mathbb{N}_+)$, we can combine the two flat families of Theorem~\ref{DFdeform} and part~c) of the theorem above. As an illustration, we continue the discussion of Example~\ref{deftoDFex}. \begin{example}\label{exdefcontinued} Let~$I$ be the ideal $I=( x^2+xy -\frac{1}{2}y^2-x-\frac{1}{2}y,\, y^3-y,\, xy^2-xy)$ in~$K[x,y]$, where ${\rm char}(K)\ne 2$, and let $\mathcal{O}=\{1,x,x^2,y,y^2\} \subset \mathbb T^2$. Using the fact that~$\mathcal{O}$ has a $\mathop{\rm maxdeg}\nolimits_W$ border with respect to the standard grading, we have already deformed~$I$ to~$\mathop{\rm DF}\nolimits_W(I)=( x^3,\, x^2y,\, xy+x^2-\frac{1}{2}y^2,\, xy^2,\, y^3)$. Now we apply the theorem. We equip the summands $x^2$ and $y^2$ in the third polynomial with a factor~$z$ and get $J=( x^3,\, x^2y,\, xy+zx^2-\frac{1}{2}zy^2,\, xy^2,\, y^3)$. As we now let $z\To 0$, we get the border form ideal of~$I$. This is a flat deformation by part~c) of the theorem. We can also directly check that the multiplication matrices $$ \mathcal{A}_x= \begin{pmatrix} 0 & 0 & 0 & 0 & 0 \cr 1 & 0 & 0 & 0 & 0 \cr 0 & 1 & 0 & -z & 0 \cr 0 & 0 & 0 & 0 & 0 \cr 0 & 0 & 0 & \frac{1}{2}z & 0 \end{pmatrix} \quad \hbox{\rm and}\quad \mathcal{A}_y = \begin{pmatrix} 0 & 0 & 0 & 0 & 0 \cr 0 & 0 & 0 & 0 & 0 \cr 0 & -z & 0 & 0 & 0 \cr 1 & 0 & 0 & 0 & 0 \cr 0 & \frac{1}{2}z & 0 & 1 & 0 \end{pmatrix} $$ commute as elements of $\mathop{\rm Mat}\nolimits_5(K[z])$. \end{example} Notice that, at least following the approach taken here, it is not possible to connect~$I$ to~$\mathop{\rm BT}\nolimits_{\mathcal{O}}$ using just one irreducible rational curve on the border basis scheme. The next example shows that the $\mathop{\rm maxdeg}\nolimits$ border property is indispensable for the theorem to hold. \begin{example}\label{secondexample} The order ideal $\mathcal{O}=\{1,x,y,x^2,xy,y^2,x^2y,xy^2,x^2y^2\}\subseteq \mathbb{T}^2$ does not have a $\mathop{\rm maxdeg}\nolimits_W$ border with respect to any grading given by a matrix $W\in\mathop{\rm Mat}\nolimits_{1,2}(\mathbb{N}_+)$. The generic homogeneous $\mathcal{O}$-border prebasis is $G=\{g_1,\dots,g_6\}$ with $g_1 = y^3-c_{71}x^2y - c_{81}xy^2$, $g_2=x^3-c_{72}x^2y -c_{82}xy^2$, $g_3=xy^3-c_{93}x^2y^2$, $g_4=x^3y-c_{94}x^2y^2$, $g_5=x^2y^3$, and $g_6=x^3y^2$. For the defining ideal of~$\mathbb{B}_{\mathcal{O}}^{\rm hom}$, we find $( c_{82}c_{93}+c_{72}-c_{94},\, c_{71}c_{94}+c_{81}-c_{93})$. Hence~$\mathbb{B}_{\mathcal{O}}^{\rm hom}$ is not a 2-dimensional affine space (as would be the case if the theorem were applicable), but isomorphic to a 4-dimensional affine space via the projection to $\mathcal{Z}(c_{72},c_{81})$. \end{example} Another consequence of the theorem is that the homogeneous border basis scheme can have a dimension which is higher than $n\mu$, the natural generalization of the dimension of~$\mathbb{B}_{\mathcal{O}}$ for $n=2$ (see Remark~\ref{BBSprops}). \begin{example}{\bf (Iarrobino)}\label{exIarrobino} In the paper~\cite{I} Iarrobino proves that Hilbert schemes need not be irreducible (see also~\cite{MS}, Theorem 18.32).
In particular, he produces an example which can easily be explained using homogeneous border basis schemes. Let $\mathcal{O}$ be an order ideal in~$\mathbb{T}^3$ consisting of all terms of degree $\le 6$ and 18 terms of degree~$7$. Then we have $d=7$ and $r=s=18$ in part~b) of the theorem. Hence~$\mathbb{B}_{\mathcal{O}}^{\rm hom}$ is isomorphic to an affine space of dimension 324. In particular, it follows that $\dim(\mathbb{B}_{\mathcal{O}}) \ge 324$. On the other hand, the irreducible component of~$\mathbb{B}_{\mathcal{O}}$ containing the points corresponding to reduced ideals has dimension $3\cdot\mu=3\cdot 102=306$. \end{example} In the maxdeg border case, we can also compare the dimension of~$\mathbb{B}_{\mathcal{O}}^{\rm hom}$ to the dimension of the {\em zero fiber}~$Z$, i.e.\ the dimension of the subscheme of~$\mathbb{B}_{\mathcal{O}}$ parametrizing schemes supported at the origin. Since~$\mathbb{B}_{\mathcal{O}}^{\rm hom}$ is contained in~$Z$, the preceding example implies that the dimension of~$Z$ can be larger than~$n\mu$, the dimension of the irreducible component of~$\mathbb{B}_{\mathcal{O}}$ containing the points corresponding to reduced ideals. For $n=2$, a more precise estimate is available. \begin{example} Let $n=2$. Then the dimension of~$Z$ is~$\mu-1$ by~\cite{B}. If~$\mathcal{O}$ has a maxdeg border then the theorem yields $s=d+1-r$ and $\dim(\mathbb{B}_{\mathcal{O}}^{\rm hom})=r(d+1-r)\le (\frac{d+1}{2})^2$. This agrees with $\mathbb{B}_{\mathcal{O}}^{\rm hom}\subseteq Z$ since $(\frac{d+1}{2})^2 \le \frac{d(d+1)}{2}+r-1 = \mu-1$. \end{example} Let us end this section with an example application of Theorem~\ref{homcommute}. \begin{example} In~\cite{MS}, Example 18.9, the authors consider the ideal $I=(x^2-xy,\,\allowbreak y^2-xy,\, x^2y,\, xy^2)$ in the ring $\mathbb{C}[x,y]$. It has a border basis with respect to the order ideal $\mathcal{O} = \{1,x,y,xy\}$, i.e.\ it corresponds to a point in~$\mathbb{B}_{\mathcal{O}}$. It is clear that no matter which term ordering~$\sigma$ one chooses, it is not possible to get $\mathcal{O}_\sigma(I) = \mathcal{O}$, since $x^2>_\sigma xy$ implies $xy >_\sigma y^2$, so that $xy = \mathop{\rm LT}\nolimits_\sigma(y^2-xy) \in \mathop{\rm LT}\nolimits_\sigma(I)$, and therefore $xy \notin \mathcal{O}_\sigma(I)$. The consequence is that if one wants to connect~$I$ to a monomial ideal in the Hilbert scheme, the deformation to~$\mathop{\rm LT}\nolimits_\sigma(I)$ with respect to any term ordering~$\sigma$ leads to a monomial ideal which is not $(x^2, y^2)$, i.e.\ not in~$\mathbb{B}_{\mathcal{O}}$. On the other hand, by Example~\ref{affinecell}, we know that it is possible to deform the ideal~$I$ to $(x^2, y^2)$. But we can do even better: since the ideal~$I$ is homogeneous, it belongs to the family parametrized by the homogeneous border basis scheme $\mathbb{B}_{\mathcal{O}}^{\rm hom}$ which is an affine space by Theorem~\ref{homcommute}. The full family of homogeneous ideals is $(x^2-zaxy,\ y^2-zbxy,\ x^2y,\ xy^2)$. Putting $a = b = 1$, we get the desired flat deformation $\Phi: K[z] \To \mathbb{C}[x,y,z]/(x^2-zxy,\, y^2-zxy,\, x^2y,\, xy^2)$. \end{example} \bigbreak \subsection*{Acknowledgements} Part of this work was conducted during the Special Semester on Gr\"obner Bases, February 1 to July 31, 2006, organized by the RICAM Institute (Austrian Academy of Sciences) and the RISC Institute (Johannes Kepler University) in Linz, Austria. The authors thank these institutions, and in particular Prof.\ Bruno Buchberger, for the warm hospitality and the excellent work atmosphere they experienced in Linz. \bigbreak
\section{Introduction}\label{Intro:sec} Ordinary molecular dynamics (MD) \cite{tuckerman00} uses Hamiltonian equations of motion (EOM) to describe the evolution of an atomic system. The validity of that approach relies on the Born-Oppenheimer (BO) approximation which states that --- in most cases --- the atomic motion induces a slow (adiabatic) perturbation of the electronic dynamics. Therefore, if the system is originally prepared in an electronic eigenstate (e.g. the ground-state) for a given atomic configuration, it will evolve into the same electronic eigenstate for the evolved atomic configuration. Furthermore, according to the BO approximation, the ground-state electronic energy for a given atomic configuration can be employed as an effective (i.e. low energy) \emph{potential energy surface} (PES) for the atomic motion. The BO approximation is no longer applicable whenever electronic transitions between surfaces are relevant: for instance, when a non-radiative transition follows an earlier photo-excitation. However, it is still possible to extend the scope of MD by allowing more than a single PES --- one for every electronic state --- and by providing a meaningful way to make transitions among PES, i.e.\ \emph{non-adiabatic electronic transitions}. The extension of ordinary MD to deal with processes involving many PES has been extensively pursued over recent decades and some effective algorithms are available for atomistic simulations. The most used are: Ehrenfest dynamics (ED), \cite{tully98,horsfield06a} molecular dynamics with quantum transitions (MDQT), \cite{hammesschiffer94,tully98} mixed quantum-classical dynamics (MQCD), \cite{kapral99a,kapral06} and \emph{ab initio} multiple spawning (AIMS) \cite{ben-nun00,ben-nun02}. They have been employed to simulate quantum dissipative dynamics \cite{mackernan02,sergi07a} as well as non-adiabatic processes such as photo-chemical reactions, \cite{mueller97,kin07} polaron formation in conjugated polymers, \cite{an04} and proton transfer in solutions \cite{hammesschiffer94}. We also mention other dynamical schemes that, although less successful in practical applications, provide valuable theoretical insights: the frozen Gaussian approximation, \cite{heller81} which is one of the pillars of AIMS, and the Gauss-Hermite wave-packet expansion, \cite{adhikari99} which shares similarities with the method presented in this paper. In order to extend the scope and the accuracy of the aforementioned methods, a new approach called \emph{correlated electron-ion dynamics} (CEID) \cite{horsfield04b,horsfield05a,horsfield06a} has been introduced. This method has been mainly applied to study the heat production and dissipation in model metallic nanostructures, \cite{bowler05a,horsfield06a,mceniry07} a problem relevant for nanotechnology. Indeed, other algorithms based on non-smooth dynamics, that is, those which allow for either sudden surface hopping (like MDQT) or sudden wave-packet spawning (like AIMS), are expected to be less efficient in the simulation of systems with a dense, gapless electronic spectrum, like a metal. That is a consequence of the slight adjustments of the average atomic positions and/or velocities that might need to be imposed after a sudden transition in order to ensure total energy and momentum are conserved. \cite{tully98,ben-nun02} If the number of crossings is large, the computational overhead due to these adjustments might be non-negligible.
MQCD, although it is often implemented by using both surface hopping and mean-field evolutions, is based on a sound theory \cite{nielsen01,sergi05} and conserves the kinematic constraints. On the other hand, time-translation invariance is valid only approximately in MQCD \cite{nielsen01} and numerical instabilities have so far limited the surface-hopping implementation of MQCD to relatively short time simulations. \cite{sergi07a} Finally, ED --- although it evolves smoothly --- has been shown to poorly describe the atomic heating caused by electron-ion interaction. \cite{horsfield04a} A further complication arises when a quantum sub-system is coupled with large quantum reservoirs (which are not treated in detail), as for a nanostructure connected to macroscopic leads. In this case, it is hard to define meaningful PES in terms of the sub-system degrees of freedom only, and so algorithms based on the surface hopping paradigm are expected to be less accurate. This problem is absent if the quantum sub-system is coupled to a classical dissipative environment; in this case MQCD or its latest variant \cite{sergi07b} can give reliable results. In principle, the CEID EOM form an exact, yet infinite, kinetic hierarchy which corrects ED by means of the so-called small amplitude moment expansion (SAME) of the Liouville equation. \cite{horsfield06a} So far, a few schemes to truncate the hierarchy have been proposed \cite{horsfield04b} in order to simulate the CEID EOM and currently available algorithms are restricted to a mean-field second moment approximation. \cite{footnote4,horsfield05a} Although those algorithms are accurate enough to describe --- at least qualitatively --- nanostructure heating, the existence of a practical truncation scheme which converges to the exact quantum dynamics has not been demonstrated until now: in this paper we show that a convergent truncation scheme actually exists and we provide a new practical CEID algorithm whose accuracy depends on a single tunable parameter. We provide a validation of our theory by comparing CEID results against exact integration of the time-dependent Schr\"{o}dinger equation for a model two level system (2LS) --- i.e. the simplest system which displays non-adiabatic transitions \cite{landau32b,zener32} --- in two contrasting parameter regimes. The rest of the paper is organized as follows: In Sec.~\ref{2LS:sec} we introduce the physics of the model 2LS employed to test our new CEID algorithm. In Sec.~\ref{exact_int:sec} we describe the exact algorithm by which we produced the benchmark calculations, while in Sec.~\ref{CEID:sec} the new CEID scheme is derived in detail. Finally, numerical results from simulations of the model 2LS are collected in Sec.~\ref{2LS_results:sec} and the conclusions and perspectives of this work are discussed in Sec.~\ref{conclusions:sec}. \section{A case study: the two level system}\label{2LS:sec} In this section we introduce the model 2LS that we employed to test the convergence properties of our CEID scheme. [Numerical findings are reported in Sec.~\ref{2LS_results:sec}.] Here we discuss the physics we expect to address before explaining the details of the algorithms we use. It is widely recognized that a one-dimensional 2LS illustrates many of the fundamental features of a non-adiabatic system and, at the same time, it retains the simplicity of a low-dimensional model. 
\cite{tully90,martens97} In general, the Hamiltonian for a system made of electrons and ions (in the absence of external fields) can be written as follows: \begin{equation}\label{2LS_H:eqn} \hat{H} = \hat{P}^2/2M + \hat{H}_{e}(\hat{R})\;, \end{equation} where $\hat{R}$ and $\hat{P}$ are the quantum operators for the atomic position and momentum, while the electronic dependence of $\hat{H}$ is collected into $\hat{H}_{e}$. In particular, the first term on the RHS of Eq.~\eqref{2LS_H:eqn} is the atomic kinetic energy operator, while the second term plays the role of an atomic potential operator. There is a lot of freedom in constructing the potential term, $\hat{H}_{e}$, but in this paper we focus only on the following parametrization: \begin{equation}\label{elec_H:eqn} \hat{H}_{e}(R) = \left( \begin{array}{cc} \frac{1}{2}K\,\left( R -R_{0}\right)^2 & -f_c \,R\\ -f_c \,R & \frac{1}{2}K\,R^2 + \Delta\varepsilon \end{array} \right)\; \end{equation} which describes two \emph{parabolic} PES (diagonal entries) linearly coupled through a kind of dipolar interaction (off-diagonal entries). This is a \emph{non-adiabatic} representation of the electronic PES in which the electronic basis is independent of the atomic coordinate. The \emph{adiabatic} representation can be obtained as usual by diagonalizing $\hat{H}_{e}(R)$. Eqs.~\eqref{2LS_H:eqn} and \eqref{elec_H:eqn} depend on the following parameters: the atomic mass $M$, the harmonic constant $K$, the electron-ion coupling constant $f_c$, the surface displacement $R_0$, and the PES energetic offset $\Delta \varepsilon$. Since both PES are confining, we expect to see periodic electronic transitions between the two PES driven by the electron-ion interaction. For instance, a state can be prepared as the atomic ground-state on the upper electronic PES. The atomic position $R$ undergoes quantum fluctuations and so the system cannot be exactly localized at the minimum of the PES ($R=0$ according to our parametrization). Since this initial state is not an eigenstate of the interacting Hamiltonian (i.e. for $f_c \neq 0$), the system will eventually make a transition into the lower PES. We stress that this process must conserve the total energy so that an atomic transition must accompany the electronic transition. For instance, the electronic process described above can be viewed as an initial decay since the atomic potential energy is effectively decreased. Therefore, an increase of the atomic kinetic energy is expected as a consequence of this decay in order to conserve the total energy. This, in a nutshell, is the heating of an atomic degree of freedom caused by the electron-ion interaction. \cite{horsfield04a} However, we stress that the aim of the present work is not the study of a quantum decay process, but the illustration of the convergence properties of a new CEID algorithm. Other models must be used to address the physics of quantum thermalization. For instance, the spin-boson model \cite{leggett87} describes a 2LS coupled to an environment made of a collection of quantum harmonic oscillators. This model can be effectively simulated \cite{makri98,mackernan02} and it might provide a future test case for our new CEID algorithm. There is an interesting general feature displayed by the electronic dynamics of our model 2LS.
According to the initial condition described above, the electronic transition is due to the quantum fluctuations only, because the dipolar interaction is exactly zero for a classical atom perfectly localized in the minimum of the upper PES. On the other hand, quantum fluctuations are completely neglected in the sort of mean-field description of the atomic motion employed in ED. As a consequence, we do not expect ED to reproduce the initial transition from the upper PES to the lower PES, which can be thought of as spontaneous phonon emission. If confirmed, this behavior will provide further evidence that the exchange of energy from the electronic to the atomic degrees of freedom cannot be properly addressed by ED. \cite{horsfield04a} In order to be as transparent as possible when discussing the dynamical features of our model 2LS, we choose to measure the values of the parameters in the Hamiltonian in terms of natural units. The natural energy scale is given by the harmonic quantum, $\hbar\omega$, where $\omega = \sqrt{K/M}$, and so the time can be measured in units of the harmonic period, $2\pi/\omega$. Introducing a mass scale, $M_0$ (its actual value is not important here), a length scale and a linear momentum scale are immediately obtained: $a_0= \sqrt{\hbar/(M_0\omega)}$ and $b_0=\sqrt{M_0\hbar\omega}$, respectively. For our purposes, not all the parameters need to be varied during the numerical experiments. First of all, we fixed both the atomic mass ($M=M_0$) and the harmonic constant ($K=M_0\omega^2$) and then we took the PES offset to be equal to one harmonic quantum ($\Delta \varepsilon = \hbar\omega$). This is equivalent to saying that the atomic ground-state on the upper PES, $|\chi_0^{(u)}\rangle$, has exactly the same energy as the first harmonic excitation on the lower PES, $|\chi_1^{(l)}\rangle$. [Here we are neglecting the electron-ion coupling and using the notation introduced in appendix \ref{time_dep_pert:sec}.] If the coupling constant is small enough (see appendix \ref{time_dep_pert:sec}), the time-evolution of our 2LS can be understood starting from these two states. We confined this study to this weak-coupling regime, i.e. $f_c \le 0.1 \, \hbar \omega /a_0$. Finally, we report in this paper the numerical results for two different 2LS geometries only: the \emph{unshifted} 2LS, with $R_0=0$, and the \emph{shifted} 2LS, with $R_0=a_0$. The PES for these two cases are shown in Fig.~\ref{fig_potential_populations_unshifted:fig}(a) and Fig.~\ref{fig_potential_populations_shifted:fig}(a), respectively. Although many other 2LS geometries have been studied, e.g. with larger $R_0$ or with other values of $K$ (including the possibility of different harmonic force constants for the lower and the upper PES), they gave numerical outcomes qualitatively similar to either the shifted or unshifted 2LS, and so the details are not reported here. A linear combination of the two low-lying resonant states might be employed to give a qualitative account of our model 2LS dynamics, i.e.\ the wave-function at time $t$ might be approximated as: \begin{equation}\label{0-order_state:eqn} \psi(t) \simeq c_0(t)|\chi_0^{(u)}\rangle + c_1(t)|\chi_1^{(l)}\rangle\;, \end{equation} where $c_0$ and $c_1$ are time-dependent complex coefficients. This state would give a distribution of the atomic position $R$ more or less localized around each of the two classical equilibrium positions, namely $R=0$ for the upper PES and $R=R_0$ for the lower PES.
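
For the unshifted geometry this two-state picture already yields a quantitative estimate. Both surfaces then share the same oscillator eigenfunctions, so the only relevant coupling matrix element is $\langle\chi_0^{(u)}|-f_c\hat{R}|\chi_1^{(l)}\rangle=-f_c a_0/\sqrt{2}$, and the two degenerate levels undergo textbook Rabi oscillations. The short Python script below is our own illustrative sketch (not one of the production codes used for the results in this paper):

\begin{verbatim}
# Two-state (Rabi) estimate for the unshifted 2LS (illustrative sketch).
# Natural units: hbar = M_0 = omega = a_0 = 1.
import numpy as np
from scipy.linalg import expm

f_c = 0.1                      # weak coupling, in units of hbar*omega/a_0
V = -f_c / np.sqrt(2)          # <chi_0^(u)| -f_c R |chi_1^(l)> for R_0 = 0

# Degenerate 2x2 Hamiltonian in the basis (|chi_0^(u)>, |chi_1^(l)>);
# the common energy 3/2 hbar*omega only adds a global phase and is dropped.
H = np.array([[0.0, V], [V, 0.0]])

t = 60.0
U = expm(-1j * H * t)                        # exact two-state propagator
p_upper = abs(U[0, 0])**2                    # equals cos(V t)^2
assert np.isclose(p_upper, np.cos(V * t)**2)

print("population period:", np.pi / abs(V) / (2*np.pi), "harmonic periods")
\end{verbatim}

With $f_c=0.1\,\hbar\omega/a_0$ this predicts a complete population exchange after roughly $3.5$ harmonic periods and a return to the upper surface after about $7$.
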
Eq.~\eqref{0-order_state:eqn} might not be a good \emph{ansatz} for the evolved state as soon as the displacement $R_0$ is large enough. Indeed, even a classical atom can be found far from its equilibrium position whenever this is energetically allowed. In this case, we should be able to describe an atomic wave-packet almost localized far from both $R=0$ and $R=R_0$. Therefore, we will produce a manifest physical inconsistency if we assume that the generic 2LS state can always be well approximated by Eq.~\eqref{0-order_state:eqn}. This physical inconsistency can be easily fixed by taking a longer expansion of the exact evolved state $\psi(t)$ in terms of the harmonic excitations of the atomic degrees of freedom. Unfortunately, there is a cost to pay: the longer --- and so the more accurate --- the expansion, the more time-consuming the simulation. [See Sec.~\ref{conclusions:sec} for a further discussion of this point.] In the next two sections, two different ways to compute non-adiabatic dynamics are considered in detail. In Sec.~\ref{exact_int:sec} a method based on the numerical diagonalization of the Hamiltonian of the full system is presented, while a new formulation of CEID is introduced in Sec.~\ref{CEID:sec}. \section{Exact integration}\label{exact_int:sec} The `exact' method of integration is based on numerical diagonalization of the full Hamiltonian matrix and subsequent time evolution exploiting the eigenvectors and eigenvalues obtained in the diagonalization step. While highly accurate, such an approach naturally has to be limited in practice to systems with a small number of degrees of freedom. However, for our purposes it provides an ideal scheme for producing benchmark results. A central quantity in this paper is the Wigner transform (WT) of an operator. Originally, the WT of a wave-function $\psi$ was defined in Ref.~\onlinecite{wigner32} as \begin{align} \label{MatRWig1} W(R,P) &= \frac{1}{{(\hbar\pi)}^{n}}\int\psi^{*}(R+s)\psi(R-s) \exp\Big(\frac{2i}{\hbar}P\cdot s\Big){\text{d}}^{n}s \nonumber \\ &=\frac{1}{{(2\pi\hbar)}^{n}}\int \psi^{*}\big(R+\frac{1}{2}s\big)\psi\big(R-\frac{1}{2}s\big)\exp\Big(\frac{i}{\hbar}P\cdot s\Big) {\text{d}}^{n}s, \end{align} where $R=(R_{1},\dots,R_{n})$, $P=(P_{1},\dots,P_{n})$, and the time-dependence has been suppressed. The latter form in Eq.~\eqref{MatRWig1} is more common nowadays. In a straightforward extension of this definition, for an operator $\hat{A}$, expandable in basis states $\{\phi_{a}\}$, $\hat{A}=\sum_{ab}\phi_{a}(x)A^{ab}\phi^{*}_{b}(x)$, the WT is given by \begin{multline} A_{w}(R,P)=\sum_{ab}A^{ab}\frac{1}{{(2\pi\hbar)}^{n}} \int\phi^{*}_{b}\big(R+\frac{1}{2}s\big)\phi_{a}\big(R-\frac{1}{2}s\big) \exp\Big(\frac{i}{\hbar}P\cdot s\Big){\text{d}}^{n}s. \end{multline} Of particular interest to us here, however, are WT with respect to the degrees of freedom of a subsystem, i.e. partial WT. These appear naturally when the system can be divided into subsystems on physical grounds. In this paper, where the system is a molecule, it can be split into an electronic subsystem and the subsystem of the ions/nuclei. Consider an operator $\hat{C}$ acting on a system divisible into subsystems with basis sets $\{\ket{A}\}$ and $\{\ket{a}\}$.
The operator is assumed to be expandable as $\hat{C}=\sum_{ABab}\ket{a}\otimes\ket{A}C^{ABab}\bra{b}\otimes\bra{B}$, and its partial WT with respect to the $\ket{A}$-subsystem is \begin{equation} \hat{C}_{w,A}(R,P) = \sum_{ab}\ket{a}C^{ab}_{w,A}(R,P)\bra{b}, \end{equation} where, with $\Phi_{A}$ the position representation of $\ket{A}$, \begin{multline} C^{ab}_{w,A}(R,P)=\sum_{AB}C^{ABab}\frac{1}{{(2\pi\hbar)}^{n}} \int\exp\Big(\frac{i}{\hbar}P\cdot s\Big) \Phi_{B}^{*}\Big(R+\frac{1}{2}s\Big)\Phi_{A}\Big(R-\frac{1}{2}s\Big){\text{d}}^{n}s. \end{multline} Note the important fact that $\hat{C}_{w,A}$ is still an operator in the $\ket{a}$-subsystem. As in the rest of the paper the only WT used are partial WT with respect to `ionic' or `atomic' degrees of freedom, we shall henceforth write $\hat{C}_{w}$ instead of $\hat{C}_{w,A}$, and also refer to the $\ket{A}$-subsystem as the atomic and the $\ket{a}$-subsystem as the electronic subsystems. The time-evolution of the system is generated by a Hamiltonian, whose eigenvalues and eigenvectors shall be $\mathcal{E}_{n}$ and $\ket{\Psi_{n}}$, respectively. The basis states $\{\ket{a}\}$ and $\{\ket{A}\}$ of the subsystems are taken to be time-independent from now on. We can expand an eigenstate in the product basis $\ket{\Psi_{n}}=\sum_{Aa}\mathcal{C}^{Aa}_{n}\ket{a}\otimes\ket{A}$, or vice versa, $\ket{a}\otimes\ket{A}=\sum_{n}\mathcal{D}_{Aa}^{n}\ket{\Psi_{n}}$, where we have the relations $\sum_{Aa}\mathcal{C}^{Aa}_{n}\mathcal{D}^{m}_{Aa}=\delta^{m}_{n}$ and $\sum_{n}\mathcal{C}^{Aa}_{n}\mathcal{D}^{n}_{Bb}=\delta^{Aa}_{Bb}$. The expansion coefficients $\mathcal{C}$ and $\mathcal{D}$ are time-dependent, \begin{equation} \mathcal{C}^{Aa}_{n}(t)=\mathcal{C}^{Aa}_{n}(0)\exp\Big(-\frac{i}{\hbar}\mathcal{E}_{n}t\Big); \end{equation} in any particular case the numerical diagonalization will provide us with the $\mathcal{E}_{n}$ and the coefficients $\mathcal{C}^{Aa}_{n}(0)$, which make up the eigenvectors, i.e. $\mathcal{C}^{Aa}_{n}(0)$ is the $Aa$-component of eigenvector $n$. We now can expand an operator $\hat{G}$ in the eigenstates or the product states: \begin{equation} \hat{G}=\sum_{mn}\ket{\Psi_{n}}G^{nm}\bra{\Psi_{m}}=\sum_{ABab}\ket{a}\otimes\ket{A}G^{ABab}(t)\bra{b}\otimes\bra{B}, \end{equation} with \begin{equation} \label{MatRGt} G^{ABab}(t)=\sum_{nm}\mathcal{C}^{Aa}_{n}(0)G^{nm}{\big(\mathcal{C}^{Bb}_{m}\big)}^{*}(0)\exp\Big(-\frac{i}{\hbar} \big(\mathcal{E}_{n}-\mathcal{E}_{m}\big)t\Big). \end{equation} The partial WT with respect to the atomic subsystem is then \begin{equation} \hat{G}_{w}(R,P,t)=\sum_{ab}\ket{a}G^{ab}_{w}(R,P,t)\bra{b}, \end{equation} where in turn \begin{equation} \label{MatRGF} G^{ab}_{w}(R,P,t)=\sum_{AB}\mathcal{F}_{BA}(R,P)G^{ABab}(t), \end{equation} and \begin{multline} \label{MatRFE} \mathcal{F}_{BA}(R,P)=\frac{1}{{(2\pi\hbar)}^{n}}\int\exp\Big(\frac{i}{\hbar}P\cdot s\Big) \Phi^{*}_{B}\Big(R+\frac{1}{2}s\Big) \Phi_{A}\Big(R-\frac{1}{2}s\Big){\text{d}}^{n}s. \end{multline} Specializing to the particular case of the 2LS and its actual implementation on a computer, we introduce dimensionless quantities using the scale factors mentioned in Sec.~\ref{2LS:sec}. In particular we obtain the dimensionless wave-function $\varphi(\xi)=\sqrt{a_{0}}\Phi(a_{0}\xi)$, the dimensionless version of $\mathcal{F}$, $F_{nm}(\xi,\eta) =\hbar\mathcal{F}_{nm}(a_{0}\xi,b_{0}\eta)$ and dimensionless eigenvalues $E_{n}=\mathcal{E}_{n}/(\hbar\omega)$. 
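
In dimensionless form the whole scheme fits in a few lines. The following Python sketch is our own illustration (parameter values and the basis size are indicative only, not those of the production runs): it assembles the 2LS Hamiltonian in the product basis, diagonalizes it once, and then evaluates the upper-level occupation at arbitrary times without any time-stepping.

\begin{verbatim}
# Exact benchmark scheme for the 2LS (illustrative sketch).
# Dimensionless units: hbar = M = K = omega = 1.
import numpy as np

Mosc = 120                           # truncated oscillator basis
f_c, R0, deps = 0.1, 1.0, 1.0        # coupling, PES shift, PES offset

off = np.sqrt(np.arange(1, Mosc) / 2.0)
X = np.diag(off, 1) + np.diag(off, -1)   # position operator (a + a^+)/sqrt(2)
Hosc = np.diag(np.arange(Mosc) + 0.5)    # P^2/2 + R^2/2 in the SHO basis

# Blocks: lower surface (R-R0)^2/2, upper surface R^2/2 + deps, coupling -f_c R.
H = np.block([[Hosc - R0*X + 0.5*R0**2*np.eye(Mosc), -f_c*X],
              [-f_c*X, Hosc + deps*np.eye(Mosc)]])

E, C = np.linalg.eigh(H)             # diagonalize once
psi0 = np.zeros(2*Mosc)
psi0[Mosc] = 1.0                     # upper PES, atomic ground state
amp = C.T @ psi0                     # eigenbasis coefficients at t = 0

def upper_occupation(t):
    psi_t = C @ (np.exp(-1j*E*t) * amp)
    return np.sum(np.abs(psi_t[Mosc:])**2)

print(upper_occupation(0.0))         # 1.0 by construction
\end{verbatim}

Expectation values such as those in Eq.~\eqref{exact_observables:eqn} follow in the same way once $R$ and $P$ are expressed in the oscillator basis; the partial WT is needed only when phase-space resolved information is required.
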
Using Eq.~\eqref{MatRGt}, the factoring Eq.~\eqref{MatRGF}, and Eq.~\eqref{MatRFE}, all the components of $G^{ab}_{w}$ can be computed as functions of $R,P,t$ (or their dimensionless counterparts). Note that this does not involve the numerical solution of a (potentially partial) differential equation, so we are not required to advance over many small time-steps in order to reach a given value of $t$. Limitations are, however, introduced by the need to truncate the oscillator basis to a finite number of states. As the purpose of the exact approach is to provide a benchmark for the CEID method, some care has to be taken to avoid truncation errors here. First one has to decide how many product and eigenbasis states to use in expansions like Eq.~\eqref{MatRGt}, say $N$. Then, the diagonalization has to be carried out using a number of product basis states $M>N$ such that the $N$ lowest eigenvalues and corresponding eigenvectors are well converged; typically, we used $N\approx 120$ and $M\approx 5N$. The operator one is considering (in what follows it will be the density operator) should have only negligible coupling from states with index equal to or less than $N'$ ($N'<N$) to states with index larger than $N'$. These first $N'$ states, which in the diagonalization step are produced as $M$-component quantities, should not have significant contributions from product basis components with index greater than $N$. Only if these conditions are met, and the initial conditions for the operator are chosen to involve only the first $N'$ states, can we consider the numerical results for the time-evolution reliable and an adequate benchmark for CEID. The quality of the choice of $N$ can therefore only be assessed after diagonalization. From the results produced for the purpose of comparison we show the occupations $N_{a}$, $a\in\{1,2\}$, of the electronic levels, and expectation values of position, momentum and the variance of the position, as functions of time. These have been calculated via the WT $\rho^{ab}_{w}(R,P,t)$ of the density operator $\rho$ of the system, by numerical evaluation of the integral \begin{equation} N_{a}(t)=\iint\rho^{aa}_{w}(R,P,t)\text{d} R\text{d} P, \end{equation} in the case of the occupations, and of \begin{equation}\label{exact_observables:eqn} \langle f(R,P,t) \rangle = \sum_{a=1}^{2}\iint f(R,P,t)\rho^{aa}_{w}(R,P,t)\text{d} R\text{d} P \end{equation} for $f(R,P,t)=R$, $P$, $\big(R-\langle R\rangle(t)\big)^{2}$, respectively, in the rest of the cases. \section{Correlated electron-ion dynamics}\label{CEID:sec} In this section we describe a new formulation of the correlated electron-ion dynamics (CEID); the original formalism can be found in Ref.~\onlinecite{horsfield04b}. [See also Ref.~\onlinecite{horsfield05a} and Ref.~\onlinecite{horsfield06a} for further details.] For the sake of simplicity, the new CEID EOM have been derived here only for the one-dimensional case although multi-dimensional EOM are also known. \cite{footnote3} We start from the well-known quantum Liouville equation: \begin{equation}\label{Liouville:eqn} \dot{\hat{\rho}} = \frac{1}{i\,\hbar} \, \left[ \hat{H}, \hat{\rho}\right]\;, \end{equation} which is the EOM for the density matrix $\hat{\rho}$ of the system. [We use a dot to indicate time-derivative.] Unfortunately, a direct integration of Eq.~\eqref{Liouville:eqn} is exceedingly time-consuming because it scales approximately as the cube of the Hilbert space dimension, which is very large in most cases of interest.
On the other hand, since atoms are much heavier than electrons, an expansion of their motion around the classical trajectories is often justified. It turns out that this kind of expansion cuts off the quantum fluctuations of the atomic degrees of freedom and so it effectively reduces the Hilbert space dimension. For instance, simulations of the \emph{semi-classical} limit of Eq.~\eqref{Liouville:eqn} \cite{martens97} have been shown to reproduce --- at least qualitatively --- the correct non-adiabatic dynamics of a few interesting test-cases. More generally, the density matrix $\hat{\rho}$ can be partially expanded with respect to the atomic degrees of freedom by means of a complete orthonormal system (COS). \cite{kapral99a} By using standard Dirac bra-ket notation, this expansion can be expressed as: \begin{equation}\label{rho_expansion:eqn} \hat{\rho} = \sum_{n=0}^{\infty}\sum_{m=0}^{\infty}\,|\phi_n \rangle \hat{\rho}_{n,m} \langle \phi_m|\;, \end{equation} where the functions $\left\{ \phi_n \right\}$ are a COS in the atomic subspace. As a consequence, in Eq.~\eqref{rho_expansion:eqn} the electronic degrees of freedom are included in the \emph{matrix coefficients} $\hat{\rho}_{n,m}$. A natural choice dictated by the kind of 2LS physics introduced in Sec.~\ref{2LS:sec} (i.e. confining PES) is to use the simple harmonic oscillator (SHO) eigenfunctions as the atomic COS. We stress here that, although these functions are usually centered around a classical equilibrium point, any other reference point can be taken instead (see below). Since all the observables can be expanded as in Eq.~\eqref{rho_expansion:eqn}, all the operations involving observables (e.g. averages) can be worked out by means of the observable matrix coefficients only. [In general, the matrix coefficients of the observable $\hat{A}$ are given by: $\hat{A}_{n,m}= \langle \phi_n| \hat{A}| \phi_m \rangle$.] For instance, the total energy of the system is given by: \begin{equation}\label{e_tot:eqn} E_{tot} = {\rm Tr}\left\{ \hat{H} \hat{\rho} \right\} = \sum_{n=0}^{\infty}\sum_{m=0}^{\infty} {\rm Tr}_e\left\{ \hat{H}_{m,n} \hat{\rho}_{n,m} \right\} \;. \end{equation} [The two traces, ${\rm Tr}$ and ${\rm Tr}_e$, apply to different linear spaces, namely the whole Hilbert space and the electronic subspace: see Sec.~\ref{exact_int:sec}.] As a further example, the EOM for $\hat{\rho}_{n,m}$ are obtained by plugging Eq.~\eqref{rho_expansion:eqn} into Eq.~\eqref{Liouville:eqn}: \begin{equation}\label{matrix_EOM:eqn} \dot{\hat{\rho}}_{n,m} = \frac{1}{i\hbar}\,\sum_{k=0}^{\infty}\left[ \hat{H}_{n,k}\;\hat{\rho}_{k,m} - \hat{\rho}_{n,k}\;\hat{H}_{k,m} \right]\;. \end{equation} It is easy to prove that $E_{tot}$ is in fact a constant of motion by taking the time-derivative of Eq.~\eqref{e_tot:eqn} and then by using Eq.~\eqref{matrix_EOM:eqn}. The complete set of EOM, Eq.~\eqref{matrix_EOM:eqn}, cannot be directly simulated because it is not finite. Therefore, we must make an approximation and, quite naturally, we set to zero (as a matrix) every matrix coefficient having an index greater than~$N$, the \emph{CEID order}. Nevertheless, after this truncation, the EOM are still fully quantized (and expressed by a proper Lie bracket), but they are restricted to a smaller Hilbert subspace. As for the exact scheme described in Sec.~\ref{exact_int:sec}, in order to make the classical limit of Eq.~\eqref{matrix_EOM:eqn} more manifest, we use a partial Wigner transform (WT), i.e.\ a WT taken only with respect to the atomic degrees of freedom.
Therefore, the partial WT of the operator $\hat{A}$, $\hat{A}_w( R, P)$, is still an operator in the electronic subspace but it explicitly depends on what are now the classical atomic position $R$ and momentum $P$. In the context of non-adiabatic MD, similar partial WT have already been considered \cite{kapral99a,thorndyke05} and they have been shown to provide correct numerical results. In order to avoid confusion, we stress that, although the WT seems to map a quantum operator into a classical distribution, the dynamics remains non-classical because the WT of the product of two operators is in general not the product of the WT of the two operators: \begin{equation} (\hat{A} \hat{B})_w = \hat{A}_w \star \hat{B}_w \;, \end{equation} where $\star$ is the non-commutative \emph{Moyal product}: \cite{groenewald46,moyal49} \begin{equation}\label{Moyal:eqn} \star = \exp\left[ \frac{i\,\hbar}{2} \left( \overleftarrow{\partial}_R\overrightarrow{\partial}_P- \overleftarrow{\partial}_P\overrightarrow{\partial}_R \right)\right]\;. \end{equation} [The arrows indicate the directions in which the derivative operators act.] The WT of any operator can be expanded as in Eq.~\eqref{rho_expansion:eqn} by using the WT of the basis operators $|\phi_n\rangle\langle\phi_m|$. As a consequence, the matrix coefficients of an operator are the same in the original Hilbert space and in the transformed one. The WT of the Liouville equation can be formally stated as: \begin{equation}\label{Liouville_WT:eqn} \dot{\hat{\rho}}_w(t) = \frac{1}{i\,\hbar} \, \left( \hat{H}_w \star \hat{\rho}_w - \hat{\rho}_w \star \hat{H}_w \right)\;. \end{equation} It can be shown that from Eq.~\eqref{Liouville_WT:eqn} the usual Hamilton-Ehrenfest equations (i.e. the EOM for $\bar{R}={\rm Tr}\{\hat{R}\hat{\rho}\}$ and $\bar{P}={\rm Tr}\{\hat{P}\hat{\rho}\}$) can be derived: \cite{horsfield06a} \begin{equation}\label{Ehrenfest:eqn} \left\{ \begin{array}{l} \dot{\bar{R}} = \bar{P}/M \; ,\\ \dot{\bar{P}} = \bar{F} = -{\rm Tr}\left\{ \left(\frac{\partial \hat{H}_e }{\partial R}\right) \hat{\rho} \right\}\;. \end{array} \right. \end{equation} It is also reasonable to take the phase-space trajectory $(\bar{R}(t), \bar{P}(t))$ as a zero-order approximation of the true atomic dynamics if the quantum fluctuations are not too large. [The semi-classical limit of the WT is explained more extensively in Ref.~\onlinecite{wigner32} (see also Ref.~\onlinecite{groenewald46}).] This fact suggests that instead of taking the origin as a reference point in the phase-space $(R,P)$ (i.e.\ a fixed reference frame), one can take advantage of Eq.~\eqref{Ehrenfest:eqn} and use $(\bar{R}(t), \bar{P}(t))$ as reference (i.e.\ a mobile reference frame). After this mobile reference frame transform, the EOM become: \begin{equation}\label{EOM_MC:eqn} \dot{\hat{\rho}}_w = \frac{1}{i\hbar}\,\left[ \hat{H}_w \star \hat{\rho}_w - \hat{\rho}_w \star \hat{H}_w \right] + \left( \frac{\partial \hat{\rho}_w }{\partial \bar{R}} \right) \frac{\bar{P}}{M} +\left( \frac{\partial \hat{\rho}_w }{\partial \bar{P}} \right) \bar{F}\;. \end{equation} Although it is not apparent from Eq.~\eqref{EOM_MC:eqn}, the EOM can still be expressed by a proper Lie bracket --- as in Eq.~\eqref{Liouville_WT:eqn} --- by means of a different time-translation generator, i.e.\ by a different Hamiltonian. [Mathematical details can be found in appendix \ref{derivation_EOM:sec}.] At this stage two paths can be followed.
In the first case, the Moyal product is expanded in $\hbar$ (usually up to first order \cite{thorndyke05,kapral99a}) and a quantum-classical extension of the Liouville equation is obtained. However, although this approach is physically appealing, one must be aware that the EOM obtained this way cannot be formulated through a proper Lie bracket, \cite{caro99,Prezhdo06} and a generalized non-Hamiltonian bracket should be introduced instead. \cite{sergi05} As a consequence, the evolution of a composite operator, $[AB](t)$, might be different from the composition of the separated evolutions, $A(t)B(t)$, \cite{caro99,nielsen01} (because the non-Hamiltonian bracket does not define a proper derivative), and the conservation of dynamical symmetries might be a problem \cite{caro99} (due to the violation of the Jacobi identity). It must also be recalled that the difference between the non-Hamiltonian and Lie brackets is only of order $\mathcal{O}(\hbar)$ \cite{nielsen01} and that the non-Hamiltonian structure arises in a quite natural way for open systems, whether classical or quantum. \cite{sergi05}

By following the other route, one uses the \emph{exact} expression for the Moyal product and takes the WT of Eq.~\eqref{rho_expansion:eqn} as a natural way to truncate $\hat{\rho}_w$. [We have found a direct expansion in the transformed space --- e.g. by using weighted orthogonal polynomials --- to cause dangerous instabilities in the truncated dynamics.] Therefore, the action of the truncation super-operator ${\rm T}_w$ can be defined as follows:
\begin{align}\label{rho_expansion_WT:eqn} {\rm T}_w\left[ \hat{\rho}_w(R,P,t) \right] &= {\rm T}_w\left[ \hat{\rho}_w(\bar{R} +\Delta R, \bar{P} +\Delta P,t) \right] \nonumber \\ & \equiv \sum_{n=0}^{N}\sum_{m=0}^{N}\,\hat{\rho}_{n,m}(\bar{R},\bar{P},t) P_{n,m}(\Delta R, \Delta P)\;, \end{align}
where $P_{n,m}(\Delta R,\Delta P)$ is the WT of $|\phi_n \rangle \langle \phi_m|$ in the mobile reference frame. [Properties of these functions are given in appendix \ref{CEID_appendix:sec}.] We opted for this approach because it is still a fully quantum scheme --- but in a truncated Hilbert space --- and the EOM for the density matrix can still be formulated by means of a proper Lie bracket (see appendix \ref{derivation_EOM:sec}). Although an analytical expression of $P_{n,m}$ is known, \cite{groenewald46} it is far more convenient to state a set of recurrence relations by taking advantage of the well-known SHO algebra. Details can be found in appendix \ref{derivation_EOM:sec}; here we report only the final form of the CEID EOM for the matrix coefficients $\hat{\rho}_{n,m}$ (in the mobile reference frame):
\begin{widetext} \begin{equation}\label{coded_EOM:eqn} \begin{split} \dot{\hat{\rho}}_{n,m} &= -\frac{b_0^2}{4i\hbar M}\left( \sqrt{(n+2)(n+1)} \hat{\rho}_{n+2,m} -(2n+1)\hat{\rho}_{n,m} + \sqrt{n(n-1)} \hat{\rho}_{n-2,m} + \right. \\ & \left. -\sqrt{m(m-1)}\hat{\rho}_{n,m-2} +(2m+1)\hat{\rho}_{n,m} -\sqrt{(m+2)(m+1)} \hat{\rho}_{n,m+2}\right) + \\ &+\frac{1}{i\hbar} \left[ \hat{H}_e\left( \bar{R} \right), \hat{\rho}_{n,m} \right] -\frac{a_0}{i\hbar} \left( \Delta\hat{F}\left( \bar{R}\right)\sqrt{\frac{n+1}{2}} \hat{\rho}_{n+1,m} + \Delta\hat{F}\left( \bar{R}\right)\sqrt{\frac{n}{2}} \hat{\rho}_{n-1,m} + \right. \\ & \left. -\sqrt{\frac{m}{2}} \hat{\rho}_{n,m-1}\Delta\hat{F}\left( \bar{R}\right) -\sqrt{\frac{m+1}{2}} \hat{\rho}_{n,m+1}\Delta\hat{F}\left( \bar{R}\right) \right) +\frac{a_0^2}{4i\hbar}\left( \hat{K}\left( \bar{R} \right)\sqrt{(n+2)(n+1)} \hat{\rho}_{n+2,m} + \right. \\ & \left. +\hat{K}\left( \bar{R} \right)(2n+1)\hat{\rho}_{n,m} +\hat{K}\left( \bar{R} \right)\sqrt{n(n-1)} \hat{\rho}_{n-2,m} -\sqrt{m(m-1)} \hat{\rho}_{n,m-2}\hat{K}\left( \bar{R} \right) + \right. \\ &\left. -(2m+1)\hat{\rho}_{n,m}\hat{K}\left( \bar{R} \right) - \sqrt{(m+2)(m+1)} \hat{\rho}_{n,m+2}\hat{K}\left( \bar{R} \right)\right)\;, \end{split} \end{equation} \end{widetext}
where $\hat{F} = -\partial \hat{H}_e / \partial R$, $\Delta\hat{F} = \hat{F} - \bar{F}$, and $\hat{K} = \partial^2 \hat{H}_e / \partial R^2$. [Terms involving higher derivatives of $\hat{H}_e$ should also appear in Eq.~\eqref{coded_EOM:eqn}, but vanish in this case since the 2LS Hamiltonian we want to study is quadratic --- see Eq.~\eqref{elec_H:eqn}.] We recall that, according to our truncation scheme, one must neglect in the RHS of Eq.~\eqref{coded_EOM:eqn} those matrix coefficients whose indices are greater than the CEID order.

These equations, along with Eq.~\eqref{Ehrenfest:eqn}, have been used to simulate the 2LS dynamics described in Sec.~\ref{2LS:sec}. In particular, the current implementation uses a non-adaptive second-order Runge-Kutta algorithm to integrate Eq.~\eqref{coded_EOM:eqn} and the standard velocity-Verlet algorithm to integrate Eq.~\eqref{Ehrenfest:eqn}. [A time-step $\Delta t = 10^{-3}/2\pi$ in our natural units (see Sec.~\ref{2LS:sec}) has been found to be appropriate for the precision required by the comparison between CEID and exact approaches reported in Sec.~\ref{2LS_results:sec}.] At every integration step, the averaged coordinates, $\bar{R}$ and $\bar{P}$, are evolved according to Eq.~\eqref{Ehrenfest:eqn} for half a time-step, then the matrix coefficients are propagated through Eq.~\eqref{coded_EOM:eqn} for a whole time-step, and finally the averaged coordinates are evolved by another half time-step. We verified that the accuracy achieved by this kind of symmetric Trotter decomposition is greater than what is obtained by means of a single evolution of the averaged coordinates for a whole time-step followed (or preceded) by a propagation of the matrix coefficients for a whole time-step. Numerical results can be found in Sec.~\ref{2LS_results:sec}. We now briefly discuss the link between our new formulation and the original CEID.

\subsection{Comparison with former CEID integration schemes} \label{old_CEID:sec}
At variance with the scheme described so far, the original formulation of CEID makes use of a completely different expansion, which directly provides EOM for the moments of the density matrix. \cite{horsfield05a,horsfield06a} The most relevant CEID moments are: $\hat{\rho}_e={\rm Tr}_a\left\{ \hat{\rho} \right\}$, $\hat{\mu}_1={\rm Tr}_a\left\{ \Delta \hat{R} \hat{ \rho} \right\}$, and $\hat{\lambda}_1={\rm Tr}_a\left\{ \Delta \hat{P} \hat{ \rho} \right\}$, where ${\rm Tr}_a$ is the partial trace with respect to the atomic degrees of freedom, $\Delta \hat{R} = \hat{R} - \bar{R}$, and $\Delta \hat{P} = \hat{P} - \bar{P}$. [Higher order moments must be carefully defined because $\Delta \hat{R}$ and $\Delta \hat{P}$ do not commute.]
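For concreteness, the zeroth moment is nothing but a partial trace over the atomic label; in the array layout assumed in the sketch above (our own convention, not the paper's code), it reads:
\begin{verbatim}
import numpy as np

def electronic_density_matrix(rho):
    """rho_e = Tr_a{rho} = sum_n rho_{n,n}, an electronic d x d matrix."""
    return np.einsum('nnij->ij', rho)

# Example: pure excited electronic state in the vibrational ground state
rho = np.zeros((6, 6, 2, 2), dtype=complex)
rho[0, 0] = np.array([[0.0, 0.0], [0.0, 1.0]])
print(electronic_density_matrix(rho))
\end{verbatim}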
On the other hand, analogous objects can also be introduced in the new formulation:
\begin{equation}\label{mu_nm_def:eqn} \hat{\mu}_{n,m}(t) = \frac{1}{2\pi\hbar} \int {\rm d}R {\rm d}P \, \Delta R^n \Delta P^m \hat{\rho}_w(R,P,t)\;. \end{equation}
By using the properties of the WT, it is easy to find a link between the new and the original notation: $\hat{\mu}_{0,0} = \hat{\rho}_e$, $\hat{\mu}_{0,1} = \hat{\lambda}_1$, and $\hat{\mu}_{1,0} = \hat{\mu}_1$. Similar relations for higher CEID moments can be stated, but some extra attention must be paid in the derivation due to the non-trivial commutation relations between positions and momenta. It is worth noting that CEID moments provide valuable information about the system. For instance, the quantities $(\hat{\mu}_{0,0})_{n,n}=(\hat{\rho}_e)_{n,n}$ give the probability of observing the system on the $n$-th PES, and the average force (see Eq.~\eqref{Ehrenfest:eqn}) can be easily computed from $\bar{F}= {\rm Tr}\{ \hat{F}(\bar{R}) \hat{\mu}_{0,0}\} -{\rm Tr}\{ \hat{K}(\bar{R}) \hat{\mu}_{1,0}\}$. Higher moments can be used to study electron-ion correlations. \cite{horsfield05a} Moments defined in Eq.~\eqref{mu_nm_def:eqn} can be expressed in terms of the matrix coefficients by means of the following linear transform:
\begin{equation} \hat{\mu}_{n,m} = \sum_{r,s}\,A^{n,m}_{r,s}\,\hat{\rho}_{r,s}\;, \end{equation}
where
\begin{equation} A^{n,m}_{r,s} = \frac{1}{2\pi\hbar}\int {\rm d}R {\rm d}P \Delta R^n \Delta P^m P_{r,s}(R,P)\;. \end{equation}
As usual, a set of recurrence relations for $A^{n,m}_{r,s}$ can be found and --- at least in theory --- CEID moments of any order can be computed. In practice, only the low-lying moments are relevant and here we give a short selection of them:
\begin{widetext} \begin{subequations}\label{moments:eqn} \begin{align} \hat{\mu}_{0,0} &= \sum_{n=0}^{N} \hat{\rho}_{n,n}\;, \\ \hat{\mu}_{0,1} &= -i\,b_0\sum_{n=0}^{N} \sqrt{\frac{n}{2}}\left[ \hat{\rho}_{n,n-1} - \hat{\rho}_{n-1,n}\right]\;,\\ \hat{\mu}_{1,0} &= +a_0\sum_{n=0}^{N} \sqrt{\frac{n}{2}}\left[ \hat{\rho}_{n,n-1} + \hat{\rho}_{n-1,n}\right]\;,\\ \hat{\mu}_{0,2} &= -\frac{b_0^2}{2}\sum_{n=0}^{N}\left[ \sqrt{n(n-1)} \hat{\rho}_{n,n-2} -(2n+1) \hat{\rho}_{n,n} + \sqrt{n(n-1)} \hat{\rho}_{n-2,n}\right]\;,\\ \hat{\mu}_{1,1} &= -\frac{i a_0 b_0}{2}\sum_{n=0}^{N}\left[ \sqrt{n(n-1)} \hat{\rho}_{n,n-2} -\sqrt{n(n-1)} \hat{\rho}_{n-2,n}\right]\;,\\ \hat{\mu}_{2,0} &= +\frac{a_0^2}{2}\sum_{n=0}^{N}\left[ \sqrt{n(n-1)} \hat{\rho}_{n,n-2} +(2n+1) \hat{\rho}_{n,n} + \sqrt{n(n-1)} \hat{\rho}_{n-2,n}\right]\;. \end{align} \end{subequations} \end{widetext}
As anticipated in Sec.~\ref{Intro:sec}, the zeroth-order CEID (i.e. for $N=0$) is equivalent to the ED: \cite{horsfield04b}
\begin{equation} \dot{\hat{\mu}}_{0,0} = \frac{1}{i\hbar} \left[ \hat{H}_e\left( \bar{R} \right), \hat{\mu}_{0,0} \right] \;. \end{equation}
[We have used Eq.~\eqref{moments:eqn}(a) and Eq.~\eqref{coded_EOM:eqn}.] We also stress that, in this case, $\hat{\mu}_{0,1} = \hat{\mu}_{1,0} = \hat{\mu}_{1,1} = 0$ and that both $\hat{\mu}_{0,2}$ and $\hat{\mu}_{2,0}$ are proportional to $\hat{\mu}_{0,0}$. To clarify the link between the new and the original formalism, it is helpful to write down the first-order ($N=1$) EOM in terms of the CEID moments. This can be done by inverting Eqs.~\eqref{moments:eqn}(a-c,f) to express $\hat{\rho}_{0,0}$, $\hat{\rho}_{0,1}$, $\hat{\rho}_{1,0}$, and $\hat{\rho}_{1,1}$ as functions of $\hat{\mu}_{0,0}$, $\hat{\mu}_{0,1}$, $\hat{\mu}_{1,0}$, and $\hat{\mu}_{2,0}$.
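As a side note, the forward map of Eqs.~\eqref{moments:eqn}(b)--(c) is straightforward to evaluate; a hedged sketch, again using our assumed four-index array layout for the coefficients, is:
\begin{verbatim}
import numpy as np

def low_moments(rho, a0, b0):
    """mu_{0,1} and mu_{1,0} from rho_{n,m}, per Eqs. (moments)(b)-(c).

    The sums start at n = 1 because the n = 0 terms vanish.
    """
    N = rho.shape[0] - 1
    mu01 = -1j * b0 * sum(np.sqrt(n / 2.0) * (rho[n, n - 1] - rho[n - 1, n])
                          for n in range(1, N + 1))
    mu10 = a0 * sum(np.sqrt(n / 2.0) * (rho[n, n - 1] + rho[n - 1, n])
                    for n in range(1, N + 1))
    return mu01, mu10
\end{verbatim}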
[At the same CEID order, $\hat{\mu}_{1,1}=0$ and $\hat{\mu}_{0,2}= (b_0^2 / a_0^2)\,\hat{\mu}_{2,0}$.] The final result is:
\begin{widetext} \begin{subequations}\label{first_order:eqn} \begin{align} \dot{\hat{\mu}}_{0,0} &= \frac{1}{i\hbar}\left[ \hat{H}_e(\bar{R}), \hat{\mu}_{0,0}\right] - \frac{1}{i\hbar}\left[ \hat{F}(\bar{R}), \hat{\mu}_{1,0}\right]+ \frac{1}{2i\hbar}\left[ \hat{K}(\bar{R}), \hat{\mu}_{2,0}\right]\;,\\ \dot{\hat{\mu}}_{0,1} &= \left\{ \Delta \hat{F}(\bar{R}), \hat{\mu}_{0,0} \right\} -\frac{1}{4}\left\{ \hat{K}(\bar{R}), \hat{\mu}_{1,0} \right\} +\frac{1}{i\hbar}\left[ \hat{H}_e(\bar{R}), \hat{\mu}_{0,1}\right] -\frac{i a_0}{2 b_0}\left[ \hat{K}(\bar{R}), \hat{\mu}_{0,1}\right] + \nonumber \\ &\quad - \frac{1}{a_0^2}\left\{ \Delta \hat{F}(\bar{R}), \hat{\mu}_{2,0} \right\} -\frac{b_0^2}{2 a_0^2 M}\hat{\mu}_{1,0}\;, \\ \dot{\hat{\mu}}_{1,0} &= \frac{1}{2M}\hat{\mu}_{0,1} + \frac{1}{i\hbar}\left[ \hat{H}_e(\bar{R}), \hat{\mu}_{1,0}\right] - \frac{i a_0}{2 b_0}\left[ \hat{K}(\bar{R}), \hat{\mu}_{1,0}\right] + \frac{i a_0}{2 b_0} \left[ \hat{F}(\bar{R}), \hat{\mu}_{0,0}\right] + \nonumber \\ &\quad +\frac{a_0^2}{4 b_0^2} \left\{ \hat{K}(\bar{R}), \hat{\mu}_{0,1}\right\}\;,\\ \dot{\hat{\mu}}_{2,0} &= \frac{1}{i\hbar}\left[ \hat{H}_e(\bar{R}), \hat{\mu}_{2,0}\right] + \frac{i a_0}{b_0}\left[ \hat{F}(\bar{R}), \hat{\mu}_{1,0}\right]- \frac{i a_0}{b_0}\left[ \hat{K}(\bar{R}), \hat{\mu}_{2,0}\right] + \frac{3i\hbar a_0^2}{8 b_0^2}\left[ \hat{K}(\bar{R}), \hat{\mu}_{0,0}\right] + \nonumber \\ &\quad +\frac{a_0^2}{2 b_0^2}\left\{ \Delta\hat{F}(\bar{R}), \hat{\mu}_{0,1}\right\}\;. \end{align} \end{subequations} \end{widetext}
These equations might be compared with Eq.~(8) of Ref.~\onlinecite{horsfield05a} (the mean-field second moment approximation), keeping in mind that in that paper the following \emph{ansatz} has been made: $\hat{\mu}_{2,0} = C^{R,R} \hat{\mu}_{0,0}$, $\hat{\mu}_{1,1} = C^{R,P} \hat{\mu}_{0,0}$, and $\hat{\mu}_{0,2} = C^{P,P} \hat{\mu}_{0,0}$, where $C^{R,R}$, $C^{R,P}$, and $C^{P,P}$ are time-dependent quantities. According to our initial conditions (see Sec.~\ref{2LS:sec}), the initial values of these variables are: $C^{R,R}(0)=a_0^2/2$, $C^{R,P}(0)=0$, and $C^{P,P}(0)=b_0^2/2$. In order to find the EOM for $C^{R,R}$, one can trace Eq.~\eqref{first_order:eqn}(d); it turns out that, to first order in the coupling constant $f_c$, $\dot{C}^{R,R}=0$. Remarkably, by assuming that the matrix $\hat{K}$ is proportional to the unit matrix and by substituting $\hat{\mu}_{2,0} = (a_0^2/2)\hat{\mu}_{0,0}$ into Eq.~\eqref{first_order:eqn}(a-c), we obtain EOM for $\hat{\mu}_{0,0}$, $\hat{\mu}_{0,1}$, and $\hat{\mu}_{1,0}$ which are equal to the ones stated in Eq.~(8) of Ref.~\onlinecite{horsfield05a} up to first order in the coupling constant. Although there is no reason to believe that this agreement must be restricted only to a given set of initial conditions, it is not yet clear how it might be proved for higher CEID orders or for different basis-set expansions.

\subsection{Energy conservation}\label{energy_conservation:sec}
By using the original CEID scheme, it is possible to write the total energy, Eq.~\eqref{e_tot:eqn}, in terms of the density matrix moments. \cite{horsfield04b} A similar expansion is also obtained by computing Eq.~\eqref{e_tot:eqn} explicitly. [The matrix coefficients of the Hamiltonian can be found in appendix \ref{derivation_EOM:sec}.]
During this computation, it is quite useful to distinguish between atomic kinetic energy, $E_{kin}={\rm Tr}\{ (\hat{P}^2/2M)\hat{\rho}\}$, and atomic potential energy, $E_{pot}={\rm Tr}\{ \hat{H}_e\hat{\rho}\}$ (see Eq.~\eqref{2LS_H:eqn}). In terms of the CEID moments (see Eq.~\eqref{moments:eqn}), these two quantities are given by:
\begin{widetext} \begin{subequations}\label{energies:eqn} \begin{align} E_{kin} &= \frac{\bar{P}^2}{2M} +\frac{\bar{P}}{M}{\rm Tr}\left\{ \hat{\mu}_{0,1} \right\} +\frac{1}{2M}{\rm Tr}_e\left\{\hat{\mu}_{0,2} \right\} \;, \\ E_{pot} &= {\rm Tr}_e\left\{ \hat{H}_e(\bar{R}) \hat{\mu}_{0,0} \right\} -{\rm Tr}_e\left\{ \hat{F}(\bar{R}) \hat{\mu}_{1,0} \right\} +\frac{1}{2} {\rm Tr}_e\left\{ \hat{K}(\bar{R}) \hat{\mu}_{2,0} \right\}\;. \end{align} \end{subequations} \end{widetext}
As explained in appendix \ref{correcting_averages:sec}, the CEID evolution of the bare averages defined in Eq.~\eqref{energies:eqn} does not give a conserved (bare) total energy, $E_{tot}= E_{kin} + E_{pot}$, although the error is negligible for a large enough CEID order. \cite{footnote6} That is because CEID provides an approximation of the exact evolution (in the truncated Hilbert space) of the observables' averages (see Eqs.~\ref{truncated_average:eqn} and \ref{CEID_average:eqn}). On the other hand, the \emph{exact} evolution (in the truncated Hilbert space) of \emph{every} observable can be retrieved starting from the CEID EOM and then adding a correcting term whose general analytical expression is reported at the end of appendix \ref{correcting_averages:sec}. The time-derivatives of the corrections for the bare atomic kinetic and potential energy --- whose integrals must be added to Eq.~\eqref{energies:eqn}(a) and Eq.~\eqref{energies:eqn}(b), respectively --- are reported below:
\begin{widetext} \begin{subequations}\label{res_energies:eqn} \begin{align} \dot{C}_{E_{kin}}^{(N)} &= +\frac{\bar{P}}{M}(N+1) {\rm Tr}\left\{\Delta\hat{F}(\bar{R})\hat{\rho}_{N,N} \right\} +\nonumber \\ &\quad -\frac{ib_0}{4 M}(N+1)\sqrt{\frac{N}{2}}{\rm Tr}\left\{\Delta\hat{F}(\bar{R})\left(\hat{\rho}_{N,N-1} -\hat{\rho}_{N-1,N}\right) \right\} + \nonumber\\ &\quad +\frac{\bar{P} b_0^2}{4 a_0 M^2}(N+1)\sqrt{\frac{N}{2}}{\rm Tr}\left\{\hat{\rho}_{N,N-1} + \hat{\rho}_{N-1,N} \right\} + \nonumber\\ &\quad -\frac{\bar{P} a_0}{4M} (N+1) \sqrt{\frac{N}{2}}{\rm Tr}\left\{ \hat{K}(\bar{R}) (\hat{\rho}_{N,N-1} + \hat{\rho}_{N-1,N}) \right\}\;,\\ \dot{C}_{E_{pot}}^{(N)} &= -\frac{i a_0^2 \bar{F}}{4 b_0}(N+1)\sqrt{\frac{N}{2}}{\rm Tr}\left\{\hat{K}(\bar{R}) \left(\hat{\rho}_{N,N-1} - \hat{\rho}_{N-1,N} \right) \right\} + \nonumber\\ &\quad +\frac{i b_0}{4M} (N+1) \sqrt{\frac{N}{2}}{\rm Tr}\left\{ \hat{F}(\bar{R}) (\hat{\rho}_{N,N-1} - \hat{\rho}_{N-1,N}) \right\}\;. \end{align} \end{subequations} \end{widetext}

\section{Comparison between exact integration and CEID} \label{2LS_results:sec}
In this section we present the main results of this work. They were obtained by means of the two numerical algorithms described in the previous sections, namely the exact integration scheme of Sec.~\ref{exact_int:sec} and the CEID scheme of Sec.~\ref{CEID:sec}. We recall that our main goal is to establish the convergence of CEID (by increasing its order) and to verify that the converged results agree with the exact dynamics of the 2LS geometries introduced in Sec.~\ref{2LS:sec}.
That can be safely done by a direct comparison between CEID and exact integration of the time-dependent Schr\"{o}dinger equation, and this will be the object of Sec.~\ref{CEID_vs_exact:sec} --- which contains a discussion of the electronic observable dynamics --- and Sec.~\ref{atomic_observables:sec} --- which contains a discussion of the atomic observable dynamics. The agreement of CEID with exact integration is clearly a fundamental numerical achievement, but it does not directly help in the interpretation of the simulation findings. Further insights can be obtained by comparing CEID against analytical results derived through first-order time-dependent perturbation theory (see appendix \ref{time_dep_pert:sec}). This comparison is reported in Sec.~\ref{CEID_vs_perturbation:sec}.

\subsection{Electronic observables} \label{CEID_vs_exact:sec}
Here we present the results of the electronic dynamics. As initial condition, we always choose the atomic vibrational ground-state on the upper PES, and then we let the system evolve according to either the exact Schr\"{o}dinger evolution or the CEID equations. The WT of the initial $(t=0)$ uncorrelated density matrix is: $\hat{\rho}_w(\Delta R,\Delta P,0)=P_{0,0}(\Delta R,\Delta P)\hat{\rho}_e(0)$, where $P_{0,0}(\Delta R,\Delta P)$, according to the definition given in Sec.~\ref{CEID:sec}, is the WT (in the mobile reference frame) of the atomic vibrational ground-state (which is centered at $\bar{R}=0$ and $\bar{P}=0$) and
\begin{equation} \hat{\rho}_e(0) = \left( \begin{array}{cc} 0 & 0\\ 0 & 1 \end{array} \right) \end{equation}
describes a pure excited electronic state in the \emph{non-adiabatic} representation introduced in Sec.~\ref{2LS:sec}. The most informative electronic observables are the probabilities of finding the system in the upper or lower electronic state. Those are obtained as the diagonal entries of the electronic density matrix $\hat{\rho}_e= \hat{\mu}_{0,0}$ (see Sec.~\ref{old_CEID:sec}) and we shall call them electronic populations.
\begin{figure}[!ht] \begin{center} \includegraphics[width=7cm]{figure1.eps} \end{center} \caption{ (Color online) Unshifted 2LS. (Top) Adiabatic energy levels of the electronic Hamiltonian, $H_e(R)$ (see Eq.~\eqref{elec_H:eqn}), are plotted (solid lines) against the atomic coordinate, $R$. For the units employed, see the main text. The horizontal line (red online) marks the total energy of the system. A sketch of the initial atomic density distribution is also given (dashed line). (Center) Electronic populations against time, from exact integration. (Bottom) Electronic populations against time, from CEID. Results for CEID order $0$ (i.e. ED), $1$, and $5$ are reported. } \label{fig_potential_populations_unshifted:fig} \end{figure}
In Fig.~\ref{fig_potential_populations_unshifted:fig} we collect the numerical results for the unshifted case. A sketch of the PES is plotted in the first panel. [In this particular kind of 2LS, the difference between adiabatic and non-adiabatic surfaces is negligible.] In Fig.~\ref{fig_potential_populations_unshifted:fig}(b) the exact evolution of electronic populations is reported. We stress that the oscillatory population transfer between the two PES clearly confirms the qualitative picture we sketched in Sec.~\ref{2LS:sec}. In Fig.~\ref{fig_potential_populations_unshifted:fig}(c) the time-evolution of the electronic populations from CEID is reported. Different CEID orders, namely $N=0$, $N=1$, and $N=5$, have been considered.
As we expected, the $N=0$ simulation (which is equivalent to ED, see Sec.~\ref{old_CEID:sec}) does not display any electronic transition. On the other hand, the outcomes of higher-order CEID simulations present oscillations of the electronic populations which are in almost perfect agreement with the exact integration results. In particular, the first-order CEID simulation is well converged (i.e. there are no visible differences between the $N=1$ and $N=5$ findings). However, this is not very surprising; the unshifted 2LS is the easiest case, since the symmetries involved ensure that the evolved state is well described by the simple \emph{ansatz} stated in Eq.~\eqref{0-order_state:eqn} (see also appendix \ref{time_dep_pert:sec}).
\begin{figure}[!ht] \begin{center} \includegraphics[width=7cm]{figure2.eps} \end{center} \caption{ (Color online) Shifted 2LS. (Top) Adiabatic energy levels of the electronic Hamiltonian, $H_e(R)$ (see Eq.~\eqref{elec_H:eqn}), are plotted (solid lines) against the atomic coordinate, $R$. The avoided crossing is magnified in the inset, where the non-adiabatic energies (dashed lines) are also shown for comparison. For the units employed, see the main text. The horizontal line (red online) marks the total energy of the system. A sketch of the initial atomic density distribution is also given (dashed line). (Center) Electronic populations against time, from exact integration. (Bottom) Electronic populations against time, from CEID. Results for CEID order $0$ (i.e. ED), $1$, $5$, and $10$ are reported. } \label{fig_potential_populations_shifted:fig} \end{figure}
In Fig.~\ref{fig_potential_populations_shifted:fig} we show the results for the shifted case, following the same scheme employed for the previous figure. For this kind of 2LS, adiabatic and non-adiabatic PES are qualitatively different, but since in Fig.~\ref{fig_potential_populations_shifted:fig}(a) the difference can be appreciated only close to the crossing, we provide a magnified plot of that region in a small inset. Once again, almost perfect agreement is seen between a well converged CEID simulation (here for at least $N=5$) and the exact integration of the time-dependent Schr\"{o}dinger equation, while at the level of ED (i.e. $N=0$) the system is stuck on the upper PES. The fact that a first-order CEID simulation is not yet well converged is not surprising and is a confirmation of the general trend predicted in Sec.~\ref{2LS:sec}: the larger the surface displacement, $R_0$, the higher the CEID order required to obtain a well converged simulation. We see in Fig.~\ref{fig_potential_populations_shifted:fig}(b) that the period of the electronic oscillations is larger for the shifted case than for the unshifted 2LS. [A perturbative account of that effect can be found in appendix \ref{time_dep_pert:sec}.] Moreover, in this shifted case the population exchange between the two PES is not complete: the minimum of the electronic population on the upper surface (corresponding to the maximum of the electronic population on the lower surface) is not exactly zero (one). This interesting feature is clearly visible in Fig.~\ref{fig_potential_populations_shifted:fig}(b) and is also found in the two well converged CEID simulations in Fig.~\ref{fig_potential_populations_shifted:fig}(c), so it is not a numerical artefact.
This is instead a non-trivial fingerprint of otherwise elementary dynamics, which is caused by the virtual transitions --- a clear quantum effect --- between the low-lying resonant states and more energetic atomic vibrational states. Further details can be found in Sec.~\ref{CEID_vs_perturbation:sec}, in which we study the dependence of this residual population on the coupling constant $f_c$.

\subsection{Atomic observables} \label{atomic_observables:sec}
Since the atomic motion is actually non-classical, we expect to find quantum fluctuations of the atomic observables around their average values. We study the time-evolution of the average atomic position and momentum, $\bar{R}$ and $\bar{P}$, because they provide a sort of effective trajectory in the phase space which represents an important link with classical MD. Obviously, the concept of trajectory is not well defined in quantum mechanics and is useful as an approximation only if the fluctuations are not too large. So, we also consider the variance of the atomic position, $\langle \Delta R^2 \rangle$, in order to test the accuracy of CEID in describing possibly non-classical atomic dynamics.
\begin{figure}[!ht] \begin{center} \includegraphics[width=7cm]{figure3.eps} \end{center} \caption{ Shifted 2LS. Plots of the time-evolution of the averaged atomic position, $\bar{R}$, (top) and averaged atomic momentum, $\bar{P}$, (bottom). Data (almost perfectly superimposed) are taken from exact integration and a well-converged CEID simulation. } \label{fig_R_P:fig} \end{figure}
We start by reporting results for the average atomic position and momentum, $\bar{R}$ and $\bar{P}$. They are evolved by means of the Hamilton-Ehrenfest equations (see Sec.~\ref{CEID:sec}) according to CEID, while in the exact integration scheme they are obtained by means of Eq.~\eqref{exact_observables:eqn}. For the unshifted 2LS, $\bar{R}=0$ and $\bar{P}=0$ at all times, due to the inversion symmetry displayed by the system. On the other hand, the findings reported in Fig.~\ref{fig_R_P:fig} for the shifted case once again show almost perfect agreement between CEID and exact integration in a completely non-trivial case. In particular, CEID not only reproduces the general trend of both $\bar{R}$ and $\bar{P}$ (long-period oscillations), but also gives the short-time-scale details (rapid oscillations).
\begin{figure}[!ht] \begin{center} \includegraphics[width=7cm]{figure4.eps} \end{center} \caption{ Plots of the time-evolution of the atomic position variance $\langle \Delta R^2\rangle$, for the unshifted (top) and shifted (bottom) 2LS. Data (almost perfectly superimposed) are taken from exact integration and a well-converged CEID simulation. } \label{fig_RR:fig} \end{figure}
In Fig.~\ref{fig_RR:fig} we report the results for the variance of the atomic position, $\langle \Delta R^2\rangle$, for both the unshifted and shifted 2LS. This observable can be obtained as the trace of the CEID moment $\hat{\mu}_{2,0}$ defined in Sec.~\ref{old_CEID:sec}. Once again, almost perfect agreement has been found between well converged CEID simulations (here $N=5$ and $N=10$ for the unshifted and shifted 2LS, respectively) and the exact integration of the time-dependent Schr\"{o}dinger equation. We stress that such fluctuations are quite significant, and so the atomic dynamics is only poorly approximated by its average trajectory in the classical phase space. CEID works properly even in these highly non-classical cases.
Finally, we have also verified the agreement between CEID and exact integration for the other entries of the covariance matrix, namely $\langle \Delta P^2\rangle$ and $\langle \Delta R \Delta P\rangle$. However, numerical findings for those cases are not reported here because they are not qualitatively different from the $\langle \Delta R^2\rangle$ case.

\subsection{Comparison between time-dependent perturbation theory and CEID} \label{CEID_vs_perturbation:sec}
In this last section we briefly compare the CEID outcomes against time-dependent perturbation theory results. [Mathematical details are collected in appendix \ref{time_dep_pert:sec}.] First of all, from a well converged CEID simulation (e.g. $N=5$ and $N=10$ for the unshifted and shifted case, respectively), the values of the electronic oscillation frequency and the residual electronic population can be obtained by means of a straightforward numerical interpolation. Then, this procedure can be repeated for the same 2LS geometries, but with different electron-ion coupling constants, $f_c$ (see Eq.~\eqref{2LS_H:eqn}). It is instructive to study the effect of the atomic motion on the electronic transitions because it might cause non-classical phenomena, like quantum interference between different transition paths. Those effects are usually hard to interpret without a model which can --- at least qualitatively --- describe the physics involved. Fortunately, for the kind of model 2LS we considered in this paper, a simple description can be obtained by means of time-dependent perturbation theory, whose predictions for the electronic transition frequency and the residual population are summarized in Eq.~\eqref{omega_p:eqn} and Eq.~\eqref{P_res:eqn}, respectively.
\begin{figure}[!ht] \begin{center} \includegraphics[width=7cm]{figure5.eps} \end{center} \caption{ (Color online) Frequency $\omega_p$ of the electronic transitions (see Eq.~\eqref{omega_p:eqn}) (top) and square root of the residual population $P_{res}^{1/2}$ (see Eq.~\eqref{P_res:eqn}) (bottom) against the coupling constant, $f_c$, for the unshifted and shifted 2LS. Linear fits are also shown (dashed lines). For the units employed, see the main text. } \label{fig_fitting_populations:fig} \end{figure}
In Fig.~\ref{fig_fitting_populations:fig}(a) numerical values of electronic population frequencies are reported against several values of the coupling constant, $f_c$. A clear linear trend is manifest in all the 2LS geometries. Moreover, numerical values are in almost perfect agreement with the analytical results, Eq.~\eqref{omega_p:eqn}. In Fig.~\ref{fig_fitting_populations:fig}(b) the residual populations are plotted against the same coupling constant values. Although the analytical trend, $P_{res} \simeq \gamma g^2$ (see Eq.~\eqref{P_res:eqn}), is confirmed, in general it is not easy to give an estimate of the prefactor, $\gamma$. On the other hand, for the unshifted 2LS case, only one term in Eq.~\eqref{P_res:eqn} is non-zero due to the SHO selection rules. As a consequence, a numerical estimate can be obtained and it gives $\gamma \simeq 2.5 \cdot 10^{-1}$, while a direct numerical interpolation gives $\gamma \simeq 3.7 \cdot 10^{-1}$. We stress that this disagreement might depend on the kind of approximation we made in order to derive Eq.~\eqref{P_res:eqn}, and it does not affect the general scaling trend of the residual population with the coupling constant.
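For reference, the numerical interpolation can be sketched as a simple least-squares fit. The Rabi-like fit model below is our own assumed form for the upper-PES population (chosen so that it starts at one and has minimum $P_{res}$); it is an illustration, not the exact expression of appendix~\ref{time_dep_pert:sec}:
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def upper_population(t, omega_p, p_res):
    # starts at 1 at t = 0 and oscillates down to p_res
    return p_res + (1.0 - p_res) * np.cos(0.5 * omega_p * t) ** 2

t = np.linspace(0.0, 50.0, 2000)             # hypothetical time grid
trace = upper_population(t, 1.3, 0.04)       # stand-in for a CEID trace
(omega_p, p_res), _ = curve_fit(upper_population, t, trace, p0=(1.0, 0.0))
print(omega_p, p_res)
\end{verbatim}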
\section{Discussion and conclusions}\label{conclusions:sec}
We have presented a new formulation of correlated electron-ion dynamics (CEID). It is based on a suitable expansion of the quantum fluctuations around the mean-field atomic trajectories, and its lowest accuracy limit has been proved to be equivalent to the well-known Ehrenfest dynamics (ED). This new formulation has been obtained by a combined use of: 1) an expansion of the density matrix in terms of atomic harmonic states centered around the average instantaneous atomic positions; 2) an \emph{exact} Wigner transform with respect to the atomic degrees of freedom of the expanded density matrix. The validity of this scheme has been successfully tested by simulating the non-adiabatic time-evolution of a model two-level system (2LS). The accuracy of our simulations is determined by a single parameter, which is related to the order of the density matrix expansion and is called the CEID order. We then verified that, for all the considered 2LS geometries, the exact quantum dynamics --- obtained by exact integration of the time-dependent Schr\"{o}dinger equation --- is eventually retrieved by increasing the CEID order. We think that this is a crucial property of our new CEID scheme, which allows us to estimate the convergence of a numerical simulation even when reliable benchmarks are not available. As for the other proposed CEID schemes, \cite{horsfield04b,horsfield05a} our algorithm only needs the Hamiltonian and the initial conditions to start, and the subsequent evolution is computed smoothly, without resorting to any kind of surface hopping or wave-function spawning. No \emph{a posteriori} position, velocity, or density matrix adjustment is needed. The \emph{exact} evolution (in the truncated Hilbert space) of every observable average can be obtained starting from the CEID EOM by adding a correction term (whose analytical expression is known) which is in any case negligibly small for large CEID orders. Moreover --- and at variance with other available algorithms \cite{mueller97,tully98} --- it works perfectly well within a non-adiabatic representation of the electronic PES. This is desirable because non-adiabatic PES may be smoother than the adiabatic ones \cite{ben-nun00} and also because a costly diagonalization of the atomic potential energy $H_e(R)$ at each step is avoided. All these dynamical properties make our new CEID algorithm a good candidate for simulating atomic systems in which quantum coherence is relevant. The advantages of a coherent quantum scheme might be relevant even when macroscopic quantum coherence is not shown. It is well known that ED --- at variance with other quantum-classical methods \cite{parandekar05a,kapral06} --- cannot thermalize a mixed electron-ion system (the electronic degrees of freedom remain too hot with respect to the atomic ones \cite{parandekar05a}). This failure of the mean-field approximation stems from the absence of the quantum fluctuations which cause spontaneous phonon emission from an excited electronic state. [This drawback is also apparent in the 2LS simulations considered in this paper (see Sec.~\ref{2LS_results:sec}): the ED is always stuck in the initial excited state.] As a consequence, ED does not satisfy microreversibility. \cite{mueller97,tully98} On the other hand, a CEID simulation beyond ED can describe quantum fluctuations and meet the coherence requirements for microreversibility in a very natural way. [Once again, see the results of Sec.~\ref{2LS_results:sec}.]
Although it is not the main concern of this paper, our group is considering a viable way to approach quantum thermalization physics by means of CEID. A first possibility is given by the spin-boson model, \cite{leggett87} in which the bath degrees of freedom are treated explicitly by means of a collection of many quantum harmonic oscillators. On a more speculative ground, one can think of implementing the generalization of the Nos\'{e}-Hoover thermostat introduced in Ref.~\onlinecite{grilli89}. This scheme is known to fail for ED \cite{mauri93} due to the lack of correct quantum back-reaction on the classical bath variables. \cite{sergi05} On the other hand, as we have shown again in this paper, CEID corrects this ED drawback and it might be better suited for that sort of thermostat. Moreover, a successful attempt to couple the Nos\'{e}-Hoover thermostat to the spin-boson model is known in the literature \cite{sergi07a} and can provide an interesting test case for future CEID simulations. Our CEID algorithm is computationally demanding and is expected --- in the worst case scenario --- to scale as $(N+1)^{2 N_{c} }$, where $N$ is the CEID order and $N_{c}$ is the number of atomic coordinates. \cite{footnote5} Nevertheless, it must be pointed out that the number of relevant atomic coordinates can in practice be much smaller than $N_{c}$. \cite{ness99,tamura07} In this case, one might accelerate a CEID simulation by allowing for quantum atomic fluctuations along the relevant directions only. We also stress that the CEID algorithm is still faster than the exact integration scheme employed to produce the benchmark calculations in this paper (see Sec.~\ref{exact_int:sec}), which should scale as $(N+1)^{3 N_{c} }$ since a numerical diagonalization of the Hamiltonian in the truncated Hilbert space is required. We are also considering alternative truncations of the Hilbert space in order to restore the polynomial scaling with the number of atomic degrees of freedom of the early CEID algorithms. We see another possible advantage of this CEID scheme over exact integration: the former expands the quantum fluctuations around mean-field atomic trajectories, while the latter expands with respect to a fixed reference frame. Now, consider a quantum motion in which there are fluctuations about the mean-field atomic trajectories that are very tightly confined along a given direction. With our CEID formulation such fluctuations can be treated accurately with a low-order expansion. However, schemes that employ basis functions which are not centered around the atomic trajectories could require a very high-order expansion to reproduce that confined behavior if the trajectories are remote from the center of the basis functions. Other algorithms, such as molecular dynamics with quantum transitions or \emph{ab initio} multiple spawning, might have a lower computational complexity, especially if the region of the configuration space where non-adiabatic effects are relevant is small and crossed only a few times during the time-evolution. This is the case, for instance, for many chemical reactions in a gaseous or dilute phase. On the other hand, we recall here that CEID was explicitly devised to deal with electron-ion correlations in metals, systems in which the aforementioned algorithms are expected to be less efficient. Needless to say, a reliable algorithm to simulate microscopic electro-mechanical effects, including Joule heating, will find important applications in nanostructure design.
Our CEID algorithm is a good candidate because its accuracy can be systematically increased by tuning a single parameter that allows us to approximate the quantum atomic fluctuations in a physically transparent way. Moreover, since quantum coherence is properly described by CEID, subtle photo-physical effects, like luminescence in conjugated polymers, might be addressed by this method. Applications of our algorithm to larger atomic systems and different thermodynamical ensembles are the subject of ongoing study.
\begin{acknowledgments} LS is supported by EPSRC under grant EP/C524381/1 and MM is supported by EPSRC under grant GR/S80165. The authors would like to thank Tchavdar Todorov for illuminating suggestions and for critically reading this paper. MM also thanks A.T. Paxton for helpful comments and LS acknowledges useful discussions with R. Peixoto Miranda, D.B. Bowler, P. Delaney, and A.M. Stoneham. \end{acknowledgments}
\section{Introduction}
The Standard Model (SM)~\cite{sm} predicts one Higgs boson as a remnant from the spontaneous symmetry breaking mechanism~\cite{higgs}. This mechanism allows fermions and the W and Z bosons to acquire their masses by interaction with the Higgs field. Discovering the Higgs boson is therefore of crucial interest to complete the SM. Electroweak precision measurements suggest the mass of the Higgs boson to be of the order ${\cal O}(100\,{\rm GeV})$. Direct searches at LEP have set a lower mass limit of $114~{\rm GeV}$. A Higgs boson with a mass above $114~{\rm GeV}$ will be accessible in the experiments at the Large Hadron Collider (LHC). However, to be sure about the nature of the particle found, it is necessary to measure its properties such as mass, width, charge, spin, parity, couplings to other particles, and self-couplings, to test the internal consistency of the SM or to find hints for new physics. For the determination of at least some of these quantities, the LHC will not be sufficient. At the future International Linear Collider (ILC), we will have the chance to investigate the properties of new particles with high precision and in full detail. This $e^+e^-$ collider with a centre-of-mass energy of up to $1\,{\rm TeV}$ provides a well-known initial state, a very clear signature for ${\rm e^+e^-}\to{\rm ZH}$\ events, and, due to its high luminosity, sufficient statistics for precision measurements. In the Higgs-strahlung process ${\rm e^+e^-}\to{\rm ZH}$, we can investigate the coupling of the Higgs boson to the Z boson and determine the Higgs boson mass with high precision in a relatively model-independent way using the recoil mass spectrum against the Z,
\begin{equation}\label{form:recoil} m_{\rm recoil}^2=s+m_{\rm di-lepton}^2-2\cdot E_{\rm di-lepton} \cdot \sqrt{s}\;, \end{equation}
where $s$ is the square of the centre-of-mass energy, and $m_{\rm di-lepton}$ and $E_{\rm di-lepton}$ are the mass and the energy of the leptons originating from the Z decay. Previous studies using simplified parametric detector simulations have shown the potential of this technique~\cite{GaLo96}. Here, we study the prospects of measuring the Higgs boson mass and cross section, assuming its mass to be $120\,{\rm GeV}$. At a centre-of-mass energy of $250\,{\rm GeV}$, a full detector simulation of the process ${\rm e^+e^-}\to{\rm ZH}$\ is performed using the {\sc Mokka} software package, which simulates events in the LDC detector. The response from the sub-detectors is digitised as in a real experiment and is processed using the {\sc MarlinReco}~\cite{ILCSoft} reconstruction software. In addition, SM background processes are treated in the same way. The Z is reconstructed from its decays into electrons and muons. Algorithms are developed to identify electrons and muons using the tracker and calorimeter information. A likelihood technique is used to separate signal events from the SM background processes with high efficiency.
\section{Experimental Conditions}
The study assumes a linear $e^+e^-$ collider operating at a centre-of-mass energy of $250\,{\rm GeV}$. This energy is chosen because the Higgs-strahlung cross section for the SM Higgs boson with a mass of $120\,{\rm GeV}$ reaches its maximum. The statistics for signal and background events corresponds to a luminosity of $50\,{\rm fb}^{-1}$. Signal events were generated using {\sc Pythia}~6.4.11 \cite{Sjo06}. Initial and final state bremsstrahlung and beamstrahlung are taken into account.
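As an aside to the recoil formula, Eq.~(\ref{form:recoil}) is simple to evaluate once the di-lepton system has been reconstructed; the sketch below is our own illustration, with purely illustrative numbers:
\begin{verbatim}
import math

def recoil_mass(sqrt_s, e_dilepton, m_dilepton):
    """Recoil mass from Eq. (form:recoil); all quantities in GeV."""
    m2 = sqrt_s**2 + m_dilepton**2 - 2.0 * e_dilepton * sqrt_s
    return math.sqrt(m2)

print(recoil_mass(250.0, 110.0, 91.2))  # ~125.8 GeV for these inputs
\end{verbatim}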
To simulate beamstrahlung the {\sc GuineaPig}~1.4.1 \cite{Sch96} program is used assuming ILC nominal beam parameters. Background events are produced using in addition the event generators {\sc BHwide}~1.04 \cite{Jad97} and {\sc Sherpa}~1.0.10 \cite{GHK04} as listed in Table \ref{tab:xsec}. Signal and background events are passed through a full detector simulation ({\sc Mokka}) and are processed in the full reconstruction scheme of {\sc Marlin}. A sketch of the simulation stages is shown in Figure \ref{Fig:fig01}. \begin{figure}[!h] \centerline{\includegraphics[width=0.95\columnwidth]{ohlerich_martin.fig1.eps}} \caption{\small Scheme of simulation and analysis stages. Events of the different processes are generated using {\sc Pythia}~6.4.11 , {\sc BHwide}~1.04 and {\sc Sherpa}~1.0.10. Beamstrahlung is treated by {\sc GuineaPig}. {\sc Mokka} and {\sc Marlin} simulate and reconstruct events in the LDC detector. The analysis software is based on ROOT. } \label{Fig:fig01} \end{figure} The LDC detector~\cite{DOD} is used for the simulation and reconstruction. The vertex detector (VTX) consists of five layers of silicon pixel detectors. The main tracker is a TPC of about $3~{\rm m}$ diameter and $4~{\rm m}$ length supplemented by cylindrical silicon strip detectors (SIT) and forward strip and pixel detectors (FTD). The TPC is surrounded by the electromagnetic (ECAL) and hadronic (HCAL) calorimeter, which in turn are enclosed by the $4\,{\rm T}$ magnet and the iron yoke. ECAL is a finely segmented silicon-tungsten sandwich calorimeter. HCAL consists of a steel absorber structure with small scintillator tiles read out with silicon photomultipliers. When this analysis was performed no muon chamber was implemented in {\sc Mokka}. Thus the separation of muons and pions in the particle identification is done using the trackers and calorimeters only. The precision of the momentum measurement of the tracker system (TPC+VTX+SIT+FTD) is obtained to be $\sigma_{p_t}/p_t=7\cdot10^{-5}\cdot p_t\,[{\rm GeV}]$ using the FullLDCTracking processor in {\sc MarlinReco}. \begin{table}[!h] \centerline{\begin{tabular}{|rl|r|r|r|} \hline &Process & $\sigma\;[{\rm fb}]$ & $N(50\;{\rm fb}^{-1})$ & Generator\\ \hline 1.&$e^+e^-\rightarrow HZ\rightarrow X\ell^+\ell^-$ & $15.0$ & $751$ & {\sc Pythia} \\ 2.&$e^+e^-\rightarrow e^+e^-$ & $4144.5$ & $207223$ & {\sc BHwide} \\ 3.&$e^+e^-\rightarrow \mu^+\mu^-$ & $4281.0$ & $214050$ & {\sc Pythia} \\ 4.&$e^+e^-\rightarrow \tau^+\tau^-$ & $4182.0$ & $209100$ & {\sc Pythia} \\ 5.&$e^+e^-\rightarrow W^+W^-\rightarrow Xe,X\mu,Xe\mu$ & $5650.0$ & $282277$ & {\sc Pythia} \\ 6.&$e^+e^-\rightarrow e^+e^-f\bar{f}$ & $475.7$ & $23784$ & {\sc Sherpa} \\ 7.&$e^+e^-\rightarrow \mu^+\mu^-f\bar{f}$ & $359.4$ & $17970$ & {\sc Sherpa} \\ 8.&$e^+e^-\rightarrow e^+e^-e^+e^-$ & $24.6$ & $1231$ & {\sc Sherpa} \\ 9.&$e^+e^-\rightarrow \mu^+\mu^-\mu^+\mu^-$ & $7.2$ & $360$ & {\sc Sherpa} \\ 10.&$e^+e^-\rightarrow e^+e^-\mu^+\mu^-$ & $177.0$ & $8850$ & {\sc Sherpa} \\ \hline \end{tabular}} \caption{\small The processes simulated for this study, their cross sections, the expected statistics for an integrated luminosity of ${50\;{\rm fb}^{-1}}$, and the generators used. $\ell$ represents $e,\,\mu$ and $f$ stands for $\tau,\nu,q$. 
} \label{tab:xsec} \end{table}
\section{Simulation Details and Analysis}
\subsubsection*{Signal and Background Processes}
The signal and background processes considered in the analysis, their cross sections and expected statistics at $50\,{\rm fb}^{-1}$, and the programs used to generate events are listed in Table~\ref{tab:xsec}. For the generation of $e^+e^-$ events using {\sc BHwide}, the following cuts are applied: the polar angle range is restricted to $|\cos\vartheta_{\rm lep}|\!<\!0.985$, the electron energy is required to be $E_e\!>\!10\;{\rm GeV}$, the difference of the di-lepton mass and the Z mass is $|m_{ee}-m_Z|\!<\!40\,{\rm GeV}$, and the recoil mass against the Z is in the range $90\,{\rm GeV}\!\le\!m_{\rm recoil}\!\le\!190\,{\rm GeV}$. For event samples generated with {\sc Sherpa}, cuts on the lepton polar angle, $|\cos\vartheta_{\rm lep}|\!<\!0.985$, the di-lepton and di-parton mass, $m_{ee,qq}\!>\!10\,{\rm GeV}$, and the energy of the final state fermions, $E_{\rm fermion}\!>\!5\,{\rm GeV}$, are applied. These cuts avoid divergences in the cross sections, reduce computing time, and have no influence on the results.
\subsubsection*{Analysis Strategy}
The signature of ${\rm e^+e^-}\to{\rm ZH}$\ events is two leptons of the same kind and opposite charge. The invariant mass of the two leptons must be in the vicinity of the ${\rm Z}$ mass. The dominant background is expected from the process ${\rm e^+e^-}\to{\rm ZZ}\to{\rm \ell^+\ell^-X}$, which is simulated within the four-fermion processes 6--10 in Table \ref{tab:xsec}. Discriminating power is expected from the polar angle distribution of the ${\rm Z}$. Processes 2 and 3, with two electrons or muons in the final state, may be selected since initial state radiation leads to a radiative return and thus to an invariant mass near the Z mass. To distinguish them from the signal, the acoplanarity angle can be used. Process 5 is a possible background due to its high cross section. The polar angle of the leptons can be used to distinguish the signal from this background. In the first step of the analysis, we look for isolated electrons and muons in each event using a likelihood method. An electron is identified as an electromagnetic shower in the ECAL whose position matches the impact point predicted from a track extrapolated to the ECAL. A muon is identified as a track matching deposits in the ECAL and HCAL that are compatible with the expectations for a minimum ionising particle. For the second step, only events with a lepton pair of the same kind and with opposite charge are accepted. If several pairings are possible, the one with an invariant mass closest to $m_{\rm{Z}}$ is chosen. For further reduction of the background, cuts are applied to the following quantities: \begin{itemize} \setlength{\itemsep}{-2pt} \item[-] the polar angles of the leptons: $|\cos\vartheta_{\rm lep}|\!<\!0.95$, \item[-] the difference between the invariant di-lepton mass and $m_{\rm{Z}}$: $|m_{\rm di-lepton}-m_{\rm{Z}}|\!<\!30\,{\rm GeV}$, \item[-] the lepton energy: $E_{\rm lep}\!>\!15\;{\rm GeV}$, \item[-] the recoil mass: $90\,{\rm GeV}\!\le\! m_{\rm recoil}\!\le\!190\,{\rm GeV}$, \item[-] the polar angle of the di-electron system: $|\cos\vartheta_{\rm di-electron}|<0.90$.
\end{itemize}
The remaining events are analysed using likelihood density functions for the signal, the ${\rm e^+e^-}\to4{\rm f}$, and the ${\rm e^+e^-}\to2{\rm f}$\ background channels in the following variables: the acoplanarity angle of the two leptons, the acollinearity angle of the two leptons, the di-lepton mass, the polar angle of the di-lepton system, the polar angles of the two leptons, and the transverse momentum of the Z. For each event a likelihood is calculated, characterising its compatibility with a signal event. A cut on this likelihood is set such that the quantity $\frac{\sqrt{S+B}}{S}$ is minimised in the mass range from $119\,{\rm GeV}$ to $125\,{\rm GeV}$. Here, S and B are the numbers of signal and background events, respectively, in the final sample.
\section{Results}
In the final sample, the signal selection efficiency is obtained to be 43.1\% for the ${\rm e^+e^-}\to{\rm Zh}\to{\rm e^+e^-X}$\ and 57.2\% for the ${\rm e^+e^-}\to{\rm Zh}\to{\rm \mu^+\mu^-X}$\ channel, respectively. The recoil mass spectra are shown in Figure \ref{Fig:fig02} for these processes. In both spectra a signal peak is seen on top of a moderate background. The signal has a smaller width in the di-muon channel, presumably because the muon track measurement is more precise due to less bremsstrahlung in the material of the detectors. This is confirmed by the resolution function for the transverse momentum, which has a pronounced tail to lower reconstructed momenta for electrons.
\begin{figure}[!ht] \centerline{ \includegraphics[width=0.5\columnwidth]{ohlerich_martin.fig2.eps} \includegraphics[width=0.5\columnwidth]{ohlerich_martin.fig3.eps}} \caption{\small The recoil mass distributions of the selected ${\rm e^+e^-}\to{\rm ZH}$\ events, left for ${\rm Z\to\,e^+e^-}$\ and right for \Zmm\ final states, respectively. The dark red distribution originates from the signal process and the light red distribution from the remaining background. }\label{Fig:fig02} \end{figure}
\subsubsection*{Cross Section Measurement}
The recoil mass spectra in Figure \ref{Fig:fig02} are used to determine the cross sections for the processes ${\rm e^+e^-}\to{\rm Zh}\to{\rm e^+e^-X}$\ and ${\rm e^+e^-}\to{\rm Zh}\to{\rm \mu^+\mu^-X}$. The background originating from known SM processes is parametrised by a polynomial and kept constant in the fit to determine the amount of the signal. The signal is described using the following parametrisation,
\begin{equation}\label{form:signal} s(x)={\rm Norm}_{\rm GausExp}\begin{cases} e^{-(x-m_{\rm 0})^2/(2\sigma^2_{\rm gaus})} &:\,x\!<\!m_{\rm 0}\;,\\ \beta e^{-(x-m_{\rm 0})^2/(2\sigma^2_{\rm gaus})}+(1-\beta)e^{-(x-m_{\rm 0})/\lambda}&:\,x\!>\!m_{\rm 0}\;, \end{cases} \end{equation}
where $m_{\rm 0}$ is the central value of the peak, $\lambda$ is a constant describing the tail towards larger values in the signal mass distribution, and $1-\beta$ is the fraction of the tail. The tail to larger mass values in the signal is caused by bremsstrahlung and beamstrahlung. The former is well predicted by QED and the latter depends on the machine parameters. For known and reasonably stable machine parameters, the shape and fraction of the tail can be determined, and we keep them constant in the fit, varying only $m_{\rm 0}$ and the number of events in the signal, $N_{\rm signal}$.
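As an illustration of Eq.~(\ref{form:signal}), the following sketch (our own; all parameter values are arbitrary) evaluates the peak shape:
\begin{verbatim}
import numpy as np

def signal_shape(x, m0, sigma, beta, lam, norm=1.0):
    """Eq. (form:signal): Gaussian core, exponential tail for x > m0."""
    gauss = np.exp(-(x - m0) ** 2 / (2.0 * sigma ** 2))
    tail = beta * gauss + (1.0 - beta) * np.exp(-(x - m0) / lam)
    return norm * np.where(x < m0, gauss, tail)

x = np.linspace(115.0, 135.0, 400)   # recoil mass grid in GeV
y = signal_shape(x, m0=120.0, sigma=0.6, beta=0.7, lam=2.5)
\end{verbatim}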
The cross section of the signal process is obtained from
\begin{equation}\label{form:xsec} \sigma({\rm process})=N_{\rm signal}/({\cal L}\varepsilon), \end{equation}
where ${\cal L}$ is the integrated luminosity, and $\varepsilon$ is the signal selection efficiency. The results obtained are $\sigma({\rm e^+e^-}\to{\rm ZH})=216.0\,{\rm fb}$ with an uncertainty of 20\% using the di-electron final state and $\sigma({\rm e^+e^-}\to{\rm ZH})=219.7\,{\rm fb}$ with an uncertainty of 10\% using the di-muon final state. Both results agree with the value of the cross section of $226.8\,{\rm fb}$ obtained from {\sc Pythia}.\\ An alternative method is to count all events, $N$, in the signal mass range from $119\,{\rm GeV}$ to $125\,{\rm GeV}$ and to subtract the background. The latter is obtained from a high-statistics Monte Carlo simulation or from the integral over a parametrised background distribution in the same mass range, $\langle B\rangle$. The cross section is then given by
\begin{equation}\label{form:xsec2} \sigma({\rm process})=(N-\langle B\rangle)/({\cal L}\varepsilon), \end{equation}
with a relative uncertainty of $\pm\sqrt{N}/(N-\langle B\rangle)$. Using this method, no assumption on the signal peak parametrisation is needed. The results obtained are compatible with the fit results given above.
\subsubsection*{Higgs boson mass}
To determine the Higgs boson mass from the spectra shown in Figure \ref{Fig:fig02}, a likelihood method is used. Several high-statistics signal samples with Higgs boson masses between $119$ and $121\,{\rm GeV}$ are generated and processed through the full simulation, reconstruction and analysis chain. The obtained spectra are parametrised using Eq.~(\ref{form:signal}). These parametrisations are then used in an unbinned likelihood fit to the simulated event samples shown in Figure \ref{Fig:fig02} to determine the Higgs boson mass $m_h$. The results are $m_h=119.78\pm0.42\,{\rm GeV}$ and $m_h=120.09\pm0.12\,{\rm GeV}$ for the ${\rm Z\to\,e^+e^-}$\ and \Zmm\ decays, respectively.
\begin{figure}[!ht] \centering \includegraphics[width=0.49\columnwidth]{ohlerich_martin.fig4.eps} \includegraphics[width=0.49\columnwidth]{ohlerich_martin.fig5.eps} \caption{\small Uncertainties on the Higgs boson recoil mass measurement (left) and on the Higgs boson production cross section (right) as a function of the centre-of-mass energy. A Monte Carlo toy model with parametrised momentum resolution is used.}\label{Fig:fig06} \end{figure}
\subsubsection*{Estimate of the Optimal Centre-of-Mass Energy for the Measurements}
A Monte Carlo toy model is used to estimate the optimal centre-of-mass energy for the measurement of the Higgs boson mass and cross section. The resolution of the track momentum measurement is parametrised as $\sigma_{p_t}/p_t=10^{-4}\cdot p_t\,[{\rm GeV}]$. Lepton identification and pair matching are performed as described above. For a Higgs boson mass of $120\,{\rm GeV}$, the recoil mass spectra are used to determine the Higgs boson mass and cross section for centre-of-mass energies between $210\,{\rm GeV}$ and $250\,{\rm GeV}$. The estimated accuracies for the mass and cross section measurements are shown in Figure \ref{Fig:fig06} as a function of the centre-of-mass energy, assuming an integrated luminosity of $500\,{\rm fb}^{-1}$. The minimal uncertainty on the recoil mass occurs for $E_{\rm cms}=220\,{\rm GeV}$, in agreement with the results obtained in Ref.~\cite{philip}.
The cross section uncertainty is minimal for $E_{\rm cms}=240\,{\rm GeV}$, becoming worse by about 20\% at $E_{\rm cms}=220\,{\rm GeV}$.
\section{Conclusions}
The recoil mass technique is a unique tool to determine the mass and cross section of the Higgs boson at the ILC. For the first time, the prospects obtained from a full detector simulation and reconstruction are presented. Choosing the centre-of-mass energy a few tens of GeV above the kinematic threshold, here $250\,{\rm GeV}$ for $m_h=120\,{\rm GeV}$, the Higgs boson mass and cross section can be determined with an accuracy of 120 MeV and 9\%, respectively, using only $50\,{\rm fb}^{-1}$. To reach a similar accuracy at a centre-of-mass energy of $350\,{\rm GeV}$, an integrated luminosity of $500\,{\rm fb}^{-1}$ is needed. The talk held at the LCWS is available at Ref.~\cite{url}.
\section{Introduction}
Given a compact even-dimensional oriented Riemannian manifold $M$, endowed with a spin$^c$ structure, one can construct an associated Dirac operator $D^+$ acting on smooth sections of a certain (complex) vector bundle over $M$. The \emph{spin$^c$ quantization} of $M$ with respect to the above structure is defined to be $$Q(M)=ker(D^+)-coker(D^+)\ .$$ This is a virtual vector space, and in the presence of a $G$-action, it is a virtual representation of the group $G$. Spin$^c$ quantization generalizes the concept of \emph{K\"{a}hler} and \emph{almost-complex quantization} (see \cite{CKT}, especially Lemma 2.7 and Remark 2.9), and in some sense it is a `better behaved' quantization (see \cite{SF1}). Quantization was originally defined as a process that associates a Hilbert space to a symplectic manifold (and self-adjoint operators to smooth real valued functions on the manifold). Therefore, one of our goals in this paper is to relate spin$^c$ quantization to symplectic geometry. This can be achieved by defining a \emph{spin$^c$ prequantization} of a symplectic manifold to be a spin$^c$ structure and a connection on its determinant line bundle which are compatible with the symplectic form (in a certain sense). This definition is analogous to the definition of prequantization in the context of geometric quantization (see \cite{GQ} and references therein). Our definition is different from, but equivalent to, the one in \cite{CKT}. It is important to mention that, in the equivariant setting, a spin$^c$ prequantization for a symplectic manifold $(M,\omega)$ determines a moment map $\Phi\colon M\to\mathfrak{g}^*$, and hence the action $G\circlearrowright (M,\omega)$ is Hamiltonian. The cutting construction was originally introduced by E. Lerman in \cite{L} for symplectic manifolds equipped with a Hamiltonian circle action. In \cite{SF1} we explained how one can cut a given $S^1$-equivariant spin$^c$ structure on an oriented Riemannian manifold. Here we extend this construction and describe how to cut a given $S^1$-equivariant spin$^c$ prequantization. This cutting process involves two choices: a choice of an equivariant spin$^c$ prequantization for the complex plane $\mathbb C$, and a choice of a level set $\Phi^{-1}(\alpha)$ along which the cutting is done. Our main theorem (Theorem \ref{main-thm}) reveals a quite interesting fact: these two choices must be compatible (in a certain sense) in order to make the cutting construction possible. Each one of the two choices determines the other (once we assume that cutting is possible), so in fact only one choice is to be made. This theorem also explains the `mysterious' freedom one has when choosing a spin$^c$ structure on $\mathbb C$ in the first step of the cutting construction: it is just the freedom of choosing a `cutting point' $\alpha\in\mathfrak g^*$ (or a level set of the moment map along which the cutting is done). Since, by our theorem, $\alpha$ can never be a weight, we see why spin$^c$ quantization must be additive under cutting (a result already obtained in \cite{SF1}). This paper is organized as follows. In Section \ref{Sec-preq} we review the definitions of the spin groups, spin and spin$^c$ structures, and define the concept of spin$^c$ prequantization. As an example, which we will use later, we construct a prequantization for the complex plane. For technical reasons, we chose to define spin$^c$ prequantization for manifolds endowed with closed two-forms (which may not be symplectic).
In Section \ref{Sec-Cut} we describe the cutting process in steps and obtain our main theorem relating the spin$^c$ prequantization for $\mathbb C$ with the level set used for cutting. In the last sections we discuss a couple of examples. Throughout this paper, all spaces are assumed to be smooth manifolds, and all maps and actions are assumed to be smooth. The principal action in a principal bundle will always be a right action. A real vector bundle $E$ equipped with a fiberwise inner product will be called a \emph{Riemannian vector bundle}. If the fibers are also oriented, then its bundle of oriented orthonormal frames will be denoted by $SOF(E)$. For an oriented Riemannian manifold $M$, we will simply write $SOF(M)$, instead of $SOF(TM)$. \textbf{Acknowledgements.} I would like to thank my supervisor, Yael Karshon, for offering me this project, guiding and supporting me through the process of developing and writing the material, and for always having good advice and a lot of patience. I would also like to thank Lisa Jeffrey and Eckhard Meinrenken for useful discussions and important comments. \section{Spin$^c$ prequantization}\label{Sec-preq} \subsection{Spin$^c$ structures}\ \\ In this section we recall the definition and basic properties of the spin and spin$^c$ groups. Then we give the definition of a spin$^c$ structure on a manifold, which is essential for defining spin$^c$ prequantization. \begin{definition} Let $V$ be a finite dimensional vector space over $\mathbb{K}=\mathbb{R}\mbox{ or } \mathbb{C}$, equipped with a symmetric bilinear form $B:V\times V\rightarrow\mathbb{K}$. Define the \emph{Clifford algebra} \ $Cl(V,B)$ to be the quotient $T(V)/I(V,B)$ where $T(V)$ is the tensor algebra of $V$, and $I(V,B)$ is the ideal generated by $\{ v\otimes v-B(v,v)\cdot 1\;:\; v\in V\}$. \end{definition} \begin{remark} If $v_1,\dots,v_n$ is an orthogonal basis for $V$, then $Cl(V,B)$ is the algebra generated by $v_1,\dots,v_n$, subject to the relations $v_i^2=B(v_i,v_i)\cdot 1$ and $v_i v_j=-v_j v_i$ for $i\neq j$.\\ Also note that $V$ is a vector subspace of $Cl(V,B)$. \end{remark} \begin{definition} If $V=\mathbb{R}^k$ and $B$ is minus the standard inner product on $V$, then define the following objects: \begin{enumerate} \item $C_k=Cl(V,B)$, and $C_k^c=Cl(V,B)\otimes\mathbb{C}$.\\ Those are finite dimensional algebras over $\mathbb{R}$ and $\mathbb{C}$, respectively. \item The \emph{spin group} $$Spin(k)=\{v_1 v_2 \dots v_l\;:\; v_i\in\mathbb{R}^k,\ ||v_i||=1 \mbox{ and } l\ge 0 \mbox{ is even}\}\subset C_k$$ \item The \emph{spin$^c$ group} $$Spin^c(k)= {\left(Spin(k)\times U(1)\right)}\diagup{K}$$ where $U(1)\subset\mathbb{C}$ is the unit circle, and $K=\{(1,1),(-1,-1)\}$. \end{enumerate} \end{definition} \begin{remark} \ \begin{enumerate} \item Equivalently, one can define $$Spin^c(k)=\left\{c\cdot v_1\cdots v_l\;:\; v_i\in\mathbb{R}^k,\ ||v_i||=1,\ l\ge 0 \mbox{ is even, and } c\in U(1)\right\}\subset C^c_k$$ \item The group $Spin(k)$ is connected for $k\ge 2$. \end{enumerate} \end{remark} \begin{prop} \ \begin{enumerate} \item There is a linear map $C_k\rightarrow C_k\;,\; x\mapsto x^t$ characterized by $(v_1\dots v_l)^t=v_l\dots v_1$ for all $v_1,\dots,v_l\in\mathbb{R}^k$. \item For each $x\in Spin(k)$ and $y\in\mathbb{R}^k$, we have $xyx^t\in\mathbb{R}^k$.
\item For each $x\in Spin(k)$, the map $\lambda(x):\mathbb{R}^k\rightarrow\mathbb{R}^k\;,\; y\mapsto xyx^t$ is in $SO(k)$, and $\lambda:Spin(k)\rightarrow SO(k)$ is a double covering for $k\ge 1$. It is a universal covering map for $k\ge 3$. \end{enumerate} \end{prop} For the proof, see page 16 in \cite{Fr}. \begin{definition} Let $M$ be a manifold, and $Q$ a principal $SO(k)$-bundle on $M$. A \emph{spin$^c$ structure} on $Q$ is a principal $Spin^c(k)$-bundle $P\rightarrow M$, together with a map $\Lambda:P\rightarrow Q$ such that the following diagram commutes. $$ \begin{CD} P\times Spin^c(k) @>>> P \\ @VV\Lambda\times\lambda^c V @VV\Lambda V \\ Q\times SO(k) @>>> Q\\ \end{CD} $$\\ Here, the maps corresponding to the horizontal arrows are the principal actions, and $\lambda^c:Spin^c(k)\rightarrow SO(k)$ is given by $[x,z]\mapsto\lambda(x)$, where $\lambda:Spin(k)\rightarrow SO(k)$ is the double covering. \end{definition} \begin{remark}\ \begin{enumerate} \item A spin$^c$ structure on an oriented Riemannian vector bundle $E$ is a spin$^c$ structure on the associated bundle of oriented orthonormal frames, $SOF(E)$. \item A spin$^c$ structure on an oriented Riemannian manifold is a spin$^c$ structure on its tangent bundle. \end{enumerate} \end{remark} \subsection{Equivariant spin$^c$ structures}\ \\ \begin{definition} Let $G,H$ be Lie groups. A \emph{$G$-equivariant principal $H$-bundle} is a principal $H$-bundle $\pi:Q\rightarrow M$ together with left $G$-actions on $Q$ and $M$, such that: \begin{enumerate} \item $\pi(g\cdot q)=g\cdot\pi(q)$ for all $g\in G\;,\; q\in Q$\\ (i.e., $G$ acts on the fiber bundle $\pi:Q\rightarrow M$). \item $(g\cdot q)\cdot h=g\cdot(q\cdot h)$ for all $g\in G\;,\; q\in Q\;,\; h\in H$\\ (i.e., the actions of $G$ and $H$ commute). \end{enumerate} \end{definition} \begin{remark} It is convenient to think of a $G$-equivariant principal $H$-bundle in terms of the following commuting diagram (the horizontal arrows correspond to the $G$ and $H$ actions).\\ $$ \begin{CD} G\times Q @>>> Q @<<< Q\times H\\ @VId\times\pi VV @VV\pi V @.\\ G\times M @>>> M @.\\ \end{CD} $$\\ \end{remark} \begin{definition} Let $\pi:E\rightarrow M$ be a fiberwise oriented Riemannian vector bundle, and let $G$ be a Lie group. A \emph{$G$-equivariant structure} on $E$ is an action of $G$ on the vector bundle that preserves the orientations and the inner products of the fibers. We will say that $E$ is a \emph{$G$-equivariant oriented Riemannian vector bundle}. \end{definition} \begin{remark} \ \begin{enumerate} \item A $G$-equivariant oriented Riemannian vector bundle $E$ over a manifold $M$ naturally turns $SOF(E)$ into a $G$-equivariant principal $SO(k)$-bundle, where $k=\mathrm{rank}(E)$. \item If a Lie group $G$ acts on an oriented Riemannian manifold $M$ by orientation preserving isometries, then the frame bundle $SOF(M)$ becomes a $G$-equivariant principal $SO(m)$-bundle, where $m=\dim(M)$. \end{enumerate} \end{remark} \begin{definition} Let $\pi:Q\rightarrow M$ be a $G$-equivariant principal $SO(k)$-bundle. \emph{A $G$-equivariant spin$^c$ structure} on $Q$ is a spin$^c$ structure $\Lambda:P\rightarrow Q$ on $Q$, together with a left action of $G$ on $P$, such that \begin{enumerate} \item $\Lambda(g\cdot p)=g\cdot\Lambda(p)$ for all $p\in P$, $g\in G$ (i.e., $G$ acts on the bundle $P\rightarrow Q$). \item $g\cdot(p\cdot x)=(g\cdot p)\cdot x$ for all $g\in G$, $p\in P$, $x\in Spin^c(k)$\\ (i.e., the actions of $G$ and $Spin^c(k)$ on $P$ commute).
\end{enumerate} \end{definition} \begin{remark}\label{Remark_spin-c_str} \ \begin{enumerate} \item It is convenient to think of a $G$-equivariant spin$^c$ structure in terms of the following commuting diagram (where the horizontal arrows correspond to the principal and the $G$-actions). $$ \begin{CD} G\times P @>>> P @<<< P\times Spin^c(k)\\ @V Id\times\Lambda VV @V\Lambda VV @V\Lambda\times\lambda^c VV\\ G\times Q @>>> Q @<<< Q\times SO(k)\\ @V Id\times\pi VV @V\pi VV @.\\ G\times M @>>> M @.\\ \end{CD} $$\\ \item Note that in a $G$-equivariant spin$^c$ structure, the bundle $P\rightarrow M$ is a $G$-equivariant principal $Spin^c(k)$-bundle. \end{enumerate} \end{remark} \subsection{The definition of spin$^c$ prequantization}\label{def_of_preq}\ \\ In this section we define the concept of \emph{a $G$-equivariant spin$^c$ prequantization}. This will consist of a $G$-equivariant spin$^c$ structure and a connection on the corresponding $U(1)$-bundle, which is compatible with a given two-form on our manifold. To motivate the definition, we begin by proving the following claim. \begin{claim} Let $M$ be a compact oriented Riemannian manifold of dimension $2m$, on which a Lie group $G$ acts by orientation preserving isometries, and let $P\to SOF(M)\to M$ be a $G$-equivariant spin$^c$ structure on $M$. Assume that $\theta\colon TP\to\mathfrak{u}(1)\cong i\mathbb R$ is a $G$-invariant and $Spin^c(2m)$-invariant connection 1-form on the principal $S^1$-bundle $\pi\colon P\to SOF(M)$, for which $$\theta(\zeta_P)\colon P\to \mathfrak{u}(1)$$ is a constant function for any $\zeta\in\mathfrak{spin}(2m)$.\\ For each $\xi\in\mathfrak{g}=Lie(G)$ define a map $$\phi^\xi\colon P\to\mathbb{R}\qquad,\qquad \phi^\xi=-i\cdot\left(\iota_{\xi_P}\theta\right)\ $$ where $\xi_P$ is the vector field on $P$ generated by $\xi$.\\[10pt] Then \begin{enumerate} \item For any $\xi\in\mathfrak{g}$, the map $\phi^\xi$ is $Spin^c(2m)$-invariant, i.e., $\phi^\xi=\pi^*(\Phi^\xi)$ where $\Phi^\xi\colon M\to\mathbb{R}$ is a smooth function (here and below we write $\pi$ also for the composed projection $P\to M$). \item For any $\xi\in\mathfrak{g}$, we have $d\Phi^\xi=\iota_{\xi_M}\omega$, where $\omega$ is a real two-form on $M$, determined by the equation $d\theta=\pi^*(-i\cdot\omega)$. \item The map $$\Phi\colon M\to\mathfrak{g}^*\qquad,\qquad \Phi(m)\xi=\Phi^\xi(m)$$ is $G$-equivariant. \end{enumerate} \end{claim} \begin{proof}\ \begin{enumerate} \item This follows from the fact that $\theta$ is $Spin^c(2m)$-invariant, and that the $G$ and $Spin^c(2m)$-actions on $P$ commute. \item For any $\eta=(\zeta,b)\in\mathfrak{spin}^c(2m)=\mathfrak{spin}(2m)\oplus\mathfrak{u}(1)$, we have $$\iota_{\eta_P}\theta=\theta(\eta_P)=\theta(\zeta_P)+\theta(b_P)=\theta(\zeta_P)+b\ .$$ \noindent Since $\theta(\zeta_P)$ is constant by assumption, we get that $$\iota_{\eta_P}d\theta=L_{\eta_P}\theta-d\iota_{\eta_P}\theta =0\ .$$ This implies that $d\theta$ is horizontal, and hence $\omega$ is well defined by the equation $d\theta=\pi^*(-i\cdot\omega)$. Now, observe that \begin{multline*} \qquad\qquad\pi^*d\Phi^\xi=d\left(\pi^*\Phi^\xi\right)=d\phi^\xi=-i\;d\iota_{\xi_P} \theta=-i\left[L_{\xi_P}\theta-\iota_{\xi_P}d\theta\right]=\\ =\iota_{\xi_P}(\pi^*\omega)=\pi^*(\iota_{\xi_M}\omega)\qquad \end{multline*} and since $\pi^*$ is injective, we get $d\Phi^\xi=\iota_{\xi_M}\omega$ as needed.
\item If $g\in G$, $m\in M$, $\xi\in\mathfrak{g}$ and $p\in\pi^{-1}(m)$, then \begin{multline*} \qquad\qquad \Phi^\xi(g\cdot m)=\phi^\xi(g\cdot p)=-i\left(\iota_{\xi_P}\theta\right)(g\cdot p) =-i\left(\theta_{g\cdot p}(\xi_P|_{g\cdot p})\right)=\\=-i\left(\theta_{g\cdot p}(g_*(Ad_{g^{-1}}\xi)_P|_p)\right)= -i\left(\iota _{\left(Ad_{g^{-1}}\xi\right)_P}\theta\right)(p)=\\ \qquad\qquad\qquad=\phi^{Ad_{g^{-1}}\xi}(p)=\Phi^{Ad_{g^{-1}}\xi}(m)\hfill \end{multline*} and we end up with $\Phi^\xi(g\cdot m)=\Phi^{Ad_{g^{-1}}\xi}(m)$, which means that $\Phi$ is $G$-equivariant. \end{enumerate} \end{proof} The above claim suggests a compatibility condition between a given two-form and a spin$^c$ structure on our manifold. We will work with two-forms that are closed, but not necessarily nondegenerate. The compatibility condition is formulated in the following definition. \begin{definition} Let a Lie group $G$ act on a compact $m$-dimensional manifold $M$, and let $\omega$ be a $G$-invariant closed two-form (i.e., $g^*\omega=\omega$ for any $g\in G$). A \emph{$G$-equivariant spin$^c$ prequantization} for $M$ is a $G$-equivariant spin$^c$ structure $\pi\colon P\to SOF(M)\to M$ (with respect to an invariant Riemannian metric and orientation), and a $G$ and $Spin^c(m)$-invariant connection $\theta\in\Omega^1(P;\mathfrak{u}(1))$ on $P\to SOF(M)$, such that $$\theta(\zeta_P)=0\text{\ \ \ for any } \zeta\in\mathfrak{spin}(m)$$ and $$d\theta=\pi^*(-i\cdot\omega)\ .$$ \end{definition} \begin{remark}\label{moment map} By the above claim, the action $G\circlearrowright (M,\omega)$ is Hamiltonian, with a moment map $\Phi\colon M\to\mathfrak{g}^*$ satisfying $$\pi^*\left(\Phi^\xi\right)=-i\cdot\iota_{\xi_P}(\theta) \mbox{\quad for any \quad}\xi\in\mathfrak{g}\ .$$ \end{remark} \begin{remark} \label{conn on Spin-c bundle}A $G$-invariant connection 1-form $\theta$ on the $G$-equivariant principal $Spin^c(m)$-bundle $P\to M$ induces a connection 1-form $\tilde\theta$ on the principal $S^1$-bundle $P\to SOF(M)$ as follows. Recall the determinant map $$det\colon Spin^c(m)\to U(1)\qquad,\qquad [A,z]\mapsto z^2\ .$$ This map induces a map on the Lie algebras $$det_*\colon\mathfrak{spin}^c(m)=\mathfrak{spin}(m)\oplus\mathfrak{u}(1) \to\mathfrak{u}(1)\simeq i\mathbb{R}\qquad,\qquad (A,z)\mapsto 2z\ .$$ This means that the map $\frac{1}{2}det_*\colon\mathfrak{spin}^c(m)\to\mathfrak{u}(1)$ is just the projection onto the $\mathfrak{u}(1)$ component. The composition $\tilde\theta=\frac{1}{2}\,det_*\circ\theta$ will then be a connection 1-form on $P\to SOF(M)$, which is $G$-invariant, and for which $\tilde\theta(\zeta_P)=\frac{1}{2}det_*(\zeta)=0$ for any $\zeta\in\mathfrak{spin}(m)$. \end{remark} \begin{remark} The condition $\theta(\zeta_P)=0$ could have been omitted, since our main theorem can be proved without it. However, this condition is necessary to obtain a discrete condition on the prequantizable closed two-forms. See the example in Section \ref{Sec-Ex}. \end{remark} In the following claim, $M$ is an oriented Riemannian $m$-dimensional manifold on which $G$ acts by orientation preserving isometries. \begin{claim}\label{connection on P_det} Let $P\to SOF(M)\to M$ be a $G$-equivariant spin$^c$ structure on $M$. Let $P_{det}=P/Spin(m)$ and $q\colon P\to P_{det}$ the quotient map. Let $\theta\colon TP\to\mathfrak u(1)$ be a connection 1-form on the $G$-equivariant principal $U(1)$-bundle $P\to SOF(M)$.
Then $\theta=\frac{1}{2}\,q^*(\overline{\theta})$ for some connection one-form $\overline\theta$ on the $G$-equivariant principal $U(1)$ bundle $P_{det}\to M$ if and only if $\theta$ is $Spin^c(m)$-invariant and $\theta(\zeta_P)=0$ for all $\zeta\in\mathfrak{spin}(m)$. \end{claim} \noindent Here is the relevant diagram. $$\begin{CD} P @>q>> P_{det}\\ @VVV @VVV\\ SOF(M) @>>> M\\ \end{CD}$$ Note that this is not a pullback diagram. The pullback of $P_{det}$ under the projection $SOF(M)\to M$ is the square of the principal $U(1)$ bundle $P\to SOF(M)$. \begin{proof}[Proof of Claim \ref{connection on P_det}] Assume that $\theta=\frac{1}{2}q^*(\overline\theta)$. Then for any $g\in Spin^c(m)$, acting on $P$, write $g=[A,z]$ with $A\in Spin(m)$ and $z\in U(1)$. Since $\theta$ is $U(1)$-invariant, we have $$g^*\theta=[A,1]^*[1,z]^*\theta=[A,1]^*\theta= \frac{1}{2}\,[A,1]^*q^*\overline\theta=\frac{1}{2}\,q^*\overline\theta= \theta\ ,$$ and so $\theta$ is $Spin^c(m)$-invariant. If $\zeta\in\mathfrak{spin}(m)$ then $q_*(\zeta_P)=0$, which implies $\theta(\zeta_P)=0$. Conversely, assume that $\theta$ is $Spin^c(m)$-invariant with $\theta(\zeta_P)=0$ for all $\zeta\in\mathfrak{spin}(m)$. Define a 1-form $TP_{det}\to\mathfrak u(1)$ by $$\overline\theta(q_*v)=2\,\theta(v)\quad\mbox{for}\quad v\in TP\ .$$ This will be well defined, since if $q_*v=q_*v'$ for $v\in T_xP$ and $v'\in T_{xg}P$ where $g\in Spin(m)$, then $q_*(v-v'g^{-1})=0$, which implies that $v-v'g^{-1}=\zeta_P$ for some $\zeta\in\mathfrak{spin}(m)$. The fact that $\theta(\zeta_P)=0$ will imply that $\theta(v)=\theta(v')$. Smoothness and $G$-invariance of $\overline\theta$ are straightforward. We also need to check that $\overline\theta$ is vertical (i.e., that $\overline\theta(\xi_{P_{det}})=\xi$ for $\xi\in\mathfrak u(1)$). Note that $Spin^c(m)/Spin(m)$ is isomorphic to $U(1)$ via the isomorphism taking the class of $[A,z]\in Spin^c(m)$ to $z^2\in U(1)$. This will imply that $q_*(\xi_P)=2\,\xi_{P_{det}}$, from which we can conclude that $\overline\theta$ is vertical. \end{proof} \subsection{Spin$^c$ prequantizations for $\mathbb{C}$}\label{prequan. for C}\ \\ For the purpose of cutting, we will need to choose an $S^1$-equivariant spin$^c$ prequantization on the complex plane. The $S^1$-action on $\mathbb{C}$ is given by $$(a,z)\mapsto a^{-1}\cdot z\qquad,\qquad a\in S^1,\ z\in\mathbb{C}\ .$$ We take the standard orientation and Riemannian structure on $\mathbb{C}$ and choose our two-form to be $$\omega_\mathbb{C}=-2\cdot dx\wedge dy=-i\cdot dz\wedge d\bar z\ .$$ For each odd integer $\ell\in\mathbb{Z}$ we will define an $S^1$-equivariant spin$^c$ prequantization for $S^1\circlearrowright(\mathbb C,\omega_\mathbb C)$. The prequantization will be denoted as $(P_\mathbb{C}^\ell,\tilde\theta_\mathbb C)$, and defined as follows. Let $P_\mathbb{C}^\ell=\mathbb{C}\times Spin^c(2)$ be the trivial principal $Spin^c(2)$-bundle over $\mathbb{C}$ with the non-trivial $S^1$-action $$S^1\times P_\mathbb C^\ell\to P_\mathbb C^\ell\qquad,\qquad(e^{i\varphi},(z,[a,w]))\mapsto (e^{-i\varphi}z,[x_{-\varphi/2}\cdot a,e^{-i\ell\varphi/2}\cdot w])$$ where $x_\varphi=\cos\varphi+\sin\varphi\cdot e_1e_2\in Spin(2)$. Note that since $\ell\in\mathbb Z$ is odd, this action is well defined.
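To make the role of the parity assumption explicit, here is a quick check; it uses only the identification $[a,w]=[-a,-w]$ in $Spin^c(2)$. Replacing $\varphi$ by $\varphi+2\pi$ multiplies both entries of the formula by $-1$: $$x_{-(\varphi+2\pi)/2}=-x_{-\varphi/2}\qquad\mbox{and}\qquad e^{-i\ell(\varphi+2\pi)/2}=(-1)^\ell\,e^{-i\ell\varphi/2}=-e^{-i\ell\varphi/2}$$ for odd $\ell$, so that $$[-x_{-\varphi/2}\cdot a\;,\;-e^{-i\ell\varphi/2}\cdot w]=[x_{-\varphi/2}\cdot a\;,\;e^{-i\ell\varphi/2}\cdot w]\ ,$$ and the formula indeed depends only on $e^{i\varphi}\in S^1$. For an even $\ell$ the two signs would disagree, and we would only get an action of a double cover of $S^1$.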
Next we define a connection $$\theta_\mathbb C\colon TP^\ell_\mathbb C\to\mathfrak{spin}^c(2)=\mathfrak{spin}(2)\oplus\mathfrak u(1)\ .$$ Denote by $\pi_1\colon P^\ell_\mathbb C\to \mathbb C $ and $\pi_2\colon P^\ell_\mathbb C \to Spin^c(2)$ the projections, and by $\theta^R$ the right-invariant Maurer-Cartan form on $Spin^c(2)$. Then set $$\theta_\mathbb C\colon TP_\mathbb C^\ell\to \mathfrak{spin}^c(2) \qquad,\qquad\theta_\mathbb C=\pi_2^*(\theta^R)+\frac{1}{2}\;\pi_1^*(\bar z\,dz-z\,d\bar z)\ .$$ Note that $\pi_1^*(\bar z\,dz-z\,d\bar z)$ takes values in $i\mathbb R=\mathfrak u(1)\subset\mathfrak{spin}^c(2)$, and that the connection $\theta_\mathbb C$ does not depend on $\ell$.\\ Finally, let $$\tilde\theta_\mathbb{C}=\frac{1}{2}\,det_*\circ\theta_\mathbb{C}\ .$$ \begin{claim} For any odd $\ell\in\mathbb Z$, the pair $(P^\ell_\mathbb C,\tilde\theta_\mathbb C)$ is an $S^1$-equivariant spin$^c$ prequantization for $(\mathbb C,\omega_\mathbb C)$. \end{claim} \begin{proof} The 1-form $\theta_\mathbb C$ (and hence $\tilde\theta_\mathbb C$) is $S^1$-invariant, since $\bar z\,dz-z\,d\bar z$ is an $S^1$-invariant 1-form on $\mathbb C$, and since the group $Spin^c(2)$ is abelian. The 1-form $\tilde\theta_\mathbb C$ is given by $$\tilde\theta_\mathbb C=\frac{1}{2}\,det_*\circ\theta_\mathbb C= \frac{1}{2}\,det_*\circ\pi_2^*(\theta^R)+\frac{1}{2}\;\pi_1^*(\bar z\,dz-z\,d\bar z)$$ and therefore $$d\left(\tilde\theta_\mathbb C\right)=0+\frac{1}{2}\,\pi_1^*(d\bar z\wedge dz-dz\wedge d\bar z)=\pi_1^*\left(-dz\wedge d\bar z\right)=\pi_1^*(-i\cdot\omega_\mathbb C)$$ as needed. Finally, by Remark \ref{conn on Spin-c bundle}, we have $\tilde\theta_\mathbb C(\zeta_{P_\mathbb C^\ell})=0$ for all $\zeta\in\mathfrak{spin}(2)$. \end{proof} \section{Cutting of a spin$^c$ prequantization}\label{Sec-Cut} The process of cutting consists of several steps: taking products, restricting, and taking quotients of spin$^c$ structures. We start by discussing those constructions independently.
\subsection{The product of two spin$^c$ prequantizations}\ \\ Let a Lie group $G$ act by orientation preserving isometries on two oriented Riemannian manifolds $M$ and $N$, of dimensions $m$ and $n$, respectively. Given two equivariant spin$^c$ structures $P_M,P_N$ on $M,N$, we can take their `product' as follows. First, note that $P_M\times P_N$ is a $G$-equivariant principal $Spin^c(m)\times Spin^c(n)$-bundle on $M\times N$. Second, observe that $Spin^c(m)$ and $Spin^c(n)$ embed naturally as subgroups of $Spin^c(m+n)$, and thus give rise to a homomorphism $$Spin^c(m)\times Spin^c(n)\to Spin^c(m+n)\qquad,\qquad (x,y)\mapsto x\cdot y\ .$$ This homomorphism is used to define a principal $Spin^c(m+n)$-bundle on $M\times N$, denoted $P_{M\times N}$, as a fiber bundle associated to $P_M\times P_N$. In the following claim, $\theta^L$ is the left-invariant Maurer-Cartan 1-form on the group $Spin^c(m+n)$, and $\omega_M,\omega_N$ are closed $G$-invariant two-forms on $M,N$. \begin{claim}\label{prequan. for products} Let $(P_M,\theta_M)$ and $(P_N,\theta_N)$ be two $G$-equivariant spin$^c$ prequantizations for $(M,\omega_M)$ and $(N,\omega_N)$, respectively. Let $$P_{M\times N}=\left(P_M\times P_N\right)\times_{Spin^c(m)\times Spin^c(n)}Spin^c(m+n)$$ and $$\theta_{M\times N}=\theta_M+\theta_N+\frac{1}{2}\,det_*\circ\theta^L\in\Omega^1(P_{M\times N};\mathfrak{u}(1))\ .$$ Then $(P_{M\times N},\theta_{M\times N})$ is a $G$-equivariant spin$^c$ prequantization for $(M\times N,\omega_M\oplus\omega_N)$, called \emph{the product of $(P_M,\theta_M)$ and $(P_N,\theta_N)$}. \end{claim} \begin{remark}\label{remark product} \ \begin{enumerate} \item More specifically, the connection $\theta_{M\times N}$ is given by $$\theta_{M\times N}(q_*(u,v,\xi^L))=\theta_M(u)+\theta_N(v)+\frac{1}{2}\,det_*(\xi)$$ where $u\in TP_M,\ v\in TP_N,\ \xi\in\mathfrak{spin}^c(m+n)$ and $$q\colon P_M\times P_N\times Spin^c(m+n)\to P_{M\times N}$$ is the quotient map. This is well defined since $\theta_M$ and $\theta_N$ are spin$^c$-invariant. \item The $G$-action on $M\times N$ can be taken to be either the diagonal action $$g\cdot (x,y)=(g\cdot x,g\cdot y)$$ or the `M-action' $$ g\cdot (x,y)=(g\cdot x, y)$$ and $(P_{M\times N},\theta_{M\times N})$ will be a $G$-equivariant prequantization with respect to any of those actions.
\item The map $P_{M\times N}\to SOF(M\times N)$ is the natural one induced from $P_M\to SOF(M)$ and $P_N\to SOF(N)$, using the fact that $$SOF(M\times N)\cong \left(SOF(M)\times SOF(N)\right)\times_{SO(m)\times SO(n)}SO(m+n)\ .$$ \end{enumerate} \end{remark} \begin{proof} The connection $\theta_{M\times N}$ is $G$ and $Spin^c(m+n)$-invariant, since $\theta_M$ and $\theta_N$ have the same invariance properties. Moreover, since $d\theta^L=0$, we get that $$d(\theta_{M\times N})=d(\theta_M)+d(\theta_N)=\pi^*(-i\cdot(\omega_M\oplus\omega_N))$$ as needed, where $\pi\colon P_{M\times N}\to M\times N$ is the projection.\\ Finally, $\theta_{M\times N}(\zeta_{P_{M\times N}})=0$ for all $\zeta\in\mathfrak{spin}(m+n)$ since $\frac{1}{2}det_*(\zeta)=0$. \end{proof} \subsection{Restricting a spin$^c$ prequantization}\label{restr prequan} \ \\ Assume that a Lie group $G$ acts on an $m$-dimensional oriented Riemannian manifold $M$ by orientation preserving isometries. Let $Z\subset M$ be a $G$-invariant co-oriented submanifold of co-dimension 1. Then there is a natural map $$i\colon SOF(Z)\to SOF(M)\qquad,\qquad i(f)(a_1,\dots,a_m)=f(a_1,\dots,a_{m-1})+a_m\cdot v_p$$ where $f\colon\mathbb{R}^{m-1}\xrightarrow{\sim}T_pZ$ is a frame in $SOF(Z)$, and $v$ is the vector field along $Z$ of positive unit vectors orthogonal to $TZ$. A $G$-equivariant spin$^c$ structure $P$ on $M$ can be restricted to $Z$ by setting $$P_Z=i^*(P)\ ,$$ i.e., $P_Z$ is the pullback under $i$ of the circle bundle $P\to SOF(M)$. The relevant diagram is $$ \begin{CD} P_Z=i^*(P) @>i'>> P \\ @VVV @VVV \\ SOF(Z) @>i>> SOF(M)\\ @VVV @VVV\\ Z @>>> M\\ \end{CD} $$\\ The principal action on $P_Z\to Z$ comes from the natural inclusion $Spin^c(m-1)\hookrightarrow Spin^c(m)$, and the $G$-action on $P_Z$ is induced from the one on $P$. Furthermore, if a connection 1-form $\theta$ is given on the circle bundle $P\to SOF(M)$, we can restrict it to a connection 1-form $\theta_Z$ on $P_Z\to SOF(Z)$ by letting $$\theta_Z=(i')^*\theta\ .$$ \begin{claim}\label{prequan. for Z} Let $(P,\theta)$ be a $G$-equivariant spin$^c$ prequantization for $(M,\omega)$ (for a closed $G$-invariant two-form $\omega$), and $Z\subset M$ a co-oriented $G$-invariant submanifold of co-dimension 1. Then the pair $(P_Z,\theta_Z)$ is a $G$-equivariant spin$^c$ prequantization for $(Z,\omega|_Z)$. \end{claim} \begin{proof} $$d(\theta_Z)=(i')^*(d\theta)= (i')^*\pi^*(-i\cdot\omega)=\pi^*(-i\cdot\omega|_Z)$$ as needed, and $$\theta_Z(\zeta_{P_Z})=\theta(\zeta_P)=0$$ for all $\zeta\in\mathfrak{spin}(m-1)$. \end{proof} \subsection{Quotients of spin$^c$ prequantization} \ \\ Here is a general fact about connections on principal bundles and their quotients. \begin{claim}\label{quotient conn.} Let $H,K,G$ be three Lie groups, and $P\to X$ an $H$-equivariant and $K$-equivariant principal $G$-bundle. Assume that $H$ acts freely on $X$, and that the $H$- and $K$-actions on $P$ commute (i.e., $h\cdot(k\cdot y)=k\cdot (h\cdot y)$ for all $h\in H,\ k\in K,\ y\in P$). Then: \begin{enumerate} \item $\pi\colon P/H\to X/H$ is a $K$-equivariant principal $G$-bundle. \item If $\theta\colon TP\to\mathfrak{g}$ is a connection 1-form, and $q\colon P\to P/H$ is the quotient map, then $\theta=q^*(\bar\theta)$ for some connection 1-form $\bar\theta\colon T(P/H)\to\mathfrak{g}$ if and only if $\theta$ is $H$-invariant, and $\theta(\xi_P)=0$ for all $\xi\in\mathfrak{h}$.
\end{enumerate} \end{claim} \begin{proof}\ \begin{enumerate} \item The surjection $P/H\to X/H$, induced from $\pi\colon P\to X$, and the right $G$-action on those quotient spaces are well defined since the left $H$-action commutes with the right $G$-action on $P$, and with the projection $\pi$. To show that $P/H\to X/H$ is a principal $G$-bundle, it suffices to check that $G$ acts freely on $P/H$. Indeed, if $[p]\in P/H,\ g\in G$ and $[p]\cdot g=[p]$, then this implies $$[p\cdot g]=[p]\qquad\Rightarrow\qquad p\cdot g=h\cdot p$$ for some $h\in H$, which implies $$\pi(p\cdot g)=\pi(h\cdot p)\qquad\Rightarrow\qquad\pi(p)=h\cdot\pi(p)\ .$$ But $H\circlearrowright X$ freely, and so $h=id$. Then $p\cdot g=p$, and since $P\circlearrowleft G$ freely, we conclude that $g=id$, as needed. It is easy to check that the $K$-action descends to $P/H\to X/H$, since it commutes with the $H$ and the $G$-actions. \item First assume that $\theta=q^*(\bar\theta)$. If $h\in H$ acts on $P$, then $$h^*\theta=h^*(q^*\bar\theta)=(q\circ h)^*\bar\theta= q^*\bar\theta=\theta$$ and so $\theta$ is $H$-invariant. Also, if $\xi\in\mathfrak{h}$, then clearly $q_*(\xi_P)=0$, and hence $\theta(\xi_P)=(q^*\bar\theta)(\xi_P)=0$, as needed. Conversely, assume that $\theta$ is $H$-invariant and that $\theta(\xi_P)=0$ for all $\xi\in\mathfrak{h}$. For any $v\in TP$ define $$\bar\theta(q_*v)=\theta(v)\ .$$ This is well defined: if $v\in T_yP$ and $v'\in T_{y'}P$ are such that $q_*(v)=q_*(v')$, then $y'=h\cdot y$ for some $h\in H$, and we get that $$ \theta_{y'}(v')=\theta_{h\cdot y}(v')=(h^*\theta)_y((h^{-1})_*v')=\theta_y((h^{-1})_*v')\ .$$ Now observe that $$q_*(v-(h^{-1})_*v')=q_*(v)-q_*(v')=0\ ,$$ and so $v-(h^{-1})_*v'=\xi_P|_y$ (for some $\xi\in\mathfrak{h}$) is in the vertical bundle of $P\to P/H$. By assumption, $\theta(\xi_P)=0$ and therefore $\theta_y(v)=\theta_{y'}(v')$, and $\bar\theta$ is well defined. The map $\bar\theta\colon T(P/H)\to\mathfrak g$ is a 1-form. Smoothness follows from the definition of the smooth structure on $P/H$. Also $\bar\theta$ is vertical and $G$-equivariant because $\theta$ is. \end{enumerate} \end{proof} Now assume that $Z$ is an $n$-dimensional oriented Riemannian manifold, and $S^1$ acts freely on $Z$ by isometries. Let $P\to SOF(Z)\to Z$ be a $G$ and $S^1$-equivariant spin$^c$ structure on $Z$. We would like to explain how one can get a $G$-equivariant spin$^c$ structure on $Z/S^1$, induced from the given one on $Z$. Denote by $\frac{\partial}{\partial\varphi}\in Lie(S^1)\simeq i\mathbb{R}$ the generator, and by $\left(\frac{\partial}{\partial\varphi}\right)_Z$ the corresponding vector field on $Z$. Define the normal bundle $$V=\left[\left(\frac{\partial}{\partial\varphi}\right)_Z\right]^\bot\subset TZ$$ and an embedding $\eta\colon SOF(V)\to SOF(Z)$ as follows. If $f\colon\mathbb{R}^{n-1}\xrightarrow{\simeq} V_x$ is a frame in $SOF(V)$, then $\eta(f)\colon\mathbb{R}^{n}\xrightarrow{\simeq} T_xZ$ will be given by $\eta(f)e_i=f(e_i)$ for $i=1,\dots,n-1$, and $\eta(f)e_n$ is the unit vector in the direction of $\left(\frac{\partial}{\partial\varphi}\right)_{Z,x}$.
$$ \begin{CD} \eta^*(P) @>\eta'>> P \\ @VVV @VVV \\ SOF(V) @>\eta>> SOF(Z)\\ @VVV @VVV\\ Z @= Z\\ \end{CD} $$\\ To get a spin$^c$ structure on $Z/S^1$, first consider the equivariant spin$^c$ structure on the vector bundle $V$ $$\eta^*(P)\to SOF(V)\to Z\ .$$ Once we take the quotient by the circle action, we get \emph{the quotient} spin$^c$ structure on $Z/S^1$, denoted by $\bar P$: $$\bar P=\eta^*(P)/S^1\ \to\ SOF(V)/S^1\cong SOF(Z/S^1)\ \to\ Z/S^1\ \ .$$ If an $S^1$- and $Spin^c(n)$-invariant connection 1-form $\theta$ is given on the principal circle bundle $P\to SOF(Z)$, then $(\eta')^*\theta$ is a connection 1-form on the principal circle bundle $\eta^*(P)\to SOF(V)$. The previous claim tells us exactly when the above connection 1-form will descend to a connection 1-form on the quotient bundle $\bar P\to SOF(Z/S^1)$. The following proposition summarizes the above construction and relates it to spin$^c$ prequantization. \begin{prop}\label{main-prop} Assume that the following data is given: \begin{enumerate} \item An $n$-dimensional Riemannian oriented manifold $Z$. \item A real closed 2-form $\omega$ on $Z$. \item Actions of a Lie group $G$ and $S^1$ on $Z$, by orientation preserving and $\omega$-invariant isometries. \item A $G$ and $S^1$-equivariant spin$^c$ prequantization $(P,\theta)$ on $Z$. Assume that the actions of $G$ and $S^1$ on $P$ and $Z$ commute with each other.\\ Also assume that the action $S^1\circlearrowright Z$ is free.\vspace{10pt} \end{enumerate} Then, using the above notation, we have that: \begin{enumerate} \item $\theta'=(\eta')^*\theta$ is a connection 1-form on the principal circle bundle $\pi\colon\eta^*(P)\to SOF(V)$, satisfying $$d\theta'=\pi^*(-i\cdot\omega)$$ (where $\pi$ denotes also the composed projection $\eta^*(P)\to Z$), and $$\theta'(\zeta_{\eta^*(P)})=0\mbox{\quad for all \quad}\zeta\in\mathfrak{spin}(n-1)\ .$$ \item If $\left(\frac{\partial}{\partial\varphi}\right)_{\eta^*(P)}$ is the vector field generated by the action $S^1\circlearrowright\eta^*(P)$, and $q\colon \eta^*(P)\to\bar P=\eta^*(P)/S^1$ is the quotient map, then $\theta'=q^*(\bar\theta)$ for some connection 1-form $\bar\theta$ on $\bar P\to SOF(Z/S^1)$ if and only if $$\theta'\left[\left(\frac{\partial}{\partial\varphi}\right)_{\eta^*(P)}\right]=0\ .$$ Moreover, in this case, $(\bar P,\bar\theta)$ is a $G$-equivariant spin$^c$ prequantization for $G\circlearrowright (Z/S^1,\bar\omega)$ (where $\bar\omega$ is the two-form on $Z/S^1$ satisfying $pr^*(\bar\omega)=\omega$, with $pr\colon Z\to Z/S^1$ the quotient map). \end{enumerate} \end{prop} \begin{proof}\ \\ \begin{enumerate} \item We have $$d\theta'=(\eta')^*d\theta=(\eta')^*\circ\pi^*(-i\cdot\omega)= \pi^*(-i\cdot\omega)$$ and $$\theta'(\zeta_{\eta^*(P)})=\theta(\zeta_P)=0 $$ as needed. \item The fact that $\theta'=q^*(\bar\theta)$ if and only if $$\theta'\left[\left(\frac{\partial}{\partial\varphi}\right)_{\eta^*(P)}\right]=0$$ follows directly from Claim \ref{quotient conn.}, since $\theta'$ is $S^1$-invariant, and $\frac{\partial}{\partial\varphi}$ is a generator. Finally, $(\bar P,\bar\theta)$ is a prequantization, since $$q^*(d{\bar\theta})=d\theta'= \pi^*(-i\cdot\omega)=q^*\bar\pi^*(-i\cdot\bar\omega) \quad\Rightarrow\quad d{\bar\theta}=\bar\pi^*(-i\cdot\bar\omega)$$ where $\bar\pi\colon \eta^*(P)/S^1\to Z/S^1$ is the projection. Clearly, since all our objects are $G$-invariant, and all the actions commute, $(\bar P,\bar\theta)$ is a $G$-equivariant prequantization. \end{enumerate} \end{proof} \begin{remark} \label{prequan.
descends} When the condition in part (2) of the above proposition holds, we will say that \emph{the prequantization $(P,\theta)$ for $G\circlearrowright (Z,\omega)$ descends to the prequantization $(\bar P,\bar\theta)$ for $G\circlearrowright(Z/S^1,\bar\omega)$}. \end{remark} \subsection{The cutting of a prequantization} \ \\ In \cite{L}, Lerman describes a cutting construction for symplectic manifolds $(M,\omega)$, endowed with a Hamiltonian circle action and a moment map $\Phi\colon M\to \mathfrak{u}(1)^*$, which goes as follows. If $\omega_\mathbb{C}=-i\cdot dz\wedge d\bar z$, then $(M\times\mathbb C,\omega\oplus\omega_\mathbb C)$ is a symplectic manifold. The action $$S^1\times(M\times\mathbb C)\to M\times\mathbb C\qquad,\qquad(a,(m,z))\mapsto (a\cdot m,a^{-1}\cdot z)$$ is Hamiltonian with moment map $\tilde\Phi(m,z)=\Phi(m)-|z|^2$. If $\alpha\in\mathfrak{u}(1)^*$ and $S^1$ acts freely on $Z=\Phi^{-1}(\alpha)$, then $\alpha$ is a regular value of $\tilde\Phi$, and the (positive) cut space is defined by $$M_{cut}^+=\tilde\Phi^{-1}(\alpha)/S^1=\left\{(m,z)\in M\times\mathbb{C}:\Phi(m)-|z|^2=\alpha\right\}/S^1\ .$$ This is a symplectic manifold, with the symplectic form $\omega_{cut}^+$ obtained by reduction, and $S^1$ acts on $M_{cut}^+$ by $a\cdot [m,z]=[a\cdot m,z]$. If $M$ is also a Riemannian oriented manifold, so is the cut space (but the natural inclusion $M_{cut}^+\hookrightarrow M$ is not an isometry).\\ \noindent Assume that the following is given: \begin{enumerate} \item An $m$-dimensional oriented Riemannian manifold $M$. \item A closed real two-form $\omega$ on $M$. \item An action of $S^1$ on $M$ by $\omega$-invariant isometries. \item An $S^1$-equivariant spin$^c$ prequantization $(P,\theta)=(P_M,\theta_M)$ for $(M,\omega)$.\\ \end{enumerate} Recall that the action $S^1\circlearrowright(M,\omega)$ is Hamiltonian, with moment map $\Phi\colon M\to\mathfrak{u}(1)^*$ determined by the equation $$\pi^*(\Phi^\xi)=-i\cdot\iota_{\xi_P}(\theta)\qquad,\qquad\xi\in\mathfrak{u}(1)$$ where $\pi\colon P\to M$ is the projection, and $\xi_P$ is the vector field on $P$ generated by the $S^1$-action (see Remark \ref{moment map}). We want to cut the given spin$^c$ prequantization. For that we choose $\alpha\in\mathfrak u(1)^*$ and set $Z=\Phi^{-1}(\alpha)$. We assume that $S^1$ acts on $Z$ freely, and that $\alpha$ is a regular value of $\Phi$ (however, we do not assume that $\omega$ is nondegenerate). Our goal is to get a condition on $\alpha$ such that cutting along $Z=\Phi^{-1}(\alpha)$ is possible (i.e., such that a spin$^c$ prequantization on the cut space is obtained).\\ We proceed according to the following steps. \begin{description} \item[Step 1] Let $S^1$ act on the complex plane via $$(a,z)\mapsto a^{-1}\cdot z\qquad,\qquad a\in S^1,\ z\in\mathbb C\ .$$ This action preserves the standard Riemannian structure and orientation, and the two-form $\omega_\mathbb C=-i\cdot dz\wedge d\bar z$. Fix an odd integer $\ell$, and consider the $S^1$-equivariant spin$^c$ prequantization $(P_\mathbb C^\ell,\tilde\theta_\mathbb C)$ for $S^1\circlearrowright (\mathbb C,\omega_\mathbb C)$ defined in \S\ref{prequan. for C}.\\ \item[Step 2] Using Claim \ref{prequan.
for products} we obtain an $S^1$-equivariant spin$^c$ prequantization $(P_{M\times\mathbb C},\theta_{M\times\mathbb C})$ for $S^1\circlearrowright ({M\times\mathbb C},\omega\oplus\omega_{\mathbb C})$.\\ \item[Step 3] Denote $$\tilde Z=\left\{(m,z):\Phi(m)-|z|^2=\alpha\right\}\subset M\times\mathbb{C}\ .$$ This is an $S^1$-invariant submanifold of codimension 1. By Claim \ref{prequan. for Z}, we get an $S^1$-equivariant spin$^c$ prequantization $(P_{\tilde Z},\theta_{\tilde Z})$ for $(\tilde Z,\omega_{\tilde Z})$, where $\omega_{\tilde Z}$ is the restriction of $\omega\oplus\omega_\mathbb C$ to $\tilde Z$.\\ \item[Step 4] By Remark \ref{remark product}, the pair $(P_{\tilde Z},\theta_{\tilde Z})$ is an $S^1$-equivariant prequantization with respect to both the anti-diagonal and the `M-action' (in which $S^1$ acts on the $M$ component via the given action, and on the $\mathbb C$ component trivially). Using the terminology introduced in Remark \ref{prequan. descends}, we state our main theorem, which enables us to complete the process and get an equivariant prequantization on the (positive) cut space. \end{description} \begin{theorem} \label{main-thm} The $S^1$-equivariant spin$^c$ prequantization $(P_{\tilde Z},\theta_{\tilde Z})$ descends to an $S^1$-equivariant spin$^c$ prequantization on $(\tilde Z/S^1=M_{cut}^+\,,\,\omega_{cut}^+)$ if and only if $$\alpha=\frac{\ell}{2}\in\mathfrak{u}(1)^*=\mathbb{R}\ .$$ \end{theorem} \begin{proof} By Proposition \ref{main-prop}, $(P_{\tilde Z},\theta_{\tilde Z})$ will descend to a prequantization on the cut space if and only if $$ \theta_{\tilde Z}'\left[\left(\frac{\partial}{\partial\varphi}\right)_{\eta^*(P_{\tilde Z})}\right]=0\ .$$ This is the same as requiring that $\theta_{\tilde Z}$, when restricted to $\eta^*(P_{\tilde Z})$, vanishes: $$\theta_{\tilde Z}\left.\left[\left(\frac{\partial}{\partial\varphi}\right)_{P_{\tilde Z}}\right]\right|_{\eta^*(P_{\tilde Z})}=0\ ,$$ which is equivalent to $$\theta_{M\times\mathbb C}\left[\left(\frac{\partial}{\partial\varphi}\right)_{P_{M\times\mathbb C}}\right]=0\ \ \text{on}\ \ \eta^*(P_{\tilde Z})\ .$$ Now using the formula for $\theta_{M\times\mathbb{C}}$, we get that $${\theta_M}\left(\left(\frac{\partial}{\partial\varphi}\right)_{P_M}\right)+ {\tilde\theta_\mathbb C}\left(\left(\frac{\partial}{\partial\varphi}\right)_{P^\ell_\mathbb C}\right)=0\ .$$ It is not hard to show that at a point $(z,[A,w])\in P^\ell_\mathbb C=\mathbb C\times Spin^c(2)$, we have $$\quad\left(\frac{\partial}{\partial\varphi}\right)_{P^\ell_\mathbb C}= i\cdot\left[\bar z\frac{\partial}{\partial\bar z}-z\frac{\partial}{\partial z}\right]+\nu|_{[A,w]}$$ where $\nu|_{[A,w]}$ is the vector field on $Spin^c(2)$ generated by the element $$\nu=-\frac{1}{2}\,e_1e_2-\frac{i\cdot \ell}{2}\in\mathfrak{spin}^c(2)\ .$$ Therefore one computes that \begin{equation*} \tilde\theta_\mathbb C\left(\left(\frac{\partial}{\partial\varphi}\right)_{P^\ell_\mathbb C}\right)=-i\cdot\left(|z|^2+\frac{\ell}{2}\right)\ . \end{equation*} On the other hand, by the condition defining our moment map, we have that $$ {\theta_M}\left(\left(\frac{\partial}{\partial\varphi}\right)_{P_M}\right) =i\cdot\pi^*\left(\Phi^{{\partial}/{\partial\varphi}}\right)$$ where $\pi\colon P\to M$ is the projection.
Combining the above we see that $(P_{\tilde Z},\theta_{\tilde Z})$ descends to an $S^1$-equivariant spin$^c$ prequantization on $(\tilde Z/S^1=M_{cut}^+\,,\,\omega_{cut}^+)$ if and only if (on $\eta^*(P_{\tilde Z})$): $$\pi^*\left(\Phi^{{\partial}/{\partial\varphi}}\right)- |z|^2-\frac{\ell}{2}=0\ .$$ But on the manifold $\tilde Z$ we have $\Phi(m)-|z|^2=\alpha$, and hence the last equality is equivalent to $$\alpha-\frac{\ell}{2}=0\ ,$$ as needed. \end{proof} \begin{remark} We can also construct a spin$^c$ prequantization for the negative cut space $(M_{cut}^-,\omega_{cut}^-)$ as follows. Recall that $M_{cut}^-$ is defined as the quotient $$\left\{(m,z)\in M\times\mathbb C:\Phi(m)+|z|^2=\alpha\right\}/S^1\ ,$$ where the $S^1$-action on $M\times\mathbb C$ is taken to be the diagonal action, and $\omega_{cut}^-$ is defined as before by reduction. The two-form on $\mathbb C$ is taken to be $i\,dz\wedge d\bar z$, and the spin$^c$ prequantization for $\mathbb C$ is defined using the connection $$\theta_\mathbb C=\pi_2^*(\theta^R)-\frac{1}{2}\,\pi_1^*(\bar z\,dz-z\,d\bar z)\ .$$ The $S^1$-action on $P_\mathbb C^\ell$ will be given by $$S^1\times P_\mathbb C^\ell\to P_\mathbb C^\ell\qquad,\qquad(e^{i\varphi},(z,[a,w]))\mapsto (e^{i\varphi}z,[x_{\varphi/2}\cdot a,e^{-i\ell\varphi/2}\cdot w])$$ (see \S\ref{prequan. for C}). Other than that, the construction is carried out as for the positive cut space, and we can prove a theorem that will assert that $\alpha=\ell/2$, if the cutting is to be done along the level set $\Phi^{-1}(\alpha)$ of the moment map. \end{remark} \section{An example: the two-sphere}\label{Sec-Ex} In this section we discuss in detail spin$^c$ prequantizations and cutting for the two-sphere. \subsection{Prequantizations for the two-sphere} The two-sphere will be thought of as a submanifold of $\mathbb R^3$: $$S^2=\{(x,y,z)\in\mathbb R^3:x^2+y^2+z^2=1\} $$ with the outward orientation and natural Riemannian structure induced from the inner product in $\mathbb R^3$. Fix a real number $c$, and let $\omega=c\cdot A$, where $A$ is the area form on the two-sphere $$A=j^*(x\,dy\wedge dz+y\,dz\wedge dx+z\, dx\wedge dy)\ ,$$ and where $j\colon S^2\hookrightarrow\mathbb R^3$ is the inclusion. Note that $\omega$ is a symplectic form if and only if $c\ne 0$. For any real $\varphi$ define \[C_\varphi=\left( \begin{array}{ccc} \cos\varphi & -\sin\varphi & 0\\ \sin\varphi & \cos\varphi & 0\\ 0 & 0 & 1 \end{array} \right)\ ,\] and let $S^1$ act on $S^2$ via rotations around the $z$-axis, i.e., $$(e^{i\varphi},v)\mapsto C_\varphi\cdot v\qquad,\qquad v\in S^2\ .$$ In Section 7 of \cite{SF1}, we constructed all $S^1$-equivariant spin$^c$-structures over the $S^1$-manifold $S^2$ (up to equivalence). Let us review the main ingredients here. First, the \emph{trivial spin$^c$ structure} $P_0$ is given by the following diagram. $$ \begin{CD} S^1\times Spin^c(3) @>>> P_0=Spin^c(3) @<<< Spin^c(3)\times Spin^c(2)\\ @VVV @V\Lambda VV @VVV\\ S^1\times SO(3) @>>> SO(3) @<<< SO(3)\times SO(2)\\ @VVV @V\pi VV @.\\ S^1\times S^2 @>>> S^2 @.\\ \end{CD} $$\\ In this diagram we use the fact that the frame bundle of $S^2$ is isomorphic to $SO(3)$. The projection $\pi$ is given by $$A\mapsto A\cdot x$$ where $x=(0,0,1)$ is the north pole, and the map $\Lambda$ is the obvious one. The horizontal maps describe the $S^1$ and the principal actions: $S^1$ and $SO(2)$ act on $SO(3)$ by left and right multiplication by $C_\varphi$, respectively.
The principal action of $Spin^c(2)$ on $Spin^c(3)$ is just right multiplication, and the $S^1$ action on $Spin^c(3)$ is given by $$(e^{i\varphi},[A,z])\mapsto[x_{\varphi/2}\cdot A\;,\;e^{i\varphi/2}\cdot z]$$ where $x_{\varphi/2}=\cos(\varphi/2)+\sin(\varphi/2)\cdot e_1e_2\in Spin(3)$. We can turn this spin$^c$ structure into a spin$^c$ prequantization as follows. Let $\omega_0=0$ be the zero two-form on $S^2$, and consider the 1-form $$\theta_0=\frac{1}{2}\;det_*\circ\theta^R\colon TSpin^c(3)\to \mathfrak u(1)=i\mathbb R$$ where $\theta^R$ is the right-invariant Maurer-Cartan form on $Spin^c(3)$ and the map $det$ was defined in \S\ref{def_of_preq}. Clearly, $(P_0,\theta_0)$ is an $S^1$-equivariant spin$^c$ prequantization for $(S^2,\omega_0)$. Next, we construct all $S^1$-equivariant line bundles over $S^2$. \begin{claim} Given a pair of integers $(k,n)$, define an $S^1$-equivariant complex Hermitian line bundle $L_{k,n}$ as follows: \begin{enumerate} \item As a complex line bundle, $$L_{k,n}=S^3\times_{S^1}\mathbb{C}\ ,$$ where $S^1$ acts on $\mathbb{C}$ with weight $n$ and on $S^3\subset\mathbb C^2$ by $$S^1\times S^3\to S^3\qquad,\qquad(a,(z,w))\mapsto (az,aw)\ .$$ \item The circle group $S^1$ acts on $L_{k,n}$ by $$ S^1\times L_{k,n}\to L_{k,n}\qquad,\qquad \left(e^{i\varphi},[(z,w),u]\right)\mapsto [(e^{i\varphi/2}z,e^{-i\varphi/2}w),e^{i(n+2k)\varphi/2}\cdot u]\ .$$ \end{enumerate} Then every equivariant line bundle over $S^2$ is equivariantly isomorphic to $L_{k,n}$ for some integers $k,n$. \end{claim} For the proof, see Claim 7.1 in \cite{SF1} (where slightly different notation is used). To get all spin$^c$ structures on $S^2$, we need to twist $P_0$ with the $U(1)$-bundle $U(L_{k,n})$ associated to $L_{k,n}$ for some $k,n\in\mathbb Z$. Thus define $$P_{k,n}=P_0\times_{U(1)}U(L_{k,n})\ .$$ The principal $Spin^c(2)$-action comes from the action on $P_0$, and the left $S^1$-action is induced from the diagonal action. We now define a connection $$\theta_{n}\colon TP_{k,n}\to i\mathbb R$$ on the $U(1)$ bundle $P_{k,n}\to SO(3)=SOF(S^2)$, which will not depend on $k$, as follows: $$\theta_{n}=\theta_0+\frac{n}{2}\left( -\bar z\;dz+z\;d\bar z-\bar w\;dw+w\;d\bar w\right)+u^{-1}du$$ where $(z,w)\in S^3\subset\mathbb C^2$ are coordinates on $S^3$ and $u^{-1}du$ is the Maurer-Cartan form on the $S^1$ component of $U(L_{k,n})=S^3\times_{S^1}S^1$. One can compute $$d\theta_{n}=n(dz\wedge d\bar z+dw\wedge d\bar w)=\pi^*(-in/2\cdot A)$$ and hence if we define $\omega_n=\frac{n}{2}\cdot A$ then $(P_{k,n}\,,\,\theta_n)$ is a spin$^c$ prequantization for $(S^2,\omega_n)$. Let $P_{det}$ be the $U(1)$-bundle associated to the determinant line bundle of a spin$^c$ structure. We proved in Section 7 of \cite{SF1} that the determinant line bundle of any spin$^c$ structure on the two-sphere is isomorphic to $L_{2k+1,2n}$ for some integers $k,n$, and hence has a square root (as a non-equivariant line bundle). Using this fact and the construction of $(P_{k,n}\,,\,\theta_n)$ above, we prove: \begin{claim}\label{S^2-integral} The $S^1$-manifold $(S^2,\omega=c\cdot A)$ is spin$^c$-prequantizable (i.e., admits an $S^1$-equivariant spin$^c$ prequantization) if and only if $2c\in\mathbb Z$. \end{claim} \begin{proof} Assume that $(P,\theta)$ is a spin$^c$ prequantization for $(S^2,\omega)$. Then, by Claim~\ref{connection on P_det}, $\theta=\frac{1}{2}q^*(\overline\theta)$ for some connection 1-form $\overline\theta$ on the principal $U(1)$-bundle $p\colon P_{det}\to S^2$, where $q\colon P\to P/Spin(2)=P_{det}$ is the quotient map.
Since $(P,\theta)$ is a spin$^c$ prequantization, we have $$ d\theta=\pi^*(-i\cdot\omega)\qquad\Rightarrow\qquad q^*\left(\frac{1}{2}d\overline\theta\right)=q^*p^*(-i\cdot\omega) \qquad\Rightarrow\qquad \frac{1}{2}d\overline\theta=p^*(-i\cdot\omega)\ $$ which implies $$d\overline\theta=p^*(-2i\cdot\omega)\ .$$ This means that $[-2i\cdot\omega]$ is the curvature class of the determinant line bundle of $P$. According to the above remark, $P_{det}$ is a square, and hence the class $$\frac{1}{2}\,[-2i\cdot\omega]=[-i\cdot\omega]$$ is a curvature class of a line bundle over $S^2$. This forces $[\omega]$ to be integral (Weyl's theorem - page 172 in \cite{Fr}), i.e., $$\int_{S^2}\omega\in 2\pi\mathbb Z\qquad\Rightarrow \qquad 2c\in\mathbb Z$$ and the conclusion follows. Conversely, assume that $2c\in\mathbb Z$. Then, as mentioned above, $(P_{k,2c}\,,\,\theta_{2c})$ (for any $k\in\mathbb Z$) is a spin$^c$ prequantization for $(S^2,c\cdot A)$ as needed. \end{proof} Let us now compute the moment map $$\Phi\colon S^2\to\mathfrak u(1)^*=\mathbb R$$ for $(S^2,n/2\cdot A)$ (for $n\in\mathbb Z$) determined by the prequantization $(P_{k,n},\theta_n)$. Recall that $$\theta_{n}=\theta_0+\frac{n}{2}\left( -\bar z\;dz+z\;d\bar z-\bar w\;dw+w\;d\bar w\right)+u^{-1}du\ .$$ It is straightforward to show that the vector field generated by the left $S^1$-action on $P_{k,n}$ is $$\left(\frac{\partial}{\partial\varphi}\right)_{P_{k,n}}= \frac{i}{2}\frac{\partial}{\partial v}-\frac{i}{2}\left(-\bar z\frac{\partial}{\partial\bar z}+z\frac{\partial}{\partial z}+\bar w\frac{\partial}{\partial\bar w}-w\frac{\partial}{\partial w}\right)+\frac{i}{2}(n+2k)\frac{\partial}{\partial u}$$ where $\frac{\partial}{\partial v}$ is the vector field on $P_0$ generated by the $S^1$-action. Now compute \begin{multline*} \theta_n\left(\left(\frac{\partial}{\partial\varphi}\right)_{P_{k,n}}\right)= \frac{i}{2}+\frac{in}{4}\left(\bar z\,z+z\,\bar z-\bar w\,w-w\,\bar w\right)+\frac{i}{2}(n+2k)=\\[5pt] \qquad=\frac{i}{2}\left[n(|z|^2-|w|^2)+n+2k+1\right]\hfill\\ \end{multline*} and thus $\Phi$ is given by $$\Phi([z,w])=-i\cdot\theta_n\left(\left(\frac{\partial}{\partial\varphi}\right)_{P_{k,n}}\right)= \frac{n}{2}\left(|z|^2-|w|^2+1\right)+k+\frac{1}{2}\ .$$ \begin{remark} Observe that for $[z,w]\in S^2=\mathbb CP^1$, the quantity $|z|^2-|w|^2$ represents the third coordinate $x_3$ (i.e., the height) on the unit sphere (this is part of the Hopf-fibration). Since $-1\le x_3\le 1$, we have (for $n\ge0$): $$k+\frac{1}{2}\le\Phi\le n+k+\frac{1}{2}$$ and hence the image of the moment map is the closed interval $$\left[k+\frac{1}{2}\,,\,n+k+\frac{1}{2}\right]$$ if $n\ge 0$ or $$\left[n+k+\frac{1}{2}\,,\,k+\frac{1}{2}\right]$$ if $n\le 0$. \end{remark} \subsection{Cutting a prequantization on the two-sphere} Fix an $S^1$-equivariant spin$^c$-prequantization $(P_{k,n},\theta_n)$ for $(S^2,\omega_n)$, where $\omega_n=\frac{n}{2}\cdot A$ ($A$ is the area form on the two-sphere) and $n\ne 0$. The corresponding moment map, as computed above, is $$\Phi\colon S^2\to\mathbb R\qquad,\qquad \Phi([z,w])= \frac{n}{2}\left(|z|^2-|w|^2+1\right)+k+\frac{1}{2}\ .$$ We would like to cut this prequantization along a level set $\Phi^{-1}(\alpha)$ of the moment map. By Theorem \ref{main-thm} we must have $$\alpha=\frac{\ell}{2}$$ for some odd integer $\ell$, and the cutting has to be done using the spin$^c$ prequantization $(P^\ell_\mathbb C,\tilde\theta_\mathbb C)$ for $(\mathbb C,\omega_\mathbb C)$ (see \S\ref{prequan. for C}).
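For a concrete illustration, take $k=0$ and $n=2$: the image of the moment map is $\left[\frac{1}{2}\,,\,\frac{5}{2}\right]$, and the levels $\alpha=\ell/2$ with $\ell$ odd lying strictly inside this interval (i.e., with $2k+1<\ell<2n+2k+1$) reduce to the single value $\ell=3$, that is, $\alpha=\frac{3}{2}$. Note that the admissible levels always lie in $\mathbb Z+\frac{1}{2}$ and thus avoid the weights of $S^1$; this is the phenomenon behind the additivity of spin$^c$ quantization under cutting mentioned in the introduction.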
In \cite[Section 7]{SF1} we performed the cutting construction for the two-sphere in the case where $\ell=1$. In this case we showed that the spin$^c$ structures obtained for the cut spaces are $$(P_{k,n})_{cut}^+=P_{0,k+n}\qquad,\qquad (P_{k,n})_{cut}^-=P_{k,-k}\ .$$ The computations in \cite{SF1} can be modified for an arbitrary $\ell$ to get $$(P_{k,n})_{cut}^+=P_{(\ell-1)/2,k+n-(\ell-1)/2}\qquad,\qquad (P_{k,n})_{cut}^-=P_{k,-k+(\ell-1)/2}\ .$$ Recall that the cut spaces obtained in this case are symplectomorphic to two-spheres (if $\ell/2$ is strictly between $k+\frac{1}{2}$ and $n+k+\frac{1}{2}$). Using this identification we have: \begin{claim} If the symplectic manifold $(S^2,\omega_n)$, endowed with the Hamiltonian $S^1$-action $$(e^{i\varphi},v)\mapsto C_\varphi\cdot v$$ and the above moment map $\Phi$, is cut along the level set $\Phi^{-1}(\ell/2)$, then the reduced two-forms on the cut spaces are $$\omega_{cut}^+=\omega_{k+n+(1-\ell)/2}\qquad \mbox{and}\qquad \omega_{cut}^-=\omega_{-k+(\ell-1)/2}\ .$$ Here we assume that $\ell/2$ is strictly between $k+\frac{1}{2}$ and $n+k+\frac{1}{2}$. \end{claim} \begin{proof} Let us concentrate on the positive cut space. We will use cylindrical coordinates $(\phi,h)$ to describe the point $$(x,y,z)=(\sqrt{1-h^2}\cos\phi,\sqrt{1-h^2}\sin\phi,h)$$ on the unit sphere $S^2$. The positive cut space is obtained by reduction. The relevant diagram is $$\begin{CD} \tilde Z @>i>> S^2\times\mathbb C\\ @V p VV\\ \tilde Z/S^1\cong S^2 \end{CD}$$ Recall that $$\tilde Z=\left\{((\phi,h),u)\in S^2\times\mathbb C:\Phi(\phi,h)-|u|^2=\ell/2\right\}$$ and that the two-form on $S^2\times\mathbb C$ is $$\omega_n+\omega_\mathbb C=\frac{n}{2}\cdot A -i\,du\wedge d\bar u\ .$$ The map $p$ is given by $$((\phi,h),u=r\,e^{i\alpha})\mapsto \left(\phi+\alpha\,,\,\frac{2n}{2n+2k+1-\ell}(h-1)+1\right)\ .$$ The pullback of the area form on $S^2$ via $p$ is $$A'=(d\phi+d\alpha)\wedge\frac{2n}{2n+2k+1-\ell}dh= \frac{2n}{2n+2k+1-\ell}(d\phi\wedge dh-\frac{2i}{n}du\wedge d\bar u)\ ,$$ and thus the pullback of $\omega_{k+n+(1-\ell)/2}$ via $p$ is $$\frac{k+n+(1-\ell)/2}{2}\cdot A'=\frac{n}{2}A-i\,du\wedge d\bar u=\omega_n+\omega_\mathbb C$$ as needed. A similar proof is obtained for the negative cut space. \end{proof} To complete the cutting, we need to find the corresponding connections $\theta^\pm=(\theta_n)_{cut}^\pm$ on $(P_{k,n})_{cut}^\pm$. Instead of going through the cutting process of a connection, we proceed as follows (for the positive cut space). We know that $\left((P_{k,n})_{cut}^+,\theta^+\right)$ must be a spin$^c$ prequantization for $$((S^2)_{cut}^+,\omega_{cut}^+)=(S^2,\omega_{k+n+(1-\ell)/2})\ .$$ This means that $$d\theta^+=d\theta_{k+n+(1-\ell)/2}$$ which implies that $$\theta^+-\theta_{k+n+(1-\ell)/2}=\pi^*\beta$$ for some closed one-form $\beta\in\Omega^1(S^2;\mathfrak u(1))$. But then $\beta=df$ is also exact since $S^2$ is simply connected. We conclude that $$\theta^+=\theta_{k+n+(1-\ell)/2}+d(\pi^*(f))\ ,$$ and thus the pair $((P_{k,n})_{cut}^+,\theta^+)$ is gauge equivalent to $((P_{k,n})_{cut}^+,\theta_{k+n+(1-\ell)/2})$. A similar argument can be carried out for the negative cut space.
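Continuing the example $k=0$, $n=2$, $\ell=3$ (so that $(\ell-1)/2=1$), the formulas above give $$(P_{0,2})_{cut}^+=P_{1,1}\ ,\quad\omega_{cut}^+=\omega_1 \qquad\mbox{and}\qquad (P_{0,2})_{cut}^-=P_{0,1}\ ,\quad\omega_{cut}^-=\omega_1\ ,$$ i.e., the sphere carrying $\omega_2$ is cut along $\Phi^{-1}(3/2)$ into two spheres, each carrying the prequantized form $\omega_1$.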
We summarize:\\ \emph{The cutting of $(S^2,\omega_n)$ along the level set $\Phi^{-1}(\ell/2)$ yields two spin$^c$ prequantizations: $$(P_{k,-k+(\ell-1)/2}\,,\,\theta_{-k+(\ell-1)/2})\qquad \mbox{for}\qquad ((S^2)_{cut}^-=S^2,\omega_{-k+(\ell-1)/2})$$ and $$(P_{(\ell-1)/2,k+n+(1-\ell)/2}\,,\,\theta_{k+n+(1-\ell)/2})\qquad \mbox{for}\qquad ((S^2)_{cut}^+=S^2,\omega_{k+n+(1-\ell)/2})\ .$$ } \section{Prequantizing $\mathbb CP^n$} In this section we construct a spin$^c$ prequantization for the complex projective space $\mathbb CP^n$ (with the standard Riemannian structure coming from the K\"ahler structure). For $n=1$ we have shown that a two-form $\omega$ on $\mathbb CP^1\cong S^2$ is spin$^c$ prequantizable if and only if $\frac{1}{2\pi}\omega$ is integral (i.e., $\int_{\mathbb CP^1}\frac{1}{2\pi}\omega\in\mathbb Z$; see Claim \ref{S^2-integral}). This is not true in general. We will prove that for an even $n$, if $(\mathbb CP^n,\omega)$ is spin$^c$ prequantizable then $\frac{1}{2\pi}\omega$ will not be integral. This is an important difference between spin$^c$ prequantization and the geometric prequantization scheme of Kostant and Souriau (an excellent reference for geometric quantization is \cite{GQ}). From now on, fix a positive integer $n$. Points in $\mathbb CP^n$ will be written as $[v]$, where $v\in S^{2n+1}\subset \mathbb C^{n+1}$. The Fubini-Study form $\omega_{FS}$ on $\mathbb CP^n$ will be normalized (as in \cite[page 261]{Kirillov}) so that $\int_{\mathbb CP^1}\omega_{FS}=1$ (where $\mathbb CP^1$ is naturally embedded into $\mathbb CP^n$). We describe our construction in steps. For simplicity, we discuss the non-equivariant case (where the acting group $G$ is the trivial group), but our results will apply to the equivariant case as well. Also, $|\cdot|$ will denote the determinant of a matrix.\\ \noindent \textbf{Step 1 - Constructing a spin$^c$ structure.}\\ The group $SU(n+1)$ acts transitively on $\mathbb CP^n$ via $$SU(n+1)\times\mathbb CP^n\to\mathbb CP^n\qquad,\qquad (A,[v])\mapsto [A\cdot v]\ .$$ Let $p=e_{n+1}\in\mathbb C^{n+1}$ denote the unit vector $(0,\dots,0,1)$. The stabilizer of $[p]$ under the $SU(n+1)$-action is $$H=S(U(n)\times U(1))=\left\{\left( \begin{array}{cc} B & 0 \\ 0 & |B|^{-1} \\ \end{array} \right) :B\in U(n)\right\}\subset SU(n+1)$$ and so $\mathbb CP^n\cong SU(n+1)/H$ via $$[A]\mapsto [A\cdot p]\ .$$ The tangent space $T_{[p]}\mathbb CP^n$ can be identified with $\mathbb C^n$ and then the isotropy representation is given by $$\sigma\colon H\to U(n)\qquad,\qquad \sigma\left( \begin{array}{cc} B & 0 \\ 0 & |B|^{-1} \\ \end{array} \right)=|B|\cdot B\ . $$ The frame bundle of $\mathbb CP^n$ can then be described as an associated bundle (using $U(n)\subset SO(2n)$): $$SOF(\mathbb CP^n)=SU(n+1)\times_\sigma SO(2n)\ .$$ The map $$f\colon U(n)\to SO(2n)\times S^1\qquad,\qquad A\mapsto (A,|A|)$$ has a lift $F\colon U(n)\to Spin^c(2n)$ (see \cite[page 27]{Fr} for an explicit formula for $F$). Using that, we define $$P=SU(n+1)\times_{\tilde\sigma}Spin^c(2n)$$ where $\tilde\sigma=F\circ\sigma\colon H\to Spin^c(2n)$.
Thus we get a spin$^c$ structure $P\to SOF(\mathbb CP^n)\to\mathbb CP^n$ on the $n$-dimensional complex projective space.\\ \noindent \textbf{Step 2 - Constructing a connection on $P\to SOF(\mathbb CP^n)$.}\\ Let $\theta^R\colon TSU(n+1)\to\mathfrak{su}(n+1)$ be the right-invariant Maurer-Cartan form, and define $$\chi\colon\mathfrak{su}(n+1)\to\mathfrak h=Lie(H)\qquad,\qquad \left( \begin{array}{cc} A & \ast \\ \ast & -tr(A) \\ \end{array} \right)\mapsto \left( \begin{array}{cc} A & 0 \\ 0 & -tr(A) \\ \end{array} \right)\ .$$ Since $\chi$ is an equivariant map under the adjoint action of $H$, we conclude that $$\chi\circ\theta^R\colon TSU(n+1)\to\mathfrak h$$ is a connection 1-form on the (right-) principal $H$-bundle $$SU(n+1)\to\mathbb CP^n=SU(n+1)/H\ .$$ This induces a connection 1-form on the principal $Spin^c(2n)$-bundle $P\to\mathbb CP^n$: $$\hat\theta\colon TP\to\mathfrak{spin}^c(2n)\ .$$ After composing $\hat\theta$ with the projection $$\frac{1}{2}det_*\colon\mathfrak{spin}^c(2n)=\mathfrak{spin}(2n)\oplus \mathfrak u(1)\to\mathfrak u(1)=i\mathbb R$$ we get a connection 1-form $\theta=\frac{1}{2}det_*\circ\hat\theta$ on the principal $U(1)$-bundle $P\to SOF(\mathbb CP^n)$. In fact, here is an explicit formula for the connection $\theta$:\\[5pt] If $\xi=\left( \begin{array}{cc} A & \ast \\ \ast & -tr(A) \\ \end{array} \right)\in\mathfrak{su}(n+1)$,\vspace{5pt} $\zeta\in\mathfrak{spin}^c(2n)$, $\xi^R$ and $\zeta^L$ are the corresponding vector fields on $SU(n+1)$ and $Spin^c(2n)$, and $$q\colon SU(n+1)\times Spin^c(2n)\to P$$ is the quotient map, then a direct computation gives $$\theta(q_*(\xi^R+\zeta^L))=\frac{n+1}{2}\cdot tr(A)+\frac{1}{2}det_*(\zeta)\ .$$ Note that if $\zeta\in\mathfrak{spin}(2n)$, then $\theta(q_*(\zeta^L))=0$.\\ \noindent \textbf{Step 3 - Computing the curvature of $\theta$.}\\ Using the formula $$d\theta(V,W)=V\,\theta(W)-W\,\theta(V)-\theta([V,W])$$ for any two vector fields $V,W$ on $P$, we can compute the curvature $d\theta$ of the connection $\theta$. We obtain the following:\\ If $\xi_1,\xi_2\in\mathfrak{su}(n+1)$, $\zeta_1,\zeta_2\in\mathfrak{spin}^c(2n)$, and $$[\xi_1,\xi_2]=\left( \begin{array}{cc} X & \ast \\ \ast & \ast \\ \end{array} \right)\in\mathfrak{su}(n+1)$$ then we have $$d\theta(q_*(\xi_1^R+\zeta_1^L),q_*(\xi_2^R+\zeta_2^L))= -\frac{n+1}{2}\cdot tr(X)\ .$$ Let $\omega$ be the real two-form on $\mathbb CP^n$ for which $$d\theta=\pi^*(-i\cdot\omega)\ .$$ In fact $$\omega=-\frac{n+1}{2}\cdot 2\pi\,\omega_{FS} $$ where $\omega_{FS}$ is the Fubini-Study form. To see this, it is enough, by $SU(n+1)$-invariance of $\omega$ and $\omega_{FS}$, to show the above equality at one point (for instance, at $[p]\in\mathbb CP^n$). Recall that the cohomology class of $\omega_{FS}$ generates the integral cohomology of $\mathbb CP^n$, i.e., $\int_{\mathbb CP^1}\omega_{FS}=1$. This immediately implies that our two-form $\omega$ is integral if and only if $n$ is odd, and we have:\\ $(P,\theta)$ is a spin$^c$ prequantization for $(\mathbb CP^n,\omega)$. \begin{remark} It is not hard to conclude that a spin$^c$ prequantizable two-form $\omega$ on $\mathbb CP^n$ is integral if and only if $n$ is odd.
In fact, Proposition D.43 in \cite{Kar}, together with Claim \ref{connection on P_det}, implies the following:\\ \emph{For odd $n$, a two-form $\omega$ on $\mathbb CP^n$ is spin$^c$ prequantizable if and only if $\frac{1}{2\pi}\omega$ is integral, i.e., $\left[\frac{1}{2\pi}\omega\right]\in\mathbb Z[\omega_{FS}]$.\\ For even $n$, a two-form $\omega$ on $\mathbb CP^n$ is spin$^c$ prequantizable if and only if $\left[\frac{1}{2\pi}\omega\right]\in\left(\mathbb Z+\frac{1}{2}\right)[\omega_{FS}]$.} \end{remark}
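The parity dependence in the remark is elementary arithmetic for the prequantization constructed above, where $\left[\frac{1}{2\pi}\omega\right]=-\frac{n+1}{2}[\omega_{FS}]$; the following short sketch (illustrative only) prints the coefficient for small $n$:
\begin{verbatim}
from fractions import Fraction

# [omega / 2pi] = -(n+1)/2 [omega_FS]: integral iff n is odd.
for n in range(1, 9):
    coeff = Fraction(-(n + 1), 2)
    kind = "integral" if coeff.denominator == 1 else "half-integral"
    print(n, coeff, kind)
\end{verbatim}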
\section{Introduction and summary} \subsection{Introduction} A covariant quantization of the massless $D$=11 superparticle (see \cite{B+T=D-branes,Green+99}) has been recently considered \cite{BdAS2006} in its twistor-like Lorentz harmonics formulation \cite{BL98'} (see also \cite{B90,BZ-str,BZ-strH,BZ-p}). This new, covariant {\it supertwistor quantization} led to the linearized $D$=11 supergravity multiplet in the spectrum (in agreement with the light--cone results of \cite{Green+99}) and made it possible to find a possible origin of the hidden $SO(16)$ symmetry of $D=11$ supergravity \cite{Nicolai87}. In this paper we study the BRST quantization of the $D$=11 massless superparticle model in that approach and then turn back to the covariant quantization of physical degrees of freedom (different from the supertwistor one in \cite{BdAS2006}) to search for an explanation of the simple structure of the superparticle cohomologies. The $D=11$ superparticle is interesting in its own right, as the simplest of the M-theory superbranes, the M0-brane, and because its quantization produces, as noticed above, the linearized $D$=11 supergravity multiplet. Nevertheless, our main motivation is to look for the origin and geometric meaning of the `pure spinor' formalism by Berkovits \cite{NB-pure}. Recently, a breakthrough in the covariant description of quantum superstring theory has been reached in this pure spinor framework: a technique for loop calculations was developed \cite{NBloops} and the first results were given in \cite{NBloops,NBloopC,NBloopsR4}. In particular, two new multiloop theorems useful in recent investigations of the possible finiteness of $N$=8 $D$=4 supergravity \cite{N8SG=fin} were proved in \cite{NBloopsR4}. On the other hand, the pure spinor superstring was introduced, and still remains, as a set of prescriptions for quantum superstring calculations, rather than as a quantization of the Green-Schwarz superstring. Despite certain progress in relating the pure spinor superstring \cite{NB-pure} to the original Green--Schwarz formulation \cite{pure-GS}, and also \cite{Dima+Mario+02} to the superembedding approach \cite{bpstv,hs1,bst,Dima}\footnote{Notice also the recent progress \cite{Skenderis07} in the derivation of the pure spinor ghost measure for loop calculations, which was originally proposed in \cite{NBloops} on the grounds of a series of very elegant but indirect arguments involving the picture changing operator characteristic of the RNS (Ramond--Neveu--Schwarz) string model \cite{RNS,GSO}. This was reached, however, by starting from the pure spinor superstring by Berkovits, covariantizing it with respect to the worldsheet reparametrizations by introducing two dimensional gravity and quantizing this sector {\it a la} Batalin-Vilkovisky \cite{BV83}. Thus, although the subject of \cite{Skenderis07} was the quantization of the Berkovits pure spinor model rather than of the original Green-Schwarz superstring, a deeper understanding of the loop calculation technique has been reached already at this stage. An approach similar to \cite{Skenderis07} was also developed earlier in \cite{GrassiPolocastro04}. }, the origin and geometrical meaning of the pure spinor formalism are far from clear. Possible modifications of the pure spinor approach are also being considered (see {\it e.g.} \cite{nonmNB}).
In this context, the Lorentz harmonic approach \cite{Sok,NPS,Lharm,B90,Ghsds,BZ-str,BZ-strH,BZ-p,GHT93}, in the framework of which significant progress in solving the problem of covariant superstring quantization had already been made in the late eighties \cite{NPS,Lharm}, looks particularly interesting. Although no counterpart of the recent progress in loop calculations \cite{NBloops,NBloopC} has been reached (yet) in the Lorentz harmonics framework, its relation with the superembedding approach \cite{bpstv,hs1,bst,Dima}, its transparent geometrical meaning \cite{Sok,Ghsds,BZ-str,BZ-strH,BZ-p} and its twistor-likeness \cite{BZ-str,BZ-strH,BZ-p} justify the hope that its further development (in the pragmatic spirit characteristic of the pure spinor approach of \cite{NB-pure,NBloops,NBloopC}) may help to understand the origin and the geometrical meaning of the pure spinor formalism \cite{NB-pure} as well as its nonminimal modifications \cite{nonmNB}, and even that it might provide a basis for an alternative, convenient and algorithmic technique for superstring loop calculations. A natural first stage in such a program is to study the covariant quantization of the superparticle, in particular of the $D$=11 massless superparticle or M$0$--brane\footnote{See \cite{GrassiAnguelovaVanhove04} for loop calculations with the use of the $D=11$ pure spinor formalism. }, which is less studied than the $D$=10 and $D$=4 superparticle models. \subsection{Summary of the main results} The BRST charge proposed by Berkovits \cite{NB-pure} has the form \begin{eqnarray}\label{QbrstB} \mathbb{Q}^{B}= {\Lambda}^\alpha \; d_\alpha \; , \qquad \end{eqnarray} where $d_\alpha$ are the fermionic constraints of the (here $D$=11) superparticle model, which obey the algebra \begin{eqnarray}\label{dd=P} \{ d_\alpha\, , \, d_\beta \}= 2iP\!\!\!/_{\alpha\beta}\equiv 2i \Gamma^m_{\alpha\beta}P_m \; \qquad (\hbox{here}\; \alpha=1,\ldots , 32\; , \; m= 0,1,\ldots , 9, \#)\; , \quad \end{eqnarray} where $P_m$ is the superparticle momentum, and ${\Lambda}^\alpha$ is the complex {\it pure spinor} which obeys \begin{eqnarray}\label{NB-pureSp} {\Lambda}\Gamma_a{\Lambda}=0 \; , \qquad {\Lambda}^\alpha\not= ({\Lambda}^\alpha)^*\; . \qquad \end{eqnarray} This constraint guarantees the nilpotency $(\mathbb{Q}^{B})^2 =0$ of the Berkovits BRST charge (\ref{QbrstB}): indeed, $(\mathbb{Q}^{B})^2= {1\over 2}{\Lambda}^\alpha{\Lambda}^\beta \{ d_\alpha \, , \, d_\beta \}= i\,{\Lambda}\Gamma^m{\Lambda}\, P_m$, which vanishes due to (\ref{NB-pureSp}). The generic null spinor $\Lambda_\alpha$ contains $23$ complex or $46$ real parameters \cite{NB-pure}\footnote{A direct counting gives 32 - 11 = 21 complex or 42 real parameters, but one can show, passing to the $SO(1,9)$ covariant representation of the (originally $SO(1,10)$ covariant) $D$=11 pure spinor condition \cite{NB-pure}, that two of the $11$ complex conditions are satisfied automatically, so that there are only nine independent complex conditions.}.
A $39$-parameter solution $\widetilde{\Lambda}_\alpha$ of this constraint is provided by \begin{eqnarray}\label{pureSp=} \widetilde{\Lambda}_\alpha = \tilde{\lambda}^+_p v_{\alpha p}^{\;-}\; , \qquad \tilde{\lambda}^+_p\tilde{\lambda}^+_p=0\; , \qquad \{v_{\alpha p}^{\;-}\} = {Spin(1,10) \over [Spin (1,1)\otimes Spin(9)] \, \subset\!\!\!\!\!\times \mathbb{K}_9 } = \mathbb{S}^{9} \; , \qquad \end{eqnarray} where $\tilde{\lambda}^+_p$ is a complex 16 component $SO(9)$ spinor with zero norm, $\tilde{\lambda}^+_p\tilde{\lambda}^+_p=0$, carrying $32-2=30$ degrees of freedom, and $v_{\alpha p}^{\;-}$ are spinorial Lorentz harmonics \cite{BL98'} (see also \cite{B90,Ghsds}, \cite{BZ-str,BZ-strH,BZ-p,BL98'}), a set of $16$ constrained $D$=11 bosonic spinors which, once the constraints are taken into account, provide the homogeneous coordinates for the 11 dimensional celestial sphere $S^9$ and thus carry $9$ degrees of freedom (see below). The existence of such a solution already suggests a relation between the pure spinor and the Lorentz harmonics approaches. Notice that in the $D=10$ dimensional case this relation is much closer: there the counterpart of the solution (\ref{pureSp=}) carries ${\bf 16+8-2=22}$ degrees of freedom, the same number as the generic $D=10$ pure spinor, so that it is the general solution. This may be important for the study of covariant superstring quantization along lines similar to those we present here for the case of the superparticle. \bigskip Here we first construct the Hamiltonian mechanics of the twistor-like Lorentz harmonics formulation of the $D=11$ superparticle and, with the help of the spinorial Lorentz harmonics, separate {\it covariantly} the first and the second class constraints (see \cite{BZ-strH} for an analogous result for the Green-Schwarz superstring). Then we take into account the second class constraints by introducing Dirac brackets \cite{Dirac}, and calculate the Dirac bracket algebra of the first class constraints, which happens to be a nonlinear algebra. Further, following the pragmatic spirit of the Berkovits approach \cite{NB-pure}, \cite{NBloops}, we take care of part of the constraints separately and are left with a set of $16$ fermionic and $1$ bosonic first class constraints, the generators of the fermionic $\kappa$--symmetry (see \cite{A+L82,S83}) and its bosonic $b$--symmetry superpartner, whose Dirac brackets represent the $d=1$, $n=16$ (worldline) supersymmetry algebra. This set of constraints is described by the BRST charge \begin{eqnarray}\label{Qsusy-Int} \mathbb{Q}^{susy}= \lambda^+_q D_q^{-} + i c^{++} \partial_{++} - \lambda^+_q\lambda^+_q {\partial\over \partial c^{++}}\; , \qquad \{ D_p^{-} , D_q^{-} \} = 2i \delta_{qp} \partial_{++} \; , \qquad \end{eqnarray} including $16$ real bosonic ghosts $\lambda^+_q$ and \footnote{The sign superscripts in $\lambda^+_q$ and $D^-_q$ denote the spinorial Majorana-Weyl (MW) representations of $SO(1,1)$; a double sign superscript $--$, $++$ or subscript, like in $\partial_{++}$, would correspond to the $SO(1,1)$ vector. Since the MW spinorial representation of $SO(1,1)$ is one dimensional, the subscript $+$ is equivalent to the superscript $-$ and {\it vice versa}, so that $\{ D_p^{-} , D_q^{-} \} = 2i \delta_{qp} \partial_{++} $ in (\ref{Qsusy-Int}) is $SO(1,1)$ invariant.
This notation corresponds to the light--cone basis in two dimensional space (or in a two dimensional subspace of the $D$--dimensional spacetime) with the (flat space) metric of the form $g_{++\; --}={1\over 2}$, $g_{++\; ++}=0=g_{--\; --}$, so that, {\it e.g.}, $\partial_{++}= {1\over 2}\partial^{--}$, where the coefficients ${1\over 2}$ (which then appear in Eq. (\ref{UUT=eta})) are introduced to avoid the appearance of $\sqrt{2}$ coefficients in many equations. } one real fermionic ghost $c^{++}$. An analysis of the cohomology of this BRST operator shows that it is trivial if the norm $ \lambda^+_q\lambda^+_q$ of the bosonic ghost $ \lambda^+_q$ is nonvanishing. In other words, the nontrivial cohomology of $\mathbb{Q}^{susy}$ has support on $\lambda^+_q\lambda^+_q=0$. For a real spinor, $\lambda^+_q\lambda^+_q=0$ implies $\lambda^+_q=0$. This produces a technical problem which is sorted out by means of a regularization which consists in allowing $\lambda^+_q$ to be {\it complex}, $\lambda^+_q \mapsto \tilde{\lambda}^+_q\not= (\tilde{\lambda}^+_q)^*$, so that $\tilde{\lambda}^+_q\tilde{\lambda}^+_q=0$ allows for nonvanishing complex solutions. Furthermore, this implies the reduction of the cohomology problem for the regularized BRST operator $\mathbb{Q}^{susy}$ to the search for cohomology at vanishing bosonic ghost, $\tilde{\lambda}^+_q=0$, of the following complex BRST charge \begin{eqnarray}\label{tQsusy-Int} \tilde{\mathbb{Q}}^{susy}= \tilde{\lambda}^+_q \; D_q^{-} + i c^{++} \partial_{++} \; , \qquad \tilde{\lambda}^+_q\tilde{\lambda}^+_q=0 \; , \qquad \{ D_p^{-} , D_q^{-} \} = 2i \delta_{qp} \partial_{++} \; . \qquad \end{eqnarray} We discuss the relation of the above non-hermitean $\tilde{\mathbb{Q}}^{susy}$ operator with the (always complex) Berkovits BRST charge and find that this comparison shows the possible origin of the intrinsic complexity of the Berkovits formalism. The above results were briefly reported in \cite{IB07}; here we give details on their derivation. The possible results of stringy generalizations are discussed in the concluding sec. 6 of the present paper. Let us stress that of all the cohomologies of the Berkovits--like BRST charge $\tilde{\mathbb{Q}}^{susy}$ (\ref{tQsusy-Int}) only the ones calculated (and remaining nontrivial) at $\tilde{\lambda}^+_q=0$ describe the cohomology of the superparticle BRST operator ${\mathbb{Q}}^{susy}$. The full cohomology of $\tilde{\mathbb{Q}}^{susy}$ is clearly richer and is related to the spinorial cohomologies of \cite{SpinCohom02}. As far as the $\tilde{\mathbb{Q}}^{susy}$ cohomology at vanishing bosonic ghost, which describes the M0--brane spectrum, is concerned, it can be described by functions of the variables which are inert under the $\kappa$-- and $b$--symmetries. This relatively simple structure finds its explanation in the properties of the superparticle action in the so--called covariantized light-cone basis (see \cite{Sok,B90,GHT93}). The change of variables corresponding to this basis in the superparticle spinor moving frame action results in an automatic gauge fixing of the $\kappa$--symmetry and $b$--symmetry. Thus, in this basis, the set of superparticle first class constraints contains only the generators of the Borel subgroup $[SO(1,1)\times SO(9)]\subset\!\!\!\!\!\!\times K_9$ of the Lorentz group $SO(1,10)$. We present here the BRST charge describing this set of first class constraints.
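The interplay between the three terms of $\mathbb{Q}^{susy}$ and the null condition on $\tilde{\lambda}^+_q$ can be illustrated by a minimal numerical sketch (ours, purely illustrative). It assumes a drastic reduction: $n=2$ worldline supersymmetries instead of $16$, $\partial_{++}$ replaced by its eigenvalue $p$ on a momentum eigenstate, and the graded tensor product of the ghost and matter sectors implemented by dressing the matter fermions with the ghost parity:
\begin{verbatim}
import numpy as np

# Reduced model: {D_p, D_q} = 2i delta_{pq} p, realized as D_q = sqrt(i p) s_q
# with Pauli matrices {s_p, s_q} = 2 delta_{pq}; one fermionic ghost c^{++}.
p = 1.7
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
D = [np.sqrt(1j * p) * s for s in (sx, sy)]
c  = np.array([[0, 1], [0, 0]], dtype=complex)   # ghost c^{++}
dc = np.array([[0, 0], [1, 0]], dtype=complex)   # d/dc^{++}, {c, dc} = 1

def Q(lam, with_third_term=True):
    # Q = lam_q D_q + i c p - (lam.lam) d/dc ; matter fermions carry the
    # ghost parity sz so that they anticommute with c and dc.
    lam = np.asarray(lam, dtype=complex)
    out = 1j * p * np.kron(c, np.eye(2))
    if with_third_term:
        out -= lam.dot(lam) * np.kron(dc, np.eye(2))
    for lq, Dq in zip(lam, D):
        out += lq * np.kron(sz, Dq)
    return out

Qr = Q([0.3, -1.1])                   # real lambda with nonzero norm
print(np.allclose(Qr @ Qr, 0))        # True: full charge is nilpotent
Qt = Q([1.0, 1j], False)              # complex null lambda: lam.lam = 0
print(np.allclose(Qt @ Qt, 0))        # True: tilde-Q nilpotent on null lambda
Qb = Q([1.0, 2j], False)              # complex lambda, not null
print(np.allclose(Qb @ Qb, 0))        # False: nilpotency fails
\end{verbatim}
The last two lines illustrate why the complex charge (\ref{tQsusy-Int}), which lacks the third term of (\ref{Qsusy-Int}), requires $\tilde{\lambda}^+_q\tilde{\lambda}^+_q=0$.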
Then, following Dirac \cite{Dirac}, we impose this set of first class constraints as conditions on the wave function and discuss the quantization of physical degrees of freedom (a covariantized light cone basis prototype of the supertwistor quantization in \cite{BdAS2006}), which shows hints of a hidden $SO(16)$ symmetry and suggests some speculations on a possible $E_8$ symmetry of $D=11$ supergravity. \subsection{Structure of the paper} This paper is organized as follows. Sec. \ref{LHSsp} reviews the spinor moving frame (twistor-like Lorentz harmonics) formulation of the $D$=11 massless superparticle or M0-brane and shows its classical equivalence with the standard Brink--Schwarz formulation. In Sec. III we develop the Hamiltonian formalism for this formulation of the M0--brane, discuss its classical BRST charge and the reduced BRST operator ${\mathbb{Q}}^{susy}$ corresponding to a subset of the M0--brane first class constraints. In particular, the primary constraints are obtained in sec. \ref{Primary}. In sec. \ref{DBsec} the Dirac brackets that allow us to treat the harmonic variables as coordinates on the Lorentz group manifold are defined. These are related with the group-theoretical structure of Lorentz harmonics in sec. \ref{CartanF}, where the $SO(1,10)/[[SO(1,1)\otimes SO(9)]\subset\!\!\!\!\!\!\times K_9]$ Cartan forms are introduced. These are used in sec. \ref{PB-DB} to define the canonical Hamiltonian of the M0--brane model. The second class constraints are found, and the Dirac brackets allowing us to treat them in the strong sense are presented, in sec. \ref{secIIclass}. The Dirac bracket algebra of all the first class constraints is presented in sec. \ref{secIclass}. The BRST charge $\mathbb{Q}^{\prime}$ for the nonlinear (sub)superalgebra of the first class constraints is obtained in sec. \ref{Qprime}. Finally, the BRST charge ${\mathbb{Q}}^{susy}$ is obtained by reduction of $\mathbb{Q}^{\prime}$ in sec. \ref{secQsusy}. The cohomology of ${\mathbb{Q}}^{susy}$ is studied in Sec. \ref{QsusyCoH}. In particular, the complex charge (\ref{tQsusy-Int}) is introduced in sec. \ref{CohQ-2} and its relation with the Berkovits BRST charge is discussed in sec. \ref{CohQ-3}. To explain the relatively simple structure of the ${\mathbb{Q}}^{susy}$ cohomology, in Sec. \ref{SecAnalB} we study the superparticle spinor moving frame action in the covariantized light-cone basis (see \cite{Sok,B90,GHT93}). The automatic gauge fixing of the $\kappa$--symmetry and $b$--symmetry which occurs in the action when changing variables to this basis is discussed in sec. \ref{SecAnalB}.1.1. The BRST charge describing the set of first class constraints of the superparticle action in this basis is presented in sec. \ref{SecAnalB}.1.2. The quantization of the physical degrees of freedom of the superparticle using the covariantized light cone basis is discussed in sec. \ref{SecAnalB}.2. There we also discuss the hints of possible hidden symmetries of $D=11$ supergravity which appear in the course of such a covariant quantization. In Sec. \ref{Concl} we present our conclusions (sec. \ref{Concl}.1) and an outlook (secs. \ref{Concl}.2, \ref{Concl}.3), including a discussion of the possible results of the generalization of our study of the M0--brane to the case of the type IIB superstring (sec. \ref{Concl}.2). Some technical details on harmonics are presented in the Appendix. \section{The M0-brane in the spinor moving frame formulation. Twistor--like action and its gauge symmetries.
}\label{LHSsp} \setcounter{equation}0 \subsection{Towards the spinor moving frame action for the D=11 massless superparticle} The Brink-Schwarz massless superparticle action, $S_{BS} = \int_{W^1} {1\over 2e}{\Pi}_{\tau m} {\Pi}_\tau^{m}$, can be written in the following first order form \begin{eqnarray}\label{11DSSP-1st} S_{BS}^{1} & = & \int_{W^1} \left(P_{{m}} {\Pi}^{m} - {1\over 2}d\tau \; e\; P_{{m}} P^{{m}} \right)\; , \qquad \end{eqnarray} where $P_m(\tau)$ is an auxiliary momentum variable, $e(\tau)$ is the worldline einbein and $\Pi^m = d\tau \hat{\Pi}^{m}_\tau$ is the pull-back of the bosonic supervielbein of the tangent superspace to the superparticle worldline. In flat $D=11$ superspace this reads \begin{eqnarray} \label{11DPi} && \Pi^m := dx^m - id\theta \Gamma^m \theta = d\tau \hat{\Pi}^{m}_\tau \; , \qquad \hat{\Pi}^{m}_\tau:= \partial_\tau\hat{x}^m(\tau ) - i\partial_\tau \hat{\theta}(\tau) \Gamma^m \hat{\theta}(\tau)\; . \end{eqnarray} The action (\ref{11DSSP-1st}) is valid in any dimension; the $D$=11 massless superparticle action \cite{B+T=D-branes} corresponds to $m=0\, , 1,\ldots 9, \# $, a $32$ component Majorana spinor $\theta^\alpha $ and $32\times 32$ eleven--dimensional gamma matrices $\Gamma^m_{\alpha\beta}:= \Gamma^m{}_\alpha{}^\gamma C_{\gamma\beta}= \Gamma^m_{\beta\alpha}$. The einbein $e(\tau)$ plays the r\^ole of a Lagrange multiplier and produces the mass shell constraint \begin{eqnarray} \label{PmPm=0} P_mP^m=0 \; . \end{eqnarray} Since Eq. (\ref{PmPm=0}) is algebraic, it may be substituted into the action (\ref{11DSSP-1st}), which gives \begin{eqnarray}\label{11DSSPwhen} S^{\prime}_{M0} & = & \int_{W^1} \; P_{{m}} \hat{\Pi}^{m} \; , \qquad P_{{m}} P^{{m}}=0 \; . \qquad \end{eqnarray} Thus, if the general solution of (\ref{PmPm=0}) is known, one may substitute it for $P_m$ in (\ref{11DSSPwhen}) and obtain a classically equivalent formulation of the $D$- (here 11-) dimensional Brink-Schwarz superparticle. The moving frame or twistor-like Lorentz harmonics formulation of \cite{BL98',BdAS2006} (see \cite{B90} for $D$=4 and \cite{IB+AN96} for $D=10$) can be obtained in just this way. It is easy to solve the constraint (\ref{PmPm=0}) in a non-covariant manner: in a special Lorentz frame a solution with positive energy, $P^{\!\!\!^{0}}_{(a)}$, reads {\it e.g.} \begin{eqnarray} \label{PmPm=00} & P^{\!\!\!^{0}}_{(a)} = {\rho\over 2} \; (1,\ldots , -1) = {\rho\over 2} \; (\delta_{(a)}^0 -\delta_{(a)}^{\#}) \quad . \end{eqnarray} The solution in an arbitrary frame follows from (\ref{PmPm=00}) by making a Lorentz transformation, $P_{m}= U_m{}^{(a)}P^{\!\!\!^{0}}_{(a)}$ with $U_m{}^{(a)} \in SO(1,10)$, \begin{eqnarray} \label{PmPm=01} P_m := U_m{}^{(a)} P^{\!\!\!^{0}}_{(a)} = {\rho\over 2} \; (u_{m}^{\;\; 0} - u_{m}^{\; \#}) \; , \qquad U_m^{\, (a)}:= (u_{m}^{\;\; 0} , u_{m}^{\;\; i} , u_{m}^{\;\#}) \in SO(1,D-1) \; . \end{eqnarray} Note that, since $P_{m}=P_{m}(\tau)$ is a dynamical variable in the action (\ref{11DSSPwhen}), the same is true for the Lorentz group matrix $U$ when it is used to express $P_{m}$ through Eq. (\ref{PmPm=01}), $U_m{}^{(a)}=U_m{}^{(a)}(\tau)= (u_{m}^{\;\; 0}(\tau) , u_{m}^{\;\; i}(\tau) , u_{m}^{\;\#}(\tau))$. Such {\it moving frame variables} \cite{BZ-str} are called {\it Lorentz harmonics} \cite{B90,Ghsds} (see \cite{GIKOS}; also light--cone harmonics in \cite{Sok}).
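The light-likeness of $P_m$ in (\ref{PmPm=01}) is straightforward to confirm numerically. The following sketch (illustrative only; a random element of the identity component of $SO(1,10)$ is generated by exponentiating $\eta A$ with $A$ antisymmetric) also shows that the energy stays positive:
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

D = 11
eta = np.diag([1.0] + [-1.0] * (D - 1))      # Minkowski metric, mostly minus
rng = np.random.default_rng(1)
A = rng.normal(size=(D, D)); A = A - A.T
U = expm(eta @ A)                            # U^T eta U = eta: U in SO(1,10)
print(np.allclose(U.T @ eta @ U, eta))       # True

rho = 2.3
P = 0.5 * rho * (U[:, 0] - U[:, D - 1])      # P_m = (rho/2)(u_m^0 - u_m^#)
print(np.isclose(P @ eta @ P, 0.0))          # True: P_m P^m = 0
print(P[0] > 0)                              # True: positive energy
\end{verbatim}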
Substituting (\ref{PmPm=01}) for $P_{m}$ in (\ref{11DSSPwhen}) or, equivalently, in (\ref{11DSSP-1st}), one arrives at the following action \begin{eqnarray}\label{11DSSP(LH)} S = \int_{W^1} \; {1\over 2} \rho^{++} u^{--}_{{m}} \hat{\Pi}^{m} \; , \qquad u^{--}_{{m}} u^{--{m}}=0 \quad & (\; \Leftarrow \quad U:= \{ {u_m^{++} + u_m^{--}\over 2} , u_m^{\; i} , {u_m^{++} - u_m^{--}\over 2} \} \in SO(1,10)\;) \qquad \end{eqnarray} where the light--likeness of the vector $u^{--}_m=u^0_m - u^{\#}_m$ (see also (\ref{harmUdef}) below) follows from the orthogonality and normalization of the timelike $u_m^{0}$ and spacelike $u_m^{\# }$ vectors which, in their turn, follow from $U\in SO(1,10)$ in Eq. (\ref{PmPm=01}) (as noted in the brackets in (\ref{11DSSP(LH)})). At this stage it might seem unclear what the advantage of the action (\ref{11DSSP(LH)}) is with respect to (\ref{11DSSPwhen}) or (\ref{11DSSP-1st}). However, as we discuss below, the action (\ref{11DSSP(LH)}) {\it hides the twistor--like action}, a higher dimensional ($D$=11 here) generalization of the $D$=4 Ferber--Schirafuji action \cite{Ferber}. The twistor-like variables called {\it spinorial harmonics} appear as `square roots' of the vector harmonics (see below); they can be used to separate covariantly the first and the second class constraints and to provide the {\it irreducible} form of the $\kappa$--symmetry \cite{A+L82,S83} (infinitely reducible in the standard formulation of the massless superparticle \cite{S83}\footnote{Notice that in the case of the massless N=2 superparticle, which is presently identified with D0--branes, the covariant gauge fixing of the $\kappa$--symmetry is possible already in the standard formulation \cite{A+L82}.}). This also explains why the formulation based on the action (\ref{11DSSP(LH)}) is called the {\it spinor moving frame} formulation. \subsection{Twistor--like spinor moving frame action of the M0--brane and its gauge symmetries. } The spinor moving frame action for the $D=11$ massless superparticle can be written in the following equivalent forms \cite{BL98'} (see \cite{IB+AN96} for D=10 and \cite{B90} for D=4) \begin{eqnarray}\label{11DSSP} S:= \int d\tau L &=& \int_{W^1} {1\over 2}\rho^{++}\, u_{m}^{--} \, \Pi^m = \int_{W^1} {1\over 32}\rho^{++}\, v_{\alpha q}^{\; -} v_{\beta q}^{\; -} \, \Pi^m \tilde{\Gamma}_m^{\alpha\beta}\; , \qquad \\ \nonumber && {} \alpha= 1,2, \ldots , 32 \, \quad (n\; in \; general ) \; , \quad q=1, \ldots , 16 \, \quad (n/2\; in \; general ) \; , \qquad \\ \nonumber && {} \qquad m=0,\ldots , 9, \# \quad ( (D-1)\; in \; general) \; \end{eqnarray} where we use the symbol $\#$ to denote the tenth spatial direction ($X^{\#}:= X^{10}$) and the notation $\Gamma^m\equiv \Gamma^m{}_{\alpha\beta}:= \Gamma^m{}_{\alpha}{}^{\gamma}C_{\gamma\beta}\,,\, {\tilde \Gamma}^m\equiv {\tilde \Gamma}^m{\,}^{\alpha\beta}:=C^{\alpha\gamma} \Gamma^m{}_{\gamma}{}^{\beta}$ for the $D=11$ gamma--matrices contracted with $C_{\alpha\beta}$ and $C^{\alpha\beta}$. The first form of the action (\ref{11DSSP}) coincides with (\ref{11DSSP(LH)}); the second form is twistor--like, {\it i.e.} it resembles the Ferber--Schirafuji action \cite{Ferber} for the massless $D=4$ superparticle. Instead of the two--component Weyl spinor of the Ferber supertwistor, the action of Eq.
(\ref{11DSSP}) includes the set of $16$ bosonic $32$--component Majorana spinors $v_{\alpha}{}^-_{q}$ which satisfy the following kinematical constraints (see \cite{BZ-str,BZ-strH,BL98'}), \begin{eqnarray}\label{vv=uG} \left\{ \matrix{2 v_\alpha{}_{q}^{-} v_\beta{}_{q}^{-} &=& u_m^{--}{\Gamma}^m_{\alpha\beta}\; & \qquad (a)\; , \cr v_{q}^{-}\tilde{\Gamma}_m v_{p}^{-} &=& \delta_{qp} \; u_m^{--} \; & \qquad (b)\; , \cr v_\alpha{}_{q}^{-}C^{\alpha\beta}v_\beta{}_{p}^{-}&=& 0 \; \qquad & \qquad (c)\; , } \right. \; \quad u_m^{--}u^{m --}=0 \qquad (d)\;\; . \qquad \end{eqnarray} Although, in principle, one can study the dynamical system using just the kinematical constraints (\ref{vv=uG}) (see \cite{gs92,BCSV}), it is more convenient to treat the light--like vector $u_m^{--}$ as an element of the $SO(1,10)$-valued matrix describing the {\it vector moving frame} and the set of $16$ $SO(1,10)$- spinors $v_\alpha{}_{q}^{-}$ as part of the corresponding $Spin (1,10)$--valued matrix describing the {\it spinor moving frame}. These moving frame variables are also called ({\it vector} and {\it spinor}) {\it Lorentz harmonics} and will be discussed in Sec. 2.4 below. Let us conclude this section by noticing that the action (\ref{11DSSP}) possesses a set of gauge symmetries which includes \\ i) the {\it irreducible} $\kappa$--symmetry \begin{eqnarray}\label{kappa-irr} \delta_\kappa x^m = i \delta_\kappa \theta^\alpha \Gamma^m_{\alpha\beta}\theta^\beta\; , \qquad \delta_\kappa \theta^\alpha = \kappa^{+q} v_q^{-\alpha} \; , \qquad \delta_\kappa v_\alpha{}^-_q =0 = \delta_\kappa u_m^{--}\; ; \qquad \end{eqnarray} the possibility to reformulate the $\kappa$--symmetry in the irreducible form is due to the presence of the constrained bosonic spinor variables $v_\alpha{}_{q}^{-}$ (see \cite{BZ-str,IB+AN96} and the discussion below); \\ ii) its superpartner, the tangent space copy of the worldvolume reparametrization symmetry, which we, following the pioneering paper \cite{A+L82}, call $b$--symmetry, \begin{eqnarray}\label{b-sym} \delta_b x^m = b^{++} u^{--m} \; , \qquad \delta_b \theta^\alpha = 0 \; , \qquad \delta_b v_\alpha{}^-_q =0 = \delta_b u_m^{--}\; ; \qquad \end{eqnarray} iii) a scaling $GL(1,\mathbb{R})$ symmetry \begin{eqnarray}\label{SO(1,1)} \rho^{++} \mapsto e^{2\alpha} \rho^{++}\; , \qquad u_m^{--} \mapsto e^{-2\alpha} u_m^{--}\; , \qquad v_{\alpha q}{}^- \mapsto e^{- \alpha} v_{\alpha q}{}^- \; , \qquad \end{eqnarray} with the weight determined by the sign indices $^{++}$, $^{--}$ and $^{-}$. In the light of the Lorentz harmonic treatment of $v_{\alpha q}{}^{\!\! -}$ and $u_m^{--}$, which will be presented below, we prefer to identify this scaling symmetry as $SO(1,1)$ group transformations. \\ iv) The action (\ref{11DSSP}) is also invariant under the $Spin(9)$ symmetry acting on the $q=1, \ldots, 16$ index of the constrained bosonic spinor variable $v_{\alpha q}{}^-$, \begin{eqnarray}\label{SO(9)} v_{\alpha q}{}^{\!\!\! -} \mapsto v_{\alpha p}{}^{\!\!\! -} S_{pq}\; , \qquad S_{pq} \in Spin(9)\; \quad \Leftrightarrow \quad \cases{ S^TS=\mathbb{I}_{16\times 16} \; , \cr S\gamma^I S^T = \gamma^J U^{JI} \; , \quad U^TU= \mathbb{I}_{9\times 9} }\; . \qquad \end{eqnarray} Notice that the nine dimensional charge conjugation matrix is symmetric and can be identified with the Kronecker delta symbol, $ \delta_{qp}\; $, so that the contraction $v_{\alpha q}{}^{\!\!\! -}v_{\beta q}{}^{\!\!\! -}$, entering the action, is $Spin(9)$ invariant.
This $Spin(9)$ symmetry is used as an identification relation when the spinorial Lorentz harmonics are defined as homogeneous coordinates of the coset $SO(1,10)\over [SO(1,1)\otimes SO(9)]\subset\!\!\!\!\times K_9=\mathbb{S}^9$ (see below) given by a $Spin(1,10)$--valued matrix $V_\alpha{}^{(\beta)}=(v_{\alpha q}{}^{\!\! -} , v_{\alpha q}{}^{\!\!+})\in Spin(1,10)$, one of the two $32\times 16$ blocks of which is identified with our $v_{\alpha q}{}^-$. However, when the action (\ref{11DSSP}) with the variable $v_{\alpha q}{}^-$ subject only to the constraints (\ref{vv=uG}) is considered, one immediately finds that neither the constraints nor the action involve the $d=9$ gamma matrices; all the contractions are made with the $16\times 16$ Kronecker symbol $\delta_{qp}$, and only this matrix is used in the constraints. \subsection{On O(16) gauge symmetry} \label{O(16)} Thus we have observed that {\it the action} (\ref{11DSSP}), when considered as constructed from spinorial variables restricted by the constraints (\ref{vv=uG}), \begin{eqnarray}\label{11DSSP-16} S &=& \int_{W^1} {1\over 32}\rho^{++}\, \tilde{v}_{\alpha q}^{\; -} \tilde{v}_{\beta q}^{\; -} \, \Pi^m \tilde{\Gamma}_m^{\alpha\beta}\; , \qquad \cases{ 2 \tilde{v}_\alpha{}_{q}^{-} \tilde{v}_\beta{}_{q}^{-} = {1\over 16} \tilde{v}_{p^\prime}^{-}\tilde{\Gamma}_m \tilde{v}_{p^\prime}^{-} {\Gamma}^m_{\alpha\beta}\; , \quad (a)\; \cr \tilde{v}_{q}^{-}\tilde{\Gamma}_m \tilde{v}_{p}^{-} = \delta_{qp} \; {1\over 16} \tilde{v}_{p^\prime}^{-}\tilde{\Gamma}_m \tilde{v}_{p^\prime}^{-} \; , \quad (b) \cr \tilde{v}_\alpha{}_{q}^{-}C^{\alpha\beta}\tilde{v}_\beta{}_{q}^{-}=0\; , \qquad {} \qquad {}\quad (c) \; } \; \qquad \\ \nonumber && {} \qquad \alpha= 1,2, \ldots , 32 \, \qquad \; , \quad q=1, \ldots , 16 \, \quad \; , \end{eqnarray} actually possesses a local $SO(16)$ symmetry acting on the $q=1,\ldots , 16$ indices of the $\tilde{v}_{\alpha q}^{\; -}$ variables, \begin{eqnarray}\label{SO(16)} \tilde{v}_{\alpha q}{}^- \mapsto \tilde{v}_{\alpha p}{}^- O_{pq}\; , \qquad O_{pq} \in O(16)\quad \Leftrightarrow \quad O^TO=\mathbb{I}_{16\times 16} \; . \qquad \end{eqnarray} One can conclude that the relation between the spinorial harmonic ${v}_{\alpha q}{}^- $, which transforms under the $Spin(9)$ symmetry, and the above $\tilde{v}_{\alpha p}{}^- $, carrying the $SO(16)$ index $p$, is given by \begin{eqnarray}\label{tv-=v-L} \tilde{v}_{\alpha p}{}^- = {v}_{\alpha q}{}^- L_{qp}\; , \qquad L_{qp} \in O(16)\quad \Leftrightarrow \quad L^TL=\mathbb{I}_{16\times 16} \; , \qquad \end{eqnarray} where $L_{qp}$ is an arbitrary orthogonal $16\times 16$ matrix. Clearly, $\tilde{v}_{\alpha p}{}^-$ of Eq. (\ref{tv-=v-L}) solves the constraints (\ref{vv=uG}a-d) if these are solved by ${v}_{\alpha q}{}^-$. But if ${v}_{\alpha q}{}^-$ is the spinorial harmonic, that is to say, a $32\times 16$ block of the $Spin(1,10)$ valued matrix $V_\alpha{}^{(\beta)}=(v_{\alpha q}{}^- , v_{\alpha q}{}^+)\in Spin(1,10)$, then $\tilde{v}_{\alpha p}{}^-$ cannot be such a block if the $O(16)$ matrix $L_{pq}$ does not belong to the $Spin(9)$ subgroup of $SO(16)$. However, $\tilde{v}_{\alpha p}{}^- \tilde{v}_{\beta p}{}^- = {v}_{\alpha q}{}^- {v}_{\beta q}{}^-$, so that substituting (\ref{tv-=v-L}) for $\tilde{v}_{\alpha p}{}^-$ in (\ref{11DSSP-16}), one observes the cancellation of the contributions of the matrix $L_{qp}$. On one hand, this is tantamount to the statement of the $O(16)$ invariance of the action (\ref{11DSSP-16}), with the variables restricted only by the constraints presented explicitly.
On the other hand, this can be used to treat the variables ${v}_{\alpha q}{}^- $ in the action (\ref{11DSSP}) as spinorial harmonics (allowing only the $Spin(9)$ transformations (\ref{SO(9)}) on the $q$ index). In the next section we accept this latter point of view, as it is technically more convenient for the Hamiltonian analysis. The reason is that the constraints (\ref{vv=uG}) are reducible\footnote{This is seen already from the fact that their number, $2122$, exceeds the number $512$ of components of a $32\times 16$ matrix. The above number of constraints is composed as $2122$=$528-11+1496-11+120$, where the two $-11$'s come from the fact that the gamma--trace parts of constraints (a) and (b) coincide and that $u_m^{--}$ can be defined by means of one of these parts; the light--likeness of $u_m^{--}$, Eq. (\ref{vv=uG}d), follows from the fact that the rank of the matrix in the {\it l.h.s.} of the constraint (\ref{vv=uG}a) is $16$ or less and, thus, is not counted. } so that even calculating the number of degrees of freedom becomes a nontrivial problem. This can be solved by passing through the identification of ${v}_{\alpha q}{}^- $ with spinorial harmonics: although one introduces additional variables ${v}_{\alpha q}{}^+ $, one gains a clear group-theoretical and geometrical meaning which helps to deal with the reducible constraints. To conclude this section, let us note that the (seemingly fictitious) $SO(16)$ symmetry of the M0--brane, which we have observed studying different versions of its twistor-like formulation, reappears inevitably in the quantization of physical degrees of freedom which we will consider in Sec. \ref{SecAnalB} (see also \cite{BdAS2006}). \subsection{Vector and spinor Lorentz harmonics: moving frame and spinor moving frame} The {\it vector} Lorentz harmonic variables $u_m^{\pm\pm}$, $u_m{}^i$ \cite{Sok} are defined as elements of the $SO(1,10)$ Lorentz group matrix, Eq. (\ref{PmPm=01}). In the lightlike basis they are given by \begin{eqnarray} \label{harmUin} && U_m^{(a)}= (u_m^{--}, u_m^{++}, u_m^{i})\; \in \; SO(1,10) \; , \qquad \\ \nonumber && {m= 0,1,\dots,9,\# \; , } \qquad (a)=++,--, i \; , \qquad i=1,\dots,9 \; , \qquad \end{eqnarray} where $u^{\pm\pm}_m=u^0_m \pm u^{\#}_m$. The three-block splitting (\ref{harmUin}) is invariant under $SO(1,1)\otimes SO(9)$; $SO(1,1)$ rotates $u^0_m$ and $u^{\#}_m$ among themselves and, hence, transforms their sum and difference, $u^{\pm\pm}_m=u^0_m \pm u^{\#}_m$, by inverse scaling factors, see Eq. (\ref{SO(1,1)}). The fact that $U\in SO(1,10)$ implies the following set of constraints \begin{eqnarray} \label{harmUdef} U^T\eta U = \eta \quad \Leftrightarrow \cases{ u_m^{--}u^{m--}=0 \; , \quad u_m^{++}u^{m++}=0 \; , \quad u_m^{\pm\pm}u^{m\, i}=0 \; , \cr u_m^{--}u^{m++}=2 \; , \qquad u_m^{i}u^{m\, j}=- \delta^{ij} } \end{eqnarray} or, equivalently, the unity decomposition \begin{eqnarray}\label{UUT=eta} \delta_m^n= {1\over 2}u_m^{++}u^{n--} + {1\over 2}u_m^{--}u^{n++} - u_m^{i}u^{n i}\qquad \Leftrightarrow \qquad U\eta U^T=\eta\; . \end{eqnarray} The {\it spinor} harmonics \cite{Ghsds} or spinor moving frame variables \cite{BZ-str,BZ-strH,BZ-p} $v^{\;\;\,\pm}_{\alpha q}$ are elements of the $32\times32$ $Spin(1,10)$--valued matrix \begin{eqnarray} \label{harmVin} V_\alpha^{(\beta)}= (v_\alpha{}_q^{-}\; , v_\alpha{}_{q}^{+})\; \in \; Spin(1,10) \qquad (\alpha=1,\dots 32\; , \; q=1,\dots,16) \; .
\end{eqnarray} They are `square roots' of the associated vector harmonics in the sense that \begin{eqnarray} \label{harmVdef} V \Gamma^{(a)} V^T = \Gamma^m U_m ^{(a)} \qquad (a) \; , \qquad V^T \tilde{\Gamma}_m V = U_m^{(a)} \tilde{\Gamma}_{(a)} \qquad (b) \; , \end{eqnarray} which express the $Spin(1,10)$ invariance of the Dirac matrices. Equation (\ref{vv=uG}a) is just the $(a)=(--)\equiv (0)-(\# )$ component of Eq. (\ref{harmVdef}a) in the realization of the Dirac matrices in which $\Gamma^0$ and $\Gamma^{\# }$ are diagonal; the nine remaining $\Gamma^I$ are off-diagonal. Eq. (\ref{vv=uG}b) comes from the upper diagonal block of Eq. (\ref{harmVdef}b). To complete the set of constraints defining the spinorial harmonics, we have to add the conditions expressing the invariance of the charge conjugation matrix $C$, \begin{eqnarray} \label{harmVdefC} VCV^T=C \quad, \quad V^TC^{-1}V=C^{-1}\; , \end{eqnarray} which give rise to the constraint (\ref{vv=uG}c). In a theory with the local $SO(1,1)\otimes SO(9)$ symmetry (\ref{SO(1,1)}), (\ref{SO(9)}), containing only one of the two sets of $16$ constrained spinors (\ref{harmVin}), say $v_{\alpha p}^{\;-}\,$, these can be treated as homogeneous coordinates of the $SO(1,10)$ coset giving the celestial sphere $S^9$; specifically (see \cite{Ghsds}) \begin{eqnarray} \label{v-inS11} {} \{v_{\alpha q}^{\;-}\} = {Spin(1,10) \over [Spin (1,1)\otimes Spin(9)] \, \subset \!\!\!\!\!\!\times {\mathbb{K}_9} } = \mathbb{S}^{9} \quad , \end{eqnarray} where $\mathbb{K}_9$ is the abelian subgroup of $SO(1,10)$ defined by\footnote{The $\mathbb{K}_9$ symmetry (\ref{K9-def}) is tantamount to stating that the model contains only one, $v_{\alpha p}^{\;-}\,$, of the two sets of $16$ constrained spinors $(v_\alpha{}_q^{-}\; , v_\alpha{}_{q}^{+})$ in (\ref{harmVin}).} \begin{eqnarray} \label{K9-def} \delta v_{\alpha q}^{\; -}=0\; , \qquad \delta v_{\alpha q}^{\; +}= k^{++ i} \gamma^i{}_{qp}\,v_{\alpha p}^{\; -}\; , \qquad i=1,\ldots , 9 \; . \qquad \end{eqnarray} Our superparticle model contains just $v_{\alpha q}^{\; -}$ and is invariant under the $SO(1,1)\otimes Spin(9)$ transformations. Hence the harmonics sector of its configuration space parametrizes the sphere $S^9$. \subsubsection{On harmonics and explicit parametrization of $SO(1,D-1)/H$ cosets} The vector harmonic variables, when constrained only by Eqs. (\ref{harmUdef}), parametrize the eleven dimensional Lorentz group $SO(1,10)$, Eq. (\ref{harmUin}).
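The `square root' relation (\ref{harmVdef}) is easily visualized in a low-dimensional toy model. The sketch below (an illustration only, in $D=3$, where $Spin(1,2)=SL(2,\mathbb{R})$ and the symmetric real matrices $(\mathbb{I},\sigma_3,\sigma_1)$ play the role of $\Gamma^m$) reads off $U_m{}^{(a)}$ from $V\Gamma^{(a)}V^T=\Gamma^m U_m{}^{(a)}$ and checks that the result is a Lorentz matrix:
\begin{verbatim}
import numpy as np

g = [np.eye(2),
     np.array([[1., 0.], [0., -1.]]),        # sigma_3
     np.array([[0., 1.], [1., 0.]])]         # sigma_1
eta = np.diag([1., -1., -1.])

rng = np.random.default_rng(2)
V = rng.normal(size=(2, 2))
V /= np.sqrt(abs(np.linalg.det(V)))
if np.linalg.det(V) < 0:
    V[:, 0] *= -1                            # enforce det V = +1: V in SL(2,R)

# V g^(a) V^T is symmetric, hence a combination g^m U_m^(a);
# tr(g_m g_n) = 2 delta_mn in this basis, so U_m^(a) = tr(g_m V g^(a) V^T)/2.
U = np.zeros((3, 3))
for a in range(3):
    M = V @ g[a] @ V.T
    for m in range(3):
        U[m, a] = np.trace(g[m] @ M) / 2
print(np.allclose(U.T @ eta @ U, eta))       # True: U in SO(1,2)
\end{verbatim}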
This, in principle, can be solved by expressing the harmonics in terms of $55$ parameters $l^{(a)(b)}=- l^{(b)(a)}$, $U_m{}^{(a)}=U_m{}^{(a)}(l^{(b)(c)})$, \begin{eqnarray} \label{harmU=} U_m^{(a)}&=& \left(u_m^{--}, u_m^{++}, u_m^{i}\right)= \delta_m^{(a)} + \eta_{m(b)} l^{(b)(a)} + {\cal O}(l^2)\; , \qquad \nonumber {{}\over {}} \\ && u_m^{\pm\pm}= \delta_m^{\pm\pm} - \eta_{m(b)} l^{\pm\pm \, (b)} + {\cal O}(l^2)\; , \quad u_m^{i}= \delta_m^{i} + \eta_{m(b)} l^{ (b) i} + {\cal O}(l^2)\; , \qquad \\ \label{l-param} && \delta_m^{\pm\pm}:= \delta_m^0 \pm \delta_m^{\#}\; , \qquad l^{(a)(b)}=- l^{(b)(a)}= \left(\matrix{ 0 & - 4 l^{(0)} & l^{++j} \cr \, 4 \, l^{(0)} & 0 & l^{--j} \cr - l^{++i} & - \; l^{--i} & l^{ij}\;}\right) \; , \qquad \end{eqnarray} where we used the `light-like' splitting $(a)=++,--, i$, $i=1,\ldots , 9$, so that \begin{eqnarray} \label{g(a)(b)} \eta_{(a)(b)}:= \left(\matrix{ 0 & {1\over 2} & 0 \cr {1\over 2} & 0 & 0 \cr 0 & 0 & - \delta_{ij}\; }\right) \quad \; , \qquad \eta^{(a)(b)}:= \left(\matrix{ 0 & 2 & 0 \cr {2} & 0 & 0 \cr 0 & 0 & - \delta_{ij}\; }\right) \quad \; . \qquad \end{eqnarray} The same can be said about the spinorial harmonics. Eqs. (\ref{harmVdef}), (\ref{harmVdefC}) imply that the spinorial harmonics parametrize the $Spin(1,10)$-valued matrix providing the double covering of the $SO(1,10)$ group element (\ref{harmUin}) and, hence, that they can be expressed (up to a sign) through the same $l^{(a)(b)}=- l^{(b)(a)}$ parameters, $V_\alpha^{(\beta)}= \pm V_\alpha^{(\beta)}(l)$, \begin{eqnarray} \label{harmV=} V_\alpha^{(\beta)}(l)= (v_\alpha{}_q^{-}(l)\; , v_\alpha{}_{q}^{+}(l))\; = \left(\delta_\alpha^{(\beta)} + {1\over 4} l^{(a)(b)}\Gamma_{(a)(b)}{} _\alpha^{(\beta)} + {\cal O}(l^2) \right)\; . \qquad \end{eqnarray} The identification of the harmonics with the coordinates of $SO(1,10)/H$ corresponds to setting to zero the $H$ coordinates in the explicit expressions (\ref{harmU=}), (\ref{harmV=}). In our case, with $H=[SO(1,1)\otimes SO(9)]\subset\!\!\!\!\!\!\times\, \mathbb{K}_9$, this implies $l^{(0)}=l^{ij}=l^{++j}=0$, so that the $SO(1,10)$ and $Spin(1,10)$ matrices are constructed with the use of the $9$ parameters $l^{--j}$, $U_m{}^{(a)}=U_m{}^{(a)}(l^{--j})$, $V_\alpha^{(\beta)}= V_\alpha^{(\beta)}(l^{--j})$. These expressions are not so complicated and read \begin{eqnarray} \label{U=l--} u_a^{--}= \delta_a^{--} + \delta_a{}^i l^{--i} + {1\over 4}\delta_a^{++} (l^{--j}l^{--j})\; , \qquad u_a^{++}= \delta_a^{++}\; , \qquad u_a{}^{i}= \delta_a{}^{i} + {1\over 2}\delta_a^{++} {l}^{--i} \; \qquad \end{eqnarray} for the vector harmonics (the coefficient ${1\over 4}$ of the quadratic term is fixed by the light--likeness of $u_a^{--}$, given the above normalization of the linear terms; see the check below). The expressions for the spinor harmonics are even simpler, \begin{eqnarray} \label{V=l--} v_{\alpha}{}^-_q = \delta_{\alpha}^{-q} + {1\over 2}\, l^{--i}\gamma^i_{qp} \delta_{\alpha}^{+p} \; , \qquad v_{\alpha}{}^+_q = \delta_{\alpha}^{+q} \; . \qquad \end{eqnarray} The disadvantage of the above Eqs. (\ref{U=l--}), (\ref{V=l--}) with respect to the general Eqs. (\ref{harmU=}), (\ref{harmV=}) is that they are not Lorentz covariant; this follows from the fact that they are gauge-fixed versions of (\ref{harmU=}), (\ref{harmV=}) obtained with the use of the $[SO(1,1)\otimes SO(9)]\subset\!\!\!\!\!\!\times K_9$ symmetry. Although the use of the explicit expressions (\ref{harmU=}), (\ref{harmV=}) is not practical (so that we refrain from presenting them beyond the linear approximation; the explicit expressions can be found in \cite{GomisWest06}), it is convenient to keep in mind the mere fact of their existence.
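The gauge-fixed expressions (\ref{U=l--}) can be verified symbolically. The following short sympy sketch (ours, purely illustrative) checks all the constraints (\ref{harmUdef}) identically in $l^{--i}$, working with the components along $\delta_a^{++}$, $\delta_a^{--}$, $\delta_a^{\,i}$ and the light-cone metric of Eq. (\ref{g(a)(b)}):
\begin{verbatim}
import sympy as sp

n = 9
l = sp.symbols('l0:9', real=True)
ll = sum(x**2 for x in l)

# components (along delta^{++}, delta^{--}, delta^{i}) of each frame vector:
u_mm = (sp.Rational(1, 4) * ll, sp.Integer(1), list(l))            # u^{--}
u_pp = (sp.Integer(1), sp.Integer(0), [sp.Integer(0)] * n)         # u^{++}
u_i = [(l[i] / 2, sp.Integer(0),
        [sp.Integer(int(j == i)) for j in range(n)]) for i in range(n)]

def dot(u, v):  # eta^{++ --} = 2, eta^{ij} = -delta^{ij}, cf. Eq. (g(a)(b))
    return sp.expand(2*u[0]*v[1] + 2*u[1]*v[0]
                     - sum(a*b for a, b in zip(u[2], v[2])))

assert dot(u_mm, u_mm) == 0 and dot(u_pp, u_pp) == 0   # both light-like
assert dot(u_mm, u_pp) == 2                            # normalization
assert all(dot(u_mm, u_i[i]) == 0 for i in range(n))
assert all(dot(u_pp, u_i[i]) == 0 for i in range(n))
assert all(dot(u_i[i], u_i[j]) == -int(i == j)
           for i in range(n) for j in range(n))
print("all constraints (harmUdef) hold identically in l")
\end{verbatim}
Replacing the coefficient $1/4$ of the quadratic term by any other value makes the first assertion fail, which is how the normalization in (\ref{U=l--}) is fixed.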
To make calculations we rather use the `admissible variation' technique \cite{BZ-strH,BZ-p} and/or, when the Hamiltonian mechanics is considered, the Dirac brackets for the constraints (\ref{harmUin}), (\ref{harmVin}) on the harmonic variables and their conjugate momenta. These Dirac brackets for $U$, $V$ and their momenta, which can be identified as Poisson brackets for $l^{(a)(b)}$ in (\ref{harmU=}), (\ref{harmV=}) and its conjugate momentum, are discussed in the next section. \section{ Hamiltonian mechanics of the D=11 superparticle in the spinor moving frame formulation and the BRST charge $\mathbb{Q}^{susy}$} \setcounter{equation}0 In \cite{BdAS2006} we presented the supertwistor quantization of M0--branes. Albeit heuristic, it has the advantage of being simple, formulated in terms of physical variables (like the light-cone gauge quantization in \cite{Green+99}), and of being covariant (in contrast with \cite{Green+99}). Here we perform the complete Hamiltonian analysis of the dynamical system and consider its BRST quantization. \subsection{Primary constraints of the D=11 superparticle model (M0--brane)}\label{Primary} The primary constraints of the M0-brane in the spinor moving frame formulation (\ref{11DSSP}) include the defining relations of the harmonic variables, Eqs. (\ref{vv=uG}), plus the other relations in (\ref{harmVdef}), as well as \begin{eqnarray}\label{P-rvv} \Phi_a &:= & P_a - {1\over 2} \rho^{++} u_a^{--}\approx 0 \qquad \Leftrightarrow \qquad \Phi\!\!\!/_{\alpha\beta}:= \Phi_a \Gamma^a_{\alpha\beta}= P\!\!\!/_{\alpha\beta} - \rho^{++} v_\alpha{}_{q}^{-} v_\beta{}_{q}^{-} \approx 0 \qquad \; , \\ \label{df=} d_\alpha &:= & \pi_\alpha + i P\!\!\!/_{\alpha\beta}\theta^{\beta}\approx 0 \; , \qquad \pi_\alpha:= {\partial L \over \partial \dot{\theta}^{\alpha} } \; , \quad P_m:= {\partial L \over \partial \dot{x}^{m} } \\ \label{Pr=0} P_{++}^{(\rho )} &:= & {\partial L \over \partial \dot{\rho}^{++} } \approx 0 \; , \qquad \end{eqnarray} and \begin{eqnarray} \label{Pharm=0} P^{[u]}{}_{(a)}{}^{m}&:= & {\partial L \over \partial \dot{u}_m^{(a)} } \approx 0 \; \qquad or \qquad P^{[v]}{}_{(\alpha)}{}^{\beta}:= {\partial L \over \partial \dot{V}{}_{\beta}^{(\alpha)} } \approx 0 \; . \end{eqnarray} The definition of the momenta \begin{eqnarray} \label{Pdef} P_{_{{\cal N}}}= {\partial L \over \partial \dot{{\cal Z}}^{{\cal N}}} := \left(P_a \, , \, \pi_\alpha \, , \, P^{(\rho)}_{++}\, , \, P^{[u]}{}_{(a)}{}^{m} \; or \; P^{[v]}{}_{(\alpha)}{}^{\beta}\right) \end{eqnarray} for the configuration space coordinates \begin{eqnarray} \label{cZdef} {\cal Z}^{{\cal N}} := \left( x^a \, , \, \theta^\alpha \, , \, \rho^{++}\, , \, u_m^{(a)} \; or \; {V}{}_{\beta}^{(\alpha)} \right) \end{eqnarray} determines the form of the (equal--proper--time) Poisson brackets $ [ \ldots \; , \; \ldots \}_{_{PB}}$ ($:= ( [ \ldots \; , \; \ldots ]_{_{PB}}\,$ , \, $\{ \ldots \; , \; \ldots \}_{_{PB}})$) \begin{eqnarray} \label{PBdef} {}[ {\cal Z}^{{\cal N}} \; , \; P_{_{{\cal N}^\prime}} \}_{_{PB}} := (-)^{{{\cal N}}} \delta_{{\cal N}^\prime}{}^{{\cal N}} \; , \qquad [ \ldots \; , \; \ldots \}_{_{PB}} := {\partial ... \over \partial {\cal Z}^{{\cal N}} } (-)^{{{\cal N}}} {\partial ... \over \partial P_{_{{\cal N}}}} - {\partial ... \over \partial P_{_{{\cal N}}}} {\partial ... \over \partial {\cal Z}^{{\cal N}} }\; . \end{eqnarray} The canonical Hamiltonian $H_0$ is defined by \begin{eqnarray} \label{H0:=} d\tau H_0 := d{\cal Z}^{{\cal N}}\; P_{_{{\cal N}}} - d\tau \, L \; .
\end{eqnarray} Since the canonical Hamiltonian of the massless superparticle is zero in the weak sense ({\it i.e.}, when the constraints are taken into account \cite{Dirac}), its Hamiltonian analysis reduces to the analysis of the constraints. Following Dirac \cite{Dirac}, we shall split the whole set of constraints into first and second class ones and will deal with the second class constraints by using Dirac brackets. To make the analysis more transparent, it is convenient to deal first with the second class constraints imposed on the harmonic variables. \subsection{Dirac brackets in Hamiltonian mechanics on the $SO(1,D-1)$ group manifold} \label{DBsec} Eqs. (\ref{harmU=}), (\ref{harmV=}) make manifest that the vector and the spinor Lorentz harmonics can be expressed through the same parameters $l^{(a)(b)}$. Hence one can, in principle, use the local coordinates $l^{(a)(b)}=-l^{(b)(a)}$ in the configuration space (${\cal Z}^{{\cal N}} = ( x^a \, , \, \theta^\alpha \, , \, \rho^{++}\, , l^{(a)(b)})$ in our case of the massless superparticle) and develop the Hamiltonian mechanics using these variables and their conjugate momenta. This way is, however, technically involved. Much more practical is to work with the whole set of harmonic variables $U$ and/or $V$ and to take Eqs. (\ref{harmUin}), (\ref{harmVin}) into account by passing to the associated Dirac brackets. (This may be treated as an implicit use of Eqs. (\ref{harmU=}), (\ref{harmV=}) which, in terms of \cite{Dirac}, would correspond to an explicit solution of the corresponding second class constraints.) It is more convenient to work in terms of the vector harmonics; the corresponding Dirac brackets (as they actually coincide with the Poisson brackets for $l$) can then be applied to the spinor harmonics as well. When the harmonics enter as auxiliary variables, the primary constraints include the statement that all the momenta conjugate to the vector harmonics vanish, $P_{(a)}{}^m =0$ (Eq. (\ref{Pharm=0})). This set of constraints can easily be split into a set of 55 constraints $\mathbf{d}_{(a)(b)}:= P_{(a)}{}^m U_{m(b)} - P_{(b)}{}^m U_{m(a)}$ and the $66$ constraints $\mathbf{K}_{(a)(b)}:= P_{(a)}{}^m U_{m(b)} + P_{(b)}{}^m U_{m(a)}$. The latter are manifestly second class ones, since they are conjugate to the (also second class) $66$ kinematical constraints (\ref{harmUdef}), \begin{eqnarray}\label{IIccH} \mathbf{\Xi}^{(a)(b)} := U_m^{(a)}U^{m(b)} - \eta^{(a)(b)}\approx 0\; , \qquad \mathbf{K}_{(a)(b)}:= P_{(a)}{}^m U_{m(b)} + P_{(b)}{}^m U_{m(a)} \approx 0 \; , \qquad \\ \label{PBIIccH} {}[\; \mathbf{\Xi}^{(a)(b)} \; , \; \mathbf{K}_{(a^\prime)(b^\prime)}\; ]_{_{PB}} = 4 \delta^{((a) }{}_{(a^\prime)} \delta^{(b))} {}_{(b^\prime)} +4 \delta^{((a) }{}_{((a^\prime)} \mathbf{\Xi}^{(b))} {}_{(b^\prime))} \approx 4 \delta^{((a) }{}_{(a^\prime)} \delta^{(b))} {}_{(b^\prime)} \; , \qquad \end{eqnarray} while the 55 constraints $\mathbf{d}_{(a)(b)}:= P_{(a)}{}^m U_{m(b)} - P_{(b)}{}^m U_{m(a)}$ commute with the kinematical constraints $\Xi^{(a)(b)}$, \begin{eqnarray}\label{[d,Xi]=0} \mathbf{d}_{(a)(b)}:= P_{(a)}{}^m U_{m(b)} - P_{(b)}{}^m U_{m(a)}\approx 0\; , \qquad {}[\; \mathbf{\Xi}^{(a)(b)} \; , \; \mathbf{d}_{(a^\prime)(b^\prime)}\; ]_{_{PB}} = 0\; .
\qquad \end{eqnarray} The brackets of these constraints represent the Lorentz group algebra, while their brackets with $\mathbf{K}_{(a)(b)}$ show that the latter transform as a symmetric second rank tensor under the Lorentz group,\footnote{Furthermore, one can see that the Poisson brackets of two $\mathbf{K}$'s close on $\mathbf{d}_{(a)(b)}$, so that the complete set of brackets of the $\mathbf{K}$ and $\mathbf{d}_{(a)(b)}$ constraints represents $gl(D,\mathbb{R})$; the $\mathbf{K}_{(a)(b)}$ constraints correspond to the ${GL(D,\mathbb{R})\over SO(1,D-1)}$ coset generators. } \begin{eqnarray}\label{dd,kk} {}[\mathbf{d}_{(a)(b)} \; , \; \mathbf{d}^{(c)(d)} \; ]_{_{PB}} = - 4\delta_{[(a) }{}^{[(c)} \mathbf{d}_{(b)]}{}^{(d)]}\; , \qquad {}[\mathbf{d}_{(a)(b)} \; , \; \mathbf{K}^{(c)(d)} \; ]_{_{PB}}= - 4\delta_{[(a) }{}^{((c)} \mathbf{K}_{(b)]}{}^{(d))} \; . \qquad \end{eqnarray} Hence in the Lorentz harmonics sector of the phase space one can define the Dirac brackets \begin{eqnarray}\label{DB-harm} {}[\; \ldots \; , \; \ldots \; ]_{_{DB_{harm}}}=[\; \ldots \; , \; \ldots \; ]_{_{PB}} & - {1\over 4} [\; \ldots \; , \; \mathbf{K}_{(a)(b)} \; ]_{_{PB}} [\; \mathbf{\Xi}^{(a)(b)}\; , \; \ldots \; ]_{_{PB}} \qquad \nonumber \\ & + {1\over 4} [\; \ldots \; , \; \mathbf{\Xi}^{(a)(b)} \; ]_{_{PB}} [\; \mathbf{K}_{(a)(b)} \; , \; \ldots \; ]_{_{PB}} \qquad \end{eqnarray} allowing us to use (\ref{harmUdef}) and, moreover, all the $132$ constraints (\ref{IIccH}) in the strong sense, \begin{eqnarray}\label{IIccH=0} \mathbf{\Xi}^{(a)(b)} := U_m^{(a)}U^{m(b)} - \eta^{(a)(b)}= 0\; , \qquad \mathbf{K}_{(a)(b)}:= P_{(a)}{}^m U_{m(b)} + P_{(b)}{}^m U_{m(a)} = 0 \; . \qquad \end{eqnarray} Using (\ref{IIccH=0}) one sees that in the phase space sector that involves the harmonics $U_{m(a)}$ and the `covariant momenta' $\mathbf{d}_{(a)(b)}:= P_{(a)}{}^m U_{m(b)} - P_{(b)}{}^m U_{m(a)}$, but not the canonical momenta $P_{(b)}{}^m$ themselves, the above defined Dirac brackets coincide with the Poisson brackets; in particular (see (\ref{dd,kk})) \begin{eqnarray}\label{dab-DB=PB} {}[\; \mathbf{d}_{(a)(b)} \; , \; \ldots \; ]_{_{DB_{harm}}}=[\; \mathbf{d}_{(a)(b)} \; , \; \ldots \; ]_{_{PB}}\; . \qquad \end{eqnarray} This reflects the fact that the $\mathbf{d}_{(a)(b)}$ provide a representation of the Lorentz group generators, {\it i.e.} they generate parallel transport (`translations') along the Lorentz group manifold: $[\mathbf{d}_{(a)(b)} \, , f(U) ]_{_{PB}} = ({\partial\over \partial l^{(a)(b)}} + \ldots ) f (U(l))$ in terms of the explicit parametrization in (\ref{harmU=}) (and (\ref{harmV=}) for the spinorial harmonics, $[\; \mathbf{d}_{(a)(b)} \; , \; f(V) \; ]_{_{PB}} = (\partial/\partial l^{(a)(b)} + \ldots ) \; f (V(l))\;$). The above described Dirac brackets give a convenient way to represent the Poisson brackets on the Lorentz $SO(1,D-1)$ group manifold (which can also be formulated in terms of $l^{(a)(b)} = - l^{(b)(a)}$ and its conjugate momentum). This gives a reason for not distinguishing notationally these Dirac brackets ${}[ ... , ...]_{_{DB_{harm}}}$ from the original Poisson brackets (\ref{PBdef}), denoting them also by ${}[ ... , ...]_{_{PB}}$ or ${}\{ ... , ...\}_{_{PB}}$ in the case of two fermionic arguments, and for reserving the notation ${}[ ... , ...]_{_{DB}}$, ${}\{ ... , ...\}_{_{DB}}$ for the Dirac brackets allowing us to resolve {\it all} the second class constraints of the M0-brane model.
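The mechanism of Eqs. (\ref{DB-harm})--(\ref{dab-DB=PB}) is conveniently illustrated in a two-dimensional toy phase space (our sketch, purely illustrative): a single pair of second class constraints $\mathbf{\Xi}=U\cdot U-1$, $\mathbf{K}=U\cdot P$, with the rotation generator $\mathbf{d}=U_1P_2-U_2P_1$ playing the role of $\mathbf{d}_{(a)(b)}$:
\begin{verbatim}
import sympy as sp

U1, U2, P1, P2 = sp.symbols('U1 U2 P1 P2')
q, p = [U1, U2], [P1, P2]

def PB(f, g):
    return sum(sp.diff(f, q[i])*sp.diff(g, p[i])
               - sp.diff(f, p[i])*sp.diff(g, q[i]) for i in range(2))

Xi = U1**2 + U2**2 - 1         # kinematical constraint (cf. harmUdef)
K = U1*P1 + U2*P2              # its second class partner (cf. IIccH)
d = U1*P2 - U2*P1              # 'covariant momentum' (cf. d_(a)(b))

c = PB(Xi, K)                  # = 2 U.U, weakly equal to 2
def DB(f, g):                  # cf. Eq. (DB-harm), with 1/c in place of 1/4
    return sp.simplify(PB(f, g) - PB(f, K)*PB(Xi, g)/c
                                + PB(f, Xi)*PB(K, g)/c)

print(DB(Xi, P1), DB(K, U1))   # 0 0 : second class constraints 'commute' now
print(sp.simplify(DB(d, U1) - PB(d, U1)))   # 0 : cf. Eq. (dab-DB=PB)
\end{verbatim}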
\bigskip \subsection{Cartan forms and Hamiltonian mechanics on the Lorentz group manifold} \label{CartanF} The above Dirac brackets can also be applied \cite{BZ-strH} to calculations with the spinorial Lorentz harmonics. This is particularly important because the simple constraints on these variables, Eqs. (\ref{harmVin}), are reducible, and the irreducible constraints are not so easy to extract and to deal with. However, a relatively simple method to obtain the definite expressions for the above Dirac brackets and, more generally, to deal with the derivatives and variations of the harmonic variables can be formulated using just the group--theoretical meaning of the harmonic variables (see \cite{BZ-strH} and also the Appendix for more details on this {\it admissible variation technique}). Using the kinematical constraints (\ref{harmUdef}) (the first of Eqs. (\ref{IIccH})) and (\ref{harmVdef}), one can express the derivatives of both the vector and the spinor harmonics through the $55$ Cartan forms, \begin{eqnarray} \label{Omab} \Omega^{(a)(b)}:= U^{m(a)}dU_m^{(b)} = - \Omega^{(b)(a)} = \left(\matrix{ 0 & - 4 \Omega^{(0)} & \Omega^{++j} \cr \, 4 \, \Omega^{(0)} & 0 & \Omega^{--j} \cr - \Omega^{++i} & - \; \Omega^{--i} & \Omega^{ij}\; }\right) \quad \in\quad so(1,10)\; . \end{eqnarray} Indeed, the equation \begin{eqnarray} \label{dU=UOm} dU_m^{(a)}= U_{m(b)}\Omega^{(b)(a)} \end{eqnarray} is just equivalent to the definition of the Cartan forms, Eq. (\ref{Omab}), when (\ref{harmUdef}) (or the equivalent (\ref{UUT=eta})) is taken into account. As, according to (\ref{harmVdef}), the spinorial harmonic matrix $V$ provides the spinorial representation of the $Spin(1,D-1)$ element $g$ which corresponds to the Lorentz rotation $U$, its derivative can be expressed through the same Cartan form $g^{-1}dg= {1\over 2} \Omega^{(a)(b)}\mathbb{T}_{(a)(b)}$, but with $\mathbb{T}_{(a)(b)}= {1\over 2}\Gamma_{(a)(b)}$ instead of $\mathbb{T}_{(a)(b)}{}_{(c)}{}^{(d)}= 2\eta_{(c)[(a)} \delta_{(b)]}{}^{(d)}$ giving rise to Eq. (\ref{Omab}), \begin{eqnarray} \label{VdV=UdUG} V^{-1}dV = {1\over 4} \; \Omega^{(a)(b)}\; \Gamma_{(a)(b)} \quad \in \; spin(1,10) \; , \qquad \Omega^{(a)(b)}:= U^{m(a)}dU_m^{(b)} \quad . \qquad \end{eqnarray} Eq. (\ref{VdV=UdUG}) can be equivalently written in the form $dV = {1\over 4} \; \Omega^{(a)(b)}\; V\Gamma_{(a)(b)}$. This equation implies, in particular, the following expression for the differential $dv_\alpha{}_q^{-}$ of the harmonics $v_\alpha{}_q^{-}$ entering the action (\ref{11DSSP}): \begin{eqnarray} \label{dv-q} & dv_q^{-}= - \Omega^{(0)} v_q^{-} - {1\over 4} \Omega^{ij} v_p^{-}\gamma_{pq}^{ij} + {1\over 2} \Omega^{--i} \gamma_{qp}^{i}v_p^{+} \quad . \end{eqnarray} The particular ($(a)=--$) case of Eq. (\ref{dU=UOm}) gives \begin{eqnarray} \label{du--} & du^{--}_m = - 2u^{--}_m \Omega^{(0)} + u^{i}_m \Omega^{--i} \; \qquad \end{eqnarray} for the derivative of the only vector harmonic that appears explicitly in the action (\ref{11DSSP}). Notice that (\ref{dv-q}) and (\ref{du--}) do not contain the Cartan form $\Omega^{++ i}$ corresponding to the abelian $\mathbb{K}_9$ subgroup (see Eq. (\ref{K9-def})) of the $SO(1,10)$ parametrized by the harmonics. This actually reflects the $\mathbb{K}_9$ gauge invariance of the action (\ref{11DSSP}), which, together with its manifest $SO(1,1)$ and $SO(9)$ invariance, allows us to identify the relevant harmonics $u_m^{--}$ and $v_{\alpha q}^{\;\;\, -}$ with the homogeneous coordinates of $\mathbb{S}^{9}$, Eq. (\ref{v-inS11}).
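Both the definition (\ref{Omab}) and the relation (\ref{dU=UOm}) are straightforward to test numerically along a one-parameter curve $U(t)=\exp(tM)$ in $SO(1,10)$ (an illustrative sketch; $\Omega$ is evaluated at $t=1$):
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

D = 11
eta = np.diag([1.0] + [-1.0] * (D - 1))
rng = np.random.default_rng(3)
A = rng.normal(size=(D, D)); A = A - A.T
M = eta @ A                              # element of so(1,10)

U = expm(M)                              # U(t) at t = 1
Udot = U @ M                             # dU/dt at t = 1

Om = U.T @ eta @ Udot                    # Omega^{(a)(b)} = U^{m(a)} dU_m^{(b)}
print(np.allclose(Om, -Om.T))            # True: Omega takes values in so(1,10)
print(np.allclose(Udot, U @ eta @ Om))   # True: dU_m^{(a)} = U_{m(b)} Omega^{(b)(a)}
\end{verbatim}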
\bigskip When the Hamiltonian formalism for a dynamical system involving harmonic variables is considered, one can use, as above, the standard way to define the Hamiltonian, $H_0= \partial_\tau u P^{[u]} + ... - L $ or $H_0= \partial_\tau V P^{[v]} + ... - L $, Eq. (\ref{H0:=}), and introduce the Dirac brackets (\ref{DB-harm}). Alternatively, one can use Eqs. (\ref{dU=UOm}), (\ref{VdV=UdUG}) in the above expressions for $H_0$, or better for $d\tau H_0$, and in this way arrive at a Hamiltonian of the form \begin{eqnarray} \label{H0:=Om(gen)--} d\tau H_0= - {1\over 2} \Omega^{(a)(b)} \mathbf{d}_{(a)(b)}+ \ldots - d\tau L\; , \qquad \mathbf{d}_{(a)(b)}:= u_{(b)}{}^m P_{(a)m}- u_{(a)}{}^m P_{(b)m} \; \end{eqnarray} containing the Cartan form (\ref{Omab}) and the `covariant momentum' $\mathbf{d}_{(a)(b)}$ (see (\ref{[d,Xi]=0})) instead of $dU$ or $dV$ and its conjugate momentum. Such a Hamiltonian can be thought of as the one with the kinematical constraints solved in terms of the independent parameters $l$ ($U=U(l)$, $V=V(l)$, see Eqs. (\ref{harmU=}), (\ref{harmV=})), but, as we see, one does not need to use the explicit form of such a solution. In particular, to find the Poisson bracket of the `covariant momentum' $\mathbf{d}_{(a)(b)}$ with the harmonics one can just use the general form of the Hamiltonian equations, $\dot{U}:= [\; U \; , H_0 \; ]_{_{PB}}$ or $\dot{V}:= [\; V \; , H_0 \; ]_{_{PB}}$, and the explicit expression for the Cartan form, (\ref{Omab}), and (\ref{VdV=UdUG}) for the case of the spinor harmonics. Indeed, for the vector harmonics $d{U}_m{}^{(a)}:= d\tau [\; U_m{}^{(a)} \; , H_0 \; ]_{_{PB}}= - {1\over 2} \Omega^{(c)(d)} [\; U_m{}^{(a)} \; , \mathbf{d}_{(c)(d)}\; ]_{_{PB}} = - {1\over 2} dU_n^{(d)} [\; U_m{}^{(a)} \; , \mathbf{d}_{(c)(d)}\; ]_{_{PB}} U^{n(c)} $ implies \begin{eqnarray} \label{[d,U]=} {}[\; \mathbf{d}_{(a)(b)} \; , \; U_m{}^{(a^\prime)}\; ]_{_{PB}} = 2 U_{m[(a)}\delta_{(b)]}{}^{(a^\prime)}\; . \end{eqnarray} Making the similar calculation with the spinor harmonics, one finds \begin{eqnarray} \label{[d,V]=} {}[\; \mathbf{d}_{(a)(b)} \; , \; V_{\alpha}{}^{(\beta)}\; ]_{_{PB}} = {1\over 2} V_{\alpha}{}^{(\gamma)} \Gamma_{(a)(b)}{}_{(\gamma)}{}^{(\beta)}\; . \end{eqnarray} Then, calculating the Poisson bracket of (\ref{[d,U]=}) and $\mathbf{d}_{(a)(b)}$, and using the Jacobi identities for the Poisson brackets, we find the first of Eqs. (\ref{dd,kk}), \begin{eqnarray} \label{[d,d]=} {}[\mathbf{d}_{(a)(b)} \; , \; \mathbf{d}^{(c)(d)} \; ]_{_{PB}} = - 4\delta_{[(a) }{}^{[(c)} \mathbf{d}_{(b)]}{}^{(d)]}\; , \qquad \end{eqnarray} which implies that the $\mathbf{d}_{(a)(b)}$ are the Lorentz group generators. Thus, using the kinematical constraints (\ref{harmUdef}) and/or (\ref{harmVdef}) in the strong sense, we can also easily construct the canonical Hamiltonian and the Poisson brackets directly on the $SO(1,D-1)$ group manifold, thus bypassing the stage of introducing the Dirac brackets (\ref{DB-harm}) and avoiding the use of the explicit parametrization (\ref{harmU=}), (\ref{harmV=}). \bigskip \subsection{Canonical Hamiltonian and Poisson/Dirac brackets of the M0--brane model} \label{PB-DB} The discussion and equations of the previous section hold for Hamiltonian mechanics on any space including the Lorentz group $SO(1,D-1)$ or its coset $SO(1,D-1)/H$ as a subspace. The harmonics used in the twistor--like formulations of super--$p$--branes with $p\geq 1$ \cite{BZ-str,BZ-strH,BZ-p} are homogeneous coordinates of the coset with $H=SO(1,p)\otimes SO(D-p-1)$.
The case of the massless superparticle ($p=0$) is special. Here $H=[SO(1,1)\otimes SO(D-2)]\subset\!\!\!\!\!\!\times \; \mathbb{K}_{D-2}$ is the Borel (maximal parabolic) subgroup of $SO(1,D-1)$. In this case (as well as in the string case \cite{BZ-str,BZ-strH}) one uses the $H$--covariant splitting (\ref{Omab}) to arrive at \begin{eqnarray} \label{H0:=OmD-L} & d\tau H_0 := - {1\over 2}\Omega^{--i}\mathbf{d}^{++i} - {1\over 2}\Omega^{++i}\mathbf{d}^{--i}- \Omega^{(0)} \mathbf{d}^{(0)} + {1\over 2} \Omega^{ij} \mathbf{d}^{ij} +\qquad \nonumber \\ & + d x^a P_a + d \theta^\alpha \pi_\alpha + d\rho^{++} P^{(\rho)}_{++} - d\tau \, L \; . \quad \end{eqnarray} Then the Poisson/Dirac brackets can be defined by the following set of non-zero relations (see (\ref{PBdef})) \begin{eqnarray} \label{PB=XP} {}[P_{a}\; , \; x^{b}]_{_{PB}} = - \delta_{a}{}^{b} \; , \qquad \{ \pi_{\alpha}\; , \; \theta^{\beta}\}_{_{PB}} = - \delta_{\alpha}{}^{\beta}\; , \qquad [P^{(\rho)}_{++} \; , \; \rho^{++}]_{_{PB}} = - 1 \; , \qquad \end{eqnarray} as well as Eqs. (\ref{[d,U]=}), (\ref{[d,V]=}) and the Lorentz group algebra (\ref{[d,d]=}), which splits as \begin{eqnarray}\label{PB=d'd} {}[\mathbf{d}^{++i}\; , \; \mathbf{d}^{--j}]_{_{PB}} = 2\mathbf{d}^{ij} + \mathbf{d}^{(0)}\delta^{ij}\; , \qquad{}[\mathbf{d}^{(0)}\; , \; \mathbf{d}^{\pm\pm i}]_{_{PB}} = \pm 2 \mathbf{d}^{\pm\pm i} \; , \qquad \nonumber \\ {} [\mathbf{d}^{ij}\; , \; \mathbf{d}^{\pm\pm k}]_{_{PB}} = 2\mathbf{d}^{\pm\pm [i} \delta^{j]k}\; , \qquad [\mathbf{d}^{ij}\; , \; \mathbf{d}^{kl}]_{_{PB}} = 2\mathbf{d}^{k[i} \delta^{j]l} - 2\mathbf{d}^{l[i} \delta^{j]k}\; . \qquad \end{eqnarray} The splitting $\mathbf{d}_{(a)(b)}= (\mathbf{d}^{(0)}\, , \mathbf{d}^{\pm\pm j}\, , \mathbf{d}^{ij})$ of the $SO(1,10)$ generators (see Eq. (\ref{Omab})) is invariant under the $SO(1,1)\otimes SO(9)$ symmetry, whose generators are represented by $\mathbf{d}^{(0)}\, , \mathbf{d}^{ij}$. The set of remaining generators $ \mathbf{d}^{++ j}$, $ \mathbf{d}^{-- j}$ can be conveniently split into two Abelian subsets: one, say $ \mathbf{d}^{-- j}$, representing the $\mathbb{K}_9$ generators, and the other, $ \mathbf{d}^{++ j}$, corresponding to the $SO(1,10)/[[SO(1,1)\otimes SO(9)]\subset\!\!\!\!\!\!\times \; \mathbb{K}_9]$ coset. The split form of Eqs. (\ref{[d,U]=}), (\ref{[d,V]=}) includes \begin{eqnarray} \label{[d,u--]=} & {}[\mathbf{d}^{(0)}\; , u^{--}_m \; ]_{_{PB}} = - 2u^{--}_m\; , \quad {}[\mathbf{d}^{--i}\; , u^{--}_m \; ]_{_{PB}} = 0\; , \quad {}[\mathbf{d}^{++i}\; , u^{--}_m \; ]_{_{PB}} =2 u^{i}_m \; , \quad \nonumber \\ & {}[\mathbf{d}^{ij}\; , u^{--}_m \; ]_{_{PB}} =0\; , \qquad \\ \label{[d,v-q]=} & {}[\mathbf{d}^{(0)}\; , v_q^{-} \; ]_{_{PB}} = - v_q^{-}\; , \quad {}[\mathbf{d}^{--i}\; , v_q^{-} \; ]_{_{PB}} = 0 \; , \quad {}[\mathbf{d}^{++i}\; , v_q^{-} \; ]_{_{PB}} = \gamma_{qp}^{i}v_p^{+}\; , \quad \nonumber \\ & {}[\mathbf{d}^{ij}\; , v_q^{-} \; ]_{_{PB}} = {1\over 2} v_p^{-}\gamma_{pq}^{ij}\; . \qquad \\ \label{[d,v+q]=} & {}[\mathbf{d}^{(0)}\; , v_q^{+} \; ]_{_{PB}} = \; v_q^{+}\; , \quad {}[\mathbf{d}^{--i}\; , v_q^{+} \; ]_{_{PB}} = \gamma_{qp}^{i}v_p^{-} \; , \quad {}[\mathbf{d}^{++i}\; , v_q^{+} \; ]_{_{PB}} = 0 \; , \quad \nonumber \\ & {}[\mathbf{d}^{ij}\; , v_q^{+} \; ]_{_{PB}} = {1\over 2} v_p^{+}\gamma_{pq}^{ij}\; , \qquad \end{eqnarray} and the relations for the brackets of $\mathbf{d}_{(a)(b)}$ with the $u_m^{++}$ and $u_m^{i}$ vectors, which are not needed in this paper.
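As a consistency check of these brackets, notice that the Hamiltonian flow defined by (\ref{H0:=OmD-L}) reproduces the expression (\ref{dv-q}) for the differential of the harmonics: using the antisymmetry of the Poisson brackets and Eqs. (\ref{[d,v-q]=}) (the non--harmonic terms in $H_0$ do not contribute), \begin{eqnarray} dv_q^- = d\tau\, [\, v_q^-\, , \, H_0\, ]_{_{PB}} = {1\over 2}\Omega^{--i}\, [\mathbf{d}^{++i} , v_q^- ]_{_{PB}} + {1\over 2}\Omega^{++i}\, [\mathbf{d}^{--i} , v_q^- ]_{_{PB}} + \Omega^{(0)}\, [\mathbf{d}^{(0)} , v_q^- ]_{_{PB}} - {1\over 2}\Omega^{ij}\, [\mathbf{d}^{ij} , v_q^- ]_{_{PB}} = \nonumber \\ = -\, \Omega^{(0)}\, v_q^- - {1\over 4}\, \Omega^{ij}\, v_p^-\gamma^{ij}_{pq} + {1\over 2}\, \Omega^{--i}\, \gamma^i_{qp} v_p^+ \; , \nonumber \end{eqnarray} which is precisely Eq. (\ref{dv-q}).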
All these relations can be collected in \begin{eqnarray} \label{PB-d=D} {}[\mathbf{d}^{(a)(b)} , U \}_{_{PB}} := \mathbb{D}^{(a)(b)} U \; , \qquad {}[\mathbf{d}^{(a)(b)} , V \}_{_{PB}} := \mathbb{D}^{(a)(b)} V \; , \qquad \end{eqnarray} where $\mathbb{D}_{(a)(b)}=(\mathbb{D}^{\pm\pm i}, \mathbb{D}^{ij}, \mathbb{D}^{(0)})$ are the covariant harmonic derivatives which provide the differential operator representation of the Lorentz group generators ($\mathbb{D}_{(a)(b)}=\partial/\partial l^{(a)(b)} + \ldots$ in terms of the explicit parametrization) and which are defined by the decomposition of the differential in the Cartan forms (\ref{Omab})\footnote{ The minus signs in (\ref{H0:=OmD-L}) are chosen to provide the plus sign in (\ref{PB-d=D}).}, \begin{eqnarray} \label{d=OmD/2} d:= {1\over 2} \Omega^{(a)(b)} \mathbb{D}_{(a)(b)}=: \Omega^{(0)} \mathbb{D}^{(0)} + {1\over 2}\Omega^{++i}\mathbb{D}^{--i}+ {1\over 2}\Omega^{--i}\mathbb{D}^{++i} - {1\over 2} \Omega^{ij} \mathbb{D}^{ij} \; . \end{eqnarray} \bigskip \subsection{Second class constraints of the D=11 superparticle model}\label{secIIclass} \bigskip With the Poisson/Dirac brackets (\ref{PB=XP})--(\ref{[d,v+q]=}), the phase space $(Z^{{\cal N}}, P_{{\cal N}})$ of our superparticle model includes, for the moment, the $Spin(1,10)$ group manifold, parametrized by harmonics, and the corresponding momentum space parametrized by the non--commutative generalized momenta $\mathbf{d}_{(a)(b)}$ of Eqs. (\ref{H0:=Om(gen)--}), (\ref{[d,d]=}), (\ref{PB-d=D}). In total we have \footnote{Here it is convenient to consider the vector harmonics $U_m^{(a)} \in SO(1,10)$ as composites of the spinorial ones, ${V}{}_{\beta}^{(\alpha)} \in Spin(1,10)$, defined by the gamma--trace parts of Eqs. (\ref{harmVdef}), $U_{m}^{(a)}= {1\over 32} \, tr V\Gamma^{(a)}V^T\tilde{\Gamma}_m$. } \begin{eqnarray} \label{Z,Pdef2} P_{{{\cal N}}}= \left(P_a \, , \, \pi_\alpha \, , \, P^{(\rho)}_{++}\, , \; \mathbf{d}_{(a)(b)} \right) \; , \qquad {\cal Z}^{{\cal N}} := \left( x^a \, , \, \theta^\alpha \, , \, \rho^{++}\, , \; {V}{}_{\beta}^{(\alpha)}\right) \; , \qquad {V}{}_{\beta}^{(\alpha)} \in Spin(1,10)\; . \end{eqnarray} This phase space (\ref{Z,Pdef2}) is restricted by the constraints (\ref{P-rvv}), (\ref{df=}), (\ref{Pr=0}) and \begin{eqnarray} \label{d-harm-c} \mathbf{d}_{(a)(b)} \approx 0 \; \qquad \Leftrightarrow \qquad \cases{ \mathbf{d}^{(0)}\approx 0 \; , \quad \mathbf{d}^{ij}\approx 0 \; , \quad \mathbf{d}^{--i}\approx 0 \; , \cr \mathbf{d}^{++i}\approx 0 \; } \; \qquad \end{eqnarray} for the non--commutative momentum of the $Spin(1,10)$ group valued spinor moving frame variables $V\in Spin(1,10)$ [instead of the `original' constraints (\ref{Pharm=0}) for an apparently unrestricted matrix $V$]. The algebra of the primary constraints (\ref{P-rvv}), (\ref{df=}), (\ref{Pr=0}) and (\ref{d-harm-c}) is characterized by the following nonvanishing brackets \begin{eqnarray} \label{(C,C)=} & {} [\Phi_a \; , \; P_{++}^{[\rho]} ]_{_{PB}}= -{1\over 2} u_a^{--} \; , \quad [\Phi_a \; , \; \mathbf{d}^{(0)} ]_{_{PB}}= -\rho^{++} u_a^{--} \; , \quad [\Phi_a \; , \; \mathbf{d}^{++i} ]_{_{PB}}= -\rho^{++} u_a^{i} \; , \quad \\ \label{(d,d)=} & {} \{ d_\alpha \; , \; d_{\beta} \}_{_{PB}}= - 2i P\!\!\!/_{\alpha\beta} \; \equiv - 2i \Phi\!\!\!/_{\alpha\beta} - 2i \rho^{++} v_{\alpha}{}^-_q v_{\beta}{}^-_q \; , \end{eqnarray} and the Lorentz algebra relations (\ref{PB=d'd}).
This allows us to find the following {\it fermionic and bosonic second class constraints}, the latter splitting into mutually conjugate pairs \begin{eqnarray} \label{IIcl} & d^+_q:= v^{+\alpha}_q d_\alpha \approx 0 \; , \qquad & \qquad \{ d^+_q \; , \; d^+_p \}_{_{PB}}= - 2i \rho^{++} \delta_{pq} \; , \nonumber \\ & u^{a++}\Phi_a \approx 0 \, , \quad P_{++}^{[\rho]} \approx 0 \, , \qquad & {}\qquad [u^{a++}\Phi_a \; , \; P_{++}^{[\rho]} \}_{_{PB}}= -1 \; , \nonumber \\ & u^{a i}\Phi_a \approx 0 \, , \qquad \mathbf{d}^{++j} \approx 0 \, , \qquad & {}\qquad [u^{ai}\Phi_a \; , \; \mathbf{d}^{++j} \}_{_{PB}}= - \rho^{++} \delta^{ij}\; . \qquad \end{eqnarray} Here $v^{+\alpha}_q$ is an element of the inverse spinor moving frame matrix $V^{-1}{}_{(\beta)}^{\; \alpha}= ( v^{+\alpha}_q \; , \; v^{-\alpha}_q) \in Spin(1,10)$, which obeys $v^{+\alpha}_qv_{\alpha p}^{\; +}= 0$ and $ v^{+\alpha}_qv_{\alpha p}^{\; -}= \delta_{qp}$. In $D=11$ (as in other cases where a charge conjugation matrix exists) it is expressed through the original spinor harmonics with the help of Eqs. (\ref{harmVdefC}), \begin{eqnarray} \label{V-1=CV} D=11\; : \qquad v^{\pm \alpha}_q = \pm i C^{\alpha\beta}v_{\beta q}^{\; \pm}\; \end{eqnarray} (notice that the $D=11$ charge conjugation matrix is imaginary in our `mostly minus' signature). Introducing the Dirac brackets \begin{eqnarray} \label{DB} {} [\ldots \; , \; \ldots ]_{_{DB}} &=& [\ldots \; , \; \ldots ]_{_{PB}} + [\ldots \; , \; P_{++}^{[\rho]} ]_{_{PB}} \cdot [ (u^{++}P-\rho^{++}) \; , \; \ldots ]_{_{PB}} - \qquad \nonumber \\ && \qquad - \; [\ldots \; , \; (u^{++}P-\rho^{++}) ]_{_{PB}} \cdot [ P_{++}^{[\rho]} \; , \; \ldots ]_{_{PB}} - \qquad \nonumber \\ && - [\ldots \; , \; u^{j}P ]_{_{PB}} {1\over \rho^{++}} [ \mathbf{d}^{++j}\; , \; \ldots ]_{_{PB}} + [\ldots\; , \; \mathbf{d}^{++j} ]_{_{PB}} {1\over \rho^{++}} [ u^{j}P\; , \; \ldots ]_{_{PB}} - \qquad \nonumber \\ && - [\ldots \; , \; d^+_q ]_{_{PB}} {i\over 2\rho^{++}} [ d^+_q\; , \; \ldots ]_{_{PB}} \; , \end{eqnarray} one can treat the second class constraints as the strong relations \begin{eqnarray} \label{strongIIcl} & d^+_q:= v^{+\alpha}_q d_\alpha = 0 \; ; \qquad \rho^{++}= u^{a++}P_a \, , \quad P_{++}^{[\rho]} = 0 \, ; \qquad u^{a i}P_a = 0 \, , \quad \mathbf{d}^{++j} = 0 \; . \qquad \end{eqnarray} \subsection{First class constraints and their (nonlinear) algebra} \label{secIclass} The remaining constraints are \begin{eqnarray} \label{pre-Icl} & d^-_q:= v^{-\alpha}_q d_\alpha \approx 0 \; , \qquad u^{a--}\Phi_a = u^{a--}P_a =: P^{--} \approx 0 \, , \\ \label{pre-IclH} & \mathbf{d}^{ij} \approx 0 \, , \qquad \mathbf{d}^{(0)} \approx 0 \, , \qquad \mathbf{d}^{--i} \approx 0 \; . \qquad \end{eqnarray} These give rise to the first class constraints.
Namely, the Dirac bracket algebra of the constraints (\ref{pre-Icl}), (\ref{pre-IclH}) is closed and contains the following nonvanishing brackets \begin{eqnarray} \label{(IH,IH)=DB} {} & [\mathbf{d}^{ij} , \; \mathbf{d}^{kl}]_{_{DB}} = 4\mathbf{d}^{[k|[i} \delta^{j]|l]} \; , & \; [\mathbf{d}^{ij} , \; \mathbf{d}^{--k}]_{_{DB}} = 2\mathbf{d}^{-- [i} \delta^{j]k}\; , \qquad [\mathbf{d}^{(0)} , \; \mathbf{d}^{\pm\pm i}]_{_{DB}} = \pm 2 \mathbf{d}^{\pm\pm i} , \qquad \\ \label{d--d--DB} && {} \fbox{$[\mathbf{d}^{--i} \; , \; \mathbf{d}^{--j} ]_{_{DB}} = {i\over 2P^{\!^{++}}} \; d^-_q \gamma^{ij}_{qp} d^-_p $} \; , \qquad \\ \label{(IH,I)=DB} {} & [\mathbf{d}^{ij} \; , \; d^-_p]_{_{DB}} = - {1\over 2} \gamma^{ij}_{pq} d_q^- \; , & \; {} [\mathbf{d}^{(0)} \; , \; d^-_p]_{_{DB}} = - d_p^- \; , \qquad [\mathbf{d}^{(0)} \; , \; P^{--}]_{_{DB}} = -2 P^{--} \; , \qquad \\ \label{(I,I)=DB} && {} \fbox{$ \; \{ d_q^- \; , \; d^-_{p} \} _{_{DB}}= - 2i \delta_{qp} P^{--} \;$} \; . \qquad {} \end{eqnarray} Notice that the right hand side of Eq. (\ref{d--d--DB}) includes the product of two fermionic first class constraints and, hence, takes us outside the Lie algebra (into its enveloping algebra) \footnote{ One may also think of this as an analog of the well known phenomenon of the non--commutativity of the bosonic spacetime coordinates of the superparticle, which appears in the standard formulation \cite{Casalbuoni} after passing to the Dirac brackets for the second class constraints; see also the second reference in \cite{A+L82}. There the Dirac brackets of two bosonic coordinates are proportional to the product of two Grassmann coordinates \cite{Casalbuoni,A+L82}. In four dimensions such a noncommutativity is overcome by passing to the so-called chiral basis of $D=4$ superspace, the imaginary part of whose bosonic coordinate is given by a bilinear of the Grassmann coordinates. The use of the Gupta-Bleuler technique \cite{Casalbuoni,JdA+L88} also helps. The appearance of a nonlinear algebra of constraints was also observed for the twistor--like formulations of the $D$=4 null superstring and null--supermembranes in \cite{BZnull}. Notice finally that, among the `nonlinear algebras', the most popular are the W--algebras, intensively studied some years ago (see {\it e.g.} \cite{SKrASo} and references therein). }. If this term were absent, one would state that the first class constraints (\ref{pre-IclH}) generated the $H= [SO(1,1)\otimes SO(9)]\subset\!\!\!\!\!\!\times K_9$ group symmetry, and the whole gauge symmetry would be described by its semidirect product (see (\ref{(IH,I)=DB})) $H \subset\!\!\!\!\!\!\times \Sigma^{(1|16)}$ with the $d=1, N=16$ supersymmetry group $\Sigma^{(1|16)}$ of the $\kappa$--symmetry and $b$--symmetry, Eqs. (\ref{pre-Icl}), (\ref{(I,I)=DB}). The actual algebra of Eqs. (\ref{(IH,IH)=DB}), (\ref{d--d--DB}), (\ref{(IH,I)=DB}), (\ref{(I,I)=DB}) is then a {\it `generalized W--deformation'} of the Lie superalgebra of this semidirect product $[[ SO(1,1)\otimes SO(9)]\subset\!\!\!\!\!\!\times K_9]\subset\!\!\!\!\!\!\times \Sigma^{(1|16)}$. The role of the constant parameter of a standard deformation is taken here by the function ${1\over P^{++}}$ (hence the name {\it generalized} for this `W-deformation'). However, although the momentum $P^{++}= u^{a ++}P_a$ is a dynamical variable, it has vanishing Dirac brackets with all the first class constraints.
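Note also a simple bookkeeping check of the deformed bracket (\ref{d--d--DB}): the $SO(1,1)$ weights indicated by the $\pm$ indices balance on the two sides, \begin{eqnarray} \mathbf{d}^{--i}\, , \; \mathbf{d}^{--j}\; : \quad (-2)+(-2)=-4 \; , \qquad \qquad {1\over P^{\!^{++}}}\, d^-_q \gamma^{ij}_{qp} d^-_p \; : \quad (-2)+(-1)+(-1)=-4 \; , \nonumber \end{eqnarray} where $1/P^{++}$ carries weight $(-2)$ because $P^{++}=u^{a++}P_a$ carries weight $(+2)$.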
One may guess that the complete BRST charge $\mathbb{Q}$ for the algebra of the first class constraints (\ref{(I,I)=DB}) is quite complicated and that its use is not too practical. Following the pragmatic spirit of the pure spinor approach \cite{NB-pure,nonmNB}, it is tempting to take care of the constraints corresponding to the (deformed) $[SO(1,1)\otimes SO(9)]\subset\!\!\!\!\!\!\times K_9$ part of the gauge symmetries in a different manner, by imposing them as conditions on the wavefunctions in the quantum theory, and to be left with a short and simple BRST charge corresponding to the supersymmetry algebra (\ref{(I,I)=DB}) of the $\kappa$--symmetry and the $b$--symmetry generators. However, the appearance of the deformation given by the product of the fermionic first class constraints on the {\it r.h.s.} of Eq. (\ref{d--d--DB}) might raise doubts about the consistency of such a prescription. Indeed, imposing, for instance, the deformed (now non--Abelian) $K_9$ constraint $\mathbf{d}^{--i}$ as a condition on the wave function in the quantum theory, $\widehat{\mathbf{d}}{}^{--i}\Phi=0$, one should also impose, for consistency, the condition $[\widehat{\mathbf{d}}{}^{--i}, \widehat{\mathbf{d}}{}^{--j}]\Phi=0$, which implies $\gamma^{ij}_{qp}\, \widehat{d}^-_q \widehat{d}^-_p\Phi=0$. To clarify the situation with the BRST quantization of the nonlinear algebra (\ref{(IH,IH)=DB})--(\ref{(I,I)=DB}) and its possible simplification, we begin by studying the BRST charge $\mathbb{Q}^\prime$ corresponding to the subalgebra of the $\kappa$--, $b$-- and $K_9$--symmetry generators, $d_q^-$, $P^{--}$ and $\mathbf{d}^{--i}$. \subsection{BRST charge for a nonlinear sub(super)algebra of the first class constraints} \label{BRST-min} \label{Qprime} The sub--superalgebra of the $\kappa$--, $b$-- and the deformed $K_9$--symmetry generators, $d_q^-$, $P^{--}$ and $\mathbf{d}^{--i}$, is described by Eqs. (\ref{d--d--DB}) and (\ref{(I,I)=DB}) plus vanishing brackets for the rest, \begin{eqnarray} \label{didj,dqdq} && {} [\mathbf{d}^{--i} \; , \; \mathbf{d}^{--j} ]_{_{DB}} = {i\over 2P^{\!^{++}}} \; d^-_q \gamma^{ij}_{qp} d^-_p \quad (a) \; , \qquad \qquad {} \{ d_q^- \; , \; d^-_{p} \} _{_{DB}}= - 2i \delta_{qp} P^{--} \quad (b) \; . \qquad \end{eqnarray} It is obtained from (\ref{(IH,IH)=DB})--(\ref{(I,I)=DB}) by setting the generators of $SO(9)\otimes SO(1,1)$ equal to zero, $\mathbf{d}^{ij}=0$ and $\mathbf{d}^{(0)}=0$. Notice that, when acting on the space of $SO(9)\otimes SO(1,1)$ invariant functions, the full BRST charge $\mathbb{Q}$ of our $D=11$ superparticle reduces to the BRST charge of the algebra (\ref{didj,dqdq}). In the quantum theory such an algebra reduction can be realized by imposing $\mathbf{d}^{ij}$ and $\mathbf{d}^{(0)}$ as conditions on the state vectors, $\widehat{\mathbf{d}}^{ij}\; \Phi =0$ and $\widehat{\mathbf{d}}^{(0)}\;\Phi=0$. This specifies the wavefunction dependence on the harmonics, making it a function on the non--compact coset $SO(1,10)/[SO(9)\times SO(1,1)]$ (dependence on the $l^{\pm\pm i}$ parameters only in the case of the explicit parametrization (\ref{harmU=}), (\ref{harmV=})). We denote the BRST charge corresponding to the non--linear superalgebra (\ref{didj,dqdq}) by $\mathbb{Q}^{\prime}$; the prime reflects the fact that it gives only a part of the full BRST charge describing the complete gauge symmetry algebra (\ref{(IH,IH)=DB})--(\ref{(I,I)=DB}) of the M0--brane in the spinor moving frame formulation.
The master equation \begin{eqnarray} \label{QmQm=0} {} \{ \; \mathbb{Q}^{\prime}\; , \; \mathbb{Q}^{\prime}\;\}_{_{DB}} =0 \; \end{eqnarray} has the solution \begin{eqnarray} \label{Qmin} \mathbb{Q}^{\prime}&=& {\lambda}^+_q d_q^{-} + c^{++} P^{--} + c^{++j}\mathbf{d}^{--j} - \; i \lambda^+_q\lambda^+_q \pi^{[c]}_{++}\; + {i\over 2P^{\!^{++}}} c^{++i}c^{++j} d_q^{-}\gamma^{ij}_{qp} P^{-[\lambda]}_p + \qquad \nonumber \\ && + {1\over P^{\!^{++}}} c^{++i}c^{++j} \lambda_q^{+}\gamma^{ij}_{qp} P^{-[\lambda]}_p \pi^{[c]}_{++} - {i\over 4(P^{\!^{++}})^2} c^{++i}c^{++j}c^{++k}c^{++l}P^{-[\lambda]}_q\gamma^{ijkl}_{qp} P^{-[\lambda]}_p \pi^{[c]}_{++} \; . \qquad \end{eqnarray} Here ${\lambda}^+_q$ is the bosonic ghost for the fermionic $\kappa$--symmetry gauge transformations, $c^{++}$ and $c^{++j}$ are the fermionic ghosts for the bosonic $b$--symmetry and the deformed $\mathbb{K}_9$ symmetry transformations, and $P^{-[\lambda]}_q$, $\pi^{[c]}_{++}$ are the (bosonic and fermionic) ghost momenta conjugate to ${\lambda}^+_q$ and $c^{++}$, \begin{eqnarray} \label{ghostDB} && {} [{\lambda}^+_q \; , \; P^{-[\lambda]}_p ]_{_{DB}} = \delta_{qp}\; , \qquad {} \{ c^{++} \; , \; \pi^{[c]}_{++} \} _{_{DB}}= -1\; , \qquad {} \{ c^{++i} \; , \; \pi^{[c]}_{++ j} \} _{_{DB}}= - \delta^i_{j} \; . \qquad \end{eqnarray} Notice that the fermionic ghost momentum $\pi^{[c]}_{++ j}$ conjugate to $c^{++ j}$ does not enter $\mathbb{Q}^\prime$ (\ref{Qmin}). The $\mathbb{Q}^\prime$ of Eq. (\ref{Qmin}) is a BRST charge of third rank, in the sense that the series stops at the third degree in the ghost momenta $P^{-[\lambda]}_p$, $\pi^{[c]}_{++}$. Technically, the decomposition stops due to the nilpotency of $\pi^{[c]}_{++}$. The nilpotency of the BRST charge (\ref{Qmin}) is preserved in the quantum theory, $(\mathbb{Q}^{\prime})^2=0$, insofar as no products of noncommuting operators (like {\it e.g.} $\lambda^+_q P^{-[\lambda]}_q$) appear in the calculation of $(\mathbb{Q}^{\prime})^2$. \subsection{The further reduced BRST charge $\mathbb{Q}^{susy}$} \label{secQsusy} The (already restricted) BRST charge (\ref{Qmin}) is still too complicated to be discussed as a counterpart of (or as an alternative to) the Berkovits pure spinor BRST charge. A further reduction looks necessary. To this end let us notice that $\mathbb{Q}^{\prime}$ of Eq. (\ref{Qmin}) can be presented as a sum \begin{eqnarray}\label{Q'=} \mathbb{Q}^{\prime}&=& \mathbb{Q}^{susy} + c^{++j}\widetilde{\mathbf{d}}{}^{--j}\; , \end{eqnarray} of the much simpler operator \begin{eqnarray}\label{Qbrst1} \fbox{$\; \mathbb{Q}^{susy}= \lambda^+_q \; d_q^{-} + c^{++} \; P^{--} \; - \; i \lambda^+_q\lambda^+_q \pi^{[c]}_{++}\;$}\; , \qquad {} \{ \mathbb{Q}^{susy} \; , \; \mathbb{Q}^{susy} \}_{_{DB}} = 0 \; , \qquad \end{eqnarray} and a term containing the $c^{++j}$ ghost fields. The operator (\ref{Qbrst1}) can be identified as the BRST charge corresponding to the $d=1$, $N=16$ supersymmetry algebra \begin{eqnarray}\label{16+1al} & {} \{ d_q^{-} \; , \; d_p^- \}_{_{DB}} = -2i \delta_{qp} P^{--} \; , \qquad [ P^{--} \; , \; d_p^- ]_{_{DB}} = 0 \; , \qquad [ P^{--} \; , \; P^{--} ]_{_{DB}} \equiv 0 \; \qquad \end{eqnarray} of the $\kappa$-- and $b$--symmetry generators.
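A quick check of the nilpotency stated in (\ref{Qbrst1}), using only the algebra (\ref{16+1al}) and the ghost brackets (\ref{ghostDB}), reads \begin{eqnarray} \{ \mathbb{Q}^{susy} , \mathbb{Q}^{susy} \}_{_{DB}} = \lambda^+_q \lambda^+_p \, \{ d_q^- , d_p^- \}_{_{DB}} - 2i\, \lambda^+_q\lambda^+_q \, P^{--} \{ c^{++} , \pi^{[c]}_{++} \}_{_{DB}} = - 2i\, \lambda^+_q\lambda^+_q P^{--} + 2i\, \lambda^+_q\lambda^+_q P^{--} = 0 \; , \nonumber \end{eqnarray} which shows that the third, ghost--momentum term in (\ref{Qbrst1}) is precisely what compensates for $\lambda^+_q d^-_q$ not being nilpotent away from $\lambda^+_q\lambda^+_q=0$.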
The second term in (\ref{Q'=}), $c^{++j}\widetilde{\mathbf{d}}{}^{--j}$, contains the deformed $K_9$ generator modified by additional ghost contributions, \begin{eqnarray}\label{td--j:=} \widetilde{\mathbf{d}}{}^{--i} & = {\mathbf{d}}{}^{--i} + {i\over 2P^{\!^{++}}} c^{++j} d_q^{-}\gamma^{ij}_{qp} P^{-[\lambda]}_p + {1\over P^{\!^{++}}} c^{++j} \lambda_q^{+}\gamma^{ij}_{qp} P^{-[\lambda]}_p \pi^{[c]}_{++} - \qquad \nonumber \\ & - {i\over 4(P^{\!^{++}})^2} c^{++j}c^{++k}c^{++l}P^{-[\lambda]}_q\gamma^{ijkl}_{qp} P^{-[\lambda]}_p \pi^{[c]}_{++} \; . \qquad \end{eqnarray} The nilpotency of $\mathbb{Q}^{susy}$ (\ref{Qbrst1}), $\{ \mathbb{Q}^{susy} \; , \; \mathbb{Q}^{susy} \}_{_{DB}} = 0$, guarantees the consistency of the reduction of the $\mathbb{Q}^{\prime}$--cohomology problem to the $\mathbb{Q}^{susy}$--cohomology. For the classical BRST charge such a reduction can be achieved just by setting the $K_9$ ghost equal to zero, $c^{++j}=0$. In classical mechanics one can consider this reduction as a result of gauge fixing, {\it e.g.}, in the explicit parametrization (\ref{harmU=}), (\ref{harmV=}), by setting $l^{++i}=0$ and (as $l^{ij}=l^{(0)}=0$ can be fixed by $SO(1,1)\otimes SO(9)$ transformations) expressing all the harmonics in terms of the nine parameters $l^{--i}$ (related to the projective coordinates of the $S^9$ sphere) as in Eqs. (\ref{U=l--}), (\ref{V=l--}). Although technical, the question of how to realize a counterpart of such a classical gauge fixing in the quantum description looks quite interesting. The problem is whether in this way one arrives just at scalar functions on $S^9= SO(1,10)/[[SO(1,1)\otimes SO(9)]\subset\!\!\!\!\!\!\times K_9]$, or whether the interplay of the $v_q^+$ (or $u_m^{++}$, $u_m^i$) harmonics and the $K_9$ ghost $c^{++j}$ may result in wavefunctions transforming nontrivially under $SO(1,1)\otimes SO(9)$ (a counterpart of the appearance of helicity in the quantization of the $D$=4 superparticle, see \cite{B90} and refs. therein). Such an interplay could appear, {\it e.g.}, when one imposes on the wavefunctions the quantum counterpart of the deformed $K_9$ constraints modified by the ghost contributions (\ref{td--j:=}). However, this interesting problem is outside the scope of the present paper, which is devoted to the search for the origin and geometric meaning of the Berkovits approach in the framework of the spinor moving frame formulation of (presently) the M0--brane. Thus, following the pragmatic spirit of the pure spinor approach \cite{NB-pure}, let us accept the simple prescription of reducing the Dirac bracket algebra of the first class constraints down to the $d=1$, $N=16$ supersymmetry algebra of the $\kappa$-- and $b$--symmetries, Eq. (\ref{(I,I)=DB}) (taking care of the other constraints in a different manner); this implies the reduction of $\mathbb{Q}^{\prime}$ to the much simpler $\mathbb{Q}^{susy}$, and we now turn to the study of the cohomology problem for the BRST charge $\mathbb{Q}^{susy}$ (\ref{Qbrst1}). \section{BRST quantization of the D=11 superparticle.
Cohomology of $\mathbb{Q}^{susy}$ and the origin of the complexity of the Berkovits approach }\label{QsusyCoH} \setcounter{equation}0 \subsection{Quantum BRST charge $\mathbb{Q}^{susy}$} It is practical to write the quantum BRST charge obtained from (\ref{Qbrst1}), omitting an overall $\pm i$ factor, as \begin{eqnarray}\label{Qsusy} \mathbb{Q}^{susy}= \lambda^+_q \; D_q^{-} + i c^{++} \partial_{++} - \lambda^+_q\lambda^+_q {\partial\over \partial c^{++}}\; , \qquad {} \{ \mathbb{Q}^{susy} \; , \; \mathbb{Q}^{susy} \} = 0 \; , \qquad \qquad \end{eqnarray} where the quantum operators $D^-_q$ and $\partial_{++}$, associated with $d^-_q$ and $P_{++}$, obey the $d=1$, $N=16$ supersymmetry algebra ({\it cf.} (\ref{(I,I)=DB})) \begin{eqnarray}\label{DD=d--} {} \{ D_p^{-} , D_q^{-} \} = 2i \delta_{qp} \partial_{++} \; , \qquad [ \partial_{++}\, , D_p^{-}]=0 \; . \end{eqnarray} The quantum BRST operator $\mathbb{Q}^{susy}$ (\ref{Qsusy}) should act on the space of wavefunctions that depend on the physical (gauge invariant) variables and on a number of variables which transform nontrivially under the action of the generators $\partial_{++}$, $D_q^{-}$ (in the general case the variables of a model cannot be split covariantly into gauge invariant and pure gauge ones, but for our model this is actually possible, see Sec. 5). It is convenient to use a realization of $\partial_{++}$, $D_q^{-}$ as differential operators on the $1+16$ dimensional superspace $W^{(1|16)}$ with coordinates $(x^{++},\theta^+_q)$, \begin{eqnarray}\label{D-q=} D_q^{-} = \partial_{+q} + i \theta^+_q \partial_{++}\; , \qquad \partial_{++} := {\partial \over \partial x^{++}}, \qquad \partial_{+q} := {\partial \over \partial \theta^+_q} \; . \end{eqnarray} These variables have straightforward counterparts in the covariantized light--cone basis, $\theta^+_q= \theta^\alpha v_{\alpha}{}^+_q$ and $x^{++}=x^{m} u_{m}^{++}$ (see \cite{Sok,GHT93} and Sec. 5). The other `physical' variables, on which the wavefunctions should also depend, can be related to the other coordinates of this basis, including $x^{--}=x^{m} u_{m}^{--}$ and $\theta^-_q= \theta^\alpha v_{\alpha}{}^-_q$, and the harmonics $v_{\alpha}{}^-_q$ parametrizing $S^9$ (and carrying $9$ of the $10$ degrees of freedom of the light--like momentum). However, for studying the cohomology of the BRST operator (\ref{Qsusy}) the dependence on these latter coordinates is inessential and, in this section, we will use the notation $\Phi = \Phi ( \lambda^+_q \, , c^{++} \, ; \, x^{++} , \theta^{+}_q \; , ...)$ or $\Phi (c^{++} , \lambda^+_q\, ... )$ to emphasize the essential dependence of our wavefunctions. The Grassmann odd variable $c^{++}$, $\;c^{++}c^{++}=0$, and the bosonic variables $\lambda^+_q$ in (\ref{Qsusy}) are the ghosts for the bosonic and the $16$ fermionic first class constraints, represented by the differential operators $\partial_{++}$ and $D^-_q$ respectively. Their ghost numbers are equal to $1$, and this also fixes the ghost number of the BRST charge to be one, \begin{eqnarray}\label{ghN} gh_\# (\lambda^+_q)=1 \; , \qquad gh_\# (c^{++})=1 \; , \qquad gh_\# ( \mathbb{Q}^{susy})=1\; . \end{eqnarray} The cohomology problem has to be solved for functions with definite ghost number $g := gh_\# (\Phi)$. Let us begin, however, with some general observations for which the ghost number fixing is not relevant. \subsubsection{The nontrivial cohomology of $\mathbb{Q}^{susy}$ is located at $\lambda^+_q\lambda^+_q=0$} The BRST cohomology is determined by wavefunctions $\Phi$ which are BRST-closed, $\mathbb{Q}^{susy}\Phi=0\;$, but not BRST-exact.
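Before proceeding, one can check directly that the realization (\ref{D-q=}) introduced above indeed obeys the algebra (\ref{DD=d--}): since the $\partial_{+q}$ anticommute among themselves, as do the $\theta^+_q\partial_{++}$, \begin{eqnarray} \{ D^-_q \, , \, D^-_p \} = i\, \{ \partial_{+q}\, , \, \theta^+_p \}\, \partial_{++} + i\, \{ \theta^+_q \, , \, \partial_{+p} \}\, \partial_{++} = 2i\, \delta_{qp}\, \partial_{++}\; . \nonumber \end{eqnarray}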
Such closed, non-exact wavefunctions are defined modulo the BRST transformations, {\it i.e.} modulo BRST-exact wavefunctions $\mathbb{Q}^{susy}\chi$, where $\chi$ is an arbitrary function of the same configuration space variables and of ghost number $gh_\# (\chi)= gh_\#(\Phi)-1$, \begin{eqnarray} \label{Qcoh=def} \mathbb{Q}^{susy}\Phi=0 \; , \qquad \Phi \sim \Phi^\prime = \Phi + \mathbb{Q}^{susy}\chi \; , \qquad gh_\# (\chi)= gh_\# (\Phi) - 1\; . \qquad \end{eqnarray} Decomposing the wave function $\Phi = \Phi (c^{++} , \lambda^+_q \, ; \, x^{++} , \theta^{+}_q \; , ...)$ in a power series in the Grassmann odd ghost $c^{++}$, \begin{eqnarray}\label{Phi=Phi(c)} \Phi &=& \Phi_0 + c^{++} \Psi_{++} \qquad \\ \nonumber \qquad &=& \Phi_0(\lambda^+_q \, ; \, x^{++} , \theta^{+}_q \, ;\ldots ) + c^{++} \Psi_{++}(\lambda^+_q \, ; \, x^{++} , \theta^{+}_q \, ;\ldots ) \; , \end{eqnarray} one finds that $\mathbb{Q}^{susy} \Phi =0$ for the superfield (\ref{Phi=Phi(c)}) implies for its components \begin{eqnarray}\label{QPhi=0} \lambda^+_q D^-_q \Phi_0 = \lambda^+_q\lambda^+_q \Psi_{++}\quad (a) \; , \qquad \lambda^+_q D^-_q \Psi_{++} = i \partial_{++}\Phi_0 \quad (b)\; . \qquad \end{eqnarray} Using a similar decomposition for the arbitrary superfield in (\ref{Qcoh=def}), $\chi= \chi_0 + c^{++} K_{++}$, one finds for the BRST transformations, \begin{eqnarray}\label{Phi=Phi+Qchi} \Phi \mapsto \Phi^\prime = \Phi + \mathbb{Q}^{susy} \chi \quad \Rightarrow \quad \cases{\Phi_0 \mapsto \Phi_0^\prime = \Phi_0 + \lambda^+_q D^-_q \chi_0 - \lambda^+_q \lambda^+_q K_{++} \;\;\; (a) \; , \qquad \cr \Psi_{++} \mapsto \Psi_{++}^\prime = \Psi_{++} + i \partial_{++}\chi_0 + \lambda^+_q D^-_q K_{++} \;\;\; (b) \;{} } \; . \qquad \end{eqnarray} If one assumes that the spinorial bosonic ghost $\lambda^+_q$ is non-zero, or, equivalently, that the square $\lambda^+_q\lambda^+_q\not= 0$, then one can use Eq. (\ref{QPhi=0}a) to express the fermionic component of the wave function in terms of the bosonic one, $ \Psi_{++}= \lambda^+_q D^-_q\Phi_0/ \lambda^+_p \lambda^+_p$. Then one can also choose the second bosonic component $K_{++}$ of the parameter superfield $\chi=\chi_0 + c^{++} K_{++}$ to be $K_{++}= {1\over \lambda^+_p\lambda^+_p} (\Phi_0 + \lambda^+_q D^-_q\chi_0)$ and arrive at $\Phi_0^\prime =0$ in (\ref{Phi=Phi+Qchi}a). Thus, if the ghost variables $\lambda^+_q$ parametrize $\mathbb{R}^{16}- \{ 0\}$, then $\lambda^+_q\lambda^+_q\not= 0$ and the BRST cohomology of $\mathbb{Q}^{susy}$ is necessarily trivial: all BRST--closed states are BRST-exact. Hence, if $\mathbb{Q}^{susy}$ is to admit non-trivial closed states, they must have a representation by wavefunctions with support on $\lambda^+_q\lambda^+_q = 0$. In other words, the closed non-exact wavefunctions representing non-trivial cohomology must be of the form $\Phi \propto \delta (\lambda^+_q\lambda^+_q)$ plus a possible BRST trivial contribution. \bigskip \subsection{Cohomologies at vanishing bosonic ghost } \label{CohQ-2} Thus the wavefunctions describing the non-trivial cohomology of $\mathbb{Q}^{susy}$, if it exists, must have a representation by closed non-exact wavefunctions of the form $\Phi= \delta (\lambda^+_q\lambda^+_q)\; \Phi^{++}$, where $\Phi^{++}= \Phi_0^{++}+ c^{++}\Psi^{(0)}$ has ghost number two units higher than $\Phi$, $\; gh_{\#}(\Phi^{++})\, = \, gh_{\#}(\Phi) + 2\;$. But there is a difficulty with studying these states: since the bosonic ghosts $\lambda^+_q$ are real, $\lambda^+_q\lambda^+_q=0$ implies $\lambda^+_q=0$.
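As an aside, the shift of the ghost number by two units can be read off from the scaling property of the delta function: under $\lambda^+_q\mapsto z\lambda^+_q$, \begin{eqnarray} \delta \left( z^2\, \lambda^+_q \lambda^+_q \right) = z^{-2}\, \delta \left( \lambda^+_q \lambda^+_q \right)\; , \nonumber \end{eqnarray} so that the factor $\delta (\lambda^+_q\lambda^+_q)$ carries ghost number $(-2)$.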
Thus, since the bosonic ghosts $\lambda^+_q$ are real while $\mathbb{Q}^{susy}$ includes $\lambda^+_q$ in an essential manner, it is necessary to make a `regularization' allowing us to consider, at the intermediate stages, a nonvanishing $\lambda^+_q$ which nevertheless satisfies $\lambda^+_q\lambda^+_q=0$. This is possible if we {\it allow $\lambda^+_q$ to be complex} ({\it cf.} the pure spinors of Berkovits \cite{NB-pure}), \begin{eqnarray}\label{ll-cl} \lambda^+_q\mapsto \tilde{\lambda}^+_q\; : \quad \fbox{$\; \tilde{\lambda}^+_q\tilde{\lambda}^+_q=0 \; , \quad (\tilde{\lambda}^+_q)^*\not= \tilde{\lambda}^+_q \qquad \Rightarrow \qquad \tilde{\lambda}^+_q\not= 0\; $ is possible }\; . \qquad \end{eqnarray} A suggestive form of the general solution of $\tilde{\lambda}^+_q\tilde{\lambda}^+_q=0$ is \begin{eqnarray}\label{l-null} \tilde{\lambda}^+_q= \epsilon^+ \; (n_q +i m_q)\; , \qquad \vec{n}^2:=n_q n_q=1\; , \quad \vec{m}^2:=m_qm_q=1 \; , \qquad \vec{n}\vec{m}= n_qm_q= 0 \; , \end{eqnarray} where $n_q$ and $m_q$ are two real, mutually orthogonal unit $SO(16)$ vectors ($SO(9)$ spinors) and $\epsilon^+$ is a real number. The only real representative of the family of complex $SO(9)$ spinors $\tilde{\lambda}^+_q$ in (\ref{l-null}) is $\tilde{\lambda}^+_q=0$; this corresponds to setting the `regularization parameter' $\epsilon^+$ equal to zero. The `regularized' BRST charge is thus complex. It contains the complex ghost $\tilde{\lambda}^+_q$ rather than the real ${\lambda}^+_q$ of (\ref{Qsusy}), but does not contain $(\tilde{\lambda}^+_q)^*$. It acts on the space of wavefunctions depending, among other configuration space variables, on the complex $\tilde{\lambda}^+_q$. Since the discussion of the previous section is not affected by the above complexification ${\lambda}^+_q\mapsto \tilde{\lambda}^+_q$, we conclude that the non-trivial cohomology states of the complexified BRST charge are wavefunctions of the form \begin{eqnarray}\label{Phi(reg)} \Phi= \delta (\tilde{\lambda}^+_q\tilde{\lambda}^+_q)\; \Phi^{++}(\tilde{\lambda}^+_q \, , \, c^{++}\, ; \; x^{++} , \theta^{+}_q \, , \; \ldots )\; . \end{eqnarray} As the BRST charge $\mathbb{Q}^{susy}$ does not contain any derivative with respect to the bosonic ghost ${\lambda}^+_q$, its regularized version acts only on the $\Phi^{++}$ part of the function $ \Phi$ in (\ref{Phi(reg)}). Namely, using $\tilde{\lambda}^+_q\tilde{\lambda}^+_q\, \delta (\tilde{\lambda}^+_p\tilde{\lambda}^+_p)=0$, which removes the third term of (\ref{Qsusy}), one finds \begin{eqnarray}\label{QsusyPSI=} \mathbb{Q}^{susy}\vert_{_{{\lambda}^+_p\mapsto \tilde{\lambda}^+_p}} \; \delta (\tilde{\lambda}^+_q\tilde{\lambda}^+_q)\; \Phi^{++}(\tilde{\lambda}^+_q\; \, , \, c^{++}\, ; \ldots ) = \delta (\tilde{\lambda}^+_q\tilde{\lambda}^+_q)\; \tilde{Q}^{susy} \Phi^{++}(\tilde{\lambda}^+_q\; \, , \, c^{++}\, ; \ldots ) \; , \qquad \end{eqnarray} where we have introduced the non-Hermitian BRST charge ({\it cf.} (\ref{Qsusy})) \begin{eqnarray}\label{tQsusy} \fbox{$\; \tilde{Q}^{susy}= \tilde{\lambda}^+_q \; D_q^{-} + i c^{++} \partial_{++} \qquad \; , \qquad \tilde{\lambda}^+_q \tilde{\lambda}^+_q = 0 \; $} \; , \qquad \tilde{Q}^{susy}=\mathbb{Q}^{susy}\vert_{{\lambda}^+_q\mapsto \tilde{\lambda}^+_q\; : \; \tilde{\lambda}^+_q\tilde{\lambda}^+_q=0}\; , \qquad \end{eqnarray} which can be used to reformulate the regularized cohomology problem.
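Indeed, with the parametrization (\ref{l-null}) one finds \begin{eqnarray} \tilde{\lambda}^+_q \tilde{\lambda}^+_q = (\epsilon^+)^2 \, (n_q+im_q)(n_q+im_q) = (\epsilon^+)^2 \left( \vec{n}^2 - \vec{m}^2 + 2i\, \vec{n}\vec{m}\right) = 0\; , \qquad \tilde{\lambda}^+_q (\tilde{\lambda}^+_q)^* = 2 (\epsilon^+)^2 \; , \nonumber \end{eqnarray} so that the constraint holds identically while $\tilde{\lambda}^+_q\not= 0$ whenever $\epsilon^+\not=0$.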
Note that, once we have concluded that the nontrivial cohomology of $\mathbb{Q}^{susy}$ is determined by wavefunctions of the form (\ref{Phi(reg)}), we can reduce the search for nontrivial cohomology to the set of such functions, restricting as well the arbitrary superfields $\chi$ of the BRST transformations (\ref{Phi=Phi+Qchi}) to be of the form $\chi = \delta (\tilde{\lambda}^+_q \tilde{\lambda}^+_q) \chi^{++}$. Then the regularized cohomology problem for the complexified BRST operator ($\mathbb{Q}^{susy}$ of (\ref{Qsusy}), now depending on the complexified bosonic ghost $\tilde{\lambda}^+_q$) reduces to the search for the value at {\it $\tilde{\lambda}^+_q=0$} of the functions describing the non-trivial cohomology of the $\tilde{Q}^{susy}$ operator of Eq. (\ref{tQsusy}), \begin{eqnarray}\label{CHtQsusy} \tilde{Q}^{susy} \Phi^{++} =0 \; , \qquad \Phi^{++} \sim \Phi^{++\,\prime }= \Phi^{++} + \tilde{Q}^{susy}\chi^{++}\; . \qquad \end{eqnarray} This problem (\ref{CHtQsusy}) can be reformulated in terms of the components $\Phi_0^{++}$ and $\Psi^{(0)}$ of the wavefunction superfield $\Phi^{++} = \Phi_0^{++} + c^{++} \Psi^{(0)}$, giving rise to the following equations \begin{eqnarray}\label{EqCHtQ} & \tilde{\lambda}^+_q D^-_q \Phi^{++}_0 = 0\; , \qquad \qquad & \tilde{\lambda}^+_q D^-_q \Psi^{(0)} = i \partial_{++}\Phi^{++}_0\; . \qquad \\ \label{Phi=Phi+tQchi} & \Phi^{++}_0 \sim \Phi_0^{++}{}^\prime = \Phi^{++}_0 + \tilde{\lambda}^+_q D^-_q \chi^{++}_0 \; , \qquad & \Psi^{(0)} \sim \Psi^{(0)\prime} = \Psi^{(0)} + i \partial_{++}\chi^{++}_0 + \tilde{\lambda}^+_q D^-_q K^{(0)} \; . \qquad \end{eqnarray} To obtain the cohomology of $\mathbb{Q}^{susy}$, we have to set $\tilde{\lambda}^+_q=0$ at the end to remove the regularization; thus we are really interested in the wavefunctions at $\tilde{\lambda}^+_q=0$: $\; \Phi_0^{++}(0):= \Phi_0^{++}\vert_{\tilde{\lambda}^+_q=0}= \Phi_0^{++}(0 \; , \; x^{++} , \theta^{+}_q \, ;\; \ldots )$, $\Psi^{(0)}(0):= \Psi^{(0)}\vert_{\tilde{\lambda}^+_q=0}= \Psi^{(0)}(0 \; , \; x^{++} , \theta^{+}_q \, ;\; \ldots )$. Eqs. (\ref{EqCHtQ}), (\ref{Phi=Phi+tQchi}) show that the `superfield' cohomology problem of Eq. (\ref{CHtQsusy}) includes a (pure-spinor like) cohomology problem for the leading component $\Phi_0^{++}$ of the $\Phi^{++}$ superfield, \begin{eqnarray}\label{CHtQ0} \tilde{\lambda}^+_q D^-_q \Phi^{++}_0 = 0\; , \qquad \Phi^{++}_0 & \mapsto \Phi_0^{++}{}^\prime = \Phi^{++}_0 + \tilde{\lambda}^+_q D^-_q \chi^{++}_0 \; . \qquad \end{eqnarray} Let us recall that we are interested in the cohomology problems for fixed ghost number \begin{eqnarray}\label{g0hN} g =gh_{\#}(\Phi )= g_0 -2 \; , \qquad g_0:= gh_{\#}(\Phi_0^{++} )\; . \qquad \end{eqnarray} As far as the remaining part of the cohomology problem (\ref{CHtQsusy}) (or (\ref{EqCHtQ}), (\ref{Phi=Phi+tQchi})) is concerned, \begin{eqnarray}\label{CHtQ00} \tilde{\lambda}^+_q D^-_q \Psi^{(0)} = i \partial_{++}\Phi^{++}_0\; , \qquad \Psi^{(0)} & \mapsto \Psi^{(0)\prime} = \Psi^{(0)} + i \partial_{++}\chi^{++}_0 + \tilde{\lambda}^+_q D^-_q K^{(0)} \; , \qquad \end{eqnarray} the presence of the $i \partial_{++}\chi^{++}_0$ term in the BRST transformations suggests its triviality (which is indeed the case, see below). Thus we have reduced our cohomology problem for the Lorentz harmonics BRST charge (\ref{Qsusy}) to the auxiliary cohomology problem (\ref{CHtQ0}) for the charge (\ref{tQsusy}).
Before turning to it, we would like to comment on the relation of our BRST charge (\ref{tQsusy}), involving a complex $SO(9)$ spinor $\tilde{\lambda}^+_q$ satisfying $\tilde{\lambda}^+_q\tilde{\lambda}^+_q=0$, to the Berkovits BRST charge constructed with the $D$=11 pure spinors \cite{NB-pure}. \bigskip \subsection{Relation with the Berkovits pure spinors} \label{CohQ-3} The $D=11$ pure spinors of Berkovits obey \cite{NB-pure} ${\Lambda}\Gamma_a{\Lambda}=0$ (\ref{NB-pureSp}) and, in general, carry $46$ ($23$ complex) degrees of freedom. A specific $39$--parameter solution $\tilde{\Lambda}$ can be found using the spinor moving frame approach (see \cite{BZ-str,BL98'}). It is given by \footnote{Indeed, using the constraint (\ref{vv=uG}) one finds that $\tilde{\Lambda}\tilde{\Gamma}_a\tilde{\Lambda}= \tilde{\lambda}^+_q v^-_q\tilde{\Gamma}_av^-_p \, \tilde{\lambda}^+_p = u_a^{--} \ \tilde{\lambda}^+_q \tilde{\lambda}^+_q = 0$ since $\tilde{\lambda}^+_q \tilde{\lambda}^+_q = 0$.} \begin{eqnarray}\label{Lpure=lv} \fbox{$\; \tilde{\Lambda}_\alpha = \tilde{\lambda}^+_q v_{\alpha}{}^-_q\;$} \; , \qquad \{ v_{\alpha}{}^-_q\}= {SO(1,10)\over SO(1,1)\otimes SO(9)\otimes K_9}= S^9 \; , \qquad \tilde{\lambda}^+_q\tilde{\lambda}^+_q=0 \quad \Rightarrow \quad \tilde{\Lambda}\Gamma_a\tilde{\Lambda}=0 \; . \qquad \end{eqnarray} Thus the complex $16$ component $SO(9)$ spinor $\tilde{\lambda}^+_q$, satisfying $\tilde{\lambda}^+_q \tilde{\lambda}^+_q = 0$, carries $30$ of the $39$ degrees of freedom of the (Berkovits-type) pure spinor (\ref{Lpure=lv}). The remaining $9$ degrees of freedom of this pure spinor correspond to the $S^9$ sphere of the light--like eleven--dimensional momentum modulo its energy. Furthermore, as the $\kappa$--symmetry generator $D^-_q$ is basically $v^{-\alpha}_q d_\alpha$, one finds that the Berkovits BRST charge of Eq. (\ref{QbrstB}) can be obtained from our (\ref{tQsusy}) by replacing the composite pure spinor $\tilde{\lambda}^+_qv^{-\alpha}_q$ (\ref{Lpure=lv}) by a generic pure spinor $\Lambda^\alpha$ and by ignoring the second, quite simple, $c^{++}$ term in (\ref{tQsusy}). In other words, \begin{eqnarray}\label{tQsusy=QB} \tilde{Q}^{susy} = \mathbb{Q}^{B}\vert_{\Lambda^\alpha = \tilde{\lambda}^+_q v^{-\alpha}_q } + i c^{++} \partial_{++}\; . \qquad \end{eqnarray} Of course, the generic Berkovits pure spinor \cite{NB-pure} in $D$=11 carries $46$ real degrees of freedom, while the composite pure spinor (\ref{Lpure=lv}) only carries $39$. However, it is not obvious that all the degrees of freedom in a pure spinor are equally important for the description of the superparticle in the Berkovits approach. Notice in particular that only the pure spinor cohomology at vanishing bosonic ghost describes the superparticle, while the complete pure spinor cohomology is much richer and corresponds to the spinorial cohomologies of \cite{SpinCohom02}. As far as the generalization to the case of the superstring is concerned, it is important to note that {\it in the $D=10$ dimensional case}, which corresponds to the Green--Schwarz superstring, {\it Eq. (\ref{Lpure=lv}) does provide the {\bf general solution} of the pure spinor constraint (\ref{NB-pureSp})}. Indeed, in $D=10$ this solution carries {\bf 16+8-2=22 } degrees of freedom, the same number as the generic pure spinor.
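Schematically, the counting behind these statements is \begin{eqnarray} D=11: \quad \underbrace{2\cdot 16 - 2}_{\tilde{\lambda}^+_q\; : \;\; \tilde{\lambda}\tilde{\lambda}=0} + \underbrace{9}_{v_{\alpha}{}^-_q\, \in \, S^9} = 39 \; < \; 46\; , \qquad \qquad D=10: \quad (2\cdot 8 - 2) + 8 = 22 \; , \nonumber \end{eqnarray} so that in $D=10$ the composite (Lorentz harmonic) pure spinor carries as many degrees of freedom as the generic one, while in $D=11$ it does not.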
Thus one may expect that the substitution of the solution (\ref{Lpure=lv}) for the pure spinor used to describe the superstring in \cite{NB-pure} ({\it i.e.} replacing the pure spinor approach by a pragmatically designed Lorentz harmonic approach) should not produce any additional anomaly. Coming back to our M0--brane case, we conclude that a counterpart (\ref{tQsusy}) of the Berkovits BRST charge (\ref{QbrstB}) appears, through the regularization ($\lambda^+_q\mapsto \tilde{\lambda}^+_q\not= (\tilde{\lambda}^+_q)^*$), from the BRST charge (\ref{Qsusy}) obtained directly when the $D=11$ superparticle is quantized in its twistor--like Lorentz harmonics formulation (\ref{11DSSP}). \bigskip \subsection{Cohomology of $\tilde{\lambda}^+_q D^-_q$ } \label{CohQ-4} The physical spectrum of the model is found by solving the BRST cohomology problem in a sector of the Hilbert space with fixed ghost number. When dealing with the $\Phi_0^{++}$ part of the wavefunction $\Phi^{++}$, $\;\Phi^{++}= \Phi_0^{++}+ c^{++}\Psi^{(0)}$, the only remaining carrier of the ghost number is the bosonic ghost $\tilde{\lambda}^+_q$. Thus the ghost number $g_0=g+2$ of the wavefunction $\Phi_0^{++}$ (see (\ref{g0hN})) coincides with its homogeneity degree in $\tilde{\lambda}^+_q$, \begin{eqnarray}\label{gh=homD} & \Phi_0^{++}( z \tilde{\lambda}^+_q \, , \, \ldots ) = z^{{g_{_0}}} \Phi_0^{++}(\tilde{\lambda}^+_q , \, \ldots ) \qquad \Leftrightarrow \qquad gh_{\#} (\Phi_0^{++}) =g_0 \; . \qquad \end{eqnarray} We are interested in $ \Phi= \delta (\tilde{\lambda}^+_q\tilde{\lambda}^+_q)\; \Phi^{++}(\tilde{\lambda}^+_q, \, \ldots )$, Eq. (\ref{Phi(reg)}), which, after removing the regularization, can be written as $ \Phi= \delta ({\lambda}^+_q{\lambda}^+_q)\; (\Phi_0^{++}\vert_{{\lambda}^+_q=0} + c^{++} \Psi^{(0)}\vert_{{\lambda}^+_q=0})$. This means that we are actually interested in the cohomologies of the operator $\tilde{\lambda}^+_q D^-_q$ at vanishing bosonic ghost, $\tilde{\lambda}^+_q=0$. One immediately concludes that we cannot have nontrivial cohomology with $\Phi_0^{++}$ of ghost number $g_0>0$ since, due to (\ref{gh=homD}), $\Phi_{0\, g_0>0}^{++}({\lambda}^+_q=0)=0$. Furthermore, the values of the ghost number $g_0<0$ are actually prohibited for $\Phi^{++}=\Phi_0^{++}+ \ldots $ in (\ref{Phi(reg)}), because $ \Phi_{0\, g_0< 0}^{++}\to \infty$ as ${\lambda}^+_q\to 0$, so that the expression for $\Phi$ in (\ref{Phi(reg)}) diverges (as $\delta (\lambda^2) \cdot \infty$) and cannot describe a physical state. Thus a non-trivial BRST cohomology for (\ref{Qsusy}) may come from the $\tilde{\lambda}^+_q D^-_q$ cohomologies in the Hilbert space sector of ghost number $g_0=0$ {\it only}. This corresponds to $g=g_0-2=-2$ for the ghost number of the complexified $\mathbb{Q}^{susy}$--closed, non-exact wavefunction $\Phi$ in Eq. (\ref{Phi(reg)}) (see Eq. (\ref{g0hN})). Assuming the wavefunctions $\Phi_0^{++}$ to be analytic in $\tilde{\lambda}^+_q$, one finds that, being homogeneous of degree zero, the wavefunction is actually independent of $\tilde{\lambda}^+_q$. Then $\tilde{\lambda}^+_q D^-_q \Phi_0^{++}=0$ actually implies $D^-_q \Phi_0^{++}=0$. As far as the BRST transformations $\Phi_0^{++}\mapsto \Phi_0^{++}{}^\prime = \Phi^{++}_0 + \tilde{\lambda}^+_q D^-_q \chi^{++}_0$ of Eq. (\ref{CHtQ0}) are concerned, the above analyticity assumption requires $\chi^{++}_0$ to be an analytic function of $\tilde{\lambda}^+_q$ with homogeneity degree $-1$, and no such nonvanishing analytic function exists.
Hence the calculation of the reduced BRST cohomology (\ref{CHtQ0}) (the $\tilde{\lambda}^+_q D^-_q$--cohomology) in the space of analytic wavefunctions $\Phi_0^{++}$ of ghost number zero reduces to calculating the kernel of the $\tilde{\lambda}^+_q D^-_q$ operator which, in the sector of ghost number zero, coincides with the kernel, $D^-_q\Phi_0^{++}=0$, of the $\kappa$--symmetry generator $D^-_q$, \begin{eqnarray}\label{gh=0(coh)} & g_0:= gh_{\#} \Phi^{++}_0 =0 \; , \qquad \tilde{\lambda}^+_q D^-_q \Phi^{++}_0 =0 \quad & \Rightarrow \quad D^-_q \Phi^{++}_0 =0 \; . \qquad \end{eqnarray} With the realization (\ref{D-q=}), this equation implies the vanishing of all the higher coefficients in the decomposition of $\Phi^{++}_0$ in a power series in $\theta^+_q$, and requires that the leading ($\theta^+_q$ independent) component be independent of $x^{++}$. In other words, the general solution of this equation is a function independent of both $\theta^+_q$ and $x^{++}$, \begin{eqnarray}\label{(gh0coh)=} g_0:= gh_{\#} \Phi^{++}_0 =0 \; , \quad \tilde{\lambda}^+_q D^-_q \Phi^{++}_0 =0 \;\; \Rightarrow \quad & \Phi^{++}_0 \not= \Phi^{++}_0(x^{++}\, , \, \theta^+_q) \; \qquad \\ \nonumber & \qquad \left({\partial\;\;\; \over \partial x^{++}}\Phi^{++}_0=0\; , \;\; {\partial\;\;\over \partial \theta^+_q }\Phi^{++}_0=0\; \right)\; . \end{eqnarray} The ghost number of the second component $\Psi^{(0)}$ of the wavefunction $\Phi^{++}=\Phi_0^{++} + c^{++}\Psi^{(0)}$ is $gh_\# (\Psi^{(0)})= g_0 - 1$, so that when $g_0=0$ and the nontrivial cohomologies can be carried by $\Phi_0^{++}$, $gh_\# (\Psi^{(0)})= -1$, which, according to the discussion above, requires $\Psi^{(0)}=0$. On the other hand, when $g_0=1$, so that the wavefunction $\Phi_0^{++}$ cannot describe a nontrivial cohomology of $\mathbb{Q}^{susy}$, one can find a nonzero BRST closed $\Psi^{(0)}$ obeying the first equation in (\ref{CHtQ00}). However, the second equation in (\ref{CHtQ00}) allows one to `gauge' $\Psi^{(0)}$ away by using the parameter $\chi^{++}_0$, so that the cohomology problem defined by Eqs. (\ref{CHtQ00}) has only the trivial solution. Thus the nontrivial {\it cohomology of the BRST charge $\mathbb{Q}^{susy}$} (\ref{Qsusy}) is described by the cohomology of the complex $\tilde{Q}^{susy}$ (\ref{tQsusy}) in the sector of ghost number $g_0:=gh_{\#}(\Phi^{++})=0$ (which corresponds to $g:=gh_{\#}(\Phi)=-2$ for $\Phi$ in (\ref{Phi(reg)})), which in turn {\it is described by wavefunctions that depend on the `physical variables' only}. This actually reduces the covariant quantization problem to the quantization of the physical degrees of freedom, {\it i.e.} to a counterpart of the twistor quantization presented in \cite{BdAS2006}. The fact that the cohomologies of the BRST operator are described by wavefunctions that do not depend on the variables on which the constraints $D^-_q$ and $\partial_{++}$ act nontrivially ($x^{++}$ and $\theta^+_q$ in (\ref{Phi(reg)})) is related to properties specific to the superparticle case, where there exists a coordinate basis in which the action is written in terms of variables invariant under both $\kappa$--symmetry (generated by $D^-_q$ above) and $b$--symmetry (generated by $\partial_{++}$). The action in such a coordinate basis will be discussed in the next Sec. \ref{SecAnalB}. Let us note that the above effect does not happen in the superstring case, and hence in the cohomology problem for the superstring counterpart of the BRST charge (\ref{Qsusy}) such a simplification cannot occur.
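To spell out the step leading to (\ref{(gh0coh)=}): expanding $\Phi^{++}_0= \phi (x^{++}) + \theta^+_p \phi_p (x^{++}) + {1\over 2}\theta^+_p\theta^+_r \phi_{pr}(x^{++}) + \ldots$ (with $\phi_{pr}=-\phi_{rp}$) and using the realization (\ref{D-q=}), one finds \begin{eqnarray} D^-_q \Phi^{++}_0 = \phi_q + \theta^+_p \left( \phi_{qp} + i\, \delta_{qp}\, \partial_{++}\phi \right) + {\cal O}\left((\theta^+)^2\right) \; , \nonumber \end{eqnarray} so that $D^-_q \Phi^{++}_0=0$ requires, order by order, $\phi_q=0$, $\phi_{qp}=0$, $\partial_{++}\phi =0$, {\it etc.}, leaving only the constant (in $x^{++}$ and $\theta^+_q$) solution.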
We have to stress that, of all the cohomologies of the complex Berkovits--like BRST charge $\tilde{Q}^{susy}$, only their values at vanishing bosonic ghost, $\tilde{\lambda}^+_q=0$, describe the cohomologies of the M0--brane BRST charge $\mathbb{Q}^{susy}$ and, hence, the superparticle spectrum. The $\tilde{Q}^{susy}$ cohomologies for $\tilde{\lambda}^+_q\not= 0$, corresponding to higher ghost numbers, are richer and are related to the spinorial cohomologies of \cite{SpinCohom02}. \bigskip \section{M0--brane and its quantization in the covariantized light--cone basis. } \label{SecAnalB} The simple structure of the cohomology of the M0--brane BRST charge $\mathbb{Q}^{susy}$ can be explained by studying the spinor moving frame action (\ref{11DSSP}) in a different basis of canonical variables, in particular in the {\it covariantized light--cone basis} \cite{Sok,NPS,GHT93}. The coordinates of this basis, $(x^{\pm\pm},x^{i}, \theta^{\pm}_q )$, are constructed from those of the standard superspace basis $Z^M=(x^m \; , \; \theta^\alpha )$ and the harmonics as (see \cite{GHT93}, {\it cf.} \cite{Sok}) \begin{eqnarray}\label{analX-Th} x^{\pm\pm}=x^m u_m^{\pm\pm}\; , \qquad x^{i}=x^m u_m^{\; i}\; , \qquad \theta^{\pm}_q := \theta^\alpha v_{\alpha q}^{\; \pm} \; . \end{eqnarray} The change of variables (\ref{analX-Th}) in the superparticle action (\ref{11DSSP}) gives \begin{eqnarray}\label{11DSSPan} S:= \int d\tau L &=& \int_{W^1} \left( {1\over 2} \rho^{++} Dx^{--} - {1\over 2} \rho^{++} \Omega^{--i} \tilde{x}^i - i D\theta_q \; \theta_q \right) , \qquad \end{eqnarray} where \begin{eqnarray}\label{tXi=} & \tilde{x}^i = x^i + i \theta^-_p \gamma^i_{pq} \theta^+_q := x^i + i \theta^{\alpha}\, v_{\alpha p}^{\; -} \gamma^i_{pq} v_{\beta q}^{\; +} \, \theta^{\beta} \; , \qquad & Dx^{--}:= dx^{--}+ 2\Omega^{(0)} x^{--}\; , \qquad \\ \label{Th=} & \theta_q = \sqrt{\rho^{++}}\; \theta^-_q := \sqrt{\rho^{++}}\; \theta^{\alpha} v_{\alpha q}^{\; -}\; , \qquad & D\theta_q := d\theta_q + {1\over 4}\Omega^{ij}\theta_p \gamma^{ij}_{pq} \; , \qquad \end{eqnarray} and $\Omega^{(0)}$, $\Omega^{ij}$ are the $SO(1,1)$ and $SO(9)$ Cartan forms, see Eq. (\ref{Omab}). Notice that the action (\ref{11DSSPan}) is given in terms of $\kappa$-- and $b$--invariant variables, so that no further gauge fixing is needed. Indeed, the irreducible $\kappa$--symmetry of the action (\ref{11DSSP}) is characterized by Eq. (\ref{kappa-irr}), \begin{eqnarray}\label{kappa-sym} \delta_\kappa x^m = i \delta_\kappa \theta^\alpha \Gamma^m_{\alpha\beta}\theta^\beta\; , \qquad \delta_\kappa \theta^\alpha = \kappa^{+q} v_q^{-\alpha} \; , \qquad \delta_\kappa v_\alpha{}^-_q =0 = \delta_\kappa u_m^{--}\; . \end{eqnarray} For the fermionic coordinate functions in the covariantized light--cone basis one finds that $\theta^+_q$ is shifted additively by the $16$--component $\kappa$--symmetry parameter, $\delta_\kappa \theta^+_q= \kappa^{+q}$, while $\delta_\kappa \theta^-_q=0$. Furthermore, $\delta_\kappa x^{++} = 2i \kappa^{+q} \theta^+_q$, while $\delta_\kappa x^{--}=0$ and, although $\delta_\kappa x^{i} = i \kappa^{+q} \gamma^i_{qp}\theta^-_p$, the $\tilde{x}^i $ of Eq. (\ref{tXi=}) is $\kappa$--invariant, $\delta_\kappa \tilde{x}^i =0$.
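The $\kappa$--invariance of $\tilde{x}^i$ results from a simple cancellation: using $\delta_\kappa\theta^-_q=0$, $\delta_\kappa\theta^+_q=\kappa^{+q}$, the symmetry of $\gamma^i_{qp}$ and the anticommutativity of $\kappa^{+q}$ and $\theta^-_p$, \begin{eqnarray} \delta_\kappa \tilde{x}^i = \delta_\kappa x^i + i\, \theta^-_p\gamma^i_{pq}\, \delta_\kappa \theta^+_q = i\, \kappa^{+q}\gamma^i_{qp}\theta^-_p - i\, \kappa^{+q}\gamma^i_{qp}\theta^-_p = 0 \; . \nonumber \end{eqnarray}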
Thus all the variables entering the action (\ref{11DSSPan}) are inert under the $\kappa$--symmetry, \begin{eqnarray}\label{kappa-inv} \delta_\kappa x^{--}=0\; , \qquad \delta_\kappa \tilde{x}^i := \delta_\kappa x^{i} - i \kappa^{+q} \gamma^i_{qp}\theta^-_p =0 \; , \qquad \delta_\kappa \theta^-_q=0\; , \qquad i_\kappa \Omega^{--i}=0 \; . \qquad \end{eqnarray} This completes the proof that the mere change of variables (\ref{analX-Th}) in the spinor moving frame action (\ref{11DSSP}) results in the functional (\ref{11DSSPan}), which involves $\kappa$--invariant variables only. This phenomenon of an automatic gauge fixing, noticed already in \cite{Sok}, explains the simple structure, discussed above, of the cohomology of the BRST operator constructed from just the $\kappa$-- and $b$--symmetry generators $D^-_q$ and $\partial_{++}$. The above `automatic' gauge fixing does not occur in the superstring case and, hence, the cohomologies of the corresponding Lorentz harmonics BRST operators are expected to be richer. \subsection{On BRST quantization of the M0--brane in the covariantized light--cone basis} \label{SecAnal} Hence, one difference between the original action of Eq. (\ref{11DSSP}) and the action in the covariantized light--cone basis (\ref{analX-Th}), Eq. (\ref{11DSSPan}), is that the latter contains only variables invariant under the $\kappa$-- and $b$--symmetries. Thus changing the basis to (\ref{analX-Th}) automatically provides the $\kappa$--symmetry and $b$--symmetry gauge fixed action (this effect was first noticed in \cite{Sok}). Another difference between the two actions is that the harmonics $v_\alpha{}^-_{q}$ enter (\ref{11DSSPan}) {\it only} through the Cartan forms $\Omega^{--j}$, $\Omega^{(0)}$, $\Omega^{ij}$, defined by Eqs. (\ref{dv-q}), (\ref{Omab}), and entering the canonical Liouville one form on the $SO(1,D-1)$ group manifold as defined in Eqs. (\ref{H0:=Om(gen)--}), (\ref{H0:=OmD-L}), \begin{eqnarray} \label{H0:=OmD-L1} & {1\over 2}\Omega^{(a)(b)}\mathbf{d}_{(a)(b)} := - {1\over 2}\Omega^{--i}\mathbf{d}^{++i} - {1\over 2}\Omega^{++i}\mathbf{d}^{--i}- \Omega^{(0)} \mathbf{d}^{(0)} + {1\over 2} \Omega^{ij} \mathbf{d}^{ij} \; . \qquad \end{eqnarray} \subsubsection{\small Hamiltonian mechanics in the covariantized light--cone basis} Let us define the canonical momenta in the usual way and the covariant canonical momenta by (\ref{H0:=OmD-L}), and remove the second class constraints on the harmonics by using the Dirac brackets (\ref{DB-harm}) (see Sec. \ref{DBsec}). Doing the same for the fermionic second class constraints, we identify the $16$ Grassmann variables with their momenta, \begin{eqnarray}\label{DB(ThTh)=} {} \{ \theta_{q}\, , \, \theta_{p} \}_{_{DB}} = -{i\over 2} \delta_{qp} \; . \end{eqnarray} Then the bosonic `primary' constraints implied by the action (\ref{11DSSPan}) read \begin{eqnarray} \label{IclAnn0} \mathbf{d}^{(0)}+ \rho^{++}x^{--}\approx 0\; , \qquad \mathbf{d}^{ij}+ {i\over 2}\theta \gamma^{ij}\theta \approx 0\; , \qquad \mathbf{d}^{--i}\approx 0\; , \qquad \\ \label{AnPrimDj} \mathbf{d}^{++i}- \rho^{++}\tilde{x}^{i}\approx 0\; , \qquad \tilde{P}_{j}\approx 0\; , \qquad \\ \label{AnPrimCP} P_{--}-{1\over 2}\rho^{++}\approx 0\; , \qquad P^{(\rho )}_{++}\approx 0\; . \qquad \end{eqnarray} Clearly, the last two constraints, Eqs. (\ref{AnPrimCP}), provide a resolved pair of second class constraints, which allows us to remove the variable $\rho^{++}$ by replacing it by $2P_{--}$.
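The bracket (\ref{DB(ThTh)=}) appears in the standard manner. As a sketch (denoting by $\pi^{[\theta]}_q$ the momentum conjugate to $\theta_q$, with $\{\pi^{[\theta]}_q , \theta_p\}_{_{PB}}=-\delta_{qp}$ as in (\ref{PB=XP}), and with one standard choice of sign conventions): the kinetic term $-iD\theta_q\,\theta_q$ of (\ref{11DSSPan}) implies the fermionic second class constraints $\chi_q:= \pi^{[\theta]}_q + i\theta_q\approx 0$ with $\{\chi_q \, , \, \chi_p\}_{_{PB}}= -2i\delta_{qp}$, so that \begin{eqnarray} \{ \theta_q \, , \, \theta_p \}_{_{DB}} = \{ \theta_q , \theta_p \}_{_{PB}} - \{ \theta_q , \chi_r \}_{_{PB}}\; {i\over 2}\,\delta_{rs}\; \{ \chi_s , \theta_p \}_{_{PB}} = 0 - (-\delta_{qr})\, {i\over 2}\,\delta_{rs}\, (-\delta_{sp}) = -{i\over 2}\,\delta_{qp} \; . \nonumber \end{eqnarray}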
Similarly, the pairs of constraints in (\ref{AnPrimDj}) allow us to remove the orthogonal $\tilde{x}^{i}$ coordinates (the non-covariant counterparts of which describe the physical degrees of freedom in the standard light--cone gauge description of the Brink-Schwarz superparticle and the Green-Schwarz superstring) by expressing them through the covariant momenta $\mathbf{d}^{++i}$ of the harmonic variables and the momentum $P_{--}$, \begin{eqnarray} \label{tXi=d/2P} \tilde{x}^{i} = \; {\; \mathbf{d}^{++i} \over 2 P_{--}} \; . \qquad \end{eqnarray} The remaining constraints, Eqs. (\ref{IclAnn0}), \begin{eqnarray} \label{IclAnn} \widetilde{\mathbf{d}^{(0)}}:= \mathbf{d}^{(0)}+ 2x^{--}P_{--}\approx 0\; , \qquad \widetilde{\mathbf{d}^{ij}}:=\mathbf{d}^{ij}+ {i\over 2}\theta \gamma^{ij}\theta \approx 0\; , \qquad \widetilde{\mathbf{d}^{--i}}:= \mathbf{d}^{--i}\approx 0\; , \qquad \end{eqnarray} are first class. Their Dirac brackets produce the $(so(1,1)\oplus so(9)) \subset\!\!\!\!\!\!+ K_9$ algebra, which can be obtained from the $so(1,10)$ algebra of Eq. (\ref{PB=d'd}) by omitting the relations involving $\mathbf{d}^{++i}$, \begin{eqnarray}\label{DB(d,d)=Ann} && {} [\widetilde{\mathbf{d}^{ij}}\; , \; \widetilde{\mathbf{d}^{kl}}]_{_{DB}} = 2\widetilde{\mathbf{d}^{k[i} } \delta^{j]l} - 2\widetilde{\mathbf{d}^{l[i} } \delta^{j]k}\; , \qquad {} [\widetilde{\mathbf{d}^{(0)}}\; , \; \widetilde{\mathbf{d}^{ij}}]_{_{DB}} = 0 \; , \qquad \nonumber \\ && {} [\widetilde{\mathbf{d}^{(0)}}\; , \; \widetilde{\mathbf{d}^{-- i}}]_{_{DB}} = - 2 \widetilde{\mathbf{d}^{-- i}} \; , \qquad [\widetilde{\mathbf{d}^{ij}}\; , \; \widetilde{\mathbf{d}^{-- k}}]_{_{DB}} = 2\widetilde{\mathbf{d}^{-- [i}} \delta^{j]k}\; , \qquad \nonumber \\ && {} [\widetilde{\mathbf{d}^{--i}}\; , \; \widetilde{\mathbf{d}^{-- j}}]_{_{DB}} = 0 \; . \qquad \end{eqnarray} No `W-deformation' occurs here. This is actually natural, as the {\it r.h.s.} of Eq. (\ref{d--d--DB}) was proportional to the square of the $\kappa$--symmetry generator, which is absent in the covariantized light--cone basis. \subsubsection{BRST charge for the first class constraints in the covariantized light--cone basis} In the covariantized light--cone basis, where the $\kappa$--symmetry and $b$--symmetry are automatically gauge fixed, the superparticle quantization might be based on the BRST operator for the algebra (\ref{DB(d,d)=Ann}) of the $[SO(1,1)\otimes SO(9)]\subset\!\!\!\!\!\!\times K_9$ symmetry, which appears here as the full BRST operator for the gauge symmetries of the M0-brane model, \begin{eqnarray}\label{Q(H)} {} \mathbf{Q}^{\!^{[SO(1,1)\otimes SO(9)]\subset\!\!\!\!\times K_9 }} &=& c^{++ i}D^{-- i} + {1\over 2} c^{ij} D^{ij} + c^{(0)}D^{(0)} - \nonumber \\ && - {1\over 2} c^{++ i}c^{ij} {\partial\;\; \over \partial c^{++ j}} + 2 c^{(0)}c^{++j} {\partial\;\; \over \partial c^{++ j}} + c^{ik}c^{jk} {\partial\;\; \over \partial c^{ij}}\; .
\qquad \end{eqnarray} Here $D^{(0)}$, $ D^{ij}$ and $D^{-- i}$ are the harmonic covariant derivatives representing the $SO(1,1)$, $SO(9)$ and $K_9$ generators and, thus, obeying the Lie algebra \begin{eqnarray}\label{H(d,d)=Ann} {} [D^{(0)}\; , \; D^{-- i} ]= - 2 D^{-- i} \; , \quad {} [D^{ij}\; , \; D^{-- k}] = 2D^{-- [i} \delta^{j]k}\; , \quad \nonumber \\ {} [D^{ij}\; , \; D^{kl}] = 2D^{k[i} \delta^{j]l} - 2D^{l[i} \delta^{j]k}\; , \quad {} [D^{(0)}\; , \; D^{ij}] = 0 \; , \qquad {} [D^{--i}\; , \; D^{-- j}] = 0 \; , \end{eqnarray} $c^{(0)}$, $ c^{ij}$ and $c^{++ i}$ are the fermionic ghosts for these symmetries, and the derivative with respect to the tensorial ghost is defined by ${\partial c^{i^\prime j^\prime} \over \partial \, c^{ij}}= 2\delta_{[i}^{i^\prime}\delta_{j]}^{j^\prime}$. \subsection{Covariant quantization of the physical degrees of freedom and hints of hidden symmetries} Although the quantization of the physical degrees of freedom in the covariantized light--cone basis ({\it cf.} \cite{Sok}, where the vector harmonics were used for the first time in a quantization of this type) is similar to the supertwistor quantization of \cite{BdAS2006}, we briefly discuss it here as it gives hints about possible hidden symmetries of $11D$ supergravity (see \cite{BdAS2006} for the discussion of $SO(16)$). As the first class constraints (\ref{IclAnn}) obey the Dirac bracket algebra (\ref{DB(d,d)=Ann}) isomorphic to that of $[SO(1,1)\otimes SO(9)]\subset\!\!\!\!\!\!\times K_9$ (no deformation appears), we can, following Dirac \cite{Dirac}, just impose their quantum counterparts $D^{(0)}$, $ D^{ij}$ and $D^{-- i}$ (\ref{H(d,d)=Ann}) as differential operator conditions on the wavefunction $\Phi$, \begin{eqnarray}\label{HPhi=0} {} D^{(0)}\Phi =0 \; , \qquad D^{ij}\Phi =0 \; , \qquad D^{--i}\Phi =0 \; . \qquad \end{eqnarray} In the purely bosonic limit the differential equations (\ref{HPhi=0}) are imposed on a wavefunction which depends on the spinorial harmonics (which, due to the second class constraints, parametrize the $Spin(1,10)$ group manifold, see Secs. 3.2--3.4) and on $\rho^{++}$. \footnote{Alternatively, one can consider a wavefunction dependent on the harmonics and $x^{--}$, but for our line of argument the use of wavefunctions dependent on $\rho^{++}$ ($=2P_{--}$, see (\ref{AnPrimCP})) is more convenient.} Imposing the conditions (\ref{HPhi=0}) is tantamount to requiring that, as a function of the harmonics, the wavefunction is now a function on the $S^9$ sphere, which (in the light of the primary constraint (\ref{P-rvv}) generalizing the Cartan--Penrose representation for a light--like vector to $D$=11) can be identified with the space of the light--like momentum modulo its scale. This scale of the massless particle momentum, the energy, can then be identified (again in the light of the Cartan--Penrose constraint (\ref{P-rvv})) with the Lagrange multiplier $\rho^{++}$. Then, as the canonical Hamiltonian $H_0$ corresponding to the action (\ref{11DSSPan}) is zero, $H_0\approx 0$, one concludes that, in the purely bosonic limit, the wavefunction is just an arbitrary function of the physical bosonic variables listed above, namely \begin{eqnarray}\label{Phi=S9R} \Phi \vert_{\theta_q =0} = \Phi_0(\mathbb{R}_+ \otimes \mathbb{S}^9)\; , \qquad \{ (v_{\alpha q}^-\, , \; \rho^{++}) \} = \mathbb{R}_+ \otimes \mathbb{S}^9 = \{ (p_{\underline{m}}\; : \; p^2:= p_{\underline{m}}p^{\underline{m}}=0 )\}\; .
\qquad \end{eqnarray} This result coincides with the one obtained in \cite{BdAS2006} in the framework of the supertwistor quantization of the M0--brane model. The complete M0--brane action (\ref{11DSSPan}) also includes the fermionic contribution $D\theta_q \; \theta_q= d\theta_q \; \theta_q + \Omega_{pq}\theta_{[p} \; \theta_{q]}$, where $\Omega_{pq}= - \Omega_{qp}:= {1\over 4} \Omega^{ij}\gamma^{ij}_{pq}$ is the $Spin(9)$ connection. Its presence modifies the $SO(9)$ generator by a term bilinear in the fermions (see Eq. (\ref{IclAnn})), but this does not change the conclusion about the wavefunction dependence on the bosonic configurational space coordinates (which, from the spacetime point of view, happen to parametrize the light-like momentum). Then, the fermionic variables $\theta_q $ obey the second class constraints stating that they are their own momenta, which can be treated in the strong sense after passing to the Dirac brackets (\ref{DB(ThTh)=}). In the quantum theory the Dirac bracket relation (\ref{DB(ThTh)=}) gives rise to the anticommutation relation stating that the Grassmann coordinate function of the M0--brane becomes Clifford algebra valued, \begin{eqnarray}\label{hThhTh=} {} \{ \hat{\theta}_{q}\, , \, \hat{\theta}_{p} \} = {1\over 2} \delta_{qp} \; , \qquad q=1,2,\ldots , 16 . \qquad \end{eqnarray} This $O(16)$ covariant Clifford algebra $\mathrm{C}\ell^{16}$ has a finite dimensional representation by the $256\times 256$ gamma matrices of $d=16$, \begin{eqnarray}\label{th=Gamma16} {} & \hat{\theta}_{q} = \, {1\over 2 }\, ({\Gamma}_{q})_{{\cal A}}{}^{{\cal B}} \; , \qquad {\cal A}\, , \, {{\cal B}} = 1, \ldots , 256 \; , \qquad q=1,2,\ldots , 16 \; . \qquad \end{eqnarray} Notice that the $O(16)$ symmetry of the Clifford algebra $\mathrm{C}\ell^{16}$ is the same $O(16)$ that we have met in the classical analysis of the spinor moving frame action, sec. \ref{O(16)}. Indeed, it acts in the same way and on the same indices, since $\theta_q = \sqrt{\rho^{++}}\; \theta^{\alpha} v_{\alpha q}^{\; -}$, Eqs. (\ref{analX-Th}), (\ref{Th=}). Thus our spinor moving frame formulation (\ref{11DSSP}) makes manifest, already at the classical level, the $SO(16)$ symmetry which, as we will see in a moment, plays an important role in the M0--brane quantization. But before that, let us make the following comments. Firstly, substituting for $\theta_q$ its contraction with an $SO(16)$ matrix, $\theta_q \mapsto \theta_p S_{pq}$, would produce the covariant derivative with the $SO(16)$ connection $\Omega_{pq} \mapsto (dS\, S^T)_{pq}+ {1\over 4} \Omega^{ij}(S^T\gamma^{ij}S)_{pq}$, \begin{eqnarray}\label{DthSO16} && D(\theta S)_q \; (\theta S)_q = \tilde{D} \theta_q \; \theta_q= d\theta_q \; \theta_q + \tilde{\Omega}_{pq} \theta_{[p} \; \theta_{q]} \; , \qquad S\, S^T =I \; , \qquad \nonumber \\ && \qquad \tilde{\Omega}_{pq} = (dS\, S^T)_{pq}+ {1\over 4} \Omega^{ij}(S^T\gamma^{ij}S)_{pq}\equiv (dS\, S^T)_{pq}+ {1\over 4} \Omega^{ij}((S^T\gamma^{[i}S)(S^T\gamma^{j]}S))_{pq} \; . \qquad \end{eqnarray} It is not evident that such transformations leave the model invariant. To be convinced that they do (when supplemented by the corresponding transformations of the bosonic variables), one can recall that $\theta_q= \sqrt{\rho^{++}} \theta^\alpha v_{\alpha q}^-$ (Eq.
(\ref{analX-Th})), that the action (\ref{11DSSPan}) is equivalent to (\ref{11DSSP}) (obtained from it just by moving derivatives) and that the change $ v_{\alpha q}^-\mapsto v_{\alpha p}^-S_{pq}$ leaves the action (\ref{11DSSP}) unchanged as long as $S\, S^T= I$ ({\it i.e.} $S\in O(16)$). Secondly, taking into account the results of quantization in the bosonic case, in which the state vector is described by the wavefunction of the light--like momentum, $\Phi_0= \Phi_0(p_{\underline{m}}\vert_{p^2=0} )$, one might think that the state vector of the supersymmetric particle is described by the {\it Clifford superfield} \cite{Dima88}, {\it i.e.} by the wavefunction dependent on such a light--like momentum $p_{\underline{m}}$ {\it and} on the Clifford algebra valued $\hat{\theta}_{q}$ variable, \begin{eqnarray}\label{Phi(Cl)} \Phi (p_{\underline{m}}\vert_{p^2=0}\, , \, \hat{\theta}_q )= \Phi_0(p_{\underline{m}}\vert_{p^2=0} ) + 2\hat{\theta}_q \Psi_q (p_{\underline{m}}\vert_{p^2=0} ) + \ldots + {2^n\over n! } \hat{\theta}_{q_1} \ldots \hat{\theta}_{q_{n}} \Phi_{q_{n}\ldots q_{1}}(p_{\underline{m}}\vert_{p^2=0} ) + \qquad \nonumber \\ + \ldots + {2^{16}\over 16! } \hat{\theta}_{q_1} \ldots \hat{\theta}_{q_{16}} \Phi_{q_{16}\ldots q_{1}}(p_{\underline{m}}\vert_{p^2=0} ) \; , \qquad \hat{\theta}_{q}\hat{\theta}_{p}+ \hat{\theta}_{p}\hat{\theta}_{q}= {1\over 2 }\delta_{qp}\hat{\mathbb{I}} \; , \qquad \end{eqnarray} where the coefficients are antisymmetric in their indices, $\Phi_{q_{n}\ldots q_{1}}(p_{\underline{m}}\vert_{p^2=0} )=\Phi_{[q_{n}\ldots q_{1}]}(p_{\underline{m}}\vert_{p^2=0} )$. However, {\it such a representation of the $SO(16)$ symmetry is reducible}. It is reducible also as a representation of the Clifford algebra $\mathrm{C}\ell^{16}$. To see this, one can use the matrix representation (\ref{th=Gamma16}), substituting the sixteen dimensional gamma--matrices for $2\hat{\theta}_q$. Then (\ref{Phi(Cl)}) becomes represented by the $256\times 256$ matrix wavefunction, $\Phi (p_{\underline{m}}\vert_{p^2=0}\, , \, \hat{\theta}_q )\; \mapsto \; \Phi_{{\cal A}}{}^{{\cal B}} (p_{\underline{m}}\vert_{p^2=0})$, \begin{eqnarray}\label{Phi(Cl)G} \Phi_{{\cal A}}{}^{{\cal B}} (p_{\underline{m}})\, := \Phi_0(p_{\underline{m}} ) \delta_{{\cal A}}{}^{{\cal B}} + \Psi_q (p_{\underline{m}})\Gamma_q{}_{{\cal A}}{}^{{\cal B}} + \ldots + {1\over n! } \Phi_{q_{n}\ldots q_{1}} (p_{\underline{m}}) \Gamma_{q_1\ldots q_{n}}{}_{{\cal A}}{}^{{\cal B}} + \qquad \nonumber \\ + {} \ldots + {1\over 16!} \Phi_{q_{16}\ldots q_{1}}(p_{\underline{m}} ){\Gamma}_{q_1\ldots q_{16}}{}_{{\cal A}}{}^{{\cal B}} \; , \qquad {p^2=0} \; . \qquad \end{eqnarray} This is a general $SO(16)$ {\it bi}--spinor carrying the $\mathbf{256 \times 256}$ representation, which is {\it reducible} both as a representation of the $SO(16)$ symmetry and of the Clifford algebra $\mathrm{C}\ell^{16}$. The appearance of a reducible representation contradicts the very spirit of the quantization procedure. The result of the quantization of a particle mechanics is assumed to be an elementary particle, the definition of which (see {\it e.g.} \cite{Novozhilov}) involves the requirement of being an irreducible representation of the Poincar\'e and other physical symmetry groups. This justifies the procedure of projecting out a part of the quantum state spectrum in the quantization of the spinning particle \cite{spinQuant} and the famous GSO (Gliozzi--Scherk--Olive) projection in the quantization of the RNS string model \cite{GSO}.
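As a quick cross--check of the matrix realization (\ref{th=Gamma16}), one can construct the $d=16$ gamma matrices explicitly. The following minimal Python sketch (ours, not part of the original analysis; it uses a standard recursive tensor--product construction rather than any particular basis implied in the text) verifies that sixteen $256\times 256$ matrices obeying $\{\Gamma_q \, , \, \Gamma_p\}=2\delta_{qp}$ exist, so that $\hat{\theta}_q=\Gamma_q/2$ realizes (\ref{hThhTh=}):
\begin{verbatim}
import numpy as np

def gamma_matrices(d):
    # Hermitian gamma matrices of Cl(d), d even, built recursively by
    # tensor products; the matrices have size 2^(d/2) x 2^(d/2).
    s1 = np.array([[0, 1], [1, 0]], dtype=complex)
    s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
    s3 = np.array([[1, 0], [0, -1]], dtype=complex)
    gammas = [s1, s2]                       # Cl(2)
    while len(gammas) < d:                  # Cl(2k) -> Cl(2k+2)
        eye = np.eye(gammas[0].shape[0])
        gammas = [np.kron(g, s3) for g in gammas] + \
                 [np.kron(eye, s1), np.kron(eye, s2)]
    return gammas

G = gamma_matrices(16)                      # sixteen 256 x 256 matrices
assert all(g.shape == (256, 256) for g in G)
for q in range(16):
    for p in range(16):
        acomm = G[q] @ G[p] + G[p] @ G[q]
        expected = 2.0 * np.eye(256) if q == p else 0.0
        assert np.allclose(acomm, expected)
# hence theta_q = G[q]/2 obeys {theta_q, theta_p} = delta_qp / 2
\end{verbatim}
The $\sum_{n=0}^{16}{16 \choose n}=2^{16}=256\times 256$ independent antisymmetrized products $\Gamma_{q_1\ldots q_n}$ then reproduce the component counting of the matrix wavefunction (\ref{Phi(Cl)G}).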
Hence, the prescription of an unrestricted Clifford superfield does not work, at least in our $D=11$ massless superparticle case. The simplest irreducible representation of $\mathrm{C}\ell^{16}$ is the $SO(16)$ Majorana spinor, $\mathbf{256}$, and the choice of the wavefunction $\Phi_{{\cal A}} (p_{\underline{m}}\vert_{p^2=0})$ gives the linearized supergravity supermultiplet (see \cite{Green+99,BdAS2006}). The physical degrees of freedom of the linearized $D=11$ supergravity multiplet are described by a symmetric traceless $SO(9)$ tensor $h_{IJ}=h_{(IJ)}$, an antisymmetric third rank $SO(9)$ tensor $A_{J_1J_2J_3} =A_{[J_1J_2J_3]}$ and a $\gamma$-traceless fermionic $SO(9)$ vector-spinor $ \Psi_{Ip}$. Indeed, the solution of the linearized Einstein, three--form gauge field and Rarita--Schwinger equations can be written in terms of the above $h_{IJ}$, $A_{J_1J_2J_3}$, $ \Psi_{Ip}$ and Lorentz harmonics as (see \cite{GHT93,BdAS2006}) \begin{eqnarray} \label{11DSGlin} & \cases{h_{mn}(p)= u^I_m u^J_n h_{(IJ)}(p) \; , \cr A_{m n p}(p)= u^I_m u^J_n u^K_p A_{IJK}(p) \; , \cr \Psi_{m\,\alpha}(p) = \Psi_{I\, q}(p)u^I_m\; v_{\alpha q}^{\;\; -}\sqrt{\rho^{++}}\; , } \qquad \left. \matrix{ p_m =\rho^{++}u_m^{--}\; , \qquad \cr u_m^{--} \Gamma^m_{\alpha\beta}= 2v_{\alpha q}^{\;\;-}v_{\beta q}^{\;\;-}} \right\} \quad \Rightarrow \quad p^2=0\; . \end{eqnarray} The action of $\hat{\theta}_q$ on these on-shell fields is defined by (see \cite{Green+99} for the light--cone gauge quantization and \cite{BdAS2006} for the supertwistor quantization) \begin{eqnarray} \label{repr-16b} {2}\hat{\theta}_{q} h_{IJ} & =& \gamma^I_{qp} \Psi_{Jp}+ \gamma^J_{qp} \Psi_{Ip} \; , \qquad \nonumber \\ {2}\hat{\theta}_{q} A_{IJK} &= & \gamma^{IJ}_{qp} \Psi_{Kp}+ \gamma^{KI}_{qp} \Psi_{Jp}+ \gamma^{JK}_{qp} \Psi_{Ip} \; , \qquad \\ \label{repr-16f} {2}\hat{\theta}_{q} \Psi_{Ip} &= & \gamma^J_{qp} h_{IJ}+ {1\over 3!} \left(\gamma^{IJ_1J_2J_3}_{qp} - 6 \delta^{I[J_1} \gamma^{J_2J_3]}_{qp}\right) A_{J_1J_2J_3} \; . \qquad \end{eqnarray} To see that Eq. (\ref{repr-16b}) is nothing but the action of the $d=16$ gamma matrices (see (\ref{th=Gamma16})) on one Majorana spinor of $SO(16)$, let us begin by splitting the Majorana spinorial representation of $SO(16)$ into two Majorana--Weyl (MW) spinor representations, $\mathbf{256}=\mathbf{128}+\widetilde{\mathbf{128}}$, \begin{eqnarray}\label{Phi(256)=} \Phi_{{\cal A}} (p_{\underline{m}}\vert_{p^2=0})\, := \left( \matrix{\Phi_A (p_{\underline{m}}\vert_{p^2=0}) \cr \Psi^{\tilde{A}} (p_{\underline{m}}\vert_{p^2=0})} \right)\; .
\qquad \end{eqnarray} The observation is that the balance of the bosonic and fermionic degrees of freedom in the $D=11$ supergravity multiplet is just $128+128$, and that ({\it e.g.}) the first, $\mathbf{128}$, of the above MW spinor representations can be used to describe the physical degrees of freedom of the bosonic fields of the linearized supergravity supermultiplet, while the second, $\widetilde{\mathbf{128}}$, describes the physical degrees of freedom of the gravitino field \begin{eqnarray}\label{Phi(128)} \Phi_A &=& \left( \matrix{h_{IJ} \cr A_{IJK} }\right) \; , \qquad h_{IJ} = h_{(IJ)}\; , \quad h_{II}=0 \; , \qquad A_{IJK}=A_{[IJK]}\; , \quad \nonumber \\ && \quad A=1,\ldots 128 \; , \quad I,J,K = 1,\ldots 9 \; \quad \left({9\cdot 10\over 2} - 1 + \left\{\matrix{ 3 \cr 9}\right\} = 44 + 84=128\;\right)\; , \qquad \\ \label{Psi(t128)} \Psi^{\tilde{A}} &=& \sqrt{2} \Psi_{Iq} \; , \qquad \Psi_{Iq}\gamma^I_{qp}=0 \; , \qquad \nonumber \\ && \quad \tilde{A}=1,\ldots 128 \; , \quad I = 1,\ldots 9 \; , \quad q=1,\ldots , 16 \; \qquad \left( \;9\cdot 16 - 16 =128\;\right)\; . \qquad \end{eqnarray} To summarize, the Majorana spinor of $SO(16)$, (\ref{Phi(256)=}), can be presented as \begin{eqnarray}\label{Phi(256)=b+f} \Phi_{{\cal A}} \, := \left( \matrix{\Phi_A \cr \Psi^{\tilde{A}} } \right)\; = \left(\matrix{ \left( \matrix{ h_{IJ} \cr A_{IJK}} \right) \cr {} \cr \sqrt{2} \Psi_{Iq} } \right)\; , \qquad \cases{ h_{IJ} = h_{(IJ)}\; , \quad h_{II}=0 \; , \cr A_{IJK}=A_{[IJK]}\; , \cr {} \; \cr \Psi_{Iq}\gamma^I_{qp}=0 \; . } \qquad \end{eqnarray} Finally, assigning the Grassmann parity $0$ and $1$ to the first and second Majorana--Weyl components, (\ref{Phi(128)}) and (\ref{Psi(t128)}), of the (momentum representation) wavefunction (\ref{Phi(256)=b+f}), one arrives at the linearized on-shell multiplet of $D=11$ supergravity. \bigskip With the Weyl representation of the gamma-matrices \begin{eqnarray} \label{16dGamSig} & ({\Gamma}_{q})_{{\cal A}}{}^{{\cal B}}= \left(\matrix{ 0 & \sigma_{q\, A}{}^{\tilde{B}} \cr \tilde{\sigma}_q{}^{\tilde{B}}{}_{A}{} & 0 } \right) \\ \label{16dPauliM} & (\sigma_q\tilde{\sigma}_p+ \sigma_p\tilde{\sigma}_q)= 2\delta_{qp} \mathbb{I}_{128\times 128} \; , \qquad (\sigma_q\tilde{\sigma}_p)_{AB}=\delta_{qp}\delta_{AB}+ \sigma_{qp}{}_{AB} \; \qquad \end{eqnarray} Eqs. (\ref{repr-16b}) and (\ref{repr-16f}) can be formulated as an action of the d=16 Pauli matrices on the two Majorana--Weyl representations of $SO(16)$, \begin{eqnarray} \label{11DSUSY} & 2\hat{\theta}_q \Phi_A = \sigma_q{}_A{}^{\tilde{B}} \Psi^{\tilde{B}} \; , \qquad 2\hat{\theta}_q \Psi^{\tilde{A}} = \tilde{\sigma}_q{}^{\tilde{A}} {}_B \Phi_B \; .
\qquad \end{eqnarray} This corresponds to the following representation of the $d=16$ Pauli matrices algebra (\ref{16dPauliM}) in terms of the $d=9$ Dirac matrices $\gamma^I_{qp}=\gamma^I_{(qp)}$: \begin{eqnarray} \label{16dPauli=} \sigma_{q\, A}{}^{\tilde{B}}&=& \left(\matrix{ \sqrt{2} \gamma^{(I_1}{}_{qp} \,\delta^{I_2)J} - {\sqrt{2}\over 9} \delta^{I_1I_2}\, \gamma_{qp}^{J} \qquad \cr \hline {} \cr {3\over \sqrt{2}}\gamma^{[I_1I_2}{}_{qp} \; \delta^{I_3]J} - {1\over 3 \sqrt{2}}(\gamma^{I_1I_2I_3}\gamma^J){}_{qp} \; } \right)\equiv \left(\matrix{ \sqrt{2} \gamma^{(I_1}{}_{qp} \,\delta^{I_2)J} - {\sqrt{2}\over 9} \delta^{I_1I_2}\, \gamma_{qp}^{J} \qquad \cr \hline {} \cr \sqrt{2}\gamma^{[I_1I_2}{}_{qp} \; \delta^{I_3]J} - {1\over 3 \sqrt{2}}(\gamma^{I_1I_2I_3J}){}_{qp} \; } \right) \; , \qquad \nonumber \\ {} \nonumber \\ \tilde{\sigma}_q{}^{\tilde{B}}{}_{A}&=& \left(\matrix{ \sqrt{2}\delta^{J(I_1} \gamma^{I_2)}{}_{qp} - {\sqrt{2}\over 9} \delta^{I_1I_2} \gamma^{J}_{qp} \qquad \vert \qquad {1\over 3\sqrt{2}}(\gamma^J\gamma^{I_1I_2I_3})_{qp} - {3\over \sqrt{2}} \delta^{J[I_1}\gamma^{I_2I_3]}{}_{qp} } \right) \; . \qquad \end{eqnarray} \bigskip Actually, the above results can be used {\it to speculate about a possible $E_8$ symmetry of the 11D supergravity}. For the 11D supergravity dimensionally reduced down to d=3 this symmetry was conjectured already in \cite{CremmerJulia78} and proved in \cite{E8}. Recently the appearance of $E_8$ symmetry in D=11 supergravity was discussed in \cite{LambertWest07}. Our line is a bit different and refers to the physical degrees of freedom of the supergravity fields, associated with the irreducible representations of $SO(D-2)=SO(9)$ as described above, rather than to the compactification of D=11 supergravity to d=3. The generators of $E_8$ can be split into the set of generators of its maximal compact subgroup $SO(16)$, $J_{qp}$, and $128$ generators $Q_A$ collected in a Majorana--Weyl spinor of $SO(16)$, whose commutation relations close on the $SO(16)$ generators, \begin{eqnarray} \label{E8:SO} E_8\; : \qquad && [J_{qp}\, , \, J_{q^\prime p^\prime }\, ]= 2 \delta_{q^\prime[q}J_{p] p^\prime} - 2 \delta_{p^\prime [q}J_{p]q^\prime}\, \; , \qquad \\\label{E8:SOQ} && [J_{qp}\, , \, Q_A]= {1\over 2}\sigma_{pq}{}_{AB} Q_B \; , \qquad \\ \label{QQ=SO} && [ Q_A\, , \, Q_B]= \sigma_{pq}{}_{AB} J_{pq} \; . \qquad \end{eqnarray} The Jacobi identities are satisfied due to the sigma-matrix identity $\sigma^{pq}{}_{(AB}\sigma^{pq}{}_{C)D}=0$. Then, in the superparticle quantization above, the linearized supergravity multiplet appears in such a way that all the bosonic fields -- or, more precisely, their physical components -- are collected in the Majorana--Weyl spinor of $SO(16)$. This makes it tempting to speculate on the relation of the bosonic fields of D=11 supergravity with the $SO(16)$ spinorial generators $Q_A$ and, further, with the $E_8/SO(16)$ coset. Furthermore, this suggests the speculation about a possible $E_8$ symmetry of the uncompactified eleven dimensional supergravity (i.e. without dimensional reduction to $d=3$). Clearly, the linear approximation, which is what the superparticle quantization sees, does not feel the difference between $E_8$ and its contraction given by the extension of $SO(16)$ by mutually commuting spinorial generators ({\it i.e.} with $[Q_A \, , \, Q_B ]=0$ instead of $[Q_A \, , \, Q_B ]=\sigma^{qp}_{AB}J_{qp}$ in (\ref{QQ=SO})).
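The numerology entering this discussion is elementary but worth making explicit. A small Python snippet (ours, purely arithmetical) verifying the $SO(9)$ counting of (\ref{Phi(128)}), (\ref{Psi(t128)}) and the dimension of $E_8$:
\begin{verbatim}
from math import comb

# physical polarizations of linearized D=11 supergravity (SO(9) irreps)
h_IJ   = 9 * 10 // 2 - 1     # symmetric traceless graviton:        44
A_IJK  = comb(9, 3)          # antisymmetric 3-form:                84
psi_Iq = 9 * 16 - 16         # gamma-traceless gravitino:          128
assert h_IJ + A_IJK == 128 and psi_Iq == 128

# E8 = (SO(16) generators J_qp) + (128 spinorial generators Q_A)
dim_so16 = 16 * 15 // 2      # 120
assert dim_so16 + 128 == 248 # = dim E8
\end{verbatim}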
So, to establish the hypothetical $E_8$ symmetry of the uncompactified $D=11$ supergravity, one should define the $E_8$ transformations on the eleven dimensional vielbein $e_m{}^a(x)$ and gauge field $A_{mnk}$ \footnote{The inclusion of fermions is a separate problem; usually, when the $E_n$ symmetries of the compactified (to $d=11-n$) supergravity are considered, the fermions transform as fields of a nonlinear realization.} and show that the (at least bosonic) supergravity equations are invariant under such transformations. The experience of the description of the hidden $SO(16)$ symmetry \cite{Nicolai87} suggests that this $E_8$ (if it exists) might become manifest in a formalism with broken Lorentz invariance. A new suggestion brought by our study is that the Lorentz symmetry breaking appropriate for finding the hidden $E_8$ (and also $SO(16)$) symmetry might be $SO(1,10)\mapsto SO(1,1) \otimes SO(9)$ (or $SO(1,10)\mapsto [SO(1,1) \otimes SO(9)]\subset\!\!\!\!\!\!\times K_9$) rather than the $SO(1,10)\mapsto SO(1,2) \otimes SO(8)$ used in \cite{Nicolai87} to construct the $SO(16)$ invariant formulation. A check of whether the $D=11$ supergravity has indeed a hidden $E_8$ symmetry, even without compactification, or whether the above described $SO(16)$ invariance of the linearized supergravity and the coincidence of the number of physical polarizations of the bosonic fields of the linearized supergravity multiplet with the dimension of the $E_8/SO(16)$ coset is purely accidental, is an interesting subject for future study. \bigskip \section{Conclusions and outlook}\label{Concl} \setcounter{equation}0 \subsection{Conclusions} In this paper we have studied the BRST quantization of the M0-brane in the framework of its spinor moving frame formulation \cite{BL98',BdAS2006} (see \cite{B90,IB+AN96} for $D=4$ and $10$) where the action includes the spinorial Lorentz harmonics as twistor--like auxiliary variables. Our main motivation was to search for the origin and geometrical meaning of the properties of the pure spinor approach to the quantum superparticles and superstrings \cite{NB-pure}. We have constructed here the Hamiltonian mechanics of the $D$=11 massless superparticle in the spinor moving frame formulation, separating covariantly the first and the second class constraints (which has been possible due to the use of spinorial harmonics \cite{BZ-str,BZ-strH}) and defining the Dirac brackets allowing us to treat the second class constraints as strong equalities. We have shown that the set of the first class constraints of the M0--brane in the spinor moving frame formulation can be separated into two groups. The first one includes the $16$ fermionic generators of the $\kappa$--symmetry (which is irreducible in the spinor moving frame formulation due to the presence of spinorial harmonics) and one bosonic generator of the $b$-symmetry. These generate the $d=1, N=16$ supersymmetry gauge supergroup $\Sigma^{(1|16)}$. The remaining first class constraints correspond to the generators of the $H= [SO(1,1)\times SO(9)]\subset\!\!\!\!\!\! \times K_9$ gauge symmetry. This eliminates the excess of variables in the harmonics used to formulate the massless $D$=11 superparticle model, making them homogeneous coordinates of $S^9$, which can be identified with the D=11 celestial sphere. However, the superalgebra of the Dirac brackets of the first class constraints is given by a {\it `W--deformation'} of the one of the semidirect product $H\subset\!\!\!\!\!\!
\times \Sigma^{(1|16)}$, rather than by this semidirect product itself. This `W--deformation' is produced by the appearance of the product of two $\kappa$--symmetry generators in the Dirac brackets of two $K_9$ generators, so that $K_9$ is no longer an abelian subgroup and the Dirac brackets describe a generalized subalgebra of the enveloping superalgebra rather than a Lie superalgebra. The structure of the complete BRST charge $\mathbb{Q}$ for all the first class constraints of the M0--brane model is too complicated and its use is not practical. This can be seen already from the BRST charge $\mathbb{Q}^\prime$ for the nonlinear algebra of the $\kappa$-symmetry, $b$--symmetry and the deformed $K_9$ symmetry which we have constructed in this paper (\ref{Q'=}). It already contains seven terms with up to the fourth power of the ghost fields. In the search for a counterpart of (or even an alternative for) the Berkovits BRST charge we have adopted a further reduction of $\mathbb{Q}^\prime$ down to the simple BRST charge $\mathbb{Q}^{susy}$ (\ref{Qbrst1}) associated to the $\kappa$- and $b$--symmetry gauge supergroup $\Sigma^{(1|16)}$. We have shown that the non-trivial cohomologies of $\mathbb{Q}^{susy}$ can be described by wavefunctions which have support on $\lambda_q^+\lambda_q^+=0$. This condition requires the bosonic ghost $\lambda_q^+$, corresponding to the $\kappa$-symmetry, to be zero. Since $\lambda_q^+$ essentially defines the BRST charge $\mathbb{Q}^{susy}$, this makes a regularization necessary. Such a regularization is achieved by allowing the $\kappa$-symmetry bosonic ghost to become complex, $\lambda_q^+\mapsto \tilde{\lambda}_q^+\not= (\tilde{\lambda}_q^+)^*$, and by considering the non-Hermitian BRST charge $\tilde{\mathbb{Q}}^{susy}$ resulting from it. The cohomology of the original BRST charge $\mathbb{Q}^{susy}$ is then given by the cohomology of its complexified and further reduced version $\tilde{\mathbb{Q}}^{susy}$ (Eq. (\ref{tQsusy})) at zero value of the bosonic ghost. The need for a complex BRST charge at the regularization stage when computing the non-trivial cohomology shows a reason for the intrinsic complexity of the Berkovits pure spinor formalism for the superparticles and the superstring. This conclusion is further supported by the observation that our $\tilde{\mathbb{Q}}^{susy}$ is essentially a particular case of the Berkovits BRST charge for the $D=11$ superparticle, but with a composite pure spinor constructed from the $\kappa$--symmetry ghost and Lorentz harmonics (Eq.(\ref{Lpure=lv}), see also below). Computing the cohomology of the BRST charge $\mathbb{Q}^{susy}$ we have found that it is nontrivial only in the sector with ghost number $-2$ (which corresponds to the ghost number $g_0=0$ for the wavefunctions describing cohomologies of $\tilde{\mathbb{Q}}^{susy}$) and is essentially described by functions depending only on the physical variables, which are inert under both the fermionic $\kappa$- and bosonic $b$- gauge symmetries. The reason for such a simple structure is the existence of a specific coordinate basis, the covariantized light-cone basis, the transition to which results in the disappearance from the action of all the worldline fields that transform nontrivially under the $\kappa$- and the $b$- gauge symmetries. We have studied the covariant quantization of the physical degrees of freedom in the covariantized light--cone basis.
This quantization, quite close to the supertwistor one in \cite{BdAS2006}, shows hints of possible hidden symmetries of D=11 supergravity (or, probably, of M-theory). These include the $SO(16)$ already mentioned in \cite{BdAS2006} (and presumably related to the one of \cite{Nicolai87}), but also some indication of a possible $E_8$, which brings us quite close to the $E_{10}$ and $E_{11}$ activity of \cite{Nicolai07} and \cite{WestE11}. \subsection{Outlook 1: on BRST charge for superstring} The main conclusion of our present study of the M0 case is that the twistor-like Lorentz harmonic approach \cite{BZ-str,BdAS2006}, which originated in \cite{Sok,NPS,Lharm}, is able to produce a simple and practical BRST charge. This suggests a similar investigation of the $D=10$ Green--Schwarz superstring case. For instance, for the IIB superstring the Berkovits BRST charge looks schematically like \begin{eqnarray}\label{QIIB} \mathbb{Q}^B_{IIB}=\int \Lambda^{\alpha 1}d_{\alpha} + \int \Lambda^{\alpha 2}d^2_{\alpha}\; , \qquad \Lambda^{\alpha 1}\sigma^a_{\alpha\beta}\Lambda^{\beta 1}= 0= \Lambda^{\alpha 2}\sigma^a_{\alpha\beta}\Lambda^{\beta 2} \end{eqnarray} with two complex pure spinors $\Lambda^{\alpha 1}$ and $\Lambda^{\alpha 2}$. By analogy with our study of the M0--brane (see (\ref{Lpure=lv})), one may expect that the BRST quantization of the Green--Schwarz superstring in its spinor moving frame formulation \cite{BZ-str,BZ-strH} would result, after some reduction and in the course of regularization of the `honest' (`true') Hermitian BRST charge, in a complex charge of the form (\ref{QIIB}), but with composite pure spinors \begin{eqnarray}\label{pureSp12=} \widetilde{\Lambda}^{\alpha 1} = \tilde{\lambda}^+_p v^{-\alpha}_p\; , \qquad \widetilde{\Lambda}^{\alpha 2} = \tilde{\lambda}^-_p v^{+\alpha}_p\; , \qquad \tilde{\lambda}^+_p\tilde{\lambda}^+_p=0= \tilde{\lambda}^-_p\tilde{\lambda}^-_p\; . \qquad \end{eqnarray} Here, the $\tilde{\lambda}^{\pm}_p$ are two complex eight component $SO(8)$ spinors and the stringy harmonics $v^{\mp\alpha}_p$ are the homogeneous coordinates of the non--compact $16$--dimensional coset \begin{eqnarray}\label{harmV=IIB} \{ V_{(\beta)}{}^{\alpha} \} = \{ ( v^{+\alpha}_p \; , \; v^{-\alpha}_p )\} = {Spin(1,9) \over SO(1,1)\otimes SO(8) } \; , \end{eqnarray} characteristic of the spinor moving frame formulation of the (super)string \cite{BZ-str,BZ-strH} and describing the spontaneous breaking of the spacetime Lorentz symmetry by the string model. It is worth noticing that, in contrast with the M0--brane case, the $D=10$ solution (\ref{pureSp12=}) of the pure spinor constraints in (\ref{QIIB}) {\it carries the same number of degrees of freedom}, $44$ ($=2\times 8 + 2 \times 14$), as the pair of Berkovits complex pure spinors $\Lambda^{\alpha 1}, \Lambda^{\alpha 2}$ ($22+22$). Hence it provides {\it the general solution} of the $D=10$ pure spinor constraints in terms of the harmonics (\ref{harmV=IIB}) and two complex $SO(8)$ spinors of zero square, so that its substitution for the generic pure spinor of \cite{NB-pure} should not produce any anomaly or other problem related to the counting of degrees of freedom. \subsection{Outlook 2: $SO(16)$, $E_8$ and all that. } Searching for an explanation of the simple structure of the cohomologies of the M0--brane BRST charge $\mathbb{Q}^{susy}$ we studied the M0--brane model in a different, so--called covariantized light--cone basis \cite{GHT93}, the counterpart of which was first considered in \cite{Sok}.
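For the reader's convenience, the counting behind this statement can be made explicit (a sketch based only on the data displayed above):
\begin{eqnarray}
&& \dim \, {Spin(1,9) \over SO(1,1)\otimes SO(8)}\, =\, 45 - 1 - 28 \,=\, 16 \; , \qquad \nonumber \\
&& \tilde{\lambda}^{+}_p\, , \; \tilde{\lambda}^{-}_p \,\in\, \mathbb{C}^8 \quad {\rm with} \quad \tilde{\lambda}^{\pm}_p \tilde{\lambda}^{\pm}_p =0 \quad \Rightarrow \quad 2\times (2\times 8 - 2)\, =\, 2\times 14 \; , \qquad \nonumber
\end{eqnarray}
giving $16 + 28 = 44$ real degrees of freedom, to be compared with the $22+22$ of $\Lambda^{\alpha 1}$, $\Lambda^{\alpha 2}$ (a $D=10$ pure spinor has $11$ complex, {\it i.e.} $22$ real, independent components).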
The change of variables to this basis automatically removes all the worldline fields that transform nontrivially under the $\kappa$--symmetry and $b$--symmetry. Such a phenomenon of automatic gauge fixing was first described in \cite{Sok}; one might observe it as well when passing to the pure (super)twistor form of the action, as in \cite{BdAS2006}. Quantizing the superparticle in this coordinate basis (as well as in the supertwistor one \cite{BdAS2006}) one easily sees the $SO(16)$ symmetry of the model \footnote{In our spinor moving frame or twistor--like Lorentz harmonics formulation \cite{BL98',IB+AN96,BdAS2006,IB07} this symmetry can be seen also at the classical level (see sec. 2.3); in the standard Brink--Schwarz formulation it is hidden and appears after quantization in the light--cone gauge.}. The reason is that, both in the covariantized light--cone basis and after fixing the usual light--cone and the (non--covariant) $\kappa$--symmetry gauge, the superparticle action contains a set of $16$ fermionic fields which, upon quantization, become $\mathrm{C}\ell^{16}$ Clifford algebra valued. The supergravity multiplet appears in the superparticle spectrum when one chooses the wavefunction to be in the $\mathbf{256}$ Majorana spinor representation of $\mathrm{C}\ell^{16}$. The bosonic and fermionic fields of the supermultiplet appear as the two different ($\mathbf{128}$ and $\widetilde{\mathbf{128}}$) Majorana--Weyl parts of this Majorana spinor. Furthermore, the well-known fact that the Lie algebra of the exceptional group $E_8$ can be written in terms of the generators of $SO(16)$ and $128$ bosonic generators carrying the Majorana--Weyl spinor ($\mathbf{128}$) representation of $SO(16)$ makes it tempting to speculate that the $E_8$ symmetry might be characteristic of the $D=11$ supergravity itself rather than only of its reduction to $d=3$. In such a scenario the bosonic fields of the D=11 supergravity multiplet appear to be associated with the generators of the $E_8/SO(16)$ coset. Notice that the assumption of the Goldstone nature of the graviton (of the physical degrees of freedom in our case) is very much in the spirit of the $E_{11}$ activity of \cite{WestE11}, which develops in this respect the line of Borisov and Ogievetsky \cite{BO74}. Also, similarly to the case of the $E_{10}$ and $E_{11}$ conjecture(s), the fermionic field (gravitino) remains outside this consideration and has to be treated as a `field of the nonlinear realization' \cite{CCWZ}. Surely, the superparticle quantization provides us only with the linearized fields describing on-shell degrees of freedom. A check of whether the $D=11$ supergravity has indeed a hidden $E_8$ symmetry, even without compactification, or whether the above described $SO(16)$ invariance of the linearized supergravity and the coincidence of the number of physical polarizations of the bosonic fields of the linearized supergravity multiplet with the dimension of the $E_8/SO(16)$ coset is purely accidental, is an interesting subject for future study. Let us notice that the $E_n/H_n$ cosets, which appear as the manifold of scalar fields for the $d=11-n$ compactifications of D=11 supergravity, were considered recently in \cite{Hull07} in relation to the M--theoretic generalizations of Hitchin's generalized geometries \cite{Hitchin}. In particular, it was shown in \cite{Hull07} that the $E_7/SU(8)$ and $E_6/Sp(4)$ cosets can be described by the $n$--dimensional components of the bosonic fields of supergravity, $g_{ij}$, $A_3$ and $A_6$ (metric, three--form gauge field, and its 11--dimensional dual).
The $E_n/H_n$ cosets with $n< 6$ can be described by the $n$--dimensional components of $g_{ij}$ and $A_3$. The $128$--dimensional $n=8$ coset $E_8/SO(16)$ does not fit in this picture. Indeed, it is easy to see that the number of components of the $8$--dimensional $g_{ij}$, $A_3$ and $A_6$ is $36+56+28=120 < 128$\footnote{Of course, the simplest proposal to fit the coset dimension ($128$) would be to add an eight--dimensional one--form $A_1$, but this field, in contrast with $g_{ij}$, $A_3$ and $A_6$, does not have a straightforward D=11 origin.}. In the light of this, the coincidence of the number of parameters of the $E_8/SO(16)$ coset with the number of polarizations of the physical bosonic fields of the supergravity multiplet, observed in sec. 5.2 and discussed above, looks even more intriguing and worthy of further thought. \bigskip {\bf Acknowledgments. } {The author thanks Jos\'e de Azc\'arraga, Paolo Pasti, Dmitri Sorokin, Mario Tonin for useful discussions and Kelly Stelle for the conversation on $E_{10}$, $E_{11}$ and $E_8$ issues. This work has been partially supported by research grants from the Ministerio de Educaci\'on y Ciencia (FIS2005-02761) and EU FEDER funds, the Generalitat Valenciana, the Ukrainian State Fund for Fundamental Research (N383), the INTAS (2006-7928) and by the EU MRTN-CT-2004-005104 network {\it Constituents, Fundamental Forces and Symmetries of the Universe} in which the author is associated to the Valencia University. } \bigskip {\bf Note added in proof}. When the present work was finished, the author became aware of the work \cite{Duff85} in which the possible hidden $E_8\times SO(16)$ symmetry of D=11 supergravity was conjectured for the first time. \bigskip
\section{Introduction} \label{sc.intro} Recently, there has been a lot of interest in the studies of thermodynamics of strongly interacting matter using the Polyakov loop enhanced Nambu-Jona-Lasinio (PNJL) model \cite{pnjl0,pnjl1,pnjl2}. This model couples the chiral and deconfinement order parameters through a simple-minded coupling of the NJL model \cite{njl1} with the Polyakov loop model \cite{polyd1}. The two major thrusts in recent times have been to estimate various thermodynamic observables using this model (see e.g. \cite{pnjl3,mesoni,pnjl4, isospini,susci}), and to make systematic improvements of the model \cite{ratti4,polrgi,ringi}. Another set of important results has come from similar studies in chiral quark models that go beyond the mean field treatment \cite{enrique}. In this note we deal with the improvement of the Polyakov loop model and describe some of its consequences, remaining within the domain of mean field analysis. The Polyakov loop model used in much of the recent literature is the one given in Ref.\cite{pnjl2}. The Polyakov loop $\Phi$ has been treated here as a Z(3) spin field \cite{polyl}. Using this model we estimated \cite{pnjl3} a very sensitive observable, the quark number susceptibility (QNS), and also the higher order coefficients in the Taylor expansion of pressure in the quark number chemical potential $\mu_0$. Comparison with the data from Lattice QCD (LQCD) \cite{sixx} showed that the QNS in the PNJL model and LQCD agree quite well both qualitatively and quantitatively. The fourth order coefficient $c_4$ showed qualitative agreement but had a quantitative difference at high temperatures. Some of us further extended the PNJL model to include the isospin chemical potential $\mu_I$ \cite{pnjl4}. The isospin number susceptibility (INS) and its derivatives with respect to $\mu_0$ and $\mu_I$ were obtained. In this case the fourth order derivative $c_4^I$ was quite consistent with lattice data, but the INS was not. A possible reason for such departures is that the mean-field treatment of the PNJL model is insufficient. But then it should have affected the coefficients systematically, {\it i.e.\/} all the fourth order coefficients should have deviated further from the LQCD data than the second order coefficients. There are however other {\it simpler} reasons that should be considered first. The PNJL model is only a model which can mimic some of the characteristics of a fundamental theory like QCD and its discretized version LQCD. Moreover, the parameters like the couplings and masses are quite different in the PNJL model and the LQCD simulations. Thus some quantitative difference is naturally expected. Apart from these we made an important observation in \cite{pnjl4} that $\Phi$ has a big role to play in the behaviour of these coefficients. We pointed out how the quantitative differences could be caused by the behaviour of $\Phi$ as a function of temperature and chemical potentials. The most important {\it physical problem} in the simple-minded PNJL model is the following. Being the normalized trace of the Wilson line ${\mathbf L}$, which is an SU(3) matrix, $\Phi$ should lie in the range $0 \le \Phi \le 1$. But it was found to be greater than 1 at temperatures above 2$T_c$ (see Fig.2 in Ref. \cite{pnjl4}). The natural way to cure this problem is to consider a proper Jacobian of transformation from the matrix valued field ${\mathbf L}$ to the complex valued field $\Phi$, which will then constrain the value of $\Phi$ to $\Phi < 1$.
This is quite a well known construction in SU(N) matrix models (see e.g.\cite{dumitru,steinacker,akemann}), in certain variations of the Polyakov loop model (\cite{meisinger2,ratti4}), as well as in QCD motivated phenomenological models (see \cite{mustafa} and references therein). It is also ubiquitous in various strong coupling effective theories of Lattice QCD (see e.g. \cite{strc1}). Here we introduce the Vandermonde term in the Polyakov loop model in a conceptually different way from that in the earlier models. In the next section we discuss our approach. In section III we show the changes in the susceptibilities and various other quantities due to the VdM term. The final section contains our conclusions. \section{Formalism} \label{sc.formal} At a temperature $T$, the SU(3) Wilson line is given by $ {\mathbf L} ({\bf x}) = {\cal P} {\rm exp} (ig \int_0^{1/T} A_0^a ({\bf x}) \lambda_a d\tau)$, where $g$ is the gauge coupling, $A_0^a$ ($a$ = 1,2,...8) are the time-like components of the gluon field, $\lambda_a$ are the Gell-Mann matrices and $\tau$ the imaginary time in the Euclidean field theory. The Polyakov loop is defined as $\Phi = {\rm tr} {\mathbf L} /3$ and its conjugate is $\bar{\Phi} = {\rm tr} {\mathbf L}^{\dagger} /3$. Since ${\mathbf L}$ is itself an SU(3) matrix, $\Phi,\bar{\Phi} \le 1$. The gluon thermodynamics can be described as an effective theory of the Polyakov loops \cite{polyd1}. On the other hand quark thermodynamics can be effectively described in terms of the NJL model \cite{njl1}, and the two are coupled to obtain the PNJL model (e.g., \cite{pnjl2}). The thermodynamic potential in this model can be obtained in terms of the sigma and pion condensates and the thermal average of the Polyakov loop. However the version of the PNJL model \cite{pnjl2} leads to $\Phi > 1$ for $T > 2T_c$. To rectify this anomaly, the authors of Ref.\ \cite{pnjl2} have recently proposed a complete modification of the Polyakov loop model \cite{ratti4}, motivated by the strong coupling results used by Fukushima \cite{pnjl1}. Our aim in this work is also similar, but the approach is somewhat different. We retain the Polyakov loop potential of \cite{pnjl2,pnjl3,pnjl4} but treat it as a matrix model. Also the way we define pressure is quite different, as discussed below. We first outline our scheme using an arbitrary matrix model for the Wilson line ${\mathbf L}$, which for simplicity is assumed to be given by a potential ${\cal V}[{\mathbf L}] \equiv {\cal V}[\Phi,\bar{\Phi}]$. In the following equation, we express the partition function for this theory first as a path integral over ${\mathbf L}$ and then over the fields $\Phi$ and $\bar{\Phi}$. \begin{subequations} \begin{eqnarray} Z = \int {\cal D}{\mathbf L} \, {\rm e}^{-\,\frac{1}{T}{\cal V}[\Phi,\bar{\Phi}]} &=& \int \prod_{\bf x} d{\mathbf L}({\bf x})\,{\rm e}^{-\,\frac{1}{T}{\cal V}[\Phi,\bar{\Phi}]} \label{eq.inttrace}\\ &=& \int \prod_{\bf x} J[\Phi({\bf x}),\bar{\Phi}({\bf x})] \, d\Phi({\bf x}) \, d\bar{\Phi}({\bf x}) \, {\rm e}^{-\,\frac{1}{T}{\cal V}[\Phi,\bar{\Phi}]} \label{eq.intmes} \end{eqnarray} \end{subequations} \noindent where, ${\cal D} {\mathbf L}$ is the SU(3) Haar measure, $J[\Phi,\bar{\Phi}]$ is the Jacobian of transformation (also called Vandermonde determinant, see e.g. Ref. \cite{trinhammer}) from ${\mathbf L}$ to ($\Phi,\bar{\Phi}$), and is given as $J[\Phi,\bar{\Phi}] \equiv (27/24\pi^2) (1 - 6\,\bar{\Phi} \Phi + 4\,(\bar{\Phi}^3 + \Phi^3) - 3\,(\bar{\Phi} \Phi)^2)$.
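As an illustration of these statements, one may sample Haar--random SU(3) matrices numerically and check both the bound on $\Phi$ and the non--negativity of $J$ on the image of the map ${\mathbf L}\mapsto(\Phi,\bar{\Phi})$. A minimal Python sketch (ours; the QR construction of Haar--distributed unitaries is standard, and the $\det^{1/3}$ projection to SU(3) is an assumption of this illustration):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

def random_su3():
    # Haar-random U(3) via QR of a complex Gaussian matrix (with the
    # usual phase fix), then divide out the determinant phase.
    z = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
    q, r = np.linalg.qr(z)
    q = q @ np.diag(np.diag(r) / np.abs(np.diag(r)))
    return q / np.linalg.det(q) ** (1.0 / 3.0)

def vdm(phi, phib):
    # J[Phi, Phibar] as quoted in the text
    return (27.0 / (24.0 * np.pi ** 2)) * (
        1.0 - 6.0 * phib * phi
        + 4.0 * (phib ** 3 + phi ** 3)
        - 3.0 * (phib * phi) ** 2)

for _ in range(10000):
    L = random_su3()
    phi = np.trace(L) / 3.0
    assert abs(phi) <= 1.0 + 1e-12           # |Phi| never exceeds 1
    assert vdm(phi, np.conj(phi)).real >= -1e-12   # J >= 0 on SU(3)
\end{verbatim}
The non--negativity of $J$ reflects the fact that it is (proportional to) the squared Vandermonde determinant of the eigenvalues of ${\mathbf L}$.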
Our interest then would be to obtain the pressure, which is given by, \begin{eqnarray} P = T \frac{\partial\, \ln Z}{\partial v} = - \left\langle \frac{\partial\, {\cal V} }{\partial v} \right\rangle \simeq - \frac{1}{v} \langle {\cal V} \rangle \label{eq.prest} \end{eqnarray} \noindent where, $v$ denotes the physical volume of the system and $\langle \rangle$ denotes thermal averaging. The last approximation holds in the infinite volume limit. The role of the Jacobian is to be understood as follows. First, it is a factor reweighting the field configurations and hence significantly affects all thermal averages. However, since the Jacobian is not explicitly space-time dependent, there is no extra term to be averaged in Eqn. \ref{eq.prest}, as one might otherwise expect when redefining the path integration from ${\mathbf L}$ to $\Phi$ (Eqn. \ref{eq.inttrace} to Eqn. \ref{eq.intmes}). A typical example of such a dependence would be if we were considering, say, a Fourier transform of the fields. In the case of a free field this kind of dependence of the Jacobian on the volume and temperature is very important in obtaining the correct partition function. Thus, in our mean field treatment we have to carefully incorporate the effect of the Jacobian, and this is the main aim of this paper. The effect of the Jacobian is reflected in the mean fields $\left<\Phi\right>$ and $\left<\bar{\Phi}\right>$, and we express the pressure as, \begin{eqnarray} P = - \frac{1}{v}{\cal V}(\left<\Phi\right>,\left<\bar{\Phi}\right>). \end{eqnarray} \noindent To relate to pure glue theory, we now replace the potential density ${\cal V}/v$ by a Landau-Ginzburg type functional ${\cal U}$, given by\cite{pnjl2}, \begin{eqnarray} \frac{\mathcal{U}\left(\Phi,\bar{\Phi},T\right)}{ T^4} = -\frac{b_2\left(T\right)}{ 2 }\bar{\Phi} \Phi- \frac{b_3}{6}\left(\Phi^3+ {\bar{\Phi}}^3\right)+ \frac{b_4}{4}\left(\bar{\Phi} \Phi\right)^2 ~~~, \label{eq.uu} \end{eqnarray} \noindent with \begin{eqnarray} b_2\left(T\right)=a_0+a_1\left(\frac{T_0}{T}\right) +a_2\left(\frac{T_0}{T} \right)^2+a_3\left(\frac{T_0}{T}\right)^3~~~. \label{eq.bb} \end{eqnarray} \begin{figure}[!tbh] \subfigure{ {\includegraphics [scale=0.6] {fig1a.eps}} } \hskip 0.15 in \subfigure{ {\includegraphics[scale=0.6]{fig1b.eps}} } \caption{$\Phi$ and $P/P_{SB}$ for $\kappa = 0$($T_0 = 0.27$ GeV) and $\kappa=0.05$($T_0=0.2555$ GeV). The value of $T_c$ is 0.270 GeV. } \label{fg.purek} \end{figure} \noindent To make a saddle point approximation to the mean fields, the potential density ${\cal U}$ was minimized w.r.t. $\Phi$ and $\bar{\Phi}$ in Ref. \cite{pnjl2}. These were then used to obtain the pressure $P=-{\cal U}$. The coefficients $a_i$ ($i$=0,1,2,3) and $b_j$ ($j$=2,3,4) were fitted to Lattice data for the pressure in pure gauge theory, and $T_0$ is precisely the transition temperature $T_c = 270$ MeV \cite{tcpg1,tcpg2,tcpg3}. As $T \rightarrow \infty$, $P/T^4 \rightarrow 16\pi^2/90$. However, to take care of the effect of the Jacobian as discussed above, we now propose to minimize the following modified potential, \begin{eqnarray} \frac{{\cal U}^{\prime}(\Phi,\bar{\Phi})}{T^4} = \frac{{\cal U}(\Phi,\bar{\Phi})}{T^4} - \kappa \ln [J(\Phi,\bar{\Phi})], \label{eq.uup} \end{eqnarray} \noindent where $\kappa$ is a dimensionless parameter to be determined phenomenologically. The mean field value of the pressure is still obtained from the relation $P=-{\cal U}$. A very simple example of this approach is demonstrated in the appendix.
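To make the procedure concrete, a minimal numerical sketch of this minimization in the pure gauge sector is given below (Python; we set $\Phi=\bar{\Phi}$ real, as appropriate at zero chemical potential). The coefficient values $a_0=6.75$, $a_1=-1.95$, $a_2=2.625$, $a_3=-7.44$, $b_3=0.75$, $b_4=7.5$ are the ones commonly quoted for the parametrization of Ref. \cite{pnjl2}; they are an assumption of this sketch and should be checked against that reference:
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize_scalar

# coefficients as commonly quoted for Ref. [pnjl2] -- verify before use
a0, a1, a2, a3 = 6.75, -1.95, 2.625, -7.44
b3, b4 = 0.75, 7.5
T0, kappa = 0.2555, 0.05        # GeV; the constant-kappa choice of Fig. 1

def U_over_T4(phi, T):
    # U/T^4 of Eq. (eq.uu) for real Phi = Phibar
    b2 = a0 + a1*(T0/T) + a2*(T0/T)**2 + a3*(T0/T)**3
    return -0.5*b2*phi**2 - (b3/3.0)*phi**3 + 0.25*b4*phi**4

def J(phi):
    # Vandermonde factor at Phi = Phibar real; vanishes at phi = 1
    return (27.0/(24.0*np.pi**2))*(1 - 6*phi**2 + 8*phi**3 - 3*phi**4)

def Uprime(phi, T):
    # modified potential of Eq. (eq.uup)
    return U_over_T4(phi, T) - kappa*np.log(J(phi))

for T in (0.30, 0.50, 1.00):    # GeV
    res = minimize_scalar(Uprime, bounds=(1e-6, 1.0 - 1e-6),
                          method='bounded', args=(T,))
    print(T, res.x, -U_over_T4(res.x, T))   # T, Phi, P/T^4 (VdM excluded)
\end{verbatim}
Note that the printed $\Phi$ always stays below 1, simply because $J\to 0$ (and hence $-\kappa\ln J\to +\infty$) as $\Phi\to 1$, while the pressure is computed from ${\cal U}$ alone, as prescribed in the text.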
Note that the Jacobian is included as an extra effective term in the modified potential density, implying a sort of normalized volume factor. This is quite natural, as the form of Eqn. \ref{eq.intmes} implies that there is a Jacobian sitting at each and every space-time coordinate, depending on the values of $\Phi$ and $\bar{\Phi}$. \begin{figure}[!tbh] \subfigure{ {\includegraphics [scale=0.6] {fig2a.eps}} } \hskip 0.15 in \subfigure{ {\includegraphics[scale=0.6]{fig2b.eps}} } \caption{$\Phi$ and $P/P_{SB}$ for $\kappa = 0$($T_0 = 0.27$ GeV) and $\kappa=(0.22\,T_0^3/T^3)$($T_0=0.2555$ GeV). Here $T_c =$ 0.270 GeV. } \label{fg.purek3} \end{figure} With the new minimization condition all the coefficients should be estimated afresh. Instead, we retain the values of $a_i$ and $b_j$ obtained in \cite{pnjl2} and tune only the values of $T_0$ and $\kappa$. This is equivalent to a correlated modification of the $a_i$ and $b_j$ keeping $T_0$ fixed at 270 MeV. We show the variation of the Polyakov loop and the pressure $P$ normalized to the Stefan-Boltzmann (SB) pressure $P_{SB}$ for pure gauge theory, as a function of temperature. In Fig.\ref{fg.purek} we have used a small non-zero constant value of $\kappa = 0.05$. In Fig.\ref{fg.purek3} we find similar behaviour for a temperature dependent $\kappa = 0.22 T_0^3/T^3$. In both the figures the $\kappa=0$ curves are for the Polyakov loop model without the VdM term. Thus the parameter space of $\kappa$ is quite open at this stage. Within the range of temperatures ($T < 3 T_c$) where the Polyakov loop model is supposed to be a good description of the system, our approach and that of Ref. \cite{ratti4} give similar results. The reason behind this is that one can suitably adjust the parameters in both approaches. However, our method of introducing the VdM potential, as discussed above, is very different from that of Ref. \cite{ratti4}. The main difference is that the pressure computed in Ref. \cite{ratti4} includes the VdM term. Thus the coefficient of the VdM term requires an inverse temperature dependence, so that on a naive extrapolation to high temperatures the pressure does not blow up with the logarithm of the Jacobian. In that case another problem crops up with the remaining part of the thermodynamic potential, which at high temperatures has no bound, contrary to the claim that $\Phi \rightarrow 1$ as $T \rightarrow \infty$. Precisely because $\Phi$ should go to 1 as $T \rightarrow \infty$, we believe that the VdM term should be very important at high temperatures to constrain the maximum value of $\Phi$ to 1. The exercise of introducing a VdM term in the Polyakov loop model by itself has nothing new to offer: even without it the potential ${\cal U}$ was able to describe the pure glue theory quite well. However, its importance becomes evident in the PNJL model. The Polyakov loop has a coupling to the fermionic part, as will be seen in the corresponding thermodynamic potential below, which forces $\Phi$ to be greater than 1, and more so as the chemical potential is increased. The VdM term can inhibit such behaviour.
The thermodynamic potential of the PNJL model \cite{pnjl2,pnjl3,pnjl4} is given as, \begin{eqnarray} \Omega&=&{\cal U}\left(\Phi,\bar{\Phi},T\right)+ 2 G_1(\sigma_u^2 + \sigma_d^2) + 4 G_2 \sigma_u \sigma_d \nonumber \\ &-& \sum_{f=u,d} 2\,T\int\frac{\mathrm{d}^3p}{\left(2\pi\right)^3} \left\{ \ln\left[1+3\left(\Phi+\bar{\Phi}\mathrm{e}^ {-\left(E_f-\mu_f\right)/T}\right)\mathrm{e}^{-\left(E_f-\mu_f\right)/T} + \mathrm{e}^{-3\left(E_f-\mu_f\right)/T}\right]\right . \nonumber\\ &+& \left . \ln\left[1+3\left(\bar{\Phi}+\Phi\mathrm{e}^{-\left(E_f+\mu_f\right)/T} \right)\mathrm{e}^{-\left(E_f+\mu_f\right)/T}+ \mathrm{e}^{-3\left(E_f+\mu_f\right)/T}\right] \right\} - \sum_{f=u,d} 6\int\frac{\mathrm{d}^3p}{\left(2\pi\right)^3}{E_f} \theta\left(\Lambda^2-\vec{p}^{~2}\right) ~~~. \label{omega} \end{eqnarray} Here the quark condensates for the two light flavors $u$ and $d$ are given by $\sigma_u = <\bar{u}u>$ and $\sigma_d=<\bar{d}d>$ respectively, and the respective chemical potentials are $\mu_u$ and $\mu_d$. Note that $\mu_0 = (\mu_u+\mu_d)/2$ and $\mu_I = (\mu_u-\mu_d)/2$. The quasi-particle energies are $E_{u,d}=\sqrt{\vec{p}^{~2}+m_{u,d}^2}$, where $m_{u,d}=m_0-4 G_1 \sigma_{u,d} -4 G_2 \sigma_{d,u}$ are the constituent quark masses and $m_0$ is the current quark mass (we assume flavour degeneracy). $G_1$ and $G_2$ are the effective coupling strengths of a local, chiral symmetric four-point interaction. We take $G_1 = G_2 = G/4$, where $G$ is the coupling used in Ref. \cite{pnjl2}. $\Lambda$ is the 3-momentum cutoff in the NJL model. ${\cal U}\left(\Phi,\bar{\Phi},T\right)$ is the effective potential for $\Phi$ and $\bar{\Phi}$ as given in Eqn. \ref{eq.uu}. We locate the transition temperature in this model from the peaks in the temperature variation of $d\Phi/dT$ and $d\sigma_{u,d}/dT$. Similarly to the case of the Polyakov loop model, we now obtain the mean fields by minimizing, \begin{eqnarray} \frac{\Omega^{\prime}}{T^4} = \frac{\Omega}{T^4} - \kappa\,\ln [J(\Phi,\bar{\Phi})] \label{eq.omegap} \end{eqnarray} The coefficient $\kappa$ in the VdM term can in general have some temperature and/or chemical potential dependence. Here we take a constant value $\kappa = 0.2$, which suffices for the purpose of the present work. To set this value we looked at the two important quantities affected by the VdM term. The first is $\Phi$, which decreases as $\kappa$ increases and hence decreases the pressure. The second is the transition temperature, which increases with $\kappa$. Thus we try to optimize $\kappa$ to get both the pressure and the transition temperature as close as possible to the LQCD results for two quark flavours. On a naive extrapolation of this model to large chemical potentials, the $\Phi$ and $\bar{\Phi}$ should grow towards 1 (deconfinement at large chemical potential) even at very low temperatures. Thus again the logarithmic term blows up. So if the pressure is computed including the VdM term, as is done in Ref. \cite{ratti4}, an anomalous logarithmic divergence would come up. There may be some new physics that can obscure such terms by making $\kappa \rightarrow 0$ as $\mu \rightarrow \infty$. But that would again run into a problem in restricting $\Phi$ to the domain $0 \le \Phi \le 1$. Apart from the difference in the treatment of the VdM term, we now also remove the condition $\Phi = \bar{\Phi}$ used in \cite{ratti4}, since it has important implications for the susceptibilities. Before going over to our results, let us take a digression to the Lattice computation of $\Phi$.
On the lattice $\Phi$ is computed from the relation \cite{polyl}, \begin{eqnarray} \Phi(T) = \exp\,(-\bigtriangleup F_{q \bar{q}}(\infty,T)/2T), \label{eq.screen} \end{eqnarray} \noindent where, $\bigtriangleup F_{q \bar{q}}(\infty,T) = F_{q \bar{q}}(\infty,T) - F_{00}(T)$, and $F_{q \bar{q}}(r,T)$ is the free energy of a pair of heavy quark and anti-quark at a separation $r$ at a temperature $T$. This has been used to define a renormalized Polyakov loop in lattice simulations of both the pure gluon theory \cite{polatgl1,polatgl2} and full QCD \cite{polatfm}. In fact the data of \cite{polatgl1} was used to obtain the different parameters of the Polyakov loop model in \cite{pnjl2}, and is being used by us here, and in that sense $\Phi$ is the renormalized Polyakov loop. But even in this exercise the $\Phi$ in the Polyakov loop model of \cite{pnjl2} goes to 1 at large T and is thus different from lattice results for $T > T_c$. On the lattice the value of $\Phi$ goes above 1 for $T > T_c$. It has been argued that since the $\Phi$ measured in lattice simulations is a renormalized quantity, it is no longer a character of the group SU(3) and is thus not limited to values below 1. From Eqn.\ref{eq.screen}, it is evident that $\Phi > 1$ only when $\bigtriangleup F_{q \bar{q}}(\infty,T) < 0$, and this can be very easily seen to be true in the lattice simulations and happens for $T > T_c$. Now, the free energy $F_{q \bar{q}}(r,T)$ can be considered to be composed of three components, namely, a confining potential, a screening potential and an entropy part. For low temperatures the confining part is dominant and $\bigtriangleup F_{q \bar{q}}(\infty,T) > 0$. In the deconfined phase for large distances, the screening potential drops out, so the entropy part is dominant, which could lead to $\bigtriangleup F_{q \bar{q}}(\infty,T) \simeq - T \bigtriangleup S_{q \bar{q}}(T) < 0$, where $\bigtriangleup S_{q \bar{q}}(T) = S_{q \bar{q}}(T) - S_{00}$, and $S_{q \bar{q}}(T)$ denotes the entropy of the system with a pair of quark and anti-quark. However, the heavy quarks as such are not expected to contribute significantly to the entropy, and it seems natural to have $\bigtriangleup S_{q \bar{q}}(T) = 0$, and thus $\bigtriangleup F_{q \bar{q}}(\infty,T) = 0$ for $T > T_c$. Instead the value is negative on the lattice and $\bigtriangleup F_{q \bar{q}}(\infty,T) \rightarrow - \infty$ as $T \rightarrow \infty$, leading to $\Phi \rightarrow \infty$. One then has to worry about what can bend it down towards 1 at asymptotic temperatures, as was observed by Gava and Jengo in a perturbative evaluation of $\Phi$ \cite{gava}. However, this perturbative calculation also points to the fact that as the temperature is lowered from asymptotic values, $\Phi$ is greater than 1. Also, recent continuum estimates in chiral quark models \cite{enrique1} using dimensional reduction find close agreement with both lattice and perturbative calculations. On the other hand, another lattice computation of the Polyakov loop in pure glue theory uses a renormalization dependent on temperature instead of on the lattice spacing, and finds the values to remain below 1 at least up to $T \sim 3.5 T_c$ \cite{dumitrupol}. We thus admit that the state of affairs with the lattice computation of $\Phi$ is not very clear to us at this stage. There is a missing link from quantum computations to our matrix model mean-field computations.
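For orientation, the momentum integral in Eq. (\ref{omega}) is straightforward to evaluate numerically at given mean--field values, as is needed when minimizing Eq. (\ref{eq.omegap}). A minimal Python sketch (ours; the input numbers below are illustrative placeholders, not the fitted NJL parameters of Ref. \cite{pnjl2}):
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def omega_quark(T, mu, m, Phi, Phib):
    # finite-T quark contribution to Omega for one flavour, i.e. the
    # momentum integral of the two log terms in Eq. (omega)
    def integrand(p):
        E = np.sqrt(p*p + m*m)
        zp = np.exp(-(E - mu)/T)      # quark Boltzmann factor
        zm = np.exp(-(E + mu)/T)      # antiquark Boltzmann factor
        lnq  = np.log(1 + 3*(Phi + Phib*zp)*zp + zp**3)
        lnqb = np.log(1 + 3*(Phib + Phi*zm)*zm + zm**3)
        return p*p*(lnq + lnqb)
    val, _ = quad(integrand, 0.0, 20.0*T + 10.0*m)  # integrand decays fast
    return -2.0*T*val/(2.0*np.pi**2)

def omega_vac(m, Lam):
    # cut-off vacuum term -6 * int_{|p|<Lambda} E d^3p/(2 pi)^3, one flavour
    val, _ = quad(lambda p: p*p*np.sqrt(p*p + m*m), 0.0, Lam)
    return -6.0*val/(2.0*np.pi**2)

# illustrative numbers only (GeV units)
print(omega_quark(T=0.25, mu=0.0, m=0.35, Phi=0.5, Phib=0.5))
print(omega_vac(m=0.35, Lam=0.651))
\end{verbatim}
Here the identity $\int \mathrm{d}^3p/(2\pi)^3\, f(|\vec p|) = (1/2\pi^2)\int p^2 f(p)\,\mathrm{d}p$ has been used for the angular integration.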
\section{Results and discussions} \label{sc.results} \begin{figure}[!tbh] \subfigure[]{ {\includegraphics [scale=0.6] {fig3a.eps}} \label{fg.peaks} } \hskip 0.15 in \subfigure[]{ {\includegraphics[scale=0.6]{fig3b.eps}} \label{fg.phisig} } \vskip 0.1 in \subfigure[]{ {\includegraphics [scale=0.6] {fig3c.eps}} \label{fg.presr} } \caption{(a): Peaks in ${\rm d}\Phi /{\rm d}T$ and ${\rm d}\sigma /{\rm d}T$ set the $T_c$ at around 230 MeV. (b): $\Phi$ and $\sigma$ as functions of $T/T_c$.\\ {\it Note:} In this figure $\sigma = G(\sigma_u + \sigma_d)$. } \end{figure} \subsection{PNJL Model: Pressure, specific heat and speed of sound} \label{sc.pressure} Now we discuss the results for the PNJL model with the VdM term. Here the $\Omega^{\prime}$ as given in Eqn. \ref{eq.omegap} is minimized with respect to the fields, and all the thermodynamic quantities are obtained using these values. The peaks of the $d\Phi/dT$ and $d\sigma_{u,d}/dT$ curves, as shown in Fig. \ref{fg.peaks}, differ by 5 MeV. Their average position, which is at 230 MeV, is taken as the transition (or crossover) temperature $T_c$. In spite of the significant difference of $T_c$ in the PNJL model from the corresponding LQCD value of 192(7)(4) MeV \cite{cheng}, the thermodynamic quantities, when plotted against the scaled temperature $T/T_c$, show similar behaviour. We shall henceforth show the temperature dependences in terms of $T/T_c$. As mentioned earlier, we are using an optimized value of $\kappa = 0.2$. The temperature dependence of the fields is shown in Fig. \ref{fg.phisig}. It agrees reasonably with the LQCD results shown in Fig. 1 of Ref. \cite{petcy}. The scaled pressure $P/P_{SB}$ is plotted in Fig. \ref{fg.presr}. It slightly overestimates the LQCD pressure \cite{leos}. However, it agrees well with the recent LQCD results for 2+1 flavors with almost physical quark masses \cite{datta}. Now, the energy density $\epsilon$ is obtained from the relation, \begin{eqnarray} \epsilon = - T^2 \left . \frac{\partial}{\partial T} \left(\frac{\Omega}{T}\right) \right |_V = - T \left . {\frac{\partial \Omega}{\partial T}} \right |_V + \Omega ~~~. \end{eqnarray} \noindent The rate of change of the energy density $\epsilon$ with temperature at constant volume is the specific heat $C_V$, which is given as, \begin{eqnarray} C_V = \left . {\frac{\partial \epsilon}{\partial T}} \right |_V = - \left . T {\frac{\partial^2 \Omega}{\partial T^2}} \right |_V ~~~. \label{sph} \end{eqnarray} \noindent The square of the velocity of sound at constant entropy $S$ is given by, \begin{eqnarray} v_s^2 = \left . {\frac{\partial P}{\partial \epsilon}} \right |_S = \left . {\frac{\partial P}{\partial T}} \right |_V \left / \left . {\frac{\partial \epsilon}{\partial T}} \right |_V \right . = \left . {\frac{\partial \Omega}{\partial T}} \right |_V \left / \left . T {\frac{\partial^2 \Omega}{\partial T^2}} \right |_V \right . ~~~. \label{sps} \end{eqnarray} \noindent The conformal measure is given by, \begin{eqnarray} {\cal C}=\Delta/\epsilon \qquad ; \qquad \Delta = \epsilon - 3P \label{cnm} \end{eqnarray} \begin{figure}[!tbh] \subfigure[]{ {\includegraphics [scale=0.6] {fig4a.eps}} \label{fg.cv} } \hskip 0.15 in \subfigure[]{ {\includegraphics[scale=0.6]{fig4b.eps}} \label{fg.cs} } \caption{(a): Temperature dependence of energy density $\epsilon$ and specific heat $C_V$. (b): Temperature dependence of squared speed of sound $v_s^2$ and conformal measure $\Delta/\epsilon$. The arrows on the right show the corresponding SB limit.
} \label{fg.cvcs} \end{figure} These quantities are plotted in Fig. \ref{fg.cvcs}. At higher temperatures the $C_V$ is slightly lower than the values obtained in \cite{pnjl3}. However, the velocity of sound and the conformal measure remain unaltered in the whole range of temperatures. Thus the VdM term affects $C_V$ but not quantities involving ratios of pressure and energy density, e.g. $v_s^2$ and ${\cal C}$. It is interesting to note that our earlier work \cite{pnjl3}, as well as the present one, has been able to predict the value of $v_s^2$ quite well when compared to the recent LQCD results \cite{datta}. We hope similar encouraging results will be obtained on the lattice for the specific heat. \subsection{Taylor expansion of Pressure} \label{sc.tayexp} \begin{figure}[!tbh] \subfigure[]{ \label{fg.c2} {\includegraphics [scale=0.6] {fig5a.eps}} } \hskip 0.1 in \subfigure[]{ \label{fg.c4} {\includegraphics [scale=0.6] {fig5b.eps}} } \vskip 0.1 in \subfigure[]{ \label{fg.c6} {\includegraphics [scale=0.6] {fig5c.eps}} } \caption{The Taylor expansion coefficients of pressure in quark number and isospin chemical potentials as functions of $T/T_c$. Symbols are LQCD data \cite{sixx}. Arrows on the right indicate the corresponding ideal gas values. } \label{fg.scnord}\end{figure} The Taylor expansion coefficients of pressure with respect to the chemical potentials have been the focus of comparison of PNJL and LQCD results \cite{pnjl3,pnjl4,ratti4,ratti5}. Here we have expanded the scaled pressure ($P/T^4$) in a Taylor series in the quark number and isospin chemical potentials, $\mu_0$ and $\mu_I$ respectively, \begin{eqnarray} \frac{P(T,\mu_0,\mu_I)}{T^4}= \sum^{\infty}_{n=0} \sum^{n}_{j=0} \frac{n !}{j ! (n-j) !} c^{jk}_n(T) \left(\frac{\mu_0}{T}\right)^j \left(\frac{\mu_I}{T}\right)^k ~~~;~ k=n-j, \label{tay} \end{eqnarray} where, \begin{eqnarray} c^{jk}_n(T) = \frac{1}{n!} \frac{\partial^n \left ({P(T,\mu_0,\mu_I) / T^4} \right )} {\partial \left(\frac{\mu_0}{T}\right)^j \partial \left(\frac{\mu_I}{T}\right)^k} \Big|_{\mu_0=0,\mu_I=0} ~~~. \label{taycoff} \end{eqnarray} The $n= {\rm odd}$ terms vanish due to CP symmetry. Even for the $n= {\rm even}$ terms, due to flavour degeneracy all the coefficients $c^{jk}_n$ with $j$ and $k$ both odd vanish identically. We evaluate all the 10 nonzero coefficients (including the pressure at $\mu_0 = \mu_I = 0$) up to order $n=6$ and compare them to LQCD data. These coefficients were evaluated in \cite{pnjl3,pnjl4} and certain differences were found w.r.t. LQCD data. We shall now discuss the effects of the VdM term on these coefficients. The coefficients we deal with are given by, \begin{eqnarray} c_n(T) &=& \frac{1}{n!} \left. \frac{\partial^n \left ({P(T,\mu_0) / T^4} \right ) } {\partial \left(\frac{\mu_0}{T}\right)^n}\right|_{\mu_0=0} = c^{n0}_n~~~, \end{eqnarray} \begin{eqnarray} c^I_n(T) &=& \left. {\frac{1}{n!} \frac{\partial^n \left ({P(T,\mu_0,\mu_I) / T^4} \right ) }{ \partial \left(\frac{\mu_0 }{ T }\right)^{n-2} \partial \left(\frac{\mu_I }{T }\right)^2 }}\right|_{\mu_0=0,\mu_I=0} = c^{(n-2) 2}_n~~~; ~ n > 1. \end{eqnarray} We present the QNS, INS and their higher order derivatives with respect to $\mu_0$ in Fig.\ \ref{fg.scnord}. We have plotted the LQCD data from Ref.\ \cite{sixx} for quantitative comparison. At the second order (Fig. \ref{fg.c2}) we find that the QNS $c_2$ compares well with the LQCD data up to about 1.2 $T_c$.
Thereafter the PNJL values rise towards the SB limit, while the LQCD values saturate at about $80\%$ of this limit. The INS $c^I_2$ shows similar behaviour, but at lower temperatures it goes slightly above the corresponding LQCD values. There is no significant difference in $c^I_2$ with and without the VdM term. However, while $c_2$ was close to the LQCD result without the VdM term \cite{pnjl4}, at high temperatures it now goes above the LQCD values and approaches $c^I_2$. Thus at high temperatures these coefficients overestimate the LQCD results but are almost equal to each other, similar to what is observed on the lattice. This was not so without the VdM term \cite{pnjl4}.

Now we discuss the $4^{th}$ order coefficients (Fig.~\ref{fg.c4}). The values of $c_4$ in the PNJL model with the VdM term match closely those of the LQCD data over the full range of temperatures. This is in contrast to what was found without the VdM term \cite{pnjl3}, where they were close only up to $T \sim 1.1 T_c$. The VdM term does not affect the coefficient $c^I_4$, which agrees well with the LQCD data over the full range of $T$. Also, both these coefficients approach each other as well as the corresponding SB limit. At the $6^{th}$ order (Fig.~\ref{fg.c6}) the coefficients do not seem to be affected by the VdM term.

We now summarize the salient features of the Taylor coefficients in this modified PNJL model:
\begin{itemize}
\item All the coefficients start approaching their respective SB limits around $2 T_c$.
\item Both the QNS and INS approach each other at $2 T_c$. This is also true for their corresponding responses to the quark chemical potential given by the $4^{th}$ and $6^{th}$ order coefficients.
\item At high temperatures, except for $c_2$ and $c^I_2$, all the coefficients compare well quantitatively with the LQCD data.
\item The main effect of the VdM term is to move $c_2$ and $c_4$ close to their respective SB limits.
\end{itemize}

\begin{figure}[!tbh]
\subfigure[]{ \label{fg.phmub} {\includegraphics [scale=0.6] {fig6a.eps}} }
\hskip 0.15 in
\subfigure[]{ \label{fg.phmui} {\includegraphics[scale=0.6]{fig6b.eps}} }
\caption{(a): $\Phi$ (solid lines) decreases and $\bar{\Phi}$ (dotted lines) increases as a function of $\mu_0/T$ ($\mu_I=0$) at low temperatures, while they are almost equal and constant at high temperatures. (b): $\Phi$ (solid lines) and $\bar{\Phi}$ (dotted lines) are equal and almost constant as a function of $\mu_I/T$ ($\mu_0=0$). }
\label{fg.phimub}\end{figure}

We have emphasized the role of the Polyakov loop in obtaining the values of the Taylor coefficients in our earlier works \cite{pnjl3,pnjl4}. In those works we found that the Polyakov loop goes above 1 at high temperatures and also has a significant dependence on $\mu_0$ but not on $\mu_I$. Here, as shown in Fig.~\ref{fg.phimub}, the VdM term restricts the value of $\Phi$ to below 1, and the $\mu_0$ dependence at higher temperatures is almost negligible. Thus even the splitting between $\Phi$ and $\bar{\Phi}$ has almost disappeared. We note that though we allow $\Phi$ and $\bar{\Phi}$ to be different, they come out almost equal at high temperatures. This is in contrast to imposing $\Phi = \bar{\Phi}$ over the full range of temperatures, as done in Ref.~\cite{ratti4}. The difference between $\Phi$ and $\bar{\Phi}$ is responsible for the difference between $c_2$ and $c^I_2$ at intermediate temperatures.
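As an aside, once the pressure has been computed on a grid of chemical potentials, the extraction of such coefficients reduces to numerical differentiation. The sketch below is our own illustration (not the code used in this work): it estimates $c_2$, $c_4$ and $c_6$ by central finite differences in $x=\mu_0/T$ at $\mu_I=0$, where \texttt{pressure(T, mu0, muI)} is a hypothetical stand-in for the pressure obtained from the minimized $\Omega^{\prime}$.
\begin{verbatim}
import numpy as np
from math import comb, factorial

def taylor_coefficients(pressure, T, h=0.05):
    """Estimate c_n = (1/n!) d^n(P/T^4)/d(mu0/T)^n at mu0 = muI = 0
    using n-th order central differences on a 7-point stencil."""
    m = 3                                    # half-width of the stencil
    xs = h * np.arange(-m, m + 1)            # nodes in x = mu0/T
    f = np.array([pressure(T, x * T, 0.0) / T**4 for x in xs])
    coeffs = {}
    for n in (2, 4, 6):
        # n-th central difference: sum_k (-1)^k C(n,k) f((n/2 - k) h)
        d = sum((-1)**k * comb(n, k) * f[m + n // 2 - k]
                for k in range(n + 1))
        coeffs[n] = d / (h**n * factorial(n))
    return coeffs
\end{verbatim}
The same stencil applied in $\mu_I$ (or a mixed stencil in both variables) would yield the isospin coefficients $c^I_n$; in practice the step size $h$ has to be balanced against the numerical noise of the minimization.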
To complete the comparison with the LQCD data we have looked at the flavour diagonal ($c_n^{uu}$) and flavour off-diagonal ($c_n^{ud}$) susceptibilities defined as,
\begin{eqnarray}
c^{uu}_n = \frac{c^{n0}_n + c^{(n-2) 2}_n }{ 4}, \qquad{\rm and}\qquad c^{ud}_n = \frac{c^{n0}_n - c^{(n-2) 2}_n }{4} .
\end{eqnarray}
The $2$-nd order flavour diagonal and off-diagonal susceptibilities are given by,
\begin{eqnarray}
\nonumber
\frac{\chi_{uu}(T,\mu_u=0,\mu_d=0)}{T^2} &=& \left. \frac{\partial^2 P(T,\mu_u,\mu_d)}{\partial\mu_u^2} \right|_{{\mu_u=\mu_d=0}} = 2c_2^{uu}, \qquad{\rm and}\qquad \\
\nonumber
\frac{\chi_{ud}(T,\mu_u=0,\mu_d=0)}{T^2} &=& \left. \frac{\partial^2 P(T,\mu_u,\mu_d)}{\partial\mu_u\partial\mu_d} \right|_{{\mu_u=\mu_d=0}} = 2c_2^{ud} .
\end{eqnarray}

\begin{figure}[!tbh]
{\includegraphics [scale=0.45] {fig7a.eps}}
{\includegraphics[scale=0.45]{fig7b.eps}}
{\includegraphics[scale=0.45]{fig7c.eps}}
{\includegraphics [scale=0.45] {fig7d.eps}}
{\includegraphics[scale=0.45]{fig7e.eps}}
{\includegraphics[scale=0.45]{fig7f.eps}}
\caption{The flavour diagonal ({\it upper row}) and flavour off-diagonal ({\it lower row}) susceptibilities for $n = 2$, $4$ and $6$ as functions of $T/T_c$. Symbols are LQCD data \cite{sixx}. The arrows on the right indicate the respective ideal gas values.}
\label{fg.diodi}\end{figure}

These are shown in Fig.\ \ref{fg.diodi}. Except for $c_2^{uu}$, all the other LQCD diagonal and off-diagonal coefficients are close to their respective ideal gas values from $1.2T_c$ onwards. The most striking discrepancy with respect to the LQCD data without the VdM term was (see \cite{pnjl4}) in the $2$-nd order flavour off-diagonal susceptibility $c^{ud}_2$. $c_2^{ud}$ signifies the mixing of $u$ and $d$ quarks through the contribution of the two disconnected $u$ and $d$ quark loops. While the LQCD data show that this kind of correlation between the $u$ and $d$ flavours is almost zero just away from $T_c$, the PNJL model results remained non-zero even up to $2 T_c$. With the VdM term added, this part of the PNJL physics is now consistent with the LQCD results. Below $1.2T_c$ there is still a large quantitative difference between the PNJL and LQCD results for $c_2^{ud}$. Obviously the VdM term is not expected to affect the results at low temperatures significantly. At the moment it is not clear what physics lies behind the difference between the PNJL and LQCD results for $c_2^{uu}$ at high temperatures and $c_2^{ud}$ at low temperatures. Perhaps the quark masses may hold an answer.

\section{Summary}
\label{sc.summary}

In this work the PNJL model of Refs.\ \cite{pnjl2,pnjl3,pnjl4} has been extended by introducing a VdM term. The important change it brings about is to set the upper limit of the Polyakov loop to 1. With this model we have studied some thermodynamic properties of strongly interacting matter with the light flavours $u$ and $d$ within a certain range of temperature $T$, and for small values of the chemical potentials $\mu_0$ and $\mu_I$. In principle the VdM term affects all thermodynamic quantities. We adjusted the parameters in the model so that the pressure and energy density are close to those computed in LQCD. We have then made estimates of the specific heat, the speed of sound and the conformal measure. Further, we have extracted the Taylor expansion coefficients of the pressure in the two chemical potentials up to $6$-th order. All the coefficients approach their respective SB limits above $2 T_c$.
A quantitative comparison with the LQCD results shows reasonable agreement, though the QNS $c_2$ and the INS $c^I_2$ on the lattice are smaller by about 20\%. In contrast, our earlier estimates \cite{pnjl3,pnjl4} of these coefficients without the VdM term showed that $c_4$ and $c^I_2$ differ from the LQCD results. Thus the main effect of the VdM term is to impose physical constraints on $\Phi$ and $\bar{\Phi}$ such that at large temperatures the coefficients of the same order approach each other. This is clearly visible from the flavour off-diagonal coefficients shown in Fig.~\ref{fg.diodi}.

The remaining difference between the values of the QNS and INS in the model and on the lattice still needs to be addressed. Possible future steps to bring about better agreement could be to include beyond-mean-field effects and/or some temperature dependence in the coefficient of the VdM term. However, the lattice quark masses may also be important in bridging the gap. We have already found that the data for the pressure with almost physical quark masses \cite{datta} show an increase at any given temperature when compared to data with larger quark masses \cite{leos}. This encourages us to believe that an extraction of the susceptibilities with similar quark masses on the lattice may yield better agreement with our results. Another way to compare results would be to re-estimate the parameters of the NJL model directly from the pion mass and decay constant obtained on the lattice. We hope to undertake such studies in the future.

In an alternative formulation of the PNJL model including the effect of the VdM term, the coefficients $c_2$, $c_4$, $c_6$ and $c_8$ have been calculated \cite{ratti4}. Surprisingly, we more or less agree with those results quantitatively. Apart from the fact that this may be possible due to the various adjustable parameters in both models, the main reason seems to be the small dependence of $\Phi$ and $\bar{\Phi}$ on the chemical potentials. The basic difference between the two approaches is in the use of the VdM potential. The VdM term is required to obtain the mean field solutions of $\Phi$ and $\bar{\Phi}$. But, as we have explained in the formalism, it should not be included in the expression for the pressure. On the other hand, in Ref.~\cite{ratti4} the VdM term is used not only to obtain the mean fields but is also included while calculating the value of the pressure. The difference in the mean field treatments, coupled with almost the same final results, hints that the mean field treatment has certain shortcomings and is unable to settle the issues at hand. It would thus be worthwhile to look beyond.

\begin{acknowledgments}
We would like to thank A. Bhattacharya, S. Datta, S. Digal, S. Gupta, S. Mukherjee, P.B. Pal and R. Pisarski for many useful discussions and comments. We are thankful to A. Dumitru and O. Kaczmarek for useful discussions on the lattice computation of the Polyakov loop.
\end{acknowledgments}
\section{The proof of Theorem \ref{rf} when case (a) of Lemma \ref{gluing} arises}
\label{case a}

Figure \ref{inv1} depicts an involution $\tau_1$ on $E_1 \times I$ under which $\partial_3 E_1\times I$ is invariant with its boundary components interchanged, and which satisfies $\tau_1(A_{11})=A_{12}$. Then $\tau_1$ extends to an involution of $X_1$ since its restriction to $\partial_3 E_1\times I = A_1$ coincides with the restriction to $A_1$ of the standard involution of $V_1$. Evidently $\tau_1(\partial_1 E_1 \times \{0\}) = \partial_2 E_1 \times \{1\}$ and $\tau_1(\partial_2 E_1 \times \{0\}) = \partial_1 E_1 \times \{1\}$.

\begin{figure}[!ht]
{\epsfxsize=4in \centerline{\epsfbox{inv1.ai}}\hspace{10mm}}
\caption{$X_1$ and the involution $\tau_1$}\label{inv1}
\end{figure}

Figure \ref{inv2} depicts an involution $\tau_2$ on $E_2 \times I$ under which each of the annuli $\partial_3 E_2 \times I$, $A_{21}$, and $A_{22}$ is invariant. Further, it interchanges the components of $E_2 \times\partial I$ and, as in the previous paragraph, $\tau_2$ extends to an involution of $X_2$. Note that $\tau_2(\partial_j E_2 \times \{0\}) = \partial_j E_2 \times \{1\}$ for $j = 1, 2$.

Next consider the orientation preserving involution $\tau_2' = f (\tau_1|P_1) f^{-1}$ on $P_2$. By construction we have $\tau_2'(\partial_j E_2 \times \{0\}) = \partial_j E_2 \times \{1\}$ for $j = 1, 2$, and therefore $\tau_2' = g (\tau_2|P_2) g^{-1}$ where $g: P_2 {\rightarrow} P_2$ is a homeomorphism whose restriction to $\partial P_2$ is isotopic to $1_{\partial P_2}$. The latter fact implies that $g$ is isotopic to a homeomorphism $g': P_2 {\rightarrow} P_2$ which commutes with $\tau_2|P_2$. Hence, $\tau_2'$ is isotopic to $\tau_2|P_2$ through orientation preserving involutions whose fixed point sets consist of two points. In particular, $\tau_1$ and $\tau_2$ can be pieced together to form an orientation preserving involution $\tau: M {\rightarrow} M$. For each slope $\gamma$ on $\partial M$, $\tau$ extends to an involution $\tau_\gamma$ of the associated Dehn filling $M(\gamma) = M \cup V_\gamma$, where $V_\gamma$ is the filling solid torus. Thurston's orbifold theorem applies to our situation and implies that $M(\gamma)$ has a geometric decomposition. In particular, $M(\alpha)$ is a Seifert fibred manifold whose base orbifold is of the form $S^2(2,3,4)$, a $2$-sphere with three cone points of orders $2,3,4$ respectively.

It follows immediately from our constructions that $X_1(\beta)/ \tau_\beta$ and $X_2(\beta)/ \tau_\beta$ are $3$-balls. Thus $M(\beta)/ \tau_\beta = (X_1(\beta)/ \tau_\beta) \cup (X_2(\beta)/ \tau_\beta) \cong S^3$ and since $\partial M / \tau \cong S^2$, it follows that $M / \tau$ is a $3$-ball. More precisely, $M/\tau$ is an orbifold $(N,L^0)$, where $N$ is a 3-ball, $L^0$ is a properly embedded 1-manifold in $N$ that meets $\partial N$ in four points, and $M$ is the double branched cover of $(N,L^0)$. We will call $(N,L^0)$ a {\em tangle}, and if we choose some identification of $(\partial N,\partial L^0)$ with a standard model of ($S^2$, {\em four points}), then $(N,L^0)$ becomes a {\em marked} tangle. Capping off $\partial N$ with a 3-ball $B$ gives $N\cup_\partial B\cong S^3$. Then, if $\gamma$ is a slope on $\partial M$, we have $V_\gamma/\tau_\gamma \cong (B,T_\gamma)$, where $T_\gamma$ is the rational tangle in $B$ corresponding to the slope $\gamma$.
Hence
\begin{equation*}
\begin{split}
M(\gamma)/\tau_\gamma & = (M/\tau) \cup (V_\gamma/\tau_\gamma)\\
& = (N,L^0) \cup (B,T_\gamma)\\
& = (S^3, L^0 (\gamma))\ ,
\end{split}
\end{equation*}
where $L^0(\gamma)$ is the link in $S^3$ obtained by capping off $L^0$ with the rational tangle $T_\gamma$.

\begin{figure}[!ht]
{\epsfxsize=4in \centerline{\epsfbox{inv2.ai}}\hspace{10mm}}
\caption{$X_2$ and the involution $\tau_2$}\label{inv2}
\end{figure}

We now give a more detailed description of the tangle $(N,L^0)$. For $i=1,2$, let $B_i=V_i/ \tau_i$, $W_i=E_i\times I/\tau_i$, $Y_i=X_i/\tau_i$, and $Q_i=P_i/\tau_i$. Figure \ref{quo1} gives a detailed description of the branch sets in $B_i$, $W_i$, $Y_i$ with respect to the corresponding branched covering maps. Note that $N$ is the union of $Y_1$, $Y_2$, and a product region $R \cong Q_1 \times I$ from $Q_1$ to $Q_2$ which intersects the branch set $L^0$ of the cover $M {\rightarrow} N$ in a $2$-braid. In fact, it is clear from our constructions that we can think of the union $(L^0 \cap R) \cup (\partial N \cap R)$ as a ``$4$-braid" in $R$ with two ``fat strands" formed by $\partial N \cap R$. See Figure \ref{filling}(a). By an isotopy of $R$ fixing $Q_2$ and keeping $R$, $Q_1$, and $Y_1$ invariant, we may untwist the crossings between the two fat strands in Figure \ref{filling}(a) so that the pair $(N, L^0)$ is as depicted in Figure \ref{filling}(b).

\begin{figure}[!ht]
{\epsfxsize=5in \centerline{\epsfbox{quo1.ai}}\hspace{10mm}}
\caption{The branch sets in $B_i, W_i$, and $Y_i$}\label{quo1}
\end{figure}

The slope $\beta$ is the boundary slope of the planar surface $P$, and hence the rational tangle $T_\beta$ appears in Figure~\ref{filling}(b) as two short horizontal arcs in $B$ lying entirely in $Y_2(\beta) = X_2 (\beta)/\tau_\beta$. Since $\Delta (\alpha,\beta)=2$, $T_\alpha$ is a tangle of the form shown in Figure~\ref{filling6}(a). Recall that $M(\alpha)$ is a Seifert fibred manifold with base orbifold of type $S^2 (2,3,4)$, and is the double branched cover of $(S^3,L^0(\alpha))$. Write $L= L^0(\alpha)$.

\begin{lemma} \label{type}
$L$ is a Montesinos link of type $(\frac{p}{2}, \frac{q}{3}, \frac{r}{4})$.
\end{lemma}

{\noindent{\bf Proof.\hspace{2mm}}} By Thurston's orbifold theorem, the Seifert fibering of $M(\alpha)$ can be isotoped to be invariant under $\tau_\alpha$. Hence the quotient orbifold is Seifert fibred in the sense of Bonahon-Siebenmann, and so either $L$ is a Montesinos link or $S^3 \setminus L$ is Seifert fibred. From Figure 6(a) we see that $L$ is a 2-component link with an unknotted component and linking number $\pm 1$. But the only link $L$ with this property such that $S^3 \setminus L$ is Seifert fibred is the Hopf link (see \cite{BM}), whose $2$-fold cover is $P^3$. Thus $L$ must be a Montesinos link. Since the base orbifold of $M(\alpha)$ is $S^2(2,3,4)$, $L$ has type $(\frac{p}{2}, \frac{q}{3}, \frac{r}{4})$ (cf.\ \S 12.D of \cite{BuZi}).{\hspace{2mm}{\small $\diamondsuit$}}

It is easy to check that any Montesinos link $L$ of the type described in Lemma \ref{type} has two components, one of which, say $K_1$, is a trivial knot, and the other, $K_2$, a trefoil knot. Our goal is to use the particular nature of our situation to show that the branch set $L$ cannot be a Montesinos link of type $(\frac{p}{2}, \frac{q}{3}, \frac{r}{4})$, and thus derive a contradiction.
From Figure \ref{filling}, we see that $L^0$ has a closed, unknotted component, which must be the component $K_1$ of the Montesinos link of type $(\frac{p}{2}, \frac{q}{3}, \frac{r}{4})$ described above. Then $L^0 \setminus K_1 = K_2 \cap N$, which we denote by $K_2^0$.

\begin{figure}[!ht]
{\epsfxsize=5in \centerline{\epsfbox{filling.ai}}\hspace{10mm}}
\caption{The branch set $L^0$ in $N$}\label{filling}
\end{figure}

Now delete $K_1$ from $N$ and let $U$ be the double branched cover of $N$ branched over $K_2^0$. Then $U$ is a compact, connected, orientable $3$-manifold with boundary a torus which can be identified with $\partial M$. In particular, if we consider $\alpha$ and $\beta$ as slopes on $\partial U$, then both $U(\alpha)$ and $U(\beta)$ are the lens space $L(3,1)$, since they are $2$-fold covers of $S^3$ branched over a trefoil knot. Hence the cyclic surgery theorem of \cite{CGLS} implies that $U$ is either a Seifert fibred space or a reducible manifold.

\begin{lemma}
$U$ is not a Seifert fibred space.
\end{lemma}

{\noindent{\bf Proof.\hspace{2mm}}} Suppose $U$ is a Seifert fibred space, with base surface $F$ and $n \geq 0$ exceptional fibres. If $F$ is non-orientable then $U$ contains a Klein bottle, and hence so does $U(\alpha) \cong L(3,1)$. But since non-orientable surfaces in $L(3,1)$ are non-separating, this implies that $H_1(L(3,1); \mathbb Z/2) \not \cong 0$, which is clearly false. Thus $F$ is orientable.

If $U$ is a solid torus then clearly $U(\alpha) \cong U(\beta) \cong L(3,1)$ implies $\Delta(\alpha, \beta) \equiv 0$ (mod $3$), contradicting the fact that $\Delta(\alpha, \beta) = 2$. Thus we assume that $U$ is not a solid torus, and take $\phi \in H_1(\partial U)$ to be the slope on $\partial U$ of a Seifert fibre. Then $U(\phi)$ is reducible \cite{Hl}, so $d = \Delta(\alpha, \phi) > 0$, and $U(\alpha)$ is a Seifert fibred space with base surface $F$ capped off with a disk, and $n$ or $n+1$ exceptional fibres, according as $d = 1$ or $d > 1$. Since $U(\alpha)$ is a lens space and $U$ is not a solid torus, we must have that $F$ is a disk, $n = 2$, and $d = 1$. Similarly $\Delta (\beta, \phi) = 1$. In particular, without loss of generality $\beta = \alpha + 2\phi$ in $H_1(\partial U)$.

The base orbifold of $U$ is of the form $D^2(p,q)$, with $p,q > 1$. Then $H_1(U)$ is the abelian group defined by generators $x,y$ and the single relation $px + qy = 0$. Suppose $\alpha \mapsto ax + by$ in $H_1(U)$. Then $H_1(U(\alpha))$ is presented by the matrix $\left( \begin{array}{cc} p & a \\ q & b \end{array} \right)$. Similarly, since $\phi \mapsto px$ in $H_1(U)$, $H_1(U(\beta))$ is presented by $\left( \begin{array}{cc} p & a + 2p\\ q & b \end{array} \right)$. But the determinants of these matrices differ by $2pq \geq 8$, so they cannot both be $3$ in absolute value. This completes the proof of the lemma. {\hspace{2mm}{\small $\diamondsuit$}}

Thus $U$ is reducible, say $U \cong V \# W$ where $\partial V = \partial U$ and $W \not \cong S^3$ is closed. Consideration of $U(\alpha)$ and $U(\beta)$ shows that $W \cong L(3,1)$ and $V(\alpha) \cong V(\beta) \cong S^3$, and so Theorem 2 of \cite{GL1} implies that $V \cong S^1 \times D^2$. It follows that any simple closed curve in $\partial U$ which represents either $\alpha$ or $\beta$ is isotopic to the core curve of $V$. Let $\lambda \in H_1(\partial U)$ denote the meridional slope of $V$. Then $\{\beta, \lambda\}$ is a basis of $H_1(\partial U)$ and, up to changing the sign of $\alpha$, we have $\alpha=\beta \pm 2\lambda$.
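For the reader's convenience, the order count at the end of the proof of the lemma above can be displayed explicitly:
\[
|H_1(U(\alpha))| = \left|\det \left( \begin{array}{cc} p & a \\ q & b \end{array} \right)\right| = |pb-qa|, \qquad
|H_1(U(\beta))| = \left|\det \left( \begin{array}{cc} p & a+2p \\ q & b \end{array} \right)\right| = |pb-qa-2pq|.
\]
Since $p,q \geq 2$, the two determinants differ by $2pq \geq 8 > 6$, whereas both would have to be $\pm 3$ if both fillings were the lens space $L(3,1)$.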
Since $U\cong (S^1 \times D^2)\,\#\, L(3,1)$, we can find a homeomorphism between the pair $(N,K_2^0)$ and the tangle shown in Figure~\ref{NK20}(a), with the $\beta$, $\alpha$, and $\lambda$ fillings shown in Figures~\ref{NK20}(b), (c) and (d) respectively. (We show the case $\alpha = \beta +2\lambda$; the other possibility can be handled similarly.)

\begin{figure}[!ht]
{\epsfxsize=4in \centerline{\epsfbox{NK20.ai}}\hspace{10mm}}
\caption{} \label{NK20}
\end{figure}

Recall that in Figure \ref{filling}(b), the slope $\beta$ corresponds to the rational tangle consisting of two short ``horizontal'' arcs in the filling ball $B$. It follows that under the homeomorphism from the tangle shown in Figure~\ref{NK20}(a) to $(N,K_2^0)$ shown in Figure~\ref{filling}(b), the tangle $T_\alpha$ (resp. $T_\lambda$) is sent to a rational tangle of the form shown in Figure~\ref{filling6}(a) (resp. \ref{filling6}(b)).

From Figure~\ref{NK20}(d) we see that $L^0(\lambda)$ is a link of three components $K_1 \cup O_1 \cup K_3$, where $O_1$ is a trivial knot which bounds a disk $D$ disjoint from $K_3$ and which intersects $\partial N$ in a single arc; see Figure~\ref{filling6}(b). Push the arc $O_1\cap B$ with its two endpoints fixed into ${\partial} B$ along $D$, and let $O_1^*$ be the resulting knot (see part (c) of Figure \ref{filling6}). Then there is a disk $D_*$ (which is a subdisk of $D$) satisfying the following conditions:
\begin{itemize}
\item[(1)] $\partial D_*=O_1^*$.
\item[(2)] $D_*$ is disjoint from $K_3$.
\item[(3)] The interior of $D_*$ is disjoint from $B$.
\end{itemize}

\begin{figure}[!ht]
{\epsfxsize=6in \centerline{\epsfbox{filling6.ai}}\hspace{10mm}}
\caption{The tangle fillings $N(\alpha)$ and $N(\lambda)$.} \label{filling6}
\end{figure}

\noindent Perusal of Figure \ref{filling6}(c) shows that the following condition is also achievable.
\begin{itemize}
\item[(4)] $D_* \cap Q_2$ has a single arc component, and this arc component connects the two boundary components of $Q_2$ and is outermost in $D_*$ amongst the components of $D_* \cap Q_2$.
\end{itemize}
Among all disks in $S^3$ which satisfy conditions (1)--(4), we may assume that $D_*$ has been chosen so that
\begin{itemize}
\item[(5)] $D_* \cap Q_2$ has the minimal number of components.
\end{itemize}

\begin{claim}\label{parallel}
Suppose that $D_* \cap Q_2$ has circle components. Then each such circle separates $K_3 \cap Q_2$ from $\partial Q_2$ in $Q_2$.
\end{claim}

{\noindent{\bf Proof.\hspace{2mm}}} Let $\delta$ be a circle component of $D_* \cap Q_2$. Then $\delta$ is essential in $Q_2 \setminus (Q_2 \cap K_3)$, for if it bounds a disk $D_0$ in $Q_2 \setminus (Q_2 \cap K_3)$, then an innermost component of $D_* \cap D_0 \subset D_* \cap Q_2$ will bound a disk $D_1 \subset D_0$. We can surger $D_*$ using $D_1$ to get a new disk satisfying conditions (1)--(4) above, but with fewer components of intersection with $Q_2$ than $D_*$, contrary to assumption (5). Next, since the arc component of $D_* \cap Q_2$ connects the two boundary components of $Q_2$, $\delta$ cannot separate the two boundary components of $Q_2$ from each other. Lastly, suppose that $\delta$ separates the two points of $Q_2 \cap K_3$. Then $\delta$ is isotopic to a meridian curve of $K_3$ in $S^3$. But this is impossible since $\delta$ also bounds a disk in $D_*$ and is therefore null-homologous in $S^3 \setminus K_3$.
The claim follows.{\hspace{2mm}{\small $\diamondsuit$}}

\begin{figure}[!ht]
{\epsfxsize=4in \centerline{\epsfbox{filling7a.ai}}\hspace{10mm}}
\caption{Capping off the $4$-braid to obtain a trivial link}\label{filling7}
\end{figure}

It follows from Claim \ref{parallel} that there are two disjoint arcs in $Q_2$: one, say $\sigma_1$, which connects the two points of $Q_2 \cap K_3$ and is disjoint from $D_*$, and the other, $\sigma_2 = D_* \cap Q_2$. Hence we obtain a ``2-bridge link'' of two components -- one fat, one thin -- in $S^3$ by capping off the ``$4$-braid'' in $R$ with $\sigma_1$ and $\sigma_2$ in $Y_2 \subset Y_2(\beta)$ and with $K_3 \cap Y_1 \subset Y_1$ and $\partial N \cap Y_1 \subset Y_1$ in the $3$-ball $Y_1(\beta)$ (see Figure \ref{filling7}(a)). Furthermore, since the disk $D_*$ gives a disk bounded by the ``fat knot'' which is disjoint from the ``thin knot'', the link is a trivial link.

\begin{figure}[!ht]
{\epsfxsize=4in \centerline{\epsfbox{filling8a.ai}}\hspace{10mm}}
\caption{The pair $(N, L^0)$ and the filling tangle $T_{\alpha}$}\label{filling8}
\end{figure}

\begin{figure}[!ht]
{\epsfxsize=4in \centerline{\epsfbox{filling8b.ai}}\hspace{10mm}}
\caption{The two possible $T_{\alpha}$}\label{filling8b}
\end{figure}

Now it follows from the standard presentation of a $2$-bridge link as a $4$-plat (see \S 12.B of \cite{BuZi}) that there is an isotopy of $R$, fixed on the ends $Q_1,Q_2$ and on the two fat strands, taking the ``4-braid'' to one of the form shown in Figure~\ref{filling7}(b). Hence $(N,L^0)$ has the form shown in Figure~\ref{filling8}(a). The filling rational tangle $T_{\alpha}$ is of the form shown in Figure~\ref{filling8}(b). Since the component $K_2^0 (\alpha)$ of $L^0(\alpha) =L$ has to be a trefoil, there are only two possibilities for the number of twists in $T_\alpha$; see Figure \ref{filling8b}. The two corresponding possibilities for $L$ are shown in Figure~\ref{filling9}. But these are Montesinos links of the form $(\frac13, \frac{-3}8,\frac{m}2)$ and $(\frac13, \frac{-5}8, \frac{m}2)$, respectively.

\begin{figure}[!ht]
{\epsfxsize=5in \centerline{\epsfbox{filling9.ai}}\hspace{10mm}}
\caption{$L$ as a Montesinos link of the type $(\frac{1}{3}, \frac{-3}{8}, \frac{m}{2})$ or $(\frac13, \frac{-5}{8}, \frac{m}{2})$}\label{filling9}
\end{figure}

This final contradiction completes the proof of Theorem~\ref{rf} under the assumptions of case (a) of Lemma \ref{gluing}.{\hspace{2mm}{\small $\diamondsuit$}}

\section{The proof of Theorem \ref{rf} when case (b) of Lemma \ref{gluing} arises}

In this case we choose an involution $\tau_1$ on $E_1 \times I$ as shown in Figure \ref{inv3}. Then $\tau_1(\partial_3 E_1 \times \{j\}) = \partial_3 E_1 \times \{j\}$, $\tau_1(\partial_1 E_1\times \{j\}) = \partial_2 E_1\times \{j\}$ ($j = 0,1$), and the restriction of $\tau_1$ to $\partial_3 E_1 \times I$ extends to an involution of $V_1$ whose fixed point set is a core circle of this solid torus. Thus we obtain an involution $\tau_1$ on $X_1$. The quotient of $V_1$ by $\tau_1$ is a solid torus $B_1$ whose core circle is the branch set. Further, $A_1 / \tau_1$ is a longitudinal annulus of $B_1$. The quotient of $E_1 \times I$ by $\tau_1$ is also a solid torus $W_1$, in which $(\partial_3 E_1 \times I) / \tau_1$ is a longitudinal annulus. Figure \ref{inv3} depicts $W_1$ and its branch set. It follows that the pair $(Y_1 = X_1/ \tau_1, \hbox{branch set of } \tau_1)$ is identical to the analogous pair in Section~\ref{case a} (see Figure~\ref{quo1}).
Next we take $\tau_2$ to be the same involution on $X_2$ as that used in Section~\ref{case a}. An argument similar to the one used in that section shows that $\tau_1$ and $\tau_2$ can be pieced together to form an involution $\tau$ on $M$. From the previous paragraph we see that the quotient $N = M / \tau$ and its branch set are the same as those in Section~\ref{case a}. Hence the argument of that section can be used from here on to obtain a contradiction. This completes the proof of Theorem \ref{rf} in case (b). {\hspace{2mm}{\small $\diamondsuit$}}

\begin{figure}[!ht]
{\epsfxsize=5.5in \centerline{\epsfbox{inv3.ai}}\hspace{10mm}}
\caption{Involution on $E_1\times I$}\label{inv3}
\end{figure}
\section{Introduction}

The effects of channel coupling, that is, of couplings of the relative motion between the colliding nuclei to their intrinsic motions as well as to transfer reactions, are well known in heavy-ion collisions around the Coulomb barrier. In heavy-ion fusion reactions at sub-barrier energies, the channel coupling effects considerably enhance the fusion cross sections as compared to the predictions of potential model calculations \cite{beck-88,bah-tak98,das-98}. It has been well established by now that the channel coupling gives rise to a distribution of potential barriers \cite{esb-81,nag-86,hag-95}. Based on this idea, a method was proposed to extract barrier distributions directly from experimental fusion excitation functions by taking the second derivative of the product of the centre-of-mass energy, $E$, and the fusion cross section, $\sigma_{\rm fus}(E)$, with respect to $E$ \cite{row-91}. Coupled-channels calculations as well as high precision fusion data have shown that the fusion barrier distribution, $D^{\rm fus}=d^2[E\sigma_{\rm fus}(E)]/dE^2$, is sensitive to the details of the channel couplings, while the sensitivity is much more difficult to see in the fusion cross sections themselves \cite{das-98,das-981,leigh-95}.

Similar information to that provided by the fusion cross section can also be obtained from quasi-elastic scattering (a sum of elastic, inelastic and transfer processes) at backward angles \cite{ARN88}. Timmers {\it et al.} measured the quasi-elastic scattering cross section for several systems \cite{thim-95}, for which the fusion barrier distribution had already been extracted \cite{leigh-95}. They proposed that the corresponding barrier distribution can be extracted by taking the first derivative of the ratio of the quasi-elastic to the Rutherford cross sections, $d\sigma_{\rm qel}/d\sigma_R$, with respect to the energy, $E$, {\it i.e.,} $D^{\rm qel}=-d(d\sigma_{\rm qel}/d\sigma_R)/dE$. The properties of the quasi-elastic barrier distribution have been studied in more detail in Ref.~\cite{hag-04}. These studies show that the quasi-elastic barrier distribution is similar to the fusion barrier distribution, although the former is somewhat smeared and less sensitive to nuclear structure effects.

\begin{figure}
\includegraphics{16O144Sm.ps}
\caption{Comparison between the experimental fusion (the filled circles) and quasi-elastic (the open squares) barrier distributions for the $^{16}$O$+^{144}$Sm reaction. They are normalized to unit area in the energy interval between $E_{\rm c.m.}=$ 56 and 70 MeV. The experimental data are taken from Refs. \cite{leigh-95} and \cite{thim-95}.}
\end{figure}

One of the systems which Timmers {\it et al.} measured is $^{16}$O$+^{144}$Sm \cite{thim-95}. Figure 1 shows a comparison of the experimental barrier distributions extracted from the fusion (the filled circles) and quasi-elastic (the open squares) processes. In order to compare the two barrier distributions, we scale them so that the energy integral between $E_{\rm c.m.}$= 56 and 70 MeV is unity. For energies below 62 MeV, the two barrier distributions resemble each other. However, at higher energies they behave rather differently, although the overall widths of the distributions are similar to each other. That is, the quasi-elastic barrier distribution decreases monotonically as a function of energy, while the fusion barrier distribution exhibits a distinct peak at an energy around $E_{\rm c.m.}=65$ MeV. So far, no theoretical calculations have succeeded in explaining this difference.
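Both derivative prescriptions are typically evaluated with point-difference formulae applied directly to the measured excitation functions. The sketch below is a minimal illustration of this (ours, not taken from the original analyses), assuming evenly spaced data arrays; in actual analyses unevenly spaced points and error propagation must also be handled.
\begin{verbatim}
import numpy as np

def fusion_barrier_distribution(E, sigma_fus):
    """D^fus(E) = d^2[E*sigma_fus]/dE^2 by three-point differences,
    assuming an evenly spaced grid E (MeV) and sigma_fus (mb)."""
    E = np.asarray(E)
    f = E * np.asarray(sigma_fus)
    dE = E[1] - E[0]
    return E[1:-1], (f[2:] - 2.0 * f[1:-1] + f[:-2]) / dE**2

def qel_barrier_distribution(E, ratio):
    """D^qel(E) = -d(dsigma_qel/dsigma_R)/dE by central differences."""
    E = np.asarray(E)
    r = np.asarray(ratio)
    dE = E[1] - E[0]
    return E[1:-1], -(r[2:] - r[:-2]) / (2.0 * dE)
\end{verbatim}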
The coupled-channels calculations of Timmers {\it et al.} \cite{thim-95} with the computer code {\tt ECIS} \cite{ecis}, which took into account the single quadrupole, $2^+$, and the single octupole, $3^-$, phonon excitations of $^{144}$Sm, were unable to reproduce either the experimental quasi-elastic cross sections or the quasi-elastic barrier distribution. The {\tt ECIS} results for the ratio of quasi-elastic scattering to the Rutherford cross sections fall off more steeply than the experimental data, while the obtained barrier distribution has a secondary peak similar to that in the fusion barrier distribution. They argued that this failure is largely due to residual excitations not included in the {\tt ECIS} calculations, which they postulated to be transfer channels. Esbensen and Buck have also performed coupled-channels calculations for this system, taking into account the second order couplings \cite{Esb-96}. However, they did not analyze the quasi-elastic barrier distribution.

These previous coupled-channels calculations took into account only the single phonon excitations in $^{144}$Sm. On the other hand, Hagino {\it et al.} \cite{hag-97,hag-971} have shown that the double anharmonic quadrupole and octupole phonon excitations play an important role in reproducing the experimental fusion barrier distribution for this system. However, their effect on quasi-elastic scattering has not yet been clarified. The aim of this paper is therefore to study whether the double anharmonic vibrational excitations of the $^{144}$Sm nucleus can explain the difference in shape between the fusion and quasi-elastic barrier distributions. The role of proton transfer reactions in this system is also discussed.

The paper is organized as follows. In the next section, we briefly explain the coupled-channels formalism which takes into account the anharmonicities of the vibrational excitations. We present the results of our calculations in Sec. III. We then summarize the paper in Sec. IV.

\section{Coupled-channels formalism for anharmonic vibration}

In this section, we briefly describe the coupled-channels formalism which includes the effects of anharmonic excitations of the vibrational states. We follow the procedure of Refs. \cite{hag-97,hag-971}, which was successfully applied to describe the experimental fusion cross sections as well as the fusion barrier distributions of the $^{16}$O+$^{144,148}$Sm systems. The total Hamiltonian of the system is assumed to be
\begin{eqnarray}
H&=&-\frac{\hbar^2}{2\mu}\nabla^2+H_{\rm vib}+V_{\rm coup}(\boldsymbol{r},\xi)
\end{eqnarray}
where $\boldsymbol{r}$ is the coordinate of the relative motion between the target and the projectile nuclei, $\mu$ is the reduced mass and $\xi$ represents the internal vibrational degrees of freedom of the target nucleus. $H_{\rm vib}$ describes the vibrational spectrum of the target nucleus. The coupling between the relative motion and the intrinsic motion of the target nucleus is described by the coupling potential $V_{\rm coup}$ in Eq.(1), which consists of the Coulomb and nuclear parts.
Using the no-Coriolis (iso-centrifugal) approximation \cite{bah-tak98,hag-99}, they are given as
\begin{eqnarray}
V_{\rm coup}(r,\xi)=V_C(r,\xi)+V_N(r,\xi),\qquad\qquad\qquad\qquad \\
V_C(r,\xi)=\frac{Z_PZ_Te^2}{r}\left(1+\frac{3R_T^2}{5r^2} \frac{\hat{O}_{20}}{\sqrt{4\pi}}+\frac{3R_T^3}{7r^3} \frac{\hat{O}_{30}}{\sqrt{4\pi}}\right), \label{vcoupc} \\
V_{N}(r,\xi)=\frac{-V_0}{\left[1+\textrm{exp}\left(\frac{ [r-R_0-R_T(\hat{O}_{20}+\hat{O}_{30})/\sqrt{4\pi}]}{a}\right)\right]}. \quad\,\label{vcoupn}
\end{eqnarray}\\
Here $\hat{O}_{20}$ and $\hat{O}_{30}$ are the excitation operators for the quadrupole and octupole vibrations, respectively, and $R_T$ is the target radius. The effects of anharmonicities of the quadrupole and octupole vibrations are taken into account based on the U(5) limit of the Interacting Boson Model (IBM). The matrix elements of the operator $\hat{O}=\hat{O}_{20}+\hat{O}_{30}$ in Eqs.(\ref{vcoupc}) and (\ref{vcoupn}) then read \cite{baha-9394,hag-97,hag-971},
\begin{widetext}
\begin{equation}
O_{ij}= \left[\begin{array}{cccccc}
0&\beta_2&\beta_3&0&0&0\\
\beta_2&-\frac{2}{\sqrt{14N}}\chi_2 \beta_2&-\frac{2}{\sqrt{15N}}\chi_3\beta_3& \sqrt{2(1-1/N)}\beta_2&\sqrt{1-1/N}\beta_3&0\\
\beta_3&-\frac{2}{\sqrt{15N}}\chi_3\beta_3& -\frac{2}{\sqrt{21N}}\chi_{2f}\beta_2&0& \sqrt{1-1/N}\beta_2& \sqrt{2(1-1/N)}\beta_3\\
0&\sqrt{2(1-1/N)}\beta_2&0&-\frac{4}{\sqrt{14N}}\chi_2\beta_2& -\sqrt{\frac{8}{15N}}\chi_3\beta_3&0\\
0&\sqrt{1-1/N}\beta_3&\sqrt{1-1/N}\beta_2&-\sqrt{\frac{8}{15N}}\chi_3\beta_3& (-\frac{2}{\sqrt{14N}}\chi_2-\frac{2}{\sqrt{21N}}\chi_{2f})\beta_2& -\sqrt{\frac{8}{15N}}\chi_3\beta_3\\
0&0&\sqrt{2(1-1/N)}\beta_3&0&-\sqrt{\frac{8}{15N}}\chi_3\beta_3& -\frac{4}{\sqrt{21N}}\chi_{2f}\beta_2\\
\end{array} \right]
\end{equation}
\end{widetext}
for 6 low-lying states ($i,j$=1-6), where $|1\rangle = |0^+\rangle$, $|2\rangle = |2^+\rangle$, $|3\rangle = |3^-\rangle$, $|4\rangle = |2^+\otimes2^+\rangle$, $|5\rangle = |2^+\otimes3^-\rangle$, and $|6\rangle = |3^-\otimes3^-\rangle$. In Eq.(5), $\beta_2$ and $\beta_3$ are the quadrupole and octupole deformation parameters, respectively, which can be estimated from the electric transition probabilities. The scaling of the coupling strength with $\sqrt{N}$, $N$ being the number of bosons in the system, is introduced to ensure the equivalence between the IBM and the geometric model in the large $N$ limit \cite{baha-9394}. When all the $\chi$ parameters in Eq.(5) are set to zero, the quadrupole moments of all the states vanish, and one obtains the harmonic limit for large $N$. Nonzero values of $\chi$ generate the quadrupole moments and, together with the finite boson number, are responsible for the anharmonicities in the vibrational excitations.

\section{$^{16}$O$+^{144}$Sm reaction : Comparison with experimental data}

We now apply the formalism to analyze the quasi-elastic scattering data of $^{16}$O$+^{144}$Sm \cite{thim-95}. The calculations are performed with a version \cite{hag2} of the coupled-channels code {\tt CCFULL} \cite{hag-99} once the coupling matrix elements are determined from Eq.(5). Notice that the iso-centrifugal approximation employed in this code works well for quasi-elastic scattering at backward angles \cite{hag-04}. In the code, the regular boundary condition is imposed at the origin instead of the incoming wave boundary condition.
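As an illustration of how the matrix in Eq.(5) can be assembled numerically, the following sketch (our own, not part of {\tt CCFULL}) fills in the independent entries and then symmetrizes, since $\hat{O}$ is Hermitian; all parameter values would be taken from Ref.~\cite{hag-97}.
\begin{verbatim}
import numpy as np

def coupling_matrix(beta2, beta3, N, chi2, chi2f, chi3):
    """6x6 matrix O_ij of Eq.(5) in the basis
    {|0+>, |2+>, |3->, |2+2+>, |2+3->, |3-3->}."""
    s1 = np.sqrt(1.0 - 1.0 / N)        # one-boson depletion factor
    s2 = np.sqrt(2.0 * (1.0 - 1.0 / N))
    q2  = -2.0 / np.sqrt(14.0 * N) * chi2 * beta2
    q2f = -2.0 / np.sqrt(21.0 * N) * chi2f * beta2
    q3  = -2.0 / np.sqrt(15.0 * N) * chi3 * beta3
    q3d = -np.sqrt(8.0 / (15.0 * N)) * chi3 * beta3
    O = np.zeros((6, 6))
    O[0, 1], O[0, 2] = beta2, beta3     # couplings from the ground state
    O[1, 1], O[1, 2] = q2, q3
    O[1, 3], O[1, 4] = s2 * beta2, s1 * beta3
    O[2, 2] = q2f
    O[2, 4], O[2, 5] = s1 * beta2, s2 * beta3
    O[3, 3], O[3, 4] = 2.0 * q2, q3d
    O[4, 4], O[4, 5] = q2 + q2f, q3d
    O[5, 5] = 2.0 * q2f
    return O + np.triu(O, 1).T          # symmetrize the upper triangle
\end{verbatim}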
\subsection{Effect of anharmonicities of nuclear vibrations}

In the calculations presented below, we include only the excitations in the $^{144}$Sm nucleus, while the excitations of $^{16}$O are not explicitly included. For sub-barrier fusion reactions, the latter have been shown to lead only to a shift of the fusion barrier distribution in energy, without significantly altering its shape \cite{hag-972}, and hence can be incorporated in the choice of the bare potential. This is a general feature of reactions with $^{16}$O as a projectile. We have confirmed that this is the case also for the quasi-elastic barrier distribution. That is, although the $^{16}$O excitations contribute to the absolute value of the quasi-elastic cross sections themselves, the shape of the quasi-elastic barrier distribution is not altered much. Since we are interested mainly in the difference in shape between the fusion and quasi-elastic barrier distributions, we simply do not include the $^{16}$O excitations and instead adjust the inter-nuclear potential.

For simplicity, we take the eigenvalues of $H_{\rm vib}$ in Eq.(1) to be $\epsilon=n_2\epsilon_2+n_3\epsilon_3$, where $n_2$ and $n_3$ are the numbers of quadrupole and octupole phonons, respectively, and $\epsilon_2$ and $\epsilon_3$ are the excitation energies of the quadrupole and octupole phonon states of the target nucleus, {\it i.e.}, $\epsilon_2=1.61$ MeV and $\epsilon_3=1.81$ MeV, respectively. Notice that we assume a harmonic spectrum for the phonon excitations. It has been shown in Refs. \cite{hag-97,hag-971} that the effect of anharmonicity with respect to the excitation energy on the barrier distribution is insignificant once the energy of the single phonon states is fixed. The radius and diffuseness parameters of the real part of the nuclear potential are taken to be the same as those in Ref. \cite{hag-97}, {\it i.e.,} $r_{0}=1.1$ fm and $a=0.75$ fm, respectively, while the depth parameter is slightly adjusted in order to reproduce the experimental quasi-elastic cross sections. The optimum value is obtained as $V_0=112$ MeV. As is usually done, we use a short-range imaginary potential with $W_{0}=30$ MeV, $r_{w}=1.0$ fm and $a_w=0.3$ fm to simulate the compound nucleus formation. Finally, the target radius is taken to be $R_T=1.06A_T^{1/3}$ fm. We use the same values for the parameters $\beta_2,\beta_3, N, \chi_2, \chi_{2f}$, and $\chi_3$ as in Ref. \cite{hag-97}. All the calculations presented below are performed at $\theta_{\rm c.m.}=170^\circ$.

\begin{figure}
\includegraphics{16O144Smqel.ps}
\caption{Comparison of the experimental data with the coupled-channels calculations for the $^{16}$O$+^{144}$Sm reaction, for (a) the ratio of the quasi-elastic to the Rutherford cross sections and (b) the quasi-elastic barrier distribution. The dotted and dashed lines are obtained by including up to the single and the double phonon excitations in the harmonic limit, respectively. The solid line is the result of the coupled-channels calculations with the double anharmonic phonon excitations. The experimental data are taken from Ref. \cite{thim-95}.}
\end{figure}

\begin{figure}
\includegraphics{16O144Smelcom.ps}
\caption{(a) Comparison of the measured pure elastic (the open squares), the $Z=8\,(-\,\textrm{el})$ (the open circles) and the residual (the filled circles) components of $d\sigma_{\rm qel}/d\sigma_R$ with the coupled-channels calculations for the $^{16}$O$+^{144}$Sm reaction.
The $Z=8\,(-\,\textrm{el})$ component is defined as the $Z=8$ yield with the elastic component subtracted, while the residual component is the sum of the $Z=6$ and $7$ yields. The dashed line is the result for elastic scattering, while the dotted line shows the inelastic cross sections for the single 2$^+$ and 3$^-$ phonon states. The solid line is the sum of the inelastic cross sections for the double phonon states in $^{144}$Sm. (b) The same as (a) but for the pure elastic and the total inelastic cross sections. The experimental data are taken from Ref. \cite{thim-95}.}
\end{figure}

The results of the coupled-channels calculations are compared with the experimental data in Fig. 2. Figures 2(a) and 2(b) show the ratio of the quasi-elastic to the Rutherford cross sections, $d\sigma_{\rm qel}/d\sigma_R$, and the quasi-elastic barrier distribution, $D^{\rm qel}$, respectively. The dotted line denotes the result in the harmonic limit, where the couplings to the quadrupole and octupole vibrations in $^{144}$Sm are truncated at the single phonon level, {\it i.e.,} only the $2^+$ and $3^-$ states are taken into account and all the $\chi$ parameters in Eq.(5) are set to zero. As we see, this calculation fails to reproduce the experimental data. The obtained quasi-elastic cross sections, $d\sigma_{\rm qel}/d\sigma_R$, drop much faster than the experimental data at high energies. Also, the quasi-elastic barrier distribution, $D^{\rm qel}$, exhibits a distinct peak at an energy around $E_{\rm c.m.}=65$ MeV. These results are similar to those obtained in Ref. \cite{thim-95}.

The dashed line represents the result when the couplings to the quadrupole and octupole vibrations of $^{144}$Sm are truncated at the double phonon states in the harmonic limit. In this case, we take into account the couplings to the $2^+$, $3^-$, $2^+\otimes2^+$, $2^+\otimes 3^-$ and \mbox{$3^-\otimes3^-$} states. It is obvious that these results are also inconsistent with the experimental data.

To see the effect of the anharmonicities of the vibrations, we then perform the same calculations using the coupling matrix elements given in Eq.(5). The resultant quasi-elastic excitation function and quasi-elastic barrier distribution are shown by the solid line. The calculated ratio of the quasi-elastic to Rutherford cross sections agrees quite well with the experimental data. This suggests that the inclusion of anharmonic effects in the vibrational motion is important for the description of the quasi-elastic excitation function for the $^{16}$O$+^{144}$Sm reaction. On the other hand, the result for $D^{\rm qel}$ is still similar to the barrier distribution obtained by assuming the harmonic limit truncated at the one phonon level (the dotted line), although the former has a smoother peak.

Figure 3 shows the decomposition of the quasi-elastic cross sections into the individual channels for the calculation with the coupling to the double anharmonic vibrations (the solid line in Fig. 2). The fraction of the cross section for each channel $i$ in the quasi-elastic cross section, $d\sigma_i/d\sigma_{\rm qel}=d\sigma_i/[\sum_jd\sigma_j]$, is also shown in Fig. 4. The open squares are the experimental elastic cross sections, while the open circles are the measured excitation function for $Z=8$ with the contribution from the elastic channel subtracted. The latter contains not only the neutron transfer components but also contributions from the inelastic cross sections.
The filled circles are the experimental residual (a sum of the $Z=7$ and $Z=6$ yields) components of $d\sigma_{\rm qel}/d\sigma_R$. The dashed line shows the results of the coupled-channels calculations for the elastic channel. It reproduces the experimental data for elastic scattering reasonably well. The $Z=8$ component of the quasi-elastic cross sections is almost exhausted by the single phonon excitations, that is, the combined $2^+$ and $3^-$ channels, as shown by the dotted line in Figs. 3(a) and 4(a). The cross sections for the double phonon channels are given by the solid line in Figs. 3(a) and 4(a). These are important at energies higher than around 66 MeV. If the components of all the inelastic channels included in the calculation are summed up, we obtain the dot-dashed line in Figs. 3(b) and 4(b).

\begin{figure}
\includegraphics{fraction.ps}
\caption{ Same as Fig. 3, but for the fraction of the cross section for each channel in the quasi-elastic cross sections. }
\end{figure}

\subsection{Effects of proton transfer reactions}

In the previous subsection we have shown that the experimental quasi-elastic cross sections can be well explained by the present coupled-channels calculations, which take into account only the inelastic excitations in $^{144}$Sm. However, the shape of the quasi-elastic barrier distribution is still somewhat inconsistent with the experimental data. As one sees in Figs. 3(a) and 4(b), the experimental data indicate that charged-particle transfer reactions may also play some role (see the filled circles in the figures). In this subsection, we therefore investigate the effects of proton transfer reactions, in addition to the anharmonic double phonon excitations. To this end, we use the macroscopic form factor for the transfer coupling \cite{dasso-8586},
\begin{equation}
F_{\rm trans}(r)=-F_{\rm tr}\frac{dV(r)}{dr}
\end{equation}
where $F_{\rm tr}$ is the coupling strength and $V(r)$ is the real part of the nuclear potential. In this paper, we consider single proton transfer as well as direct proton pair transfer reactions, although the experimental $Z=6$ component may also include the alpha-particle transfer channel. The corresponding optimum $Q$-values for the transfer between the ground states are $Q_{\rm opt}(1p)=-1.79$ MeV and $Q_{\rm opt}(2p)=0.13$ MeV, respectively. The coupling strength $F_{\rm tr}$ in Eq.(6) is determined so that the experimental transfer cross sections for the $Z=6$ and $Z=7$ components \cite{thim-thes} are reproduced. The optimum values of $F_{\rm tr}$ are found to be 0.12 and 0.16 fm for the one and two proton transfer channels, respectively.

\begin{figure}
\includegraphics{16O144Smqelptrans.ps}
\caption{Effect of proton transfers on the quasi-elastic scattering cross sections (the upper panel) and on the quasi-elastic barrier distribution (the lower panel) for the $^{16}$O$+^{144}$Sm reaction. The solid line is the result of the coupled-channels calculations including the effect of the double anharmonic vibrations only. The dashed line is obtained by including, in addition, the couplings to the proton transfer channels. The experimental data are taken from Ref. \cite{thim-95}.}
\end{figure}

\begin{figure}
\includegraphics{16O144Smelcomptrans.ps}
\caption{ Contributions to the quasi-elastic cross sections from several channels. The solid and dashed lines are the results of the coupled-channels calculations for the proton transfer and the elastic cross sections, respectively.
The dotted line denotes the sum of the total inelastic and proton transfer cross sections. The corresponding experimental data are shown by the filled circles, the open squares, and the open triangles, respectively, and are taken from Ref. \cite{thim-95}.}
\end{figure}

\begin{figure}
\includegraphics{fractionptrans.ps}
\caption{ Same as Fig. 6, but for the fractions in the quasi-elastic cross sections. }
\end{figure}

\begin{figure}
\includegraphics[angle=0,width=0.457\textwidth]{16O144Sm1.ps}
\caption{Comparison of the theoretical fusion barrier distribution (dashed line) with the quasi-elastic barrier distribution (solid line) obtained with different coupling schemes for the $^{16}$O$+^{144}$Sm system. Both functions are normalized to unit area in the energy interval between 54 and 70 MeV. (a) The result of the coupling to the one phonon states of the quadrupole and octupole excitations of $^{144}$Sm in the harmonic oscillator limit. (b) The same as (a) but with the coupling up to the double phonon states. (c) The result when the coupling to the anharmonic vibrations of the double quadrupole and octupole excitations in $^{144}$Sm is taken into account.}
\end{figure}

The effects of the proton transfer reactions on the quasi-elastic scattering are illustrated in Fig. 5. The solid line represents the result of the calculations including only the coupling to the double anharmonic vibrations. The dashed line is obtained by taking the couplings to the proton transfer channels into account, in addition to the anharmonic vibration channels. The upper panel shows the quasi-elastic cross sections, while the lower panel shows the quasi-elastic barrier distribution. We observe from Fig. 5(a) that the inclusion of the proton transfer reactions overestimates the experimental $d\sigma_{\rm qel}/d\sigma_R$ at energies between 62 and 68 MeV. Also, the higher peak in the quasi-elastic barrier distribution becomes more distinct, and the agreement thus worsens as compared to the calculation without the transfer channels.

Figure 6 shows the contribution of each channel to the quasi-elastic cross sections. The fraction of each contribution is also shown in Fig. 7. The open squares are the experimental elastic cross sections, while the filled circles and the open triangles are the experimental proton transfer cross sections and the sum of the total inelastic and transfer cross sections, respectively. The coupled-channels calculations for the elastic cross sections are shown by the dashed line. Although they reproduce the experimental data below about 62 MeV, they overestimate the data at higher energies. The sum of the contributions from the total inelastic and proton transfer channels is denoted by the dotted line, which reproduces the experimental data reasonably well, although the proton transfer cross sections themselves are underestimated at energies above 60 MeV (the solid line). The overestimation of the quasi-elastic cross section indicated in Fig. 5(a) is therefore largely due to the contribution of the elastic channel. From this study, we conclude that the inclusion of the proton transfer reactions in the coupled-channels calculations does not explain the difference in shape between the fusion and quasi-elastic barrier distributions for the $^{16}$O$+^{144}$Sm system.
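For concreteness, the sketch below evaluates a Woods-Saxon nuclear potential and the macroscopic transfer form factor of Eq.(6) with the parameters quoted above. It is our own illustration, and the radius convention $R_0=r_0(A_P^{1/3}+A_T^{1/3})$ is an assumption rather than a detail specified in the text.
\begin{verbatim}
import numpy as np

# Parameters quoted in the text (energies in MeV, lengths in fm)
V0, r0, a = 112.0, 1.1, 0.75
AP, AT = 16, 144
R0 = r0 * (AP**(1.0/3.0) + AT**(1.0/3.0))   # assumed radius convention

def V_nuclear(r):
    """Woods-Saxon form of the real nuclear potential."""
    return -V0 / (1.0 + np.exp((r - R0) / a))

def F_transfer(r, F_tr):
    """Macroscopic transfer form factor of Eq.(6): -F_tr dV/dr."""
    ex = np.exp((r - R0) / a)
    dVdr = V0 * ex / (a * (1.0 + ex) ** 2)   # derivative of V_nuclear
    return -F_tr * dVdr

# F_tr = 0.12 fm (one proton) and 0.16 fm (proton pair), as quoted above
\end{verbatim}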
\subsection{Discussions}

We have argued that the presence of a high-energy shoulder, instead of a high-energy peak, in the quasi-elastic barrier distribution for the scattering between the $^{16}$O and $^{144}$Sm nuclei cannot be accounted for by the present coupled-channels calculations, which take into account the anharmonic double phonon excitations in $^{144}$Sm as well as the proton transfer channels. Figure 8 compares the calculated fusion barrier distribution $D^{\rm fus}$ and the corresponding quasi-elastic barrier distribution $D^{\rm qel}$ for the several coupling schemes of Fig. 2. The solid line shows the quasi-elastic barrier distribution, while the dashed line shows the fusion barrier distribution. They are normalized so that the energy integral between 54 and 70 MeV is unity. Figures 8(a) and 8(b) are obtained by including the one phonon and the two phonon excitations in $^{144}$Sm in the harmonic limit, respectively. Figure 8(c) is the result of the double anharmonic vibration coupling. From these figures, it is evident that the theoretical fusion and quasi-elastic barrier distributions are always similar to each other within the same coupling scheme, although the latter is slightly more smeared due to its low-energy tail \cite{hag-04}. This would be the case even with the excitations in $^{16}$O as well as the neutron transfer channels, which are not included in the present coupled-channels calculations. Therefore, it seems unlikely that the experimental fusion and quasi-elastic barrier distributions can be explained simultaneously within the standard coupled-channels approach.

\section{Conclusion}

We have studied the effects of the double anharmonic vibrations of the $^{144}$Sm nucleus on large angle quasi-elastic scattering for the $^{16}$O$+^{144}$Sm system. We have shown that the experimental data for the quasi-elastic scattering cross sections for this reaction can be reasonably well explained. However, we found that the obtained quasi-elastic barrier distribution still shows a clear double-peaked structure, which is not seen in the experimental data. This was not resolved even when we took the proton transfer channels into account. Our coupled-channels calculations indicate that, within the same coupling scheme, the quasi-elastic and fusion barrier distributions are always similar to each other. Although detailed analyses including neutron transfer channels in a consistent manner are still necessary, it thus seems unlikely that the fusion and quasi-elastic barrier distributions can be explained simultaneously within the standard coupled-channels framework. This fact might be related to the large diffuseness problem in sub-barrier fusion, for which dynamical effects such as couplings to deep-inelastic scattering are among the promising origins \cite{NBD04,DHN04,MHD07}. It remains an open problem to perform coupled-channels calculations including such dynamical effects and to explain the difference in shape between the fusion and quasi-elastic barrier distributions for the $^{16}$O$+^{144}$Sm reaction.

\begin{acknowledgments}
This work was partly supported by The 21st Century Center of Excellence Program ``Exploring New Science by Bridging Particle-Matter Hierarchy'' of Tohoku University, and partly by a Monbukagakusho Scholarship and a Grant-in-Aid for Scientific Research under program number 19740115 from the Japanese Ministry of Education, Culture, Sports, Science and Technology.
\end{acknowledgments}
\section{Introduction}

Quark colour confinement in hadron physics remains unexplained after more than 30 years of intense study (for a recent overview see Ref.~\cite{Alkofer:2006fu}). In lattice gauge theory two avenues of research have been most commonly adopted: confinement by $Z(N)$ centre vortices and confinement by means of abelian monopoles (for a critical discussion of both see Ref.~\cite{Greensite:2003bk}). Gauge fields are first fixed to a particular gauge, such as Maximal Abelian Gauge (MAG) or Maximal Centre Gauge (MCG); monopoles and centre vortices are then defined by the projection of these gauge-fixed fields to $U(1)^{N-1}$ or $Z(N)$ gauge fields, respectively. Much of the progress to date has occurred in $SU(2)$ using MAG and MCG, with original findings reproducing about 90\% \cite{Bali:1996dm} and about 100\% \cite{Bertle:2000py} of the non-abelian string tension, respectively. Removing monopole \cite{Miyamura:1995xn, Sasaki:1998ww, Bornyakov:2007fz} or centre vortex \cite{Bornyakov:2007fz,Leinweber:2006zq, de Forcrand:1999ms, Gattnar:2004gx, Gubarev:2005az} degrees of freedom from $SU(2)$ lattice gauge fields appears to leave topologically trivial, non-confining gauge fields that preserve chiral symmetry.

Work in $SU(3)$ has not progressed to this level. While initial investigations were hopeful \cite{Montero:1999by, Faber:1999sq}, subsequent results for MAG and MCG were not so encouraging \cite{Stack:2002sy, Langfeld:2003ev}, both failing to reproduce the full non-abelian string tension. Earlier studies in $SU(2)$ using MCG showed that the centre-projected configurations recovered the full string tension; however, further study into the ambiguities of the gauge-fixing procedure showed that this result is plagued by Gribov copy effects \cite{Bornyakov:2000ig, Faber:2001hq}: methods which give higher values of the gauge-fixing functional produce smaller values for the vortex-induced string tension. We point out that when the Laplacian Centre Gauge of Ref.~\cite{deForcrand:2000pg} (which is free of Gribov ambiguities on the lattice) is used as the fixing method, the full $SU(3)$ (and $SU(2)$) string tension is recovered for the centre-projected gauge fields. However, the interpretation of this vortex matter is cumbersome in the continuum limit \cite{Langfeld:2003ev, Langfeld:2001nz}. In this paper we focus on the Gribov problem of the $SU(3)$ centre vortex picture of confinement using the MCG gauge-fixing method.

In the centre-vortex picture the gauge fields are considered to be decomposed into a long-range field $Z_\mu$ carrying all the confining fluctuations and a short-range field $V_\mu$ containing non-confining perturbations as well as other short-range effects,
\begin{displaymath}
U_\mu(x)=Z_\mu(x)V_\mu(x).
\end{displaymath}
Here $Z_\mu(x)$ is the centre element which is closest, on the $SU(3)$ group manifold, to $U_\mu(x)$.

\subsection{Smearing as a Preconditioner}

Since the centre elements, in the centre vortex picture of confinement, correspond to the long-range physics, we employ smearing, which smooths out the short-range fluctuations, to construct a preconditioning gauge transformation for each gauge field prior to gauge fixing \cite{Hetrick:1997yy}. Firstly, we smear the gauge field using any smearing algorithm (stout-link smearing \cite{Morningstar:2003gk} was used for the data shown here, but other smearing algorithms are currently under investigation). We then fix the smeared field using the MCG gauge-fixing method.
At each iteration we keep track of the total gauge transformation that has been applied to the smeared gauge field. Once the algorithm has converged we use the stored total transformation as a preconditioning gauge transformation for the unsmeared gauge field. We emphasise that the (unsmeared) preconditioned gauge field remains on the same gauge orbit, since the preconditioning is merely a (specific) gauge transformation on the original links. Gauge-fixing the preconditioned field simply gives us a Gribov copy of the result from gauge-fixing the original gauge field. \section{Identifying Vortex Matter} We employ the MCG gauge-fixing algorithm as outlined in Ref.~\cite{Langfeld:2003ev}. The gauge condition we choose to maximise (with respect to the gauge transformations $\Omega$) in this algorithm is \begin{displaymath} V_U[\Omega] = \frac{1}{N_l}\sum_{x,\mu}\big[\frac{1}{3}\text{tr}U^{\Omega}_\mu(x)\big]\big[\frac{1}{3}\text{tr}U^{\Omega}_\mu(x)\big]^\dagger, \end{displaymath} where $N_l$ is the number of links on the lattice and $U^{\Omega}$ is the gauge-transformed field. After fixing the gauge, each link should be close to a centre element of $SU(3)$, $Z^m=e^{i\phi^m}$, $\phi^m=\frac{2\pi}{3}m$ with $m\in \{ -1,0,1 \}$. For every link we can write \begin{displaymath} \frac{1}{3}\text{tr} U^{\Omega}_\mu(x) = u_{x,\mu}e^{i\phi_{x,\mu}} \hspace{1cm}\text{with}\hspace{1cm} \phi_{x,\mu}=\tan^{-1}\frac{\text{Im}(\text{tr} U^{\Omega}_\mu(x))}{\text{Re}(\text{tr} U^{\Omega}_\mu(x))}, \end{displaymath} so, by construction of the gauge-fixing condition, $\phi_{x,\mu}$ should be close to some $\phi^m$. We then perform the centre projection by mapping \begin{displaymath} SU(N)\mapsto Z_N :\hspace{1cm} U^{\Omega}_\mu(x)\mapsto Z_\mu(x) \hspace{1cm} \text{with}\hspace{1cm} Z_\mu(x)=e^{i\phi^m_{x,\mu}}, \end{displaymath} with the appropriate choice of $\phi^m_{x,\mu}$, $m\in \{ -1,0,1 \} $. To reveal the vortex matter we simply take the product of links around an elementary plaquette. We say a vortex pierces the plaquette if this product is a non-trivial centre element; the plaquette is then a \emph{P-vortex}. In a toy Yang-Mills model we can remove these P-vortices by hand from the configuration using $U_\mu^\prime(x)=Z^\dagger_\mu(x)U_\mu^\Omega(x)$. \section{Results} Calculations are performed using 100 quenched configurations generated with the L\"{u}scher-Weisz plaquette plus rectangle gauge action \cite{Luscher:1984xn} on a $16^3\times32$ lattice at $\beta=4.6$. Similar results are being found on 200 $20^3\times40$ lattices, but since these results are currently incomplete they are not presented here. Stout-link smearing with a smearing parameter of $0.1$ is used to construct the preconditioning transformation, with the number of sweeps ranging from 0 to 12 in steps of 4 sweeps. Here each preconditioning was conducted independently, although it would be possible to use the preconditioning from an initial level to precondition subsequent levels of smearing, thereby decreasing computation time when seeking comparative results. 
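The projection and vortex-identification steps can be illustrated with a minimal sketch operating on a hypothetical array of link phases rather than a real $SU(3)$ ensemble; the lattice extents and the random phases below are assumptions for illustration only.
\begin{verbatim}
import numpy as np

# Toy illustration (not a real SU(3) ensemble): snap each link phase to the
# nearest centre phase 2*pi*m/3, m in {-1,0,1}, then flag plaquettes whose
# Z(3) product is a non-trivial centre element (a P-vortex).
rng = np.random.default_rng(1)
T, L = 8, 4                                        # assumed toy lattice extents
phi = rng.uniform(-np.pi, np.pi, (T, L, L, L, 4))  # phases of (1/3)tr U_mu(x)

m = np.rint(phi / (2 * np.pi / 3)).astype(int)     # nearest centre exponent
m[m == 2] = -1                                     # defensive: phases at +pi
m[m == -2] = 1                                     # defensive: phases at -pi
z = m % 3                                          # store as Z(3) exponents

def p_vortices(z, mu, nu):
    """Z(3) exponent of the projected plaquette in the (mu,nu) plane;
    a non-zero exponent marks a P-vortex."""
    return (z[..., mu] + np.roll(z[..., nu], -1, axis=mu)
            - np.roll(z[..., mu], -1, axis=nu) - z[..., nu]) % 3

print("P-vortex density:", np.mean(p_vortices(z, 0, 1) != 0))
\end{verbatim}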
\begin{table} \hspace{-0.04\textwidth} \begin{minipage}{0.4\textwidth} \centering \begin{tabular}{|c||c|c|} \hline \textbf{Comparison} & \textbf{Decreased} & \textbf{Higher} \\ \textbf{Sweeps} & \textbf{Vortices} & \textbf{Maxima} \\ \hline $0\to 4$&100\%&100\%\\ $4\to 8$&85\%&79\%\\ $8\to 12$&79\%&64\%\\ \hline \end{tabular} \caption{\textnormal{The percentage of configurations that have a lower number of P-vortices and the percentage of configurations that achieve a higher gauge condition maximum when comparing two different levels of the preconditioning.}} \label{tab:transitions} \end{minipage} \hspace{ 0.033\textwidth} \begin{minipage}{0.5\textwidth} \centering \begin{tabular}{|c||c|c|c|} \hline \textbf{Sweeps} & \textbf{Total Iter. Blocks} & \textbf{Smear Max} & \textbf{Max}\\ \hline 0&$60\pm16$&****&0.7360(11)\\ 4&$89\pm15$&0.9153(16)&0.7409(9)\\ 8&$97\pm20$&0.9375(21)&0.7417(11)\\ 12&$95\pm17$&0.9461(17)&0.7421(9)\\ \hline \end{tabular} \caption{\textnormal{For each number of sweeps used in the preconditioning: the average total (smeared gauge field fixing plus preconditioned gauge field fixing) number of iteration blocks (of 50) used, the average smeared gauge condition maximum reached, and the average preconditioned gauge condition maximum reached.}} \label{tab:errors} \end{minipage} \end{table} If we compare the number of P-vortices located in each configuration with the number located at the next level of preconditioning (Table~\ref{tab:transitions}), we see that the first level of smearing decreases the number of vortices identified in all configurations. This behaviour continues right through all levels of preconditioning, although the size of the effect drops to 79\% of configurations by the last preconditioning. The magnitude of the reduction in the number of P-vortices is as high as $66\pm16\%$ when using the first level of preconditioning, with the effect dropping to as low as about 10\% for the transition between the $8$ and $12$ sweep smearing levels. The gauge condition maximum achieved also increases in all configurations for the first level of preconditioning, with the magnitude of this effect dropping to 64\% by the final preconditioning. We also note from Table~\ref{tab:errors} that the total number of blocks of 50 iterations of the gauge-fixing algorithm increases slightly when using the preconditioning. Typically, two-thirds of the iterations are spent fixing the smeared field and one-third fixing the preconditioned field. The gauge-fixing maximum achieved, however, increases monotonically with the level of smearing preconditioning, both for the smeared fields and for the preconditioned fields (though not to the same degree). It should be noted that, regardless of the preconditioning level, the centre phases of the links always remain evenly distributed across the three possible values, reflecting the fact that the realisation of centre symmetry remains unaffected. \subsection{The Static Quark Anti-quark Potential} Computing the static quark anti-quark potential as a function of the quark separation is a two-step process. Wilson loops $W(R,T)$ of extension $R\times T$ have the large-$T$ behaviour \begin{displaymath} \langle W(R,T)\rangle \propto \exp\{ -V(r)aT\},\hspace{1cm} r:=Ra, \end{displaymath} where $a$ is the lattice spacing. The method for extracting the potential is identical to that for two-point functions. 
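To make this extraction concrete, the following minimal sketch forms the effective potential $aV(R,T)=\ln\left[\langle W(R,T)\rangle/\langle W(R,T+1)\rangle\right]$ from an array of averaged Wilson loops; the toy data below obey the large-$T$ form exactly and are assumptions, not our measurements.
\begin{verbatim}
import numpy as np

# aV(R,T) = ln( <W(R,T)> / <W(R,T+1)> ): each row should plateau at aV(R).
def effective_potential(wloops):
    """wloops[r, t] = <W(R_r, T_t)> for consecutive T; returns aV[r, t]."""
    return np.log(wloops[:, :-1] / wloops[:, 1:])

# Toy data obeying <W(R,T)> = exp(-aV(R) * T) exactly (assumed, not measured)
R = np.arange(1, 6)
T = np.arange(1, 9)
aV_true = 0.05 * R + 0.2
wloops = np.exp(-np.outer(aV_true, T))
print(effective_potential(wloops))   # every column equals aV_true
\end{verbatim}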
To obtain the static quark anti-quark potential as a function of the quark separation we simply repeat this process for a range of values of the separation $R$. As can be seen in Figure~\ref{figSQP}, by using off-axis spatial paths for the Wilson loops we can obtain non-integer values of $R$. In Figure~\ref{figeffm} the effective potential $V(r)$ is plotted against $T$ (by taking the log of the ratio of the Wilson loops at two adjacent time slices) for two Gribov copies of each of the vortex-only and vortex-removed configurations, comparing these to the results from the original configuration. On the left is the unpreconditioned Gribov copy and on the right is a Gribov copy that uses 4 sweeps of smearing. We seek a plateau in the potential in each case. We note that the quality of the fit, as defined by the $\chi^2$ values, depends greatly on the fit range chosen, and one must first account for the systematic drift in the data before selecting a sensible fit region. In particular, for the vortex-removed configurations, the potential does appear to become degenerate at larger values of the separation, but choosing a suitable fit is challenging. We also note that the scale of the potential is greatly reduced for the vortex-only configurations while the reliability of the result increases dramatically, allowing for fits at very large $T$ values. \begin{figure}[h] \hspace{-1cm} $\begin{array}{c@{\hspace{1cm}}c} \mbox{\bf Original} & \mbox{\bf Preconditioned 4 Sweeps}\\ \includegraphics[height=5.4cm]{StaticQuarkPotential} & \includegraphics[height=5.4cm]{SM4StaticQuarkPotential} \\ [0.4cm] \mbox{\bf Preconditioned 8 Sweeps} & \mbox{\bf Preconditioned 12 Sweeps}\\ \includegraphics[height=5.4cm]{SM8StaticQuarkPotential} & \includegraphics[height=5.4cm]{SM12StaticQuarkPotential} \end{array}$ \caption{The static quark anti-quark potential for each number of preconditioning smearing sweeps used. Each plot contains data for the full, vortex-removed and vortex-only configurations.} \label{figSQP} \end{figure} In Figure~\ref{figSQP} the potential $V(r)$ is plotted against the separation $r$ for our original configuration and 3 Gribov-copy ensembles. We can see that the results are consistent with the loss of confinement for the vortex-removed configurations. The preconditioned configurations better reproduce the full potential at small distances. We also note that the string tension of the vortex-only results drops dramatically when we apply our preconditioning, going from about 65\% of the full string tension to as low as about 26\%. \section{Conclusions} We have shown that using smearing as a preconditioner for the MCG gauge-fixing algorithm increases the maximum of the gauge-fixing condition. We find that these higher maxima correspond to a lower number of measured P-vortices. When looking at the static quark anti-quark potential for the vortex-only and vortex-removed configurations we find results consistent with loss of confinement for all the vortex-removed configurations. We also find that the preconditioning decreases the measured string tension of the vortex-only results, reducing it from 65\% to as low as 26\% of the full string tension. Similar to what has been observed in $SU(2)$ \cite{Bornyakov:2000ig,Faber:2001hq}, there appears to be a significant anti-correlation between the value achieved in the gauge-fixing functional and the percentage of the string tension reproduced by centre vortices. 
Although the fundamental modular region of MCG would be an ideal candidate for a unique definition of vortex matter, it seems that the vortex matter arising from the first Gribov region as a whole has greater phenomenological relevance. Further work is continuing on larger lattice volumes. Results to date are consistent with the findings presented here.
\section{Introduction} \begin{figure} \includegraphics[width=3.3in]{ridge1} \caption{\label{photo} Particle-rich ridge in an inclined film experiment.} \end{figure} Film flow of particle-laden liquid occurs in many important contexts, from geophysical flows such as erosion and turbidity currents \cite{hutter} to industrial processes including papermaking and the application of fertilizers. While sophisticated constitutive models have been developed for general suspension flow, these models are generally not compatible with the lubrication approximation used for single-phase film flow. As a result, the mathematical description of particle-laden films remains a challenging problem. The complexity of such films is evident in a recent study by Zhou et al. \cite{ZDBH} of flow on an incline. They observed three distinct flow types, characterized by the relative motion of the liquid and particulate phases. At low inclination angles $\alpha$ and particle volume fractions $\phi$ the particles settle to the bottom substrate and are removed from the flow. At intermediate $\alpha$ and $\phi$ the suspension appeared well mixed for the duration of their experiment. At larger $\alpha$ and $\phi$ the particles were observed to accumulate near the advancing contact line, forming a pronounced ridge up to several times thicker than the upstream film. They also reported that the growth of the fingering instability, which is known to deform the contact line in many film problems, is somewhat suppressed in the third regime. Zhou et al. also introduced a lubrication model for this unique particle-rich ridge regime, which was revised and analyzed in \cite{cook}. This model attributes the aggregation to the buoyant force on the denser particles, with the relative velocity specified by a hindered settling function $f(\phi)$ \cite{rz}. The bulk motion is determined by balancing the gravity force with the viscous stress, which is expressed in terms of an effective viscosity $\mu(\phi)$ \cite{krieger}. Thus particles move downstream slightly faster than the fluid and accumulate near the contact line, where the larger viscosity makes the film thicker. In order to depth-average the Stokes equations for the lubrication approximation, the dependence of $\phi$ on the coordinate $z$ normal to the plane must be specified. The model of Zhou et al. assumes $\phi$ is independent of $z$, which prohibits particles from settling to the substrate. They use this assumption for simplicity, but note it is reasonable because shear-induced diffusion can oppose the settling of particles in the $z$ direction, possibly resulting in a stable depth profile. Such an equilibrium likely explains why particles do not always settle out of the flow as in the low-$\alpha$, low-$\phi$ regime. Zhou's assumption has the consequence that, in the lubrication model, the particle-rich ridge forms by the mechanism described above regardless of $\alpha$ and $\phi$. This letter proposes a model that balances shear-induced migration with settling to explicitly determine the depth profile of $\phi$, which is necessary in order to describe particles settling to the substrate and to explain the existence of distinct settling behaviors. The specific depth profile also has a large impact on the relative velocities of the two phases: the phase-averaged velocity of the mixture depends strongly on $z$, so a given phase will move faster when it is concentrated near the free surface rather than near the substrate. 
It is found that at an equilibrium profile this effect represents a larger contribution to the relative velocity than does settling in the flow direction, which suggests a three-dimensional treatment reflecting the stratified nature of the flow may be necessary to accurately describe the ridge phenomenon. Similar models have been studied before, notably by Schaflinger et al. \cite{schaflinger} and Timberlake and Morris \cite{morris}. Schaflinger et al. used the ``diffusive flux'' model for shear-induced diffusion introduced by Leighton and Acrivos \cite{migration}, which states that the volume flux of particles is given by \begin{equation} \label{leighton} N_d = -a^2 \dot{\gamma} \hat{D}(\phi) \nabla \phi, \end{equation} where $\dot{\gamma}$ is the shear rate, $a$ is the particle radius, and the dimensionless diffusion coefficient was found by Leighton \cite{leighton} to be well approximated by $\hat{D}(\phi) = \frac{1}{3} \phi^2 (1+\frac{1}{2} e^{8.8 \phi})$. The use of the scalar shear rate restricts this model to simple shear flows, which nonetheless include film flow, where $\dot{\gamma}=dv/dz$ and $v$ is the velocity of the mixture. Schaflinger et al. balanced this flux with that due to gravitational settling in the $z$ direction, which they approximated with a hindered settling function. This condition along with the Newtonian stress balance allowed them to derive a system of two first-order ordinary differential equations for the concentration and shear stress, which they solved numerically. Two important features of the solutions can be deduced from the form of \eqref{leighton}. Because the flux is proportional to the shear rate, the vanishing stress at the free surface $z=h$ ensures there is no diffusive flux to balance settling, and therefore $\phi(h)=0$ for all solutions\footnote{A steady solution also requires no diffusive flux where the maximum concentration $\phi_m$ is reached, corresponding to packed spheres; however, this cannot happen at the free surface in Schaflinger et al.'s model because $d\phi/dz \leq 0$.}. Also, the diffusive flux must always be directed upward in order to balance gravity, which implies by \eqref{leighton} that $d\phi/dz \leq 0$. Timberlake and Morris included theory for the depth profile of concentration in their experimental paper on film flow of a neutrally buoyant suspension. Their description uses the ``suspension balance'' model of Nott and Brady \cite{nott-brady} for particle migration. That more rigorous model calculates a ``temperature'' measuring fluctuations in particle velocities, which is generated by shear, dissipated by viscous stress, and diffuses through an effect related to the finite particle size. This last property is the most significant difference between the diffusive flux and suspension balance models, implying that particle migration depends nonlocally on the shear rate, which in this case allows a small nonzero concentration at the free surface. Otherwise the two models generally give similar predictions \cite{fang}. Since Timberlake and Morris considered neutrally buoyant particles, $\phi$ increases with $z$, which is also confirmed by their experiment. Rather than assuming the film is always in diffusive equilibrium, they retain the $x$ coordinate in the flow direction, and their calculations indicate a distance on the order of $200h$ is necessary to reach equilibrium. This factor decreases strongly with the bulk concentration and is proportional to $(h/a)^2$. 
\section{model} This work will use the diffusive flux model for simplicity, and proceed similarly to Schaflinger et al., but differ crucially by using an extra term in which the particle flux opposes gradients in the shear rate, in addition to opposing concentration gradients as in \eqref{leighton}. This effect was introduced in \cite{migration} and quantified in \cite{phillips} in the expression \begin{equation} \label{phillips} \frac{D\phi}{Dt} = a^2 \nabla \cdot \left[ K_c \phi \nabla(\dot{\gamma} \phi) + K_\eta \dot{\gamma} \frac{\phi^2}{\mu(\phi)} \nabla \mu(\phi) \right] \end{equation} for the particle migration, where the best fit with experiment was obtained with the values $K_c=0.43$ and $K_\eta=0.65$ for the two constants. Equation \eqref{phillips} corresponds to a particle flux \begin{eqnarray} \label{phillips sigma} F_m = -a^2 K_c \phi \nabla \left(\frac{\sigma}{\mu(\phi)} \phi \right) - a^2 (K_\eta - K_c) \frac{\sigma \phi^2}{\mu(\phi)^2} \nabla \mu(\phi) \nonumber \\ = -\frac{a^2 \phi}{\mu(\phi)} \left( K_c \nabla(\sigma \phi) + (K_\eta-K_c) \frac{\sigma \phi}{\mu(\phi)} \nabla \mu(\phi) \right), \end{eqnarray} where the shear rate $\dot{\gamma}$ has been eliminated in favor of the shear stress $\sigma = \mu(\phi) \dot{\gamma}$. For a flat film on an incline, equilibrium is reached when this flux balances that of gravitational settling in the $z$ direction. Settling rates are commonly expressed as the product of the velocity of a single sphere, $v_s = -(2/9) a^2 \Delta \rho g/\mu_f$, and a hindered settling function $f(\phi)$, for which many empirical formulas exist. Here $\rho$ and $\mu_f$ are the density and viscosity of the fluid, $g$ is the gravitational acceleration, and $\Delta=(\rho_p-\rho)/\rho$ is the relative density difference for particles of density $\rho_p$. In this case it is convenient to follow Schaflinger et al. and use the hindered settling function $f(\phi)=(1-\phi)/\mu(\phi)$, leading to the settling flux \begin{equation} \label{settling flux} F_s = -\frac{2}{9} \frac{a^2 \Delta \rho g \cos \alpha}{\mu_f} \frac{\phi (1-\phi)}{\mu(\phi)}, \end{equation} where $\alpha$ is the angle of inclination. The flux balance $F_m+F_s=0$ then takes the form \begin{equation} \label{flux balance 1} K_c (\sigma \phi)' + (K_\eta-K_c) \frac{\sigma \phi}{\mu(\phi)} \mu(\phi)' = -\frac{2}{9} \frac{ \Delta \rho g \cos \alpha}{\mu_f} (1-\phi), \end{equation} where the gradients have been replaced by primes denoting differentiation with respect to $z$. Substituting the standard formula $\mu(\phi) = \mu_f (1-\phi/\phi_m)^{-2}$ \cite{krieger} with the maximum packing fraction $\phi_m \approx 0.67$ and differentiating yields \begin{eqnarray} \label{flux balance 2} \left[1 + \frac{2(K_\eta-K_c)}{K_c} \frac{\phi}{\phi_m-\phi} \right] \sigma \phi' \nonumber \\ = \phi(1+\Delta \phi) - \frac{2 \Delta}{9 K_c} (\cot \alpha) (1-\phi), \end{eqnarray} where $z$ and $\sigma$ have now been nondimensionalized using the depth of the film $h$ and the unit of stress $\rho g h \sin \alpha$. For a flat film there is no capillary force, so the pressure can be set to zero at the free surface $z=1$, and is assumed to be hydrostatic in the suspension. The nondimensional shear stress then satisfies the equation \begin{equation} \label{stress} \sigma' = -(1+\Delta \phi). 
\end{equation} Equations \eqref{flux balance 2} and \eqref{stress} constitute the system to be studied here, with the understanding that \eqref{flux balance 2} is replaced by $\phi'=0$ when $\phi=0$ or $\phi=\phi_m$, to ensure that pure fluid and packed particles are admissible solutions and to keep the concentration within its meaningful range. The physical boundary conditions both involve the stress: $\sigma(0)=1+\Delta \phi_0$ and $\sigma(1)=0$, where $\phi_0$ is the imposed average concentration. Thus for these two equations there is only a one-parameter family of physically meaningful solutions, parameterized by $\phi_0$. In practice this system was easiest to solve by shooting with a Runge-Kutta method from $z=0$ while adjusting the value of $\phi(0)$ (a minimal numerical sketch is given at the end of this letter). Once $\sigma$ and $\phi$ are determined, the mixture velocity can be calculated using $dv/dz=\dot{\gamma}=\sigma(z)/\mu(\phi(z))$ and $v(0)=0$. \section{solutions} \begin{figure} \includegraphics[width=3.3in]{phase_overlay} \caption{\label{overlay} The function $\phi^*(\alpha)$ determining whether particles tend toward the top or bottom of the film. Overlaid are Zhou et al.'s experimental parameters for which particles settle to the substrate ($\bigcirc$, white), remain well mixed ($\bigtriangleup$, light), or accumulate in a ridge ($\Diamond$, dark). Experimental data are from figure 2 of \cite{ZDBH}.} \end{figure} Since particle migration in this model does not strictly oppose the concentration gradient, $\phi$ is not constrained to decrease with $z$ as in the work of Schaflinger et al. The lack of a migration flux at the free surface, however, is general to the diffusive flux model and still applies here, forcing either $\phi(1)=0$ or $\phi(1)=\phi_m$. Since $\sigma \geq 0$, it is also apparent from equation \eqref{flux balance 2} that $\phi(z)$ is monotone, because $\sigma \phi'$ is determined by a function of $\phi$ alone with a single unstable root $\phi^* = \phi^*(\alpha)$ in its allowable domain (between $0$ and $\phi_m$). There are then two possibilities: $\phi_0 > \phi(0) > \phi^*$ with $\phi(1) = \phi_m$, or $\phi_0 < \phi(0) < \phi^*$ with $\phi(1) = 0$. In the latter case, the particulate phase is located preferentially near the bottom of the film and (because $v(z)$ is always increasing) moves slower than the fluid on average, both of which are necessary conditions for the particles to settle out of the flow. It seems natural, then, to associate $\phi_0<\phi^*(\alpha)$ with this regime in Zhou et al.'s experimental work \cite{ZDBH}. The case $\phi_0>\phi^*(\alpha)$ should then correspond to the particle-rich ridge regime, as the particles do not settle to the bottom and move faster on average than the fluid, even without including the settling velocity in the flow direction. While there is no obvious reason why there should be a regime (other than the single solution $\phi \equiv \phi^*$) where the fluid and particles move at the same velocity, it may be that experiments in which the suspension stayed well mixed had $\phi_0 \approx \phi^*$ and the relatively small difference between the two velocities did not have time to produce noticeable segregation on the experimental time scale. \begin{figure} \includegraphics[width=3.3in]{sample} \caption{\label{sample} Depth profiles of $\phi$ and $v$ for two average concentrations at $\alpha=45^\circ$. Bulk concentration $\phi_0=0.25$: velocity (dot) and concentration (long dash); bulk concentration $\phi_0=0.45$: velocity (short dash) and concentration (solid). 
Velocities are scaled by the average velocity of a homogeneous film at the same concentration. With this rescaling, the average velocities of the particle and liquid phases at $\phi_0=0.25$ are $0.57$ and $0.70$, and at $\phi_0=0.45$ the velocities are $1.41$ and $1.33$, respectively.} \end{figure} Plotted in figure \ref{overlay} is the calculated transition point $\phi^*(\alpha)$ together with the experimental data from \cite{ZDBH}. As expected, the transition lies within the well-mixed regime. This calculation involves no fitting parameters, and the agreement is remarkable considering the simplifying assumptions of one-dimensional, time-independent flow. The position of the curve $\phi^*(\alpha)$ also suggests that the experimentally observed well-mixed films mostly lie in the $\phi_0 < \phi^*(\alpha)$ range, and therefore would likely result in particles settling out of the flow were the experiments continued longer. Examples of the two cases ($\phi_0 > \phi^*$ and $\phi_0 < \phi^*$) are shown in figure \ref{sample} for $\alpha=45^\circ$, where $\phi^*(\alpha) \approx 0.35$. The effect of the increasing concentration profile for $\phi_0=0.45$ is to flatten the velocity near the top relative to the parabolic shape of an unstratified film, while for $\phi_0=0.25$ the absence of particles near the top increases the shear in this area. Also of interest is the fact that when $d\phi/dz>0$ both phases move faster than the velocity of an unstratified film, because of the high-shear, low-$\phi$ region at the bottom and the low shear at the top where $v$ is at its greatest. Both phases are slower when $d\phi/dz<0$. \begin{figure} \includegraphics[width=3.3in]{velocity_ratio} \caption{\label{vrel} The ratio $v_{rel}/v_{av} = (v_p - v_f)/(\phi v_p + (1-\phi) v_f)$ of velocities relevant to the formation of the particle-rich ridge. Velocity difference due to the stratified flow as described above (dashed), and velocity difference due to direct gravitational settling in the flow direction as described by Zhou et al. (solid).} \end{figure} In figure \ref{vrel} the relative velocity due to stratification is compared with the in-plane settling velocity used in \cite{ZDBH} and \cite{cook} at $\alpha=45^\circ$. Specifically, the vertical axis measures the ratio $v_{rel}/v_{av} = (v_p - v_f)/(\phi v_p + (1-\phi) v_f)$ that determines the accumulation of particles in an experiment limited by the length of the channel. For concentrations greater than $0.37$, stratification has a larger effect than in-plane settling. Since the particle-rich ridge occurs at rather high concentrations, the stratified flow appears to be the more important cause of the ridge. A description of the ridge evolution including stratification is possible within the lubrication context if the film is assumed to be always in equilibrium between settling and migration, by using the calculations of figure \ref{vrel} to determine the relative velocity from $\phi$. This would result in a system similar to that in \cite{cook}, which for length scales greater than a modified capillary length describes a ridge that grows linearly with time. If this route is followed, care must be taken to ensure the length scale is also large enough to justify the equilibrium assumption. The experiments and two-dimensional calculations of Timberlake and Morris \cite{morris} indicate the distance travelled before reaching equilibrium can be as large as tens of centimeters, even for an experiment with fairly large particles such as \cite{ZDBH}. 
At shorter length scales, such a two-dimensional model may therefore be necessary, which would generalize the above results by allowing non-equilibrium concentration profiles. The most likely effect of non-equilibrium physics would be to lengthen the timescale of phase separation, making the well-mixed regime more likely for length-limited experiments. This new theory demonstrates the importance of particle migration in determining the flow, and can provide a starting point for studying effects such as ridge formation, particle deposition in the clear fluid regime, the contact-line instability, or span-wise particle banding \cite{carpen}. This work is part of my doctoral dissertation at UCLA, and I am grateful to my advisor Andrea Bertozzi for her help and guidance. Financial support was provided by NSF grants ACI-0321917 and DMS-0502315 and ONR grant N000140710431.
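For reference, a minimal numerical sketch of the shooting procedure described in the model section follows; the parameter values ($\Delta$, $\alpha$, $\phi_0$), the regularization of $\sigma$ near the free surface, and the assumed monotonicity of $\sigma(1)$ in $\phi(0)$ are illustrative choices, not part of the model.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

# Constants from the model section; Delta, alpha and phi0 are assumed values.
Kc, Keta, phim = 0.43, 0.65, 0.67
Delta, alpha, phi0 = 1.5, np.radians(45.0), 0.40

def rhs(z, y):
    phi, sigma = y
    if phi <= 0.0 or phi >= phim:
        dphi = 0.0                       # phi' = 0 at phi = 0 and phi = phi_m
    else:
        bracket = 1.0 + 2.0 * (Keta - Kc) / Kc * phi / (phim - phi)
        flux = phi * (1.0 + Delta * phi) \
            - 2.0 * Delta / (9.0 * Kc) * (1.0 - phi) / np.tan(alpha)
        # regularize sigma -> 0 at the free surface and cap the slope
        dphi = np.clip(flux / (bracket * max(sigma, 1e-8)), -1e3, 1e3)
    return [dphi, -(1.0 + Delta * phi)]

def sigma_top(phi_bottom):
    sol = solve_ivp(rhs, (0.0, 1.0), [phi_bottom, 1.0 + Delta * phi0],
                    rtol=1e-8, atol=1e-10)
    return sol.y[1, -1]

# sigma(1) = Delta*(phi0 - <phi>), so bisect on phi(0): a larger phi(0)
# raises the depth average <phi> and lowers sigma(1) (assumed monotone).
lo, hi = 1e-4, phim - 1e-4
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if sigma_top(mid) > 0.0 else (lo, mid)
print("phi(0) =", 0.5 * (lo + hi))
\end{verbatim}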
\section{Introduction} The best known and by far the most studied aperiodic tiling of the plane is the Penrose tiling \cite{Pe,Br}. This tiling plays an important role in the understanding of quasicrystal structure and has some remarkable mathematical properties. Direct structure observation, using high-resolution transmission electron microscopy, shows that the atomic arrangement in the tenfold symmetry plane of certain decagonal quasicrystals can simply be interpreted in terms of the Penrose tiling \cite{St,Ab}. Many aspects concerning the mathematical properties have been studied and can be found in the literature; see for example \cite{Ja,Se,Mo}. In this paper, we present some results concerning the inflation symmetries of the Penrose type tilings. Our approach, based on the use of the three projectors corresponding to the usual decomposition of the five-dimensional superspace used in the cut and project construction, is an extension of the method used by Katz and Duneau in the description of the self-similarity properties of icosahedral quasicrystals \cite{Ka,Co}. There are also two important differences. In the case analysed by Katz and Duneau, the representation of the icosahedral group in the superspace $\mathbb{R}^6$ is a sum of only two irreducible representations, and the orthogonal projection of the lattice $\mathbb{Z}^6$ on the internal subspace is a dense subset. In our case, the representation of the symmetry group $C_5$ in the superspace $\mathbb{R}^5$ is a sum of three real irreducible representations, and the orthogonal projection of the lattice $\mathbb{Z}^5$ on the internal three-dimensional subspace is not a dense subset. The vertex set of the Penrose tiling is a multi-component model set \cite{Ba,Mo,Cot}, and we have adapted the Katz and Duneau method to such a case. The set $\Lambda $ of inflation factors obtained by using our method is a union of four cut and project sets. To our knowledge, the known inflation factors of the Penrose tiling are only those forming the multiplicative semigroup $\mathcal{I}=\{ \, (-\tau )^k\ |\ k\!\in \!\mathbb{N}\, \}$. Since the elements of this semigroup are not uniformly distributed in $\mathbb{R}$, we cannot have $\Lambda \subset \mathcal{I}$. We have found new scaling factors and inflation centers, but very probably our list is not exhaustive. It is known that the inflation factor $\tau $ is directly related to a construction of the Penrose tiling by tile substitutions \cite{Ja,Se}. We do not know whether the newly found scaling factors are related to some substitution rules or not. The importance of inflation symmetries in quasicrystal physics is comparable with that of translation symmetries in the description of crystals. A fractal shape of the energy spectrum may be a direct consequence of the self-similarity properties of the quasicrystal. A method to study the self-similarity properties of cut and project sets has been presented in \cite{Ma}, but as the authors show at the end of the paper, the method does not apply to the Penrose type tilings. \section{Penrose type tilings} In this section we review certain results concerning the Penrose type tilings defined by projection and introduce some notation. The relation \begin{equation} a:\mathbb{R}^5\longrightarrow\mathbb{R}^5,\qquad a(x_1,x_2,x_3,x_4,x_5)=(x_2,x_3,x_4,x_5,x_1) \end{equation} defines an orthogonal representation of the cyclic group $C_5=\langle \ a\ | \ a^5=e\ \rangle =\{ e, a, a^2, a^3, a^4 \}$ in the usual five-dimensional Euclidean space $\mathbb{R}^5$. 
The space $\mathbb{R}^5$ can be decomposed into a sum of orthogonal $C_5$-invariant subspaces $\mathbb{R}^5={\bf E}\oplus {\bf E}'\oplus {\bf E}''$, where \[ \begin{array}{l} {\bf E}\!=\!\left. \left\{ \alpha \left(1,{\rm cos}\frac{2\pi }{5},{\rm cos}\frac{4\pi }{5}, {\rm cos}\frac{6\pi }{5},{\rm cos}\frac{8\pi }{5}\right)\!+\! \beta \left(0,{\rm sin}\frac{2\pi }{5},{\rm sin}\frac{4\pi }{5}, {\rm sin}\frac{6\pi }{5}, {\rm sin}\frac{8\pi }{5}\right)\ \right|\ \alpha , \beta \!\in \!\mathbb{R} \right\} \\[2mm] {\bf E}'\!=\!\left. \left\{ \alpha \left(1,{\rm cos}\frac{4\pi }{5},{\rm cos}\frac{8\pi }{5}, {\rm cos}\frac{2\pi }{5},{\rm cos}\frac{6\pi }{5}\right)\!+\! \beta \left(0,{\rm sin}\frac{4\pi }{5},{\rm sin}\frac{8\pi }{5}, {\rm sin}\frac{2\pi }{5}, {\rm sin}\frac{6\pi }{5}\right)\ \right|\ \alpha , \beta \!\in \!\mathbb{R} \right\} \\[2mm] {\bf E}''\!=\!\{ \alpha (1,1,1,1,1)\ |\ \alpha \in \mathbb{R} \} \end{array} \] It is known that ${\rm cos}\frac{\pi }{5}=\frac{\tau }{2}$ and ${\rm cos}\frac{2\pi }{5}=-\frac{\tau '}{2}$, where $\tau $, $\tau '$ are the irrational numbers $\tau =(1+\sqrt{5})/2$ and $\tau '=(1-\sqrt{5})/2$. The vectors $e_1=(1,0,0,0,0)$, $e_2=(0,1,0,0,0)$, ... , $e_5=(0,0,0,0,1)$ form the canonical basis of $\mathbb{R}^5$, and the matrices of the orthogonal projectors corresponding to ${\bf E}$, ${\bf E}'$ and ${\bf E}''$ in this basis are \begin{equation} \label{projectors} \begin{array}{lll} \pi = \frac{1}{5}{\cal M}(2,-\tau ', -\tau ),\quad & \pi ' = \frac{1}{5}{\cal M}(2,-\tau , -\tau '),\quad & \pi '' = \frac{1}{5}{\cal M}(1,1,1) \end{array} \end{equation} where \begin{equation} {\cal M}(\alpha ,\beta ,\gamma )=\left( \begin{array}{rrrrr} \alpha &\ \beta &\ \gamma &\ \gamma &\ \beta \\ \beta & \ \alpha &\ \beta &\ \gamma &\ \gamma \\ \gamma &\ \beta & \ \alpha &\ \beta &\ \gamma \\ \gamma &\ \gamma &\ \beta &\ \alpha &\ \beta \\ \beta &\ \gamma &\ \gamma &\ \beta &\ \alpha \end{array} \right) . \end{equation} The projections on the space ${\bf E}$ of the endpoints of the vectors $e_1$, $e_2$, ... , $e_5$ are the vertices of a regular pentagon. Any point $x\in \mathbb{Z}^5$ has ten arithmetical neighbours, namely, $x\pm e_1$, $x\pm e_2$, $x\pm e_3$, $x\pm e_4$, $x\pm e_5$, and the projections on ${\bf E}$ of these points are the vertices of a regular decagon. If we project on ${\bf E}$ the 2-faces of the unit hypercube $(0,1)^5$ we get, up to certain rotations, only two rhombs. We embed the physical two-dimensional space into $\mathbb{R}^5$ by identifying it with the subspace ${\bf E}$, and regard the complementary space ${\bf E}^\perp ={\bf E}'\oplus {\bf E}''$ which corresponds to the projector $\pi ^\perp =\pi '+\pi ''$ as an internal space. By using the strip \begin{equation} {\bf S}=\{ \ x\in \mathbb{R}^5\ |\ \pi ^\perp x\in {\bf W}\ \}={\bf E}+(0,1)^5 \end{equation} corresponding to the window ${\bf W}=\pi ^\perp \left((0,1)^5\right)$ we define the pattern \cite{Ka} \begin{equation} \mathcal{P}=\pi \left( \mathbb{Z}^5\cap {\bf S}\right)= \{ \ \pi x\ |\ x\in \mathbb{Z}^5,\ \pi ^\perp x\in {\bf W}\ \}. \end{equation} The window ${\bf W}$ defined as the projection on the three-dimensional space ${\bf E}^\perp $ of the open unit hypercube $(0,1)^5$ is a polyhedron and one can remark that its frontier contains some points belonging to $\pi ^\perp (\mathbb{Z}^5)$. The pattern $\mathcal{P}$ is interesting, but it is a particular one. 
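These projection formulas can be checked numerically; the following minimal sketch (a floating-point verification only) builds the circulant matrices ${\cal M}(\alpha ,\beta ,\gamma )$ and confirms that $\pi $, $\pi '$, $\pi ''$ are mutually orthogonal projectors with $\pi +\pi '+\pi ''$ equal to the identity.
\begin{verbatim}
import numpy as np

# Build M(alpha, beta, gamma) and verify the projector properties numerically.
tau = (1 + np.sqrt(5)) / 2          # tau  = (1+sqrt 5)/2
taup = (1 - np.sqrt(5)) / 2         # tau' = (1-sqrt 5)/2

def M(a, b, c):
    """Symmetric circulant 5x5 matrix with first row (a, b, c, c, b)."""
    row = np.array([a, b, c, c, b], dtype=float)
    return np.array([np.roll(row, k) for k in range(5)])

pi = M(2, -taup, -tau) / 5
pip = M(2, -tau, -taup) / 5
pipp = M(1, 1, 1) / 5

for P in (pi, pip, pipp):
    assert np.allclose(P @ P, P)                 # idempotent
assert np.allclose(pi @ pip, np.zeros((5, 5)))   # orthogonal ranges
assert np.allclose(pi + pip + pipp, np.eye(5))   # resolution of the identity
print(np.round(pi[:, 0], 4))   # pi e_1: a vertex of the regular pentagon in E
\end{verbatim}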
For any vector $v\in {\bf E}' $ we can consider the translated strip $v+{\bf S}$ which corresponds to the translated window $v+{\bf W}$, and define the pattern \begin{equation} \label{pattern} \mathcal{P}_v=\pi \left( \mathbb{Z}^5\cap (v+{\bf S})\right)= \{ \ \pi x\ |\ x\in \mathbb{Z}^5,\ \pi ^\perp x\in v+{\bf W}\ \}. \end{equation} We say that $\mathcal{P}_v$ is a {\it non-singular pattern} if $v$ is such that the frontier of $v+{\bf W}$ does not contain any element of $\pi ^\perp (\mathbb{Z}^5)$. In the opposite case we say that $\mathcal{P}_v$ is a {\it singular pattern}. One can prove that the points of any non-singular pattern $\mathcal{P}_v$ are the vertices of a tiling of the plane ${\bf E}$ formed by only two tiles, each of them in a finite number of orientations \cite{Ka}. The hyperplane \begin{equation} \mathcal{E}={\bf E}\oplus {\bf E}' =\left. \left\{ \ (x_1,x_2,x_3,x_4,x_5)\in \mathbb{R}^5\ \right| \ x_1+x_2+x_3+x_4+x_5=0 \ \right\} \end{equation} is orthogonal to the one-dimensional space ${\bf E}''=\{ \ ( \alpha ,\alpha ,\alpha ,\alpha ,\alpha )\ |\ \alpha \in \mathbb{R} \ \}$, and the lattice $\mathbb{Z}^5$ is contained in the family of parallel equidistant spaces $\{ \ \mathcal{E}_n\ |\ n\in \mathbb{Z}\ \}$, where \begin{equation} \mathcal{E}_n=nw+\mathcal{E}= \left. \left\{ \ (x_1,x_2,x_3,x_4,x_5)\in \mathbb{R}^5\ \right| \ x_1+x_2+x_3+x_4+x_5=n \ \right\} \end{equation} and $w=(0.2,0.2,0.2,0.2,0.2)$. If $x=(x_1,x_2,x_3,x_4,x_5)$ belongs to the open unit hypercube $(0,1)^5$ then $0<x_1+x_2+x_3+x_4+x_5<5$, and therefore only the hyperplanes $\mathcal{E}_1$, $\mathcal{E}_2$, $\mathcal{E}_3$ and $\mathcal{E}_4$ meet $(0,1)^5$. This means that the set $\mathbb{Z}^5\cap (v+{\bf S})$ occurring in the definition of $\mathcal{P}_v$ is contained in $\mathcal{E}_1 \cup \mathcal{E}_2\cup \mathcal{E}_3\cup \mathcal{E}_4$, whence \begin{equation} \mathcal{P}_v= \bigcup_{n=1}^4\left\{ \ \pi x\ |\ x\in \mathbb{Z}^5\cap \mathcal{E}_n , \ \pi 'x\in v+{\bf W}_n\ \right\} \end{equation} where ${\bf W}_n=\pi '\left( \mathcal{E}_n\cap (0,1)^5\right)=-nw+(\mathcal{E}_n\cap {\bf W})$. By direct computation, one can prove that the set ${\bf W}_1$ is the interior of the regular pentagon with the vertices $\pi 'e_1$, $\pi 'e_3$, $\pi ' e_5$, $\pi 'e_2$, $\pi ' e_4$ (considered in this order), ${\bf W}_2=-\tau {\bf W}_1$, ${\bf W}_3=\tau {\bf W}_1$ and ${\bf W}_4=-{\bf W}_1$. \section{Self-similarities of the singular pattern $\mathcal{P}$} The translation corresponding to the vector $5w=(1,1,1,1,1)\in \mathbb{Z}^5\cap {\bf E}''$ \begin{equation} \mathbb{Z}^5\cap \mathcal{E}_n\longrightarrow \mathbb{Z}^5\cap \mathcal{E}_{n+5}: x\mapsto x+5w \end{equation} is a one-to-one transformation, $\pi (x+5w)=\pi x$ and $\pi '(x+5w)=\pi 'x$. If we put ${\bf W}_{5k}=\emptyset $, ${\bf W}_{5k+1}={\bf W}_1$, ${\bf W}_{5k+2}=-\tau {\bf W}_1$, ${\bf W}_{5k+3}=\tau {\bf W}_1$ and ${\bf W}_{5k+4}=-{\bf W}_1$ for any $k\in \mathbb{Z}$ then we can re-write the definition of $\mathcal{P}_v$ as \begin{equation} \mathcal{P}_v=\bigcup_{n\in \mathbb{Z}}\left\{ \ \pi x\ | \ x\in \mathbb{Z}^5\cap \mathcal{E}_n , \ \pi 'x\in v+{\bf W}_n\ \right\}. 
\end{equation} We say \cite{Ka,Ma} that $\lambda \in \mathbb{R}$ is an {\it inflation factor} of $\mathcal{P}_v$ if there exists a point $y\in {\bf E}$ (called an {\it inflation center}) such that $\mathcal{P}_v$ is invariant under the mapping ${\bf E}\longrightarrow {\bf E}: x\mapsto y+\lambda (x-y)$, that is, $x\in \mathcal{P}_v\ \Longrightarrow \ y+\lambda (x-y)\in \mathcal{P}_v $.\\[5mm] {\bf Theorem 1. } {\it The matrix $A=\lambda \pi +\lambda '\pi '+\lambda ''\pi ''$ has integer entries if and only if $(\lambda , \lambda ',\lambda '')$ belongs to the set} \[ \begin{array}{l} \mathcal{L}\!=\!\left\{ \left( \frac{ \alpha - \beta }{2}+\beta \tau ,\frac{\alpha -\beta }{2}+\beta \tau ', \frac{\alpha +\gamma }{2}+2\gamma \right)\ ; \ \alpha ,\beta ,\gamma \in \mathbb{Z}, \ (-1)^\alpha \!=\!(-1)^\beta \!=\!(-1)^\gamma \ \right\}. \end{array} \] {\bf Proof.} From (\ref{projectors}) we get \[ \begin{array}{l} A=\mathcal{M}\left( \frac{2}{5}\lambda +\frac{2}{5}\lambda '+\frac{1}{5}\lambda '', \frac{-1+\sqrt{5}}{10}\lambda +\frac{-1-\sqrt{5}}{10}\lambda '+\frac{1}{5}\lambda '', \frac{-1-\sqrt{5}}{10}\lambda +\frac{-1+\sqrt{5}}{10}\lambda '+\frac{1}{5}\lambda ''\right).\end{array} \] If this matrix has integer entries then $2\lambda +2\lambda '+\lambda ''\in 5\mathbb{Z}$, $-\lambda -\lambda ' +2\lambda ''\in 5\mathbb{Z}$ and $\lambda -\lambda '\in \mathbb{Z}\sqrt{5}$, whence $\lambda +\lambda '\in \mathbb{Z}$. Denoting $\lambda +\lambda '=\alpha $ and $\lambda -\lambda '=\beta \sqrt{5}$ we obtain $\lambda =\frac{ \alpha - \beta }{2}+\beta \tau $, $\lambda '=\frac{\alpha -\beta }{2}+\beta \tau '$ and $-\alpha +2\lambda ''\in 5\mathbb{Z}$. If we put $-\alpha +2\lambda ''=5\gamma $ then $\lambda ''=\frac{\alpha +5\gamma }{2}$ and $A=\mathcal{M}\left(\frac{\alpha +\gamma }{2},\frac{\gamma +\beta }{2},\frac{\gamma -\beta }{2}\right)$. In order to have integer entries the numbers $\alpha $, $\beta $ and $\gamma $ must have the same parity. \qquad $\rule{2mm}{2mm} $\\[5mm] {\bf Theorem 2.} {\it If $(\lambda ,\lambda ',\lambda '')$ belongs to the subset \begin{equation} \tilde{\mathcal{L}}=\{ \ (\lambda ,\lambda ',\lambda '')\in \mathcal{L}\ |\ \ \lambda '{\bf W}_n\subseteq {\bf W}_{\lambda ''n},\ \ for\ any\ \ n\in \{ 1,2,3,4\}\ \} \end{equation} then $\lambda $ is an inflation factor of the singular pattern $\mathcal{P}$.}\\[3mm] {\bf Proof.} If $(\lambda ,\lambda ',\lambda '')\in \tilde{\mathcal{L}}$ then the corresponding mapping $A:\mathbb{R}^5\longrightarrow \mathbb{R}^5$, $A=\lambda \pi +\lambda '\pi '+\lambda ''\pi ''$ has the property \begin{equation} \left. \begin{array}{r} x\in \mathbb{Z}^5\cap \mathcal{E}_n \\ \pi 'x\in {\bf W}_n \end{array} \right\} \quad \Longrightarrow \quad \left\{ \begin{array}{l} Ax\in \mathbb{Z}^5\cap \mathcal{E}_{\lambda ''n} \\ \pi '(Ax)\in {\bf W}_{\lambda ''n} \end{array} \right. \end{equation} In view of the definition (\ref{pattern}) considered for $v=(0,0,0,0,0)$, this means that $\pi x\in \mathcal{P}\Longrightarrow \pi (Ax)\in \mathcal{P}$. But $\pi (Ax)=\lambda \pi x$, and hence, we have $y\in \mathcal{P}\Longrightarrow \lambda y\in \mathcal{P}$. \qquad $\rule{2mm}{2mm} $\\[5mm] {\bf Theorem 3.} {\it Each element of the set \[ \begin{array}{ll} \Lambda &=\left\{ \left. 1-\frac{\beta +5\gamma }{2} +\beta \tau \ \right| \ \beta , \gamma \in \mathbb{Z},\ \ (-1)^\beta =(-1)^\gamma , \ \ -\frac{\tau }{2}\leq 1-\frac{\beta +5\gamma }{2} +\beta \tau ' \leq 1 \right\} \\[2mm] &\cup \left\{ \left. 
2-\frac{\beta +5\gamma }{2} +\beta \tau \ \right| \ \beta , \gamma \in \mathbb{Z}, \ \ (-1)^\beta =(-1)^\gamma , \ \ -\frac{1}{2}\leq 2-\frac{\beta +5\gamma }{2} +\beta \tau ' \leq \frac{1}{\tau } \right\} \\[2mm] &\cup \left\{ \left. 3-\frac{\beta +5\gamma }{2} +\beta \tau \ \right| \ \beta , \gamma \in \mathbb{Z}, \ \ (-1)^\beta =(-1)^\gamma , \ \ -\frac{1}{\tau }\leq 3-\frac{\beta +5\gamma }{2} +\beta \tau ' \leq \frac{1}{2} \right\} \\[2mm] &\cup \left\{ \left. 4-\frac{\beta +5\gamma }{2} +\beta \tau \ \right| \ \beta , \gamma \in \mathbb{Z}, \ \ (-1)^\beta =(-1)^\gamma , \ \ -1\leq 4-\frac{\beta +5\gamma }{2} +\beta \tau ' \leq \frac{\tau }{2} \right\} \end{array} \] is an inflation factor of $\mathcal{P}$.}\\[3mm] {\bf Proof.} It is sufficient to analyse the cases $\lambda ''\in \{ 0,1,2,3,4,5\}$. In the case $\lambda ''=0$ we do not obtain any inflation factor. The ratio between the distance from the centre of a regular pentagon to one of its sides and the distance from the centre to one vertex is cos$\frac{\pi }{5}=\frac{\tau }{2}$. By imposing the conditions \[ \begin{array}{ll} \lambda '{\bf W}_1\subseteq {\bf W}_1, \quad \lambda '{\bf W}_2\subseteq {\bf W}_2,\quad \lambda '{\bf W}_3\subseteq {\bf W}_3, \quad \lambda '{\bf W}_4\subseteq {\bf W}_4& {\rm in\ the\ case} \ \lambda ''=1,\\ \lambda '{\bf W}_1\subseteq {\bf W}_2, \quad \lambda '{\bf W}_2\subseteq {\bf W}_4,\quad \lambda '{\bf W}_3\subseteq {\bf W}_1, \quad \lambda '{\bf W}_4\subseteq {\bf W}_3& {\rm in\ the\ case} \ \lambda ''=2,\\ \lambda '{\bf W}_1\subseteq {\bf W}_3, \quad \lambda '{\bf W}_2\subseteq {\bf W}_1,\quad \lambda '{\bf W}_3\subseteq {\bf W}_4, \quad \lambda '{\bf W}_4\subseteq {\bf W}_2& {\rm in\ the\ case} \ \lambda ''=3,\\ \lambda '{\bf W}_1\subseteq {\bf W}_4, \quad \lambda '{\bf W}_2\subseteq {\bf W}_3,\quad \lambda '{\bf W}_3\subseteq {\bf W}_2, \quad \lambda '{\bf W}_4\subseteq {\bf W}_1& {\rm in\ the\ case} \ \lambda ''=4 \end{array} \] we get the set $\Lambda $.\qquad $\rule{2mm}{2mm} $\\[5mm] The well-known inflation factor $\lambda \!=\!-\tau $ belongs to $\Lambda $, and is obtained for $\gamma\!=\!1$, $\beta \!=\!-1$. One can remark that $\Lambda $ is a union of four cut and project sets. Since the elements of the multiplicative semigroup $\mathcal{I}=\{ \, (-\tau )^k\ |\ k\in \mathbb{N}\, \}$ are not uniformly distributed in $\mathbb{R}$, we can not have $\Lambda \subset \mathcal{I}$.\\[5mm] {\bf Theorem 4.} {\it If $(\lambda ,\lambda ',\lambda '')$ belongs to the subset \begin{equation} \tilde{\mathcal{L}}_0=\{ \ (\lambda ,\lambda ',\lambda '')\in \mathcal{L}\ |\ \ \lambda '\overline{\bf W}_n\subset {\bf W}_{\lambda ''n}\ \ for\ any\ \ n\in \{ 1,2,3,4\}\ \} \end{equation} then there is an infinite number of inflation centers $y\in {\bf E}$ such that \begin{equation} {\bf E}\longrightarrow {\bf E}: \ x\mapsto y+\lambda (x-y) \end{equation} is an affine self-similarity of $\mathcal{P}$, that is, such that $x\in \mathcal{P}\Longrightarrow y+\lambda (x-y)\in \mathcal{P}$.}\\[3mm] {\bf Proof.} If $(\lambda ,\lambda ',\lambda '')\in \tilde{\mathcal{L}}_0$ then there is a neighbourhood $\mathcal{V}_{(\lambda ,\lambda ',\lambda '')}$ of the origin in ${\bf E}'$ such that the relation $(1-\lambda ')u+ \lambda '{\bf W}_n\subset {\bf W}_{\lambda ''n}$ is verified for any $n\in \{ 1,2,3,4\}$ and any $u\in \mathcal{V}_{(\lambda ,\lambda ',\lambda '')}$. 
Since the projection $\pi '(\mathbb{Z}^5\cap \mathcal{E})$ is dense in ${\bf E}'$, the set $\mathcal{C}_{(\lambda ,\lambda ',\lambda '')}=\left\{ t\in \mathbb{Z}^5\cap \mathcal{E}\ \left| \ \pi 't\in \mathcal{V}_{(\lambda ,\lambda ',\lambda '')}\right. \right\}$ is an infinite set. For each $t\in \mathcal{C}_{ \ (\lambda ,\lambda ',\lambda '')}$ the mapping $A_t:\mathbb{R}^5\longrightarrow \mathbb{R}^5$, $A_tx=t+(\lambda \pi +\lambda '\pi '+\lambda ''\pi '')(x-t)$ satisfies the relations $\pi '(A_tx)=\pi 't+\lambda ' \pi '(x-t)=(1-\lambda')\pi 't+\lambda ' \pi 'x$, and \begin{equation} \left. \begin{array}{r} x\in \mathbb{Z}^5\cap \mathcal{E}_n \\ \pi 'x\in {\bf W}_n \end{array} \right\} \quad \Longrightarrow \quad \left\{ \begin{array}{l} A_tx\in \mathbb{Z}^5\cap \mathcal{E}_{\lambda ''n} \\ \pi '(A_tx)\in {\bf W}_{\lambda ''n} \end{array} \right. \end{equation} This means that $\pi x\in \mathcal{P}\Longrightarrow \pi (A_tx)\in \mathcal{P}$, that is, $\pi x\in \mathcal{P}\Longrightarrow \pi t+\lambda (\pi x-\pi t)\in \mathcal{P}$.\qquad $\rule{2mm}{2mm} $\\[5mm] \section{Self-similarities of the patterns $\mathcal{P}_v$} In order to avoid certain `defects' in tiling we have to translate the strip ${\bf S}$ by a vector $v\in {\bf E}'$ such that the frontier of $v+{\bf S}$ contains no point of $\mathbb{Z}^5$, and to define $\mathcal{P}_v=\pi \left(\mathbb{Z}^5\cap (v+{\bf S})\right)$. The translated strip $v+{\bf S}$ corresponds to the translated window $v+{\bf W}=\pi ^\perp \left( v+(0,1)^5\right)$. In this case $\pi '\left( \mathcal{E}_n\cap (v+(0,1)^5)\right)=-nw+\mathcal{E}_n\cap (v+W)=v+{\bf W}_n$, for any $n\in \{ 1,2,3,4\}$.\\[5mm] {\bf Theorem 5.} {\it If $(\lambda ,\lambda ',\lambda '')\in \tilde{\mathcal{L}}_0$ then there is an infinite number of inflation centers $y\in {\bf E}$ such that \begin{equation} {\bf E}\longrightarrow {\bf E}: \ x\mapsto y+\lambda (x-y) \end{equation} is an affine self-similarity of $\mathcal{P}_v$, that is, such that $x\in \mathcal{P}_v\Longrightarrow y+\lambda (x-y)\in \mathcal{P}_v$.}\\[3mm] {\bf Proof.} If $(\lambda ,\lambda ',\lambda '')\in \tilde{\mathcal{L}}_0$ then there is a neighbourhood $\mathcal{U}_{(\lambda ,\lambda ',\lambda '')}$ of $v$ in ${\bf E}'$ such that the relation $(1-\lambda ')(u-v)+ \lambda '{\bf W}_n\subset {\bf W}_{\lambda ''n}$ is verified for any $n\in \{ 1,2,3,4\}$ and any $u\in \mathcal{U}_{(\lambda ,\lambda ',\lambda '')}$. Since the projection $\pi '(\mathbb{Z}^5\cap \mathcal{E})$ is dense in ${\bf E}'$, the set $\mathcal{D}_{(\lambda ,\lambda ',\lambda '')}=\left\{ t\in \mathbb{Z}^5\cap \mathcal{E}\ \left| \ \pi 't\in \mathcal{U}_{(\lambda ,\lambda ',\lambda '')}\right. \right\}$ is an infinite set. For each $t\in \mathcal{D}_{ \ (\lambda ,\lambda ',\lambda '')}$ the mapping $A_t:\mathbb{R}^5\longrightarrow \mathbb{R}^5$, $A_tx=t+(\lambda \pi +\lambda '\pi '+\lambda ''\pi '')(x-t)$ satisfies the relations $\pi '(A_tx)=\pi 't+\lambda ' \pi '(x-t)=v+(1-\lambda')(\pi 't-v)+\lambda ' (\pi 'x-v)$ and \begin{equation} \left. \begin{array}{r} x\in \mathbb{Z}^5\cap \mathcal{E}_n \\ \pi 'x\in v+{\bf W}_n \end{array} \right\} \quad \Longrightarrow \quad \left\{ \begin{array}{l} A_tx\in \mathbb{Z}^5\cap \mathcal{E}_{\lambda ''n} \\ \pi '(A_tx)\in v+{\bf W}_{\lambda ''n} \end{array} \right. 
\end{equation} This means that $\pi x\!\in \!\mathcal{P}_v\Longrightarrow \pi (A_tx)\!\in \!\mathcal{P}_v$, that is, $\pi x\!\in \!\mathcal{P}_v\Longrightarrow \pi t\!+\!\lambda (\pi x\!-\!\pi t)\!\in \!\mathcal{P}_v$.\qquad $\rule{2mm}{2mm} $\\[5mm] \section{Concluding remark} In the existing models the pentagonal/decagonal quasicrystals are generally considered to be two-dimensional, the third dimension being treated separately. Recently, Ben-Abraham {\it et al.} \cite{BA} have shown that the strip projection method offers a natural way to describe these quasicrystals as three-dimensional quasicrystals. It is sufficient to choose, in the above decomposition, ${\bf E}^\perp $ as the physical space instead of ${\bf E}$. By a simple change of interpretation, our mathematical results concerning the Penrose type tilings become mathematical results concerning the model proposed by Ben-Abraham {\it et al.} These new structures are invariant under certain transformations which act as inflations but with different constants in the quasiperiodic layers and the orthogonal direction. \section*{Acknowledgment} The author is grateful to one of the referees for some very useful suggestions. This research was supported by the grant CEx05-D11-03.
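As a numerical complement to Theorem 3, the following minimal sketch enumerates the elements of $\Lambda $ arising from $|\beta |,|\gamma |\leq 20$; the scan range and the small tolerance used at the closed interval endpoints are implementation choices, not part of the theorem.
\begin{verbatim}
import numpy as np

# Enumerate inflation factors of Theorem 3: for each n = 1..4, keep
# lam = n - (beta+5*gamma)/2 + beta*tau whenever the conjugate
# lamp = n - (beta+5*gamma)/2 + beta*tau' lies in the prescribed interval.
tau = (1 + np.sqrt(5)) / 2
taup = (1 - np.sqrt(5)) / 2
intervals = {1: (-tau / 2, 1.0), 2: (-0.5, 1 / tau),
             3: (-1 / tau, 0.5), 4: (-1.0, tau / 2)}
eps = 1e-12                          # tolerance at the closed endpoints

factors = set()
for beta in range(-20, 21):
    for gamma in range(-20, 21):
        if (beta - gamma) % 2:       # beta and gamma must share parity
            continue
        for n, (lo, hi) in intervals.items():
            lam = n - (beta + 5 * gamma) / 2 + beta * tau
            lamp = n - (beta + 5 * gamma) / 2 + beta * taup
            if lo - eps <= lamp <= hi + eps:
                factors.add(round(lam, 9))

print(sorted(f for f in factors if abs(f) < 8))
# The list contains -tau, obtained from beta = -1, gamma = 1, n = 2.
\end{verbatim}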
\section{Introduction} The study of hypernuclei is one of the frontiers of nuclear physics. The strange degree of freedom adds a new dimension to the description of nuclear structure. A hyperon (or a strange quark) embedded in nuclei plays a characteristic role as an ``impurity'' or a probe in the many-body system\cite{Tamura}. Modern nucleon-nucleon ($NN$) potentials give a successful description of the $NN$ scattering data and have been used to make precise calculations of light nuclei\cite{Nogga,Pieper}. In contrast, hyperon-nucleon ($YN$) and hyperon-hyperon ($YY$) interactions have large uncertainties, because the corresponding scattering experiments are either difficult or impossible due to the short lifetime of hyperons. A full phase shift analysis at the same level as in the $NN$ case is not yet available. Although many theoretical models which describe the $YN$ interaction together with the $NN$ interaction have been published, different models predict different phase shifts and scattering parameters, e.g., for the $\Lambda N$ potential, in spite of their nice description of the $NN$ sector\cite{ESC04,NSC97,NSC89,FSS,fss2,GSOBEP}. Recently, a lattice QCD study of the $NN$ potential has been performed\cite{Ishii}. This approach may lead to a new paradigm for studying the $YN$ and $YY$ interactions too, since lattice QCD is an {\it ab initio} method of treating the fundamental theory of the strong interaction. (See also the conventional approach to the $YN$ phase shifts using L\"{u}scher's finite volume formula\cite{Beane}.) The purpose of the present report is to explore the $YN$ and $YY$ potentials from the lattice QCD simulation on the basis of the methodology developed in Refs.~\cite{Ishii,Aoki}. The extension from the $NN$ potential to $YN$ and $YY$ potentials is relatively straightforward. In the case of the $NN$ potential, there are only two representations in the isospin channel, i.e., ${\bf 2}\otimes{\bf 2}={\bf 3}\oplus{\bf 1}$ in $SU(2)$, which correspond to the isovector and isoscalar channels. Including the strange degrees of freedom extends the arithmetic into flavor $SU(3)$, $ {\bf 8}\otimes{\bf 8}= {\bf 27}\oplus\overline{\bf 10}\oplus{\bf 1}\oplus{\bf 8}\oplus {\bf 10}\oplus{\bf 8}. $ Here the isovector (isoscalar) channel of the $NN$ sector is assigned to be a subset of the ${\bf 27}$-plet ($\overline{\bf 10}$-plet) representation. The potentials for the newly arising channels can hardly be determined from experiment so far. The lattice QCD simulation with physical strange quark mass provides new numerical ``data'' for these strange channels. In this report, we focus on the $N\Xi$ potential in the isovector ($I=1$) channel as a first step. We keep away, at present, from the isoscalar ($I=0$) channel of the $N\Xi$ potential, since the $N\Xi$ is not the lowest state in the isoscalar $^1S_0$ channel and the $\Lambda\Lambda$ strong decay mode may open below the $N\Xi$ threshold. There is almost no experimental information on the $N\Xi$ interaction, although a few experimental data\cite{Nakazawa,Fukuda,Khaustov} suggest that the $\Xi$-nucleus potential would be weakly attractive. Moreover, the $\Xi$-nucleus interaction will be studied as a day-one experiment in the near future at J-PARC\cite{JPARC} through the $(K^-,K^+)$ reaction on nuclear targets such as $^{12}$C. \section{Formulation} As we mentioned, the methodology to obtain the potential follows Refs.~\cite{Aoki} and \cite{Ishii}. The latter describes, successfully for the first time, the $NN$ potentials from a lattice QCD simulation. 
We start from an effective Schr\"{o}dinger equation for the $N\Xi$ system at low energies: \begin{equation} -{1\over 2\mu}\nabla^2 \phi(\vec{r}) + \int d^3r^\prime U(\vec{r},\vec{r}^\prime) \phi(\vec{r}^\prime) = E \phi(\vec{r}), \end{equation} where $\mu=m_{N}m_{\Xi}/(m_{N}+m_{\Xi})$ and $E$ are the reduced mass of the $N\Xi$ system and the nonrelativistic energy in the center-of-mass frame, respectively. The nonlocal potential can be represented by the derivative expansion \begin{equation} U(\vec{r},\vec{r}^\prime)= V_{N\Xi}(\vec{r},\nabla)\delta(\vec{r}-\vec{r}^\prime). \end{equation} The general expression of the potential $V_{N\Xi}$ is \begin{eqnarray} V_{N\Xi} &=& V_0(r) +V_\sigma(r)(\vec{\sigma}_N\cdot\vec{\sigma}_\Xi) +V_\tau(r)(\vec{\tau}_N\cdot\vec{\tau}_\Xi) +V_{\sigma\tau}(r) (\vec{\sigma}_N\cdot\vec{\sigma}_\Xi) (\vec{\tau}_N\cdot\vec{\tau}_\Xi) \nonumber \\ && +V_T(r)S_{12} +V_{T\tau}(r)S_{12}(\vec{\tau}_N\cdot\vec{\tau}_\Xi) +V_{LS}(r)(\vec{L}\cdot\vec{S}_+) +V_{LS\tau}(r)(\vec{L}\cdot\vec{S}_+)(\vec{\tau}_N\cdot\vec{\tau}_\Xi) \nonumber \\ && +V_{ALS}(r)(\vec{L}\cdot\vec{S}_-) +V_{ALS\tau}(r)(\vec{L}\cdot\vec{S}_-)(\vec{\tau}_N\cdot\vec{\tau}_\Xi) +{O}(\nabla^2). \end{eqnarray} Here $S_{12}=3(\vec{\sigma}_1\cdot\vec{n})(\vec{\sigma}_2\cdot\vec{n})-\vec{\sigma}_1\cdot\vec{\sigma}_2$ is the tensor operator with $\vec{n}=\vec{r}/|\vec{r}|$, $\vec{S}_{\pm}=(\vec{\sigma}_1 \pm \vec{\sigma}_2)/2$ are the symmetric ($+$) and antisymmetric ($-$) spin operators, $\vec{L}=-i\vec{r}\times\vec{\nabla}$ is the relative angular momentum operator, and $\vec{\tau}_N$ ($\vec{\tau}_{\Xi}$) is the isospin operator for $N$ ($\Xi$). We note that the antisymmetric spin-orbit forces ($V_{ALS}$ and $V_{ALS\tau}$) newly arise because the constituents ($N$ and $\Xi$) are not identical. According to the above expansion of the potential, the wave function should be classified by the total isospin $I$ and the total angular momentum and parity $J^\pi$, with $\vec{J}=\vec{L}+\vec{S}_+$. A particular spin (isospin) projection can be made in terms of $\vec{\sigma}_N\cdot\vec{\sigma}_\Xi$ ($\vec{\tau}_N\cdot\vec{\tau}_\Xi$); e.g., for the isospin projection we have $P^{(I=0)}=(1-\vec{\tau}_N\cdot\vec{\tau}_\Xi)/4$ and $P^{(I=1)}=(3+\vec{\tau}_N\cdot\vec{\tau}_\Xi)/4$. In this work, we focus only on the isospin $I=1$, $S$-wave component of the wave function, so as to obtain the (effective) central potential through \begin{equation} V_{\rm central}(r) = E + {1\over 2\mu}{\vec{\nabla}^2\phi(r)\over \phi(r)}. \end{equation} The $S$-wave wave function is measured from the equal-time Bethe-Salpeter (BS) amplitude $\phi(\vec{r};k)$ as \begin{eqnarray} \phi(\vec{r};k) &=& {1\over 24} \sum_{{\cal R}\in O} {1\over L^3} \sum_{\vec{x}} P^\sigma_{\alpha\beta} \left\langle 0 \left| p_\alpha({\cal R}[\vec{r}]+\vec{x}) \Xi^0_\beta(\vec{x}) \right| p \Xi^0 ; k \right\rangle, \\ % p_\alpha(x) &=& \varepsilon_{abc} \left( u_a(x) C \gamma_5 d_b(x) \right) u_{c\alpha}(x), \\ % \Xi^0_\beta(y) &=& \varepsilon_{abc} \left( u_a(y) C \gamma_5 s_b(y) \right) s_{c\beta}(y), \end{eqnarray} where $\alpha$ and $\beta$ denote the Dirac indices, $a$, $b$ and $c$ the color indices, and $C=\gamma_4\gamma_2$ the charge conjugation matrix. The summation over ${\cal R}\in O$ runs over the cubic transformation group to project onto the $S$-wave, and the summation over $\vec{x}$ projects onto zero total spatial momentum. $p_\alpha(x)$ and $\Xi^0_\beta(y)$ are the local field operators for the proton and the $\Xi^0$. 
We take the upper components of the Dirac indices $\alpha$ and $\beta$ and construct the spin singlet (triplet) channel with $P^\sigma_{\alpha\beta}=(\sigma_2)_{\alpha\beta}$ ($P^\sigma_{\alpha\beta}=(\sigma_1)_{\alpha\beta}$). The amplitude $\phi(\vec{r})$ with $\vec{r}=\vec{x}-\vec{y}$ is understood as the probability amplitude to find ``nucleon-like'' three quarks located at the point $\vec{x}$ and ``$\Xi$-like'' three quarks located at the point $\vec{y}$. $\phi(\vec{r})$ includes not only the elastic amplitude $N\Xi\rightarrow N\Xi$ but also inelastic amplitudes such as $N\Xi\rightarrow \pi N\Xi$, $N\Xi \rightarrow \Lambda\Sigma$, and so on. Note, however, that at low energies below the thresholds the asymptotic behavior of $\phi(\vec{r})$ is not affected by the inelastic contributions, since they decrease exponentially in the asymptotic region. (In the present calculation with isospin $I=1$, the $\Lambda\Lambda$ channel is closed due to isospin conservation.) On the other hand, $\phi(\vec{r})$, and hence the potential, may depend on the interpolating fields in the interaction region. A further study of this issue for the $NN$ potential is found in Ref.~\cite{IshiiFull}. In the actual simulations, the BS amplitude is obtained through the four-point correlator, \begin{eqnarray} F_{p\Xi^0}(\vec{x}, \vec{y}, t; t_0) &=& \left\langle 0 \left| p_\alpha(\vec{x},t) \Xi^0_\beta(\vec{y},t) \overline{\cal J}_{p \Xi^0}(t_0) \right| 0 \right\rangle \\ &=& \sum_n A_n \left\langle 0 \left| p_\alpha(\vec{x}) \Xi^0_\beta(\vec{y}) \right| n \right\rangle {\rm e}^{-E_n(t-t_0)}. \end{eqnarray} Here $\overline{\cal J}_{p\Xi^0}(t_0)$ is a source term located at $t=t_0$. We utilize a wall source for $\overline{\cal J}_{p\Xi^0}$ in this work in order to enhance the lowest scattering state of the $p\Xi^0$ system. $E_n$ is the energy of the $p\Xi^0$ state $|n\rangle$, and $A_n(t_0)=\langle n|\overline{\cal J}_{p\Xi^0}(t_0)|0\rangle$. We also calculate the two-point correlator, $C(t;t_0)=\sum_{\vec{x}}\langle 0|{\cal B}_\alpha(\vec{x},t) \overline{\cal B}_\alpha(\vec{x}^\prime,t_0)|0\rangle$, for the octet baryons (${\cal B}=N,\Xi,\Lambda,\Sigma$), in order to check whether the various two-baryon (${\Lambda\Lambda}$, ${N\Xi}$, ${\Lambda\Sigma}$ and ${\Sigma\Sigma}$) thresholds are reproduced in the correct order. The interpolating fields for $\Lambda$ and $\Sigma^+$ employed in this work are given by \begin{equation} \Lambda_\alpha(x)= \varepsilon_{abc} \left\{ % \left( d_a(x) C \gamma_5 s_b(x) \right) u_{c\alpha}(x) % + \left( s_a(x) C \gamma_5 u_b(x) \right) d_{c\alpha}(x) % - 2 \left( u_a(x) C \gamma_5 d_b(x) \right) s_{c\alpha}(x) % \right\}, \end{equation} \begin{equation} \Sigma^+_\beta(y)= - \varepsilon_{abc} \left( u_a(y) C \gamma_5 s_b(y) \right) u_{c\beta}(y). \end{equation} \section{Numerical calculation} We use the standard Wilson gauge action at gauge coupling $\beta=5.7$ on a $32^3\times 32$ lattice together with the standard Wilson quark action. See Ref.~\cite{IshiiFull} for details. The hopping parameter $\kappa_{ud}=0.1678$ is chosen for the $u$ and $d$ quarks, which corresponds to $m_\pi\simeq 0.37$~GeV, $m_\rho\simeq0.81$~GeV, and $m_N\simeq 1.16$~GeV.
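To make the threshold check concrete: a baryon mass is conventionally read off from the large-time behavior of the two-point correlator, $C(t;t_0)\sim A\,{\rm e}^{-m(t-t_0)}$, through an effective-mass plateau. A minimal sketch with synthetic data follows; the correlator values and the lattice spacing are hypothetical placeholders (the spacing anticipates the value determined in the next section), not our measured data.
\begin{verbatim}
import numpy as np

a     = 0.1420       # assumed lattice spacing, fm
hbarc = 197.327      # MeV fm

# Synthetic two-point correlator C(t) = A exp(-m a t) for a baryon of
# mass ~1164 MeV; in practice C(t) comes from the lattice measurement.
m_lat = 1164.0 * a / hbarc          # mass in lattice units
t = np.arange(32)
C = 1.7 * np.exp(-m_lat * t)

# Effective mass m_eff(t) = log[C(t)/C(t+1)]; a plateau in t signals
# ground-state saturation, the same criterion used later for the potential.
m_eff = np.log(C[:-1] / C[1:])
print("m_eff (MeV):", m_eff[:4] * hbarc / a)
\end{verbatim}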
In order to determine the hopping parameter corresponding to the strange quark mass ($\kappa_s$), we first measure the correlators of the pseudoscalar and vector mesons using the interpolating fields \begin{eqnarray} {\cal M}_{\rm ps}(x) &=& \overline{q}_{1,c\alpha}(x) \gamma_{5,\alpha\beta} q_{2,c\beta}(x) \qquad \mbox{for the pseudoscalar meson}, \\ {\cal M}_{{\rm v},k}(x) &=& \overline{q}_{1,c\alpha}(x) \gamma_{k,\alpha\beta} q_{2,c\beta}(x) \qquad \mbox{for the vector meson}, \end{eqnarray} applying six sets of hopping parameters: three sets with $\kappa_{1}=\kappa_2$ ($\pi$ and $\rho$ mesons), taking one value from $\{0.1678, 0.1665, 0.1640\}$, and another three sets with $\kappa_{1}>\kappa_2$ ($K$ and $K^\ast$ mesons), taking two different values from $\{0.1678, 0.1665, 0.1640\}$. Assuming the following functional forms for the pseudoscalar meson mass squared and the vector meson mass, \begin{eqnarray} && (m_{ps}a)^2 = {B\over 2} \left( {1\over \kappa_1} - {1\over \kappa_c} \right) + {B\over 2}\left( {1\over \kappa_2} - {1\over \kappa_c} \right), \\ && (m_{v}a) = C + {D\over 2} \left( {1\over \kappa_1} - {1\over \kappa_c} \right) + {D\over 2}\left( {1\over \kappa_2} - {1\over \kappa_c} \right), \end{eqnarray} we obtain the critical hopping parameter $\kappa_c=0.1693$ and the physical parameter $\kappa_{phys}=0.1691$ from $(m_\pi a/m_\rho a)=(135/770)$. The lattice spacing is determined to be $a=0.1420$ fm from the physical $\rho$ meson mass. The hopping parameter corresponding to the strange quark mass is determined to be $\kappa_s=0.1643$ from the physical $K$ meson mass (494 MeV). \section{Results and Discussion} \begin{table}[b] \centering \leavevmode \begin{tabular}{cccccccc} \hline $m_\pi$ & $m_\rho$ & $m_K$ & $m_{K^\ast}$ & $m_p$ & $m_{\Xi^0}$ & $m_\Lambda$ & $m_{\Sigma^+}$ \\ \hline 367(1) & 811(4) & 552.6(5) & 882(2) & 1164(7) & 1379(6) & 1263(6) & 1312(6) \\ \hline \end{tabular} \caption{Hadron masses, in units of MeV, measured in the present lattice QCD simulation. The number in parentheses shows the error in the last digit.} \label{masses} \end{table} Table~\ref{masses} lists the hadron masses measured in the present lattice QCD simulation. A total of 1283 gauge configurations are used to calculate the hadron masses and wave functions. (17 exceptional configurations are excluded from the total of 1300 gauge configurations.) We note that the present results for the baryon masses reproduce the correct ordering of the two-baryon threshold energies in the strangeness $S=-2$ sector: $E_{th}(\Lambda\Lambda)=2525(11)$~MeV (this channel is not allowed in the present case because of isospin conservation), $E_{th}(N\Xi)=2544(12)$~MeV, $E_{th}(\Lambda\Sigma)=2575(11)$~MeV, and $E_{th}(\Sigma\Sigma)=2624(11)$~MeV. This guarantees the desired asymptotic behavior of the wave function. \begin{figure}[t] \centering \leavevmode \includegraphics[width=.8\textwidth]{tmp_wave_11_.eps} \caption{The radial wave function of $p\Xi^0$, in the $^1S_0$ (circles) and $^3S_1$ (triangles) channels, obtained from lattice QCD at $t-t_0=6$. } \label{wave} \end{figure} Figure~\ref{wave} shows the wave function obtained at the time slice $t-t_0=6$. The $^1S_0$ ($^3S_1$) channel is plotted with circles (triangles), normalized at the spatial boundary $\vec{r}=(32/2,0,0)$. All the data are taken into account for $r\lesssim 0.7$ fm, while only the data on the $x$-, $y$- and $z$-axes and their nearest neighbors are used for the outer region.
As seen in the figure, the wave functions are suppressed at short distances, and a slight enhancement is found in the medium-range region for both the $^1S_0$ and $^3S_1$ channels. There is a sizable difference between the $^1S_0$ and $^3S_1$ channels, particularly in the short-distance suppression, suggesting that the $^1S_0$ channel has the stronger repulsive core. \begin{figure} \centering \leavevmode \includegraphics[width=.9\textwidth]{tmp_pot_11_.eps} \caption{The effective central potential for $p\Xi^0$, in the $^1S_0$ (circles) and $^3S_1$ (triangles) channels, obtained from the wave function at time slice $t-t_0=6$. The inset shows an enlargement. } \label{pot} \end{figure} Figure~\ref{pot} shows the (effective) central potentials for $p\Xi^0$ in the $^1S_0$ and $^3S_1$ channels. These results are still preliminary, since the potentials are obtained by assuming $E=0$: the energy should be determined by fitting the asymptotic behavior of the wave function with the Green's function $G(\vec{r},E)$, which is a solution of the Helmholtz equation on the lattice\cite{Aoki,Luscher}. A preliminary calculation suggests that $E$ would take small negative values in both the $^1S_0$ and $^3S_1$ channels, similar to the case of the $NN$ potential\cite{Ishii,IshiiProc}. In order to examine the ground state saturation of the present results, we plot in Fig.~\ref{pots} the time-slice dependence of the potential in both the $^1S_0$ (left) and $^3S_1$ (right) channels at several radial distances: $r=0.14$, $0.20$, $0.71$, $1.42$, and $2.27$ fm. We can see that the saturation is achieved for $t-t_0\ge 6$ within errors. \begin{figure}[t] \centering \leavevmode \begin{minipage}[t]{0.49\textwidth} \includegraphics[width=.98\textwidth]{tmp_pot_11_upper_1S0_.eps} \includegraphics[width=.98\textwidth]{tmp_pot_11_lower_1S0_.eps} \end{minipage} \hfill \begin{minipage}[t]{0.49\textwidth} \includegraphics[width=.98\textwidth]{tmp_pot_11_upper_3S1_.eps} \includegraphics[width=.98\textwidth]{tmp_pot_11_lower_3S1_.eps} \end{minipage} \caption{The time-slice dependence of the potential in the $^1S_0$ (left) and $^3S_1$ (right) channels for several radial distances $r$. } \label{pots} \end{figure} The present work is a first step toward the $YN$ and $YY$ potentials from lattice QCD simulations. Systematic studies of the various channels such as $\Lambda N$, $\Sigma N$, $\Lambda\Lambda$, and so on, are all interesting and important, because they are intimately related not only to the structure of hypernuclei but also to the internal structure of neutron stars. We will present such studies in the near future. \acknowledgments The lattice QCD Monte Carlo calculation has been done on the IBM Blue Gene/L computer at KEK. H.~N. is supported by the Special Postdoctoral Researchers Program at RIKEN. This research was partly supported by a Grant-in-Aid for Young Scientists (B) (No.~17740174) from the Japan Society for the Promotion of Science (JSPS), and by the Ministry of Education, Science, Sports and Culture, Grants-in-Aid (Nos.~13135204, 15540251, 15540254, 18540253, 19540261).
\section{Introduction} Time-dependent backgrounds in string theory are hard to analyze\cite{Liu:2002ft}. Perturbative string theory breaks down in some spacetime regions due to a large string coupling, and it appears that a fully nonperturbative formulation of string theory is required. One clean example, the lightlike linear dilaton theory, was proposed in \cite{Craps:2005wd}. On the other hand, there have been some interesting developments which emphasize the role of perturbative string theory in the analysis of time-dependent backgrounds\cite{McGreevy:2005ci, Horava:2007yh}. But a complete understanding of time-dependent backgrounds is still out of reach in string theory. It turns out that many interesting cosmological solutions have broken Lorentz symmetry, and it is interesting to consider these solutions with their global symmetry manifest. Furthermore, fundamental issues related to time, especially to ``emergent time'', are not well understood (see, {\it e.g.,}\cite{Seiberg:2006wf}). Thus it is worthwhile to consider alternative approaches, which can shed light on time-dependent backgrounds and on fundamental issues of time. Recently a bosonic string theory with manifest Galilean symmetry in target space was constructed in an elementary fashion\cite{Kim:2007hb}, motivated by earlier works\cite{Gomis:2000bd, Danielsson:2000gi, Danielsson:2000mu}. These non-relativistic string theories clearly treat time differently than relativistic string theory does. In non-relativistic string theories, time in target space is described by the first-order nonunitary $\beta\gamma$ CFT, while the second-order $X^0$ CFT plays the role of time in the relativistic theory. Thus we can hope to obtain some insight into the issues of time-dependent backgrounds in string theory from this very different approach. As we mention in the final section of this paper, there are some intriguing pieces of evidence that these non-relativistic string theories can be connected to known time-dependent backgrounds in string theory. This possibility opens up a new framework for addressing the issues related to time and to time-dependent string solutions. With these motivations, we briefly review the construction of the bosonic non-relativistic string theory, which has a manifest Galilean symmetry in target space. Compared to earlier works, the theory does not assume a compact coordinate and has a simpler action, a $\beta\gamma$ CFT in addition to the usual bosonic $X$ CFTs. The first-order $\beta\gamma$ CFT is directly related to time and energy in target space. Time in target space is parametrized by a one-parameter family of selection sectors and is explicitly realized through the generalized Galilean boost symmetry of the action. We quantize the theory in an elementary fashion, which reveals many interesting features. The spectrum is very similar to that of the relativistic bosonic string theory, except for the overall motion of the string, which is governed by a non-relativistic energy dispersion relation. The ground state has the energy \begin{equation} E = \frac{1}{p p'}\left(\frac{\alpha'}{4} k^ik_i - 1\right)\ , \end{equation} where $p$ and $p'$ are the parameters which specify the selection sector and the ground state vertex operator, respectively, and the $k^i$ are the transverse momenta. The particle corresponding to the ground state is still ``tachyonic'' because its energy is negative in the range $\frac{\alpha'}{4} k^ik_i < 1$. Thus it is desirable to remove this state from the spectrum.
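To make the non-relativistic nature of this dispersion relation explicit, we may rewrite it (a simple rearrangement of the formula above, not an additional result) as
\begin{equation}
E = \frac{1}{pp'}\left(\frac{\alpha'}{4} k^ik_i - 1\right)
  = \frac{k^ik_i}{2m_{\rm eff}} - \frac{1}{pp'}\ ,
\qquad m_{\rm eff} \equiv \frac{2pp'}{\alpha'}\ ,
\end{equation}
so the overall motion is Galilean, with an effective mass set by the selection sector parameters, and the energy is negative precisely when $\frac{\alpha'}{4} k^ik_i < 1$.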
The first excited state has $24$ degrees of freedom, which transform into each other under $SO(24)$ rotations. The world sheet constraint algebra imposes strong restrictions on the spectrum of string theories. We can enlarge the world sheet constraint algebra by adding supercurrents to construct non-relativistic superstring theories. We start with the non-relativistic superstring action in terms of component fields in the critical case, which reveals an interesting simplification in the fermionic sector. The fermionic sector can be rewritten in the same form as in the relativistic superstring theory by a simple transformation. The rest of the quantization is very similar to that of the relativistic superstring theory, except for a different global symmetry structure. We explicitly construct the vertex operators using the bosonization technique, then we quantize the theory and check modular invariance. We encounter a non-relativistic analogue of the Dirac equation in the ground state of the $R$ sector. By solving this equation we show that the fermionic sector has eight physical degrees of freedom which transform in the spinor representation ${\bf 8}$ of $SO(8)$. But there is one clear difference: the fermions in this theory are non-chiral. We contrast this with the relativistic case. This is done in section 2. In section 3, we consider the ``noncritical'' version of non-relativistic superstring theories. We present the superspace formulation of the new first-order matter ${\bf \Sigma\Gamma}$ CFT in detail. There exists an infinite range of possible string theories for general conformal weights of the ${\bf \Sigma\Gamma}$ CFT. There are two categories among the noncritical theories, distinguished by the conformal weight of the $\beta\gamma$ CFT: those with integer weight and those with half-integer conformal weight. The former case is similar to the case we quantize in this paper. The latter case seems more exotic, and it is expected to give us a rather different view of the geometric interpretation of target space. Using the world sheet constraint algebra, we construct all possible string theories with extended supersymmetry in section 4. The bosonic and supersymmetric non-relativistic string cases are presented there, and we comment on some immediate observations. We conclude in section 5. In section 6, we mention possible intriguing applications of this non-relativistic string theory to time-dependent string backgrounds such as the lightlike linear dilaton theory. \section{Critical Non-Relativistic Supersymmetric String} \subsection{New Matter $\beta\gamma$ CFT and $bc$ CFT} We start with the full non-relativistic superstring action for the component fields in the conformal gauge \begin{eqnarray} S &=& \int \frac{d^2z}{2\pi} \left( \beta \bar{\partial} \gamma + \bar{\beta} \partial \bar{\gamma} + \frac{1}{\alpha'} \partial X^i \bar{\partial} X_i + b_g \bar{\partial} c_g + \bar{b_g} \partial \bar{c_g}\right) \nonumber \\ &+& \int \frac{d^2z}{2\pi} \left( b \bar{\partial} c + \bar{b} \partial \bar{c} + \frac{1}{2} \left( \psi^i \bar{\partial} \psi_i + \bar{\psi}^i \partial \bar{\psi}_i \right) + \beta_g \bar{\partial} \gamma_g + \bar{\beta_g} \partial \bar{\gamma_g}\right) \ , \label{fermionicaction1} \end{eqnarray} where $i$ runs from 2 to 9 for $X^i$ and $\psi^i$ in the critical non-relativistic superstring theory. The commuting matter $\beta\gamma$ CFT has weights $h(\beta) = 1$ and $h(\gamma) = 0$, and central charge $c_{\beta\gamma} = 2$.
The anticommuting matter $bc$ CFT, whose central charge is $c_{bc} = 1$, has weights $h(b) = 1/2$ and $h(c) = 1/2$. In the conventional notation of the superstring, the total central charge of the matter sector is $\hat{c}^{\bf m} = \frac{2}{3} c^{\bf m} = \frac{2}{3} (3 + \frac{3}{2} D)$, which cancels the central charge of the ghost sector, $\hat{c}^{\bf gh} = \frac{2}{3} c^{\bf gh} = \frac{2}{3} (-26 + 11) = -10$. Thus this theory is anomaly free if $D = 8$. This is indicated above by the spatial index $i$, which runs from 2 to 9. We consider the cases with general conformal weights of the matter $\beta\gamma$ and $bc$ CFTs in the next section. The case with conformal weight of $\beta$ equal to $1$ is rather special, and we will call it the ``critical'' case, as in the bosonic non-relativistic theory. We briefly comment on the new matter $\beta\gamma$ and $bc$ CFTs. Their OPEs are \begin{eqnarray} \gamma(z_1) \beta(z_2)\hspace{0.1 in} \sim & \frac{1}{z_{12}} &\sim \hspace{0.1 in} - \beta(z_1) \gamma(z_2) \ , \\ b(z_1) c(z_2) \hspace{0.1 in} \sim & \frac{1}{z_{12}} &\sim \hspace{0.1 in} c(z_1) b(z_2) \ . \end{eqnarray} The bosonic and fermionic energy momentum tensors and their mode expansions are \begin{eqnarray} T_{b}^{\beta\gamma b c} = - (\partial \gamma) \beta -\frac{1}{2} c (\partial b) + \frac{1}{2}(\partial c) b &=& \sum_{m \in {\bf Z}} \frac{L_m}{z^{m+2}} \ , \label{bosonicenergymomentumtensor111}\\ T_{f}^{\beta\gamma b c} = \frac{1}{2} c \beta - \frac{1}{2} (\partial \gamma) b &=& \sum_{r \in {\bf Z} + \nu} \frac{G_r}{2 \cdot z^{r+3/2}} \ . \label{bosonicenergymomentumtensor222} \end{eqnarray} As is well known, there are two possible sectors for fields with half-integer conformal weight: the $\nu = 0$ and $\nu = 1/2$ cases, corresponding to the $R$ and $NS$ sectors, respectively. We can also find the mode expansions and hermiticity properties of the fields \begin{eqnarray} \gamma(z) = \sum_{n \in {\bf Z}} \frac{\gamma_n}{z^{n}} \ , \hspace{0.3 in} \gamma_n^{\dagger} = \gamma_{-n} \ , && \beta(z) = \sum_{n \in {\bf Z}} \frac{\beta_n}{z^{n+ 1}} \ , \hspace{0.3 in} \beta_{n}^{\dagger} = - \beta_{-n} \ , \\ c(z) = \sum_{r \in {\bf Z} + \nu} \frac{c_r}{z^{r+1/2}} \ , \hspace{0.2 in} c_r^{\dagger} = c_{-r} \ , && b(z) = \sum_{r \in {\bf Z} + \nu} \frac{b_r}{z^{r+ 1/2}} \ , \hspace{0.2 in} b_{r}^{\dagger} = b_{-r} \ . \end{eqnarray} And the mode expansions of the energy momentum tensors are \begin{eqnarray} && L_m^{\beta\gamma b c} = \sum_{n \in {\bf Z}} n \beta_{m-n}\gamma_n + \sum_{s \in {\bf Z} +\nu} (s - m/2) b_{m-s} c_s + a\delta_{m,0} \ , \\ && G_r^{\beta\gamma b c} = \sum_{m \in {\bf Z}} \left( c_{r-m} \beta_m + m \gamma_m b_{r-m} \right) \ . \end{eqnarray} There is a normal ordering constant for $L_0$ in each sector \begin{equation} a_R^{\beta\gamma b c} = \frac{1}{8} \ , \hspace{0.3 in} a_{NS}^{\beta\gamma b c} = 0 \ . \end{equation} This comes only from the new matter sector, the $\beta\gamma$ and $bc$ CFTs, and is one part of the total normal ordering constant.\footnote {It is important to observe that the total normal ordering constant of the non-relativistic superstring theory is the same as that of the relativistic theory, \begin{equation} a_R = 0, \hspace{0.3 in} a_{NS} = -\frac{1}{2} \ , \nonumber \end{equation} because there are additional contributions from the $X^i$ CFTs and the ghost CFT. } \subsection{Fermionic Sector and its Symmetry} The fermionic $bc$ CFT is a new ingredient of this non-relativistic superstring theory.
There are some immediate observations which are rather interesting. As we briefly mentioned at the beginning of this section, the conformal weights of the fields $b$, $c$ and all the other fermionic fields $\psi^i$ are equal, with common value $1/2$. From this observation, we can consider the transformation \begin{equation} c = \frac{1}{\sqrt{2}}(\psi^1 - \psi^0) \ , \qquad b = \frac{1}{\sqrt{2}}(\psi^1 + \psi^0) \ . \end{equation} Combining these fields with the other fermionic fields $\psi^i$, we can see that the action of the fermionic sector is exactly the same as that of the relativistic theory, \begin{equation} S_F = \int \frac{d^2z}{2\pi} \left( b \bar{\partial} c + \bar{b} \partial \bar{c} + \frac{1}{2} \left( \psi^i \bar{\partial} \psi_i + \bar{\psi}^i \partial \bar{\psi}_i \right) \right) = \int \frac{d^2z}{4\pi} \left( \psi^\mu \bar{\partial} \psi_\mu + \bar{\psi}^\mu \partial \bar{\psi}_\mu \right) \ , \end{equation} where $\mu$ runs from 0 to 9. One might naively think that there is $SO(9,1)$ invariance in the fermionic sector of this non-relativistic superstring theory. But as is obvious from the original action, there is no symmetry transformation which connects the fields $\psi^0, \psi^1$ to the other, transverse fields $\psi^i$. The symmetry group of the fermionic sector consists of the $SO(8)$ rotations among the $\psi^i$ as well as a one-parameter family of superconformal symmetry related to the rescaling $\beta \rightarrow x \beta$ and $\gamma \rightarrow \gamma /x$.\footnote{We realized that this symmetry exists during discussions with Professor Ori Ganor and with Professor Ashvin Vishwanath. We thank them for their questions and comments.} The latter is actually realized as the relative rescaling between $k^\gamma$ and $p'$ in the bosonic string case, related by the rescaling $k^\gamma \rightarrow x k^\gamma$ and $p' \rightarrow p' /x$. We can denote this zero-dimensional conformal symmetry by ``$SO(1,1)$'', so the symmetry group turns out to be $SO(1,1) \times SO(8)$. This symmetry group becomes important when we consider a non-relativistic analogue of the Dirac equation. Even though we know there is no relativistic $SO(9,1)$ symmetry, we still use the relativistic notation to keep the expressions simple and to draw intuition from the relativistic results. \subsection{Vertex Operators} Most of the vertex operators for this theory are already known. The vertex operators of the $X^i,\psi^i$ CFTs and of the superconformal ghost sector with the $b_g c_g$ and $\beta_g \gamma_g$ CFTs are well understood and can be found in many places (see, {\it e.g.,} \cite{fms, pol, gsw}). The construction of vertex operators for the bosonic $\beta\gamma$ CFT is considered in \cite{Kim:2007hb, Gomis:2000bd}. Thus let us concentrate on the vertex operators of the fermionic $bc$ CFT. The fermionic matter sector, in terms of the fermionic fields $\psi^\mu$, $\mu = 0, \cdots, 9$, has well understood vertex operators in relativistic string theory\cite{fms, gsw, pol}, so we can borrow those results, with caution. In this section we will use both notations, $\psi^0, \psi^1$ and $bc$. For the Neveu-Schwarz ($NS$) sector, there is no $r=0$ mode and we can define the ground state to be annihilated by all $r>0$ modes, \begin{equation} \psi_r^{\mu} ~ | 0; k^\gamma, k^{\bar{\gamma}}, \vec{k} \rangle_{NS} = 0 \ , \hspace{0.3 in} r > 0 \ . \end{equation} This ground state is ``tachyonic''.
The vertex operator corresponding to the $NS$ ground state is \begin{eqnarray} &&V_{NS, 0}(k^\gamma, k^{\bar{\gamma}}, k^i; z, \bar{z}) = e^{-\varphi} V_{0}(k^\gamma, k^{\bar{\gamma}}, k^i; z, \bar{z}) \ , \\ &&V_{0}(k^\gamma, k^{\bar{\gamma}}, k^i; z, \bar{z}) = g: e^{ik^\gamma \gamma + ik^{\bar{\gamma}} \bar{\gamma} - ip' \int^{z} \beta - iq' \int^{\bar{z}} \bar{\beta} + ik^i \cdot X_i} : \ , \label{groundvertex} \end{eqnarray} where the field $\varphi$ comes from the bosonization of the superconformal ghost fields and has nothing to do with the selection parameter $\phi$. The bosonic ground state vertex operator $V_0$ was considered in \cite{Gomis:2000bd, Kim:2007hb}, with $k^\gamma$, $k^{\bar{\gamma}}$ and $k^i$ representing the overall continuous momenta along the coordinates $\gamma$, $\bar{\gamma}$ and $X^i$, respectively. The first excited state in the $NS$ sector is a linear combination of the fermionic excitations $b_{-1/2}$, $c_{-1/2}$ and $\psi^i_{-1/2}$: \begin{equation} | e ; k^\gamma, k^{\bar{\gamma}}, \vec{k}\rangle_{NS} = \left( e_c c_{-1/2} + e_b b_{-1/2} + e_i \psi_{-1/2}^{i} \right) | 0; k^\gamma, k^{\bar{\gamma}}, \vec{k} \rangle_{NS} \ . \end{equation} We use two different notations for the fermionic sector: (i) $e_\mu\psi^\mu$ with $\mu = 0, \cdots , 9$ and (ii) $e_M\psi^M_{-1/2} = \left( e_c c_{-1/2} + e_b b_{-1/2} + e_i \psi_{-1/2}^{i} \right) $ with $i = 2, \cdots, 9$. The vertex operator corresponding to the first excited state, $V_{NS, 1}(k^\gamma, k^{\bar{\gamma}}, k^i; z, \bar{z})$, is \begin{eqnarray} e^{-\varphi} \psi^M V_{0}(k^\gamma, k^{\bar{\gamma}}, k^i; z, \bar{z}) & \hbox{or} & e^{-\varphi} \psi^\mu V_{0}(k^\gamma, k^{\bar{\gamma}}, k^i; z, \bar{z}) \ . \end{eqnarray} The modes $\psi_r$ with $r<0$ act as raising operators, and each mode can be excited only once. The Ramond ($R$) sector ground state is degenerate due to the zero modes $\psi_0^{\mu}$ (or $\psi_0^{M}$). We define the $R$ ground states to be those annihilated by all $r>0$ modes. The zero modes satisfy the Dirac gamma matrix algebra with $\Gamma^{\mu} \cong \sqrt{2} \psi_0^{\mu}$. Since $\{\psi_r^{\mu}, \psi_0^{\nu} \} = 0$ for $r>0$, the zero modes $\psi_0^{\mu}$ take ground states into ground states. Thus the ground states form a representation of the gamma matrix algebra. For the critical case with ``10 dimensions'' we can represent this as $| {\bf s} \rangle =| s_0\rangle \times | \vec{s}\rangle= | s_0\rangle \times | s_1, s_2, s_3, s_4 \rangle$ with $s_0, s_a = \pm 1/2$. Here we separate $s_0$ from the others to indicate that there is no symmetry transformation between $s_0$ and $\vec{s}$. It is convenient to combine two fermions, $\psi^2$ and $\psi^3$ for example, into a complex pair, $\psi \equiv \frac{1}{\sqrt{2}}(\psi^2 + i \psi^3)$ and $\psi^{\dagger} \equiv \frac{1}{\sqrt{2}}(\psi^2 - i \psi^3)$,\footnote{ Note that we use a different notation for the complex field compared to \cite{pol}.} in order to consider the more general periodicity condition \begin{equation} \psi (w + 2\pi) = e^{2\pi i \nu} \psi(w) \ , \end{equation} for any real $\nu$. Here we concentrate on the two cases $\nu = 0$ and $\nu = 1/2$. The mode expansions are \begin{equation} \psi(z) = \sum_{r \in {\bf Z} +\nu} \frac{\psi_r}{z^{r+1/2}}, \qquad \psi^{\dagger}(z) = \sum_{s \in {\bf Z} -\nu} \frac{\psi^{\dagger}_s}{z^{s+1/2}} \ , \label{complexfermions11} \end{equation} with the anticommutation relation $\{ \psi_r, \psi^{\dagger}_s \} = \delta_{r, -s}$.
We can define a reference state $|0 \rangle_{\nu}$ by \begin{equation} \psi_{n+\nu} |0 \rangle_{\nu} = \psi^{\dagger}_{n + 1 -\nu} |0 \rangle_{\nu} = 0 \ , \qquad n = 0, 1, \cdots \ . \end{equation} The first nonzero terms in the Laurent expansions correspond to the indices $r = -1 +\nu$ and $s = - \nu$, and these conditions uniquely identify the state $|0 \rangle_{\nu}$. Similarly, for the corresponding vertex operator ${\cal A}_\nu$, the OPEs \begin{equation} \psi(z) {\cal A}_\nu (0) = {\cal O}(z^{-\nu + 1/2}), \qquad \psi^{\dagger}(z) {\cal A}_\nu (0) = {\cal O}(z^{\nu - 1/2}) \ \end{equation} determine the vertex operator as \begin{equation} {\cal A}_\nu \simeq e^{i(-\nu + 1/2 )H} \ . \label{bosonizedvertex} \end{equation} This vertex operator has weight $h = \frac{1}{2} (\nu-\frac{1}{2})^2$. The boundary conditions are the same for $\nu$ and $\nu + 1$, but the reference states are not. The reference state is a ground state only for $ 0 \leq \nu \leq 1$. For the $R$ sector with $\nu = 0$, there are two degenerate ground states, which can be identified as $|s \rangle \cong e^{is H}$ with $s = 1/2$ and $s = -1/2$. It is convenient to use bosonization to take care of the branch cuts which arise for fields with half-integer conformal weight. The explicit bosonization expressions are \begin{eqnarray} \frac{1}{\sqrt{2}}(\psi^1 - \psi^0) = c \cong e^{-iH^0} \ , &&\frac{1}{\sqrt{2}}(\psi^1 + \psi^0) = b \cong e^{iH^0} \ , \\ \frac{1}{\sqrt{2}}(\psi^{2a} \pm i \psi^{2a+1} ) \cong e^{\pm i H^a} \ , &&a = 1, \cdots, 4 \ , \end{eqnarray} where the $H(z)$ fields are the holomorphic parts of the corresponding scalar fields. Then the vertex operator $\Theta_{\bf s}$ corresponding to an $R$ sector ground state $|{\bf s} \rangle = | s_0, \vec{s} \rangle $ is \begin{equation} \Theta_{\bf s} \cong \exp \Big[ i s_0 H^0\Big]\times \exp \Big[ i\sum_{a=1}^{4} s_a H^a\Big]. \end{equation} This spin field operator produces a branch cut in $\psi^{\mu}$ and needs to be combined with an appropriate antiholomorphic vertex operator. Thus the $R$ ground state vertex operators are \begin{equation} V_{R, 0}(s_0, \vec{s}; k^\gamma, k^{\bar{\gamma}}, k^i; z, \bar{z}) = e^{-\varphi/2} \Theta_{\bf s} V_{0}(k^\gamma, k^{\bar{\gamma}}, k^i; z, \bar{z}) \ , \end{equation} where $\varphi$ is related to the bosonization of the superconformal ghost fields and $V_0$ is given in equation (\ref{groundvertex}). Now we are ready to quantize the theory. \subsection{Quantization} In the old covariant quantization procedure, we ignore the ghost excitations and concentrate on the matter sector, which consists of the $X^i$, $\psi^i$, $\beta\gamma$ and $bc$ CFTs. We impose the physical state conditions \begin{eqnarray} \left(L_{n}^{\bf m} + a \delta_{n,0} \right) |\psi\rangle = 0, \hspace{0.1 in} n \geq 0 \ , && \hspace{0.2 in} G_r^{\bf m} | \psi \rangle = 0, \hspace{0.1 in} r \geq 0 \ , \end{eqnarray} where `$\bf m$' denotes the matter sector. We can construct spurious states, which are orthogonal to all physical states, such as \begin{eqnarray} L_{n}^{\bf m} | \chi\rangle \ , \hspace{0.2 in} n < 0 \ , &\hspace{0.2 in}& \hspace{0.2 in} G_r^{\bf m} | \chi \rangle \ , \hspace{0.2 in} r < 0 \ . \end{eqnarray} These states satisfy $\langle \psi | L_n^{\bf m} |\chi \rangle = 0$ and $ \langle \psi |G_r^{\bf m} | \chi \rangle = 0 $ (with $n, r < 0$) for any physical state $|\psi\rangle$. If a spurious state itself satisfies the physical state conditions, we call it a null state. We need to impose equivalence relations to obtain the physical Hilbert space.
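The orthogonality is immediate from the hermiticity properties listed earlier. As a one-line check (a standard argument, not specific to the present theory), for a spurious state built from $G_r^{\bf m}$ with $r<0$ and any physical state $|\psi\rangle$,
\begin{equation}
\langle \psi | G_r^{\bf m} | \chi \rangle
= \left( (G_r^{\bf m})^{\dagger} |\psi\rangle \right)^{\dagger} |\chi\rangle
\propto \left( G_{-r}^{\bf m} |\psi\rangle \right)^{\dagger} |\chi\rangle = 0 \ ,
\end{equation}
since $(G_r^{\bf m})^{\dagger}$ is proportional to $G_{-r}^{\bf m}$ with $-r>0$, which annihilates $|\psi\rangle$; the same argument applies to $L_n^{\bf m}$ with $n<0$.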
\bigskip {\it $NS$ sector} The $NS$ sector with $\nu = 1/2$ is simpler, so we consider it first. For the ground state (with the simplified notation $|0; k\rangle_{NS}$ instead of $|0; k^\gamma, k^{\bar{\gamma}}, \vec{k} \rangle_{NS}$), the physical state condition $\left( L_0^{\bf m} -\frac{1}{2} \right) |0; k\rangle_{NS} = 0$ gives the mass shell equation \begin{equation} \frac{\alpha'}{4} \vec{k}^2 - k^\gamma p' -\frac{1}{2} = 0 \ . \end{equation} The other physical state conditions, $L_n^{\bf m} |0; k\rangle_{NS} = 0$ for $ n > 0$ and $G_{r}^{\bf m} |0; k\rangle_{NS} = 0$ for $ r \geq 1/2$, are trivial. Thus there is one equivalence class, corresponding to a scalar particle. The first excited level (with the simplified notation $|e; k \rangle_{NS}$) contains 10 states \begin{equation} |e; k \rangle_{NS} = \left(e_c c_{-1/2} + e_b b_{-1/2} + e_i \psi_{-1/2}^i \right) |0; k \rangle_{NS}. \end{equation} The nontrivial physical state conditions, $\left( L_0^{\bf m} -\frac{1}{2} \right) |e; k \rangle_{NS}=0$ and $G_{1/2}^{\bf m} |e; k \rangle_{NS}=0$, give \begin{eqnarray} \frac{\alpha'}{4} \vec{k}^2 - k^\gamma p' = 0 \ , \label{massshellforns1} \\ - p' e_c + k^\gamma e_b + (\alpha'/2)^{1/2} k^i e_i = 0 \ , \end{eqnarray} while the spurious state \begin{equation} G_{-1/2}^{\bf m} |0; k \rangle_{NS} = \left( (\alpha'/2)^{1/2} k^i \psi_{i,-1/2} + k^\gamma c_{-1/2} - p' b_{-1/2}\right) |0; k \rangle_{NS} \end{equation} is physical and null. Thus there is an equivalence relation \begin{equation} \left(e_c, \hspace{0.1 in} e_b, \hspace{0.1 in}e_i \right) \cong \left(e_c + k^\gamma,\hspace{0.1 in} e_b -p' ,\hspace{0.1 in} e_i+ (\alpha'/2)^{1/2} k_i \right) \ . \end{equation} Hence at the first excited level of the $NS$ sector there are only 8 independent degrees of freedom. The global symmetries are the conformal rescaling and the $SO(8)$ rotations, i.e., $SO(1,1) \times SO(8)$, as we pointed out above. At this stage, these symmetries are manifest in equation (\ref{massshellforns1}). But we showed in previous work\cite{Kim:2007hb} that the energy dispersion relation for the particle corresponding to this level is actually \begin{equation} E = p_t = \frac{1}{pp'} \left(\frac{\alpha'}{4} \vec{k}^2 - 1 \right) \ , \end{equation} where $p$ and $p'$ are the parameters specifying the selection sector and the ground state vertex operator, respectively. Thus non-relativistic particles have the $SO(8)$ symmetry, which is smaller than $SO(1,1) \times SO(8)$: the explicit dependence of the energy on the parameter $p'$ breaks the $SO(1,1)$ scaling symmetry. In particular, at the first excited level of the $NS$ sector, these eight degrees of freedom transform into each other in the vector representation ${\bf 8}_v$ of $SO(8)$, similar to the case of relativistic massless excitations.\footnote{\label{so11spectrumfootnote} There is another way to think about expression (\ref{massshellforns1}). Rather than breaking the $SO(1,1)$ symmetry, we can go to a frame with $k^i = 0 $ for $i = 2, \cdots , 8$ and $k^9 \neq 0$, which is similar to the relativistic treatment and keeps the $SO(1,1) \times SO(7)$ symmetry. For further explanation, see the appendix. } \bigskip {\it $R$ sector} In the $R$ sector, we have the degenerate ground states $|v, u; k \rangle_R = |s_0, \vec{s}; k\rangle_R ~ \left( v_{s_0} \otimes u_{\vec{s}} \right)$, where $v$ and $u$ are ``polarizations'' along $bc$ and $\psi^i$, respectively.
The nontrivial physical conditions are \begin{eqnarray} &&0 = L_0^{\bf m} |v, u; k \rangle_R = \left(\frac{\alpha'}{4} \vec{k}^2 - k^\gamma p' \right) |v, u; k \rangle_R \ , \\ &&0 = G_0^{\bf m} |v, u; k \rangle_R = \left(\left(\frac{\alpha'}{2}\right)^{1/2} k^i \psi_{0, i} + k^\gamma c_0 - p' b_0\right) |v, u; k \rangle_R \ . \label{nrDiraceq1} \end{eqnarray} The first equation is the usual mass shell condition. The second equation is an analogue of the relativistic Dirac equation. One can check that $G_0^2 = L_0$, so the $G_0$ condition implies the mass shell condition. The second equation is particularly important for investigating the difference between the spectrum of the non-relativistic theory and that of the relativistic one. To make things more transparent, we can rewrite the equation in terms of the fields $\psi^0$ and $\psi^1$, which gives \begin{eqnarray} \frac{1}{2^{1/2}} \Big( \alpha'^{1/2} k^i \psi_{0, i} - ( k^\gamma + p' ) \psi_{0, 0} + (k^\gamma - p') \psi_{0, 1} \Big) = 0 \ . \label{nrDiraceq2} \end{eqnarray} This equation takes the same form as the relativistic condition $\left(\frac{\alpha'}{2}\right)^{1/2} k^\mu \psi_{0, \mu} = 0$ if we identify $(\alpha' )^{1/2} k^0 = - k^\gamma - p' $ and $(\alpha' )^{1/2} k^1 = k^\gamma - p' $. With the appropriate signature, we get \begin{equation} k^\mu k_\mu = \frac{\alpha' ~ k^i k_i}{2} - \frac{(k^\gamma + p')^2 }{2} + \frac{(k^\gamma - p')^2 }{2} = \frac{\alpha'}{2} k^i k_i - 2 k^\gamma p' = 0 \ . \end{equation} In particular, there is no further constraint on the vertex operators under the change of fields from $bc$ to $\psi^0, \psi^1$; thus the fermionic sector has the $SO(1,1) \times SO(8)$ symmetry,\footnote{ It is interesting to observe that the one-parameter family of superconformal symmetry ``$SO(1,1)$'' can be transformed into an $SO(1,1)$ Lorentz symmetry.} where there is no connection between $\psi^0, \psi^1$ and the other $\psi^i$. It is interesting to observe that the $SO(1,1)$ contains the boost symmetry and is realized as the rescaling of the relative magnitude of $k^\gamma$ and $p'$, keeping the product $k^\gamma p'$ fixed. We can now consider the non-relativistic Dirac equation with a manifest $SO(8)$ symmetry structure. For spinors of $SO(8)$, we can impose the Majorana condition and the Weyl condition simultaneously, and there are two inequivalent irreducible spinor representations, ${\bf 8}_c$ and ${\bf 8}_s$. The description of Dirac matrices for $SO(8)$ requires a Clifford algebra with eight anticommuting matrices, which are 16-dimensional matrices corresponding to the reducible ${\bf 8}_c + {\bf 8}_s$ representation of $SO(8)$. These matrices can be written in the block form \begin{equation} \gamma^i = \begin{pmatrix} 0 & \gamma^i_{a\dot{a}} \\ \gamma^i_{\dot{b}b} & 0 \end{pmatrix} \ , \end{equation} where the relations $\{\gamma^i , \gamma^j \} = 2 \delta^{ij}$ are satisfied, i.e., $\gamma^i_{a\dot{a}} \gamma^j_{\dot{a}b} + \gamma^j_{a\dot{a}} \gamma^i_{\dot{a}b} = 2\delta^{ij} \delta_{ab}$ with $i,j = 2, \cdots , 9$. $\gamma^i_{\dot{a}a}$ is the transpose of $\gamma^i_{a\dot{a}}$, and these matrices can be expressed in terms of real components.
To apply these matrices to the non-relativistic Dirac equation (\ref{nrDiraceq2}), we can construct the ten-dimensional Dirac matrices $\Gamma^\mu$ explicitly, \begin{equation} \Gamma^0 = \sigma^3 \otimes {\bf 1}_{16} ,\qquad \Gamma^1 = \sigma^1 \otimes {\bf 1}_{16}, \qquad \Gamma^i = i\sigma^2 \otimes \gamma^i, \end{equation} where ${\bf 1}_{16}$ is the $16 \times 16$ identity matrix and $i = 2, \cdots , 9$. All these Gamma matrices are real, so it is possible to impose the Majorana condition on all the spinor fields. Using $\psi_0^\mu = \Gamma^\mu/\sqrt{2}$, we can rewrite equation (\ref{nrDiraceq2}) as $\frac{\alpha'^{1/2}}{2} ~k_\mu \Gamma^\mu = 0$. To go further we can use the basis \begin{equation} v_{s_0} \otimes u_{\vec{s}} = \begin{pmatrix} v_+ \\ v_- \end{pmatrix}_2 \otimes \begin{pmatrix} u^b \\ u^{\dot{a}} \end{pmatrix}_{16} \ . \end{equation} Then we can write the non-relativistic Dirac equation explicitly as \begin{equation} \frac{\sqrt{\alpha'}}{2} \begin{pmatrix} v_+ \\ -v_- \end{pmatrix}_2 \otimes \begin{pmatrix} k_i \gamma^i_{a\dot{a}}u^{\dot{a}} \\k_i \gamma^i_{\dot{b}b} u^{b} \end{pmatrix}_{16} + \begin{pmatrix} k^\gamma v_- \\ -p' v_+ \end{pmatrix}_2 \otimes \begin{pmatrix} u_a \\ u_{\dot{b}} \end{pmatrix}_{16} = 0 \ . \end{equation} To solve it we can go to the basis $v_+ = \sqrt{\frac{k^\gamma}{p'}}~ v_-$.\footnote{ This condition is actually equivalent to using the $SO(1,1)$ symmetry transformation to rescale $k^\gamma = p'$. } Then we have the equations \begin{eqnarray} \frac{\sqrt{\alpha'}}{2} k_i \gamma^i_{a\dot{a}} u^{\dot{a}} + \sqrt{k^\gamma p'} u_a &=& 0 \ , \\ \frac{\sqrt{\alpha'}}{2} k_i \gamma^i_{\dot{b}b} u^{b} + \sqrt{k^\gamma p'} u_{\dot{b}} &=& 0 \ . \end{eqnarray} These equations are very similar to the relativistic Dirac equation presented in \cite{gsw} for a 10-dimensional fermion of definite chirality.\footnote{We thank Professor Petr Ho\v{r}ava for discussions and comments on the non-relativistic Dirac equation and for interesting ideas related to non-relativistic systems.} Thus it is possible to satisfy the non-relativistic Dirac equation with manifest $SO(8)$ symmetry by exploiting the superconformal rescaling symmetry. Furthermore, this equation tells us that the non-relativistic fermions have no chirality, because the two inequivalent irreducible spinor representations ${\bf 8}_c$ and ${\bf 8}_s$ are connected by the non-relativistic Dirac equation.\footnote{ Why, then, are there two inequivalent propagating degrees of freedom ${\bf 8}_s$ and ${\bf 8}_c$ in the relativistic case? These two inequivalent degrees of freedom come from the 10-dimensional Weyl conditions $\Gamma_{11} \lambda = \pm \lambda$, which are not available in the non-relativistic theory. For $k^0 = k^9$, it is possible to impose $s_0 = 1/2$ and there is an ${\bf 8}_s$ spinor. For $k^0 = -k^9$, the other spinor ${\bf 8}_c$ is available. (Both choices $k^0 = \pm k^9$ satisfy $k^\mu k_\mu = 0$.) This does not apply to the non-relativistic theory, because there is no 10-dimensional Weyl condition and the bosonic dispersion relation does not admit two inequivalent choices for the relation between $k^\gamma$ and $p'$. } We will denote this representation by ${\bf 8}$. We can summarize the particle content of the first two levels of the $NS$ sector and the ground state of the $R$ sector in the table.
\bigskip \begin{table}[h] \begin{tabular}{lcccc} \hline \hline sector & $\hspace{0.5 in}$ & $SO(8)$ spin & $\hspace{0.5 in}$ & $-\frac{\alpha'}{4} \vec{k}^2 + k^\gamma p'$ \\ \hline $NS_0$ && ${\bf 1}$ & & $-1/2$ \\ $NS$ && ${\bf 8}_v $ && 0 \\ $R$ && ${\bf 8}$ & & 0 \\ \hline \hline \end{tabular} \caption{Spectrum of the holomorphic sector for the ground state and first excited level of the $NS$ sector and the ground state of the $R$ sector. ${\bf 8}_v$ is the fundamental representation of $SO(8)$ and ${\bf 8}$ is one copy of the spinor representation of $SO(8)$. } \end{table} \bigskip {\it Closed String Spectrum} The closed string spectrum consists of two copies of the above spectrum, one from the holomorphic and one from the antiholomorphic sector. Because of the level matching condition, the $NS_0$ sector can only be combined with the other $NS_0$ sector: $-\frac{\alpha'}{4} \vec{k}^2 + k^\gamma p' = -\frac{\alpha'}{4} \vec{k}^2 + k^{\bar{\gamma}} q' = -1/2$. This is a nondegenerate state of the non-relativistic closed string. This state will be projected out by the requirement of modular invariance, which requires at least one $R$ sector. Now it is rather straightforward to construct the closed string spectrum at the next level, because there is one copy of the vector representation ${\bf 8}_v$ and one copy of the spinor representation ${\bf 8}$ of $SO(8)$. The spinor representation ${\bf 8}$ is nonchiral, and it is expected that the whole theory is nonchiral. We can identify the spinor representation ${\bf 8}$ with one of the two chiral representations ${\bf 8}_c$ or ${\bf 8}_s$ of $SO(8)$. The whole spectrum is then similar to that of the relativistic Type IIB superstring theory, which has the same spinor representations in both the holomorphic and the antiholomorphic sectors. This signals that the theory is modular invariant and consistent even before we actually check the modular invariance. We summarize the ground state and first excited states in the following table. \bigskip \begin{table}[h] \begin{tabular}{lcccccc} \hline \hline sector & $\hspace{0.25 in}$ & $SO(8)$ spin & $\hspace{0.5 in}$ & tensors & $\hspace{0.5 in}$ & dimensions \\ \hline ($NS_0$, $NS_0$) && ${\bf 1} \times {\bf 1}$ && &=& ${\bf 1}$ \\ \hline ($NS$, $NS$) && ${\bf 8}_v \times {\bf 8}_v$ &= & [0] + [2] + (2) &=&${\bf 1} + {\bf 28} + {\bf 35}$ \\ ($NS$, $R$) && ${\bf 8}_v \times {\bf 8}$ & &&=& ${\bf 8} + {\bf 56}$ \\ ($R$, $NS$) && ${\bf 8} \times {\bf 8}_v$ & &&=& ${\bf 8} + {\bf 56}$ \\ ($R$, $R$) && ${\bf 8} \times {\bf 8}$ &=& [0] + [2] + [4] &=& ${\bf 1} + {\bf 28} + {\bf 35}$ \\ \hline \hline \end{tabular} \caption{Closed superstring spectrum for the ground state and the first excited state of the $NS$ sector and the ground state of the $R$ sector. ${\bf 8}_v$ is the fundamental representation and ${\bf 8}$ is one copy of the spinor representation of $SO(8)$. } \end{table} \bigskip \subsection{Partition Function and Modular Invariance} To show that the theory is consistent, we need to check modular invariance. The bosonic part of the modular invariance was already shown in previous work\cite{Kim:2007hb}, so we can concentrate on the fermionic sector. As explained in the previous section, the field content of the non-relativistic superstring theory is the same as that of the relativistic IIB string theory. Thus the modular invariance can be proved in a similar way. For completeness we provide a very brief proof of the modular invariance of the fermionic sector, closely following \cite{pol}.
For the complex fermion $\psi$, we can introduce a general periodicity $\alpha = 1- 2\nu$ with \begin{equation} \psi (\omega + 2\pi) = e^{\pi i (1-\alpha)} \psi (\omega) \ . \end{equation} Then the raising operators can be written as $\psi_{-m + (1-\alpha)/2}$ and $\psi^{\dagger}_{-m + (1+\alpha)/2}$ with positive integer $m$. In the bosonized language of (\ref{bosonizedvertex}), the weight of the vertex operator is $\alpha^2/8$.\footnote{ We can get the same result in the fermionic language, where the normal ordering constant can be calculated by the zero-point mnemonic given in \cite{pol}. } Using this result we can calculate \begin{equation} Tr_\alpha \Big( q^{L_0 - c/24} \Big) = q^{(3 \alpha^2 - 1)/24} \prod_{m=1}^{\infty} \left( 1 + q^{m - (1-\alpha)/2} \right) \left(1 + q^{m - (1+\alpha)/2}\right) \ . \end{equation} To accommodate this general boundary condition, we joined the fermions into complex pairs in (\ref{complexfermions11}). Then a fermion number $Q$ can be defined as $+1$ for $\psi$ and $-1$ for $\psi^{\dagger}$; $Q$ corresponds to the $H$ momentum in the bosonization formula. The ground state carries $Q$ charge $\alpha/2$. Thus we can define the more general trace \begin{eqnarray} Z_\beta^\alpha (\tau) &=& Tr_\alpha \Big( q^{L_0 - c/24} \exp(\pi i \beta Q) \Big) = q^{(3 \alpha^2 - 1)/24} \exp(\pi i \alpha \beta /2) \\ && \times\prod_{m=1}^{\infty} \left( 1 + \exp(\pi i \beta) q^{m - (1-\alpha)/2} \right) \left(1 + \exp(-\pi i \beta) q^{m - (1+\alpha)/2}\right) \\ &=& \frac{1}{\eta(\tau)} \vartheta \left[ \begin{array}{cl} \alpha/2 \\ \beta/2 \end{array} \right](0, \tau) \ . \end{eqnarray} Here $\alpha$ and $\beta$ can take the values $0$ and $1$, giving the relevant traces $Z_0^0, Z_0^1, Z_1^0$ and $Z_1^1$. The holomorphic part of the partition function for the fermionic sector is \begin{equation} Z_{\psi}(\tau) = \frac{1}{2} \Big[ Z_0^0 (\tau)^4 - Z_1^0(\tau)^4 - Z_0^1 (\tau)^4 - Z_1^1 (\tau)^4 \Big] \ , \end{equation} where the first $-$ sign comes from the ghost contribution and the last two $-$ signs come from the spacetime spin statistics. The total partition function is \begin{eqnarray} Z_{total} = \frac{V_{8} V_{\beta\gamma}}{2p'q'} \int_F \frac{d^2 \tau}{16 \pi^2 \alpha' \tau_2^2} \left(Z_X^{8} Z_{\psi}(\tau) Z_{\psi}(\tau)^*\right) \ . \end{eqnarray} This computation establishes the modular invariance, and it is the same as that of the Type IIB string. \section{General Non-Relativistic Supersymmetric String} In this section we consider the $\beta\gamma$ and $bc$ CFTs with general conformal weights. First we describe the new matter sector in the superspace formulation; then we construct a ``noncritical'' version of the non-relativistic superstring theories. \subsection{Matter ${\bf \Sigma\Gamma}$ CFT} \noindent Let us start with the supersymmetric string theory action with a matter ${\bf \Sigma\Gamma}$ CFT, in addition to the usual ${\bf X}^i$ CFT and the ghost ${\bf BC}$ CFT, in the conformal gauge \begin{equation} S_{susy} = \int \frac{d^2 z d^2 \theta}{2\pi} \left( {\bf \Sigma} \bar{\bf D}_{\bar{\theta}} {\bf \Gamma} \right). \label{susyaction1} \end{equation} The equations of motion for the fields are $\bar{\bf D}_{\bar{\theta}} {\bf \Gamma} = 0 = \bar{\bf D}_{\bar{\theta}} {\bf \Sigma}$. There are a similar action and equations of motion for the antiholomorphic parts of the ${\bf \Sigma\Gamma}$ and ${\bf BC}$ CFTs.
The OPEs of the new ${\bf \Sigma\Gamma}$ CFT are given by \begin{equation} {\bf \Gamma}(z_1, \theta_1) {\bf \Sigma}(z_2, \theta_2) ~\sim~ \frac{\theta_{12}}{ \hat{z}_{12}} ~\sim~ {\bf \Sigma}(z_1, \theta_1) {\bf \Gamma}(z_2, \theta_2) \ , \end{equation} where $\theta_{12} = \theta_1 - \theta_2$ and $\hat{z}_{12} = z_1 - z_2 -\theta_1 \theta_2$. The super energy momentum tensor\footnote {This can be contrasted with the energy momentum tensor of the ${\bf BC}$ superghost CFT \begin{equation} {\bf T}_{ghost}^{{\bf B}{\bf C}} = - (\lambda_g - 1) {\bf C}\left({\bf D}^2 {\bf B} \right) + \frac{1}{2} \left({\bf D} {\bf C}\right) \left({\bf D} {\bf B} \right) - (\lambda_g - \frac{1}{2}) \left({\bf D}^2 {\bf C}\right) {\bf B} \ . \nonumber \end{equation} The ghost energy momentum tensor has the same form as that of the matter ${\bf \Sigma\Gamma}$ CFT up to sign differences. The conformal weights of the ghost superfields with $\lambda_g = 2$ are $h({\bf B}) = \lambda_g - 1/2$, $h({\bf C}) = 1 - \lambda_g$, and those of the component fields are $ h(\beta_g) = \lambda_g - 1/2$, $h(c_g) = 1 - \lambda_g$, $h(b_g) = \lambda_g$, $h(\gamma_g) = 3/2 - \lambda_g$.} is a chiral superfield of dimension $3/2$ containing the ordinary energy momentum tensor of dimension $2$, ${\bf T}({\bf z}) = T_F(z) + \theta T_B(z)$: \begin{equation} {\bf T} = (\lambda - 1) {\bf\Gamma} \partial {\bf \Sigma} + \frac{1}{2} \left( {\bf D} {\bf \Gamma}\right) \left( {\bf D} {\bf \Sigma} \right) + (\lambda - \frac{1}{2}) \partial {\bf \Gamma} {\bf \Sigma} \ . \end{equation} For the $\lambda = 1$ case, the super energy momentum tensor simplifies further and takes the form \begin{equation} {\bf T}_{\lambda = 1} = \frac{1}{2} \left( {\bf D} {\bf \Gamma}\right) \left( {\bf D} {\bf \Sigma} \right) + \frac{1}{2} \partial {\bf \Gamma} {\bf \Sigma} \ , \end{equation} which is the case we concentrated on in the previous section as the critical case. It is simple to verify that this reduces to the component forms of the energy momentum tensor, (\ref{bosonicenergymomentumtensor111}) and (\ref{bosonicenergymomentumtensor222}), presented above. The case with $\lambda = 1/2$ also simplifies and corresponds to a ``critical'' case in the sense explained in the next subsection. The super energy momentum tensor is itself an anomalous superconformal field, \begin{equation} {\bf T}(z_1, \theta_1) ~{\bf T}(z_2, \theta_2) \sim \frac{8\lambda - 6}{4 \hat{z}_{12}^3} + \frac{3}{2} \frac{\theta_{12}}{\hat{z}_{12}} {\bf T}(z_2, \theta_2) + \frac{1}{2}\frac{1}{\hat{z}_{12}} {\bf D}_2 {\bf T}(z_2, \theta_2) + \frac{\theta_{12}}{z_{12}} \partial_2 {\bf T}(z_2, \theta_2) \ , \end{equation} which tells us that the central charge of the super energy momentum tensor is $\hat{c} = \frac{2}{3} c = 8\lambda - 6$ and the conformal weight of the tensor is $3/2$. The OPEs of the energy momentum tensor with the superfields can be calculated: \begin{eqnarray} {\bf T}(z_1, \theta_1) ~{\bf \Gamma}(z_2, \theta_2) &\sim& (1 - \lambda) \frac{\theta_{12}}{\hat{z}_{12}^2} {\bf \Gamma} (z_2, \theta_2) + \frac{1}{2}\frac{1}{\hat{z}_{12}} {\bf D}_2 {\bf \Gamma} (z_2, \theta_2) + \frac{\theta_{12}}{\hat{z}_{12}} \partial_2 {\bf \Gamma} (z_2, \theta_2) \ , \nonumber \\ {\bf T}(z_1, \theta_1) ~{\bf \Sigma}(z_2, \theta_2) &\sim& (\lambda - \frac{1}{2}) \frac{\theta_{12}}{\hat{z}_{12}^2} {\bf \Sigma} (z_2, \theta_2) + \frac{1}{2}\frac{1}{\hat{z}_{12}} {\bf D}_2 {\bf \Sigma} (z_2, \theta_2) + \frac{\theta_{12}}{\hat{z}_{12}} \partial_2 {\bf \Sigma} (z_2, \theta_2) \ .
\end{eqnarray} These equations tell us that the new fields ${\bf \Gamma}$ and ${\bf \Sigma}$ have conformal weights $h({\bf \Gamma}) = 1 - \lambda$ and $h({\bf \Sigma}) = \lambda - 1/2$, respectively. The dimensions of the component fields are \begin{eqnarray} {\bf \Gamma} = -\gamma + \theta c \ , & h(\gamma) = 1 - \lambda \ , & h(c) = 3/2 - \lambda \ , \\ {\bf \Sigma} = b + \theta \beta \ , & h(b) = \lambda -1/2 \ , & h(\beta) = \lambda \ . \end{eqnarray} Here $\gamma$, $\beta$ and ${\bf \Gamma}$ are commuting fields, while $b$, $c$ and ${\bf \Sigma}$ are anticommuting fields. Using the component fields we can rewrite the supersymmetric action as \begin{eqnarray} S_1 &=& \int \frac{d^2z}{2\pi} \left( \beta \bar{\partial} \gamma + \bar{\beta} \partial \bar{\gamma} + b \bar{\partial} c + \bar{b} \partial \bar{c} \right) \ . \end{eqnarray} Given the conformal weights of the component fields, the central charges of the $\beta\gamma$ CFT and the $bc$ CFT are $3(2\lambda - 1)^2 -1$ and $-3(2\lambda -2)^2 + 1$, respectively. Thus the total central charge is $c = 12 \lambda - 9$, which agrees with the result from the OPE of the energy momentum tensor. The OPEs of the component fields are \begin{eqnarray} \gamma(z_1) \beta(z_2) \hspace{0.1 in} \sim & \frac{1}{z_{12}} & \sim \hspace{0.1 in} - \beta(z_1) \gamma(z_2) \ , \\ b(z_1) c(z_2) \hspace{0.1 in} \sim & \frac{1}{z_{12}} & \sim \hspace{0.1 in} c(z_1) b(z_2) \ . \end{eqnarray} The energy momentum tensor in component form can be written as \begin{eqnarray} T_{b} = (\lambda -\frac{3}{2}) c (\partial b) + (\lambda -\frac{1}{2})(\partial c) b - (\lambda - 1) \gamma (\partial \beta) - \lambda (\partial \gamma) \beta &=& \sum_{m \in {\bf Z}} \frac{L_m}{z^{m+2}} \ , \\ T_{f} = -(\lambda - 1) \gamma (\partial b) + \frac{1}{2} c \beta - (\lambda - \frac{1}{2}) (\partial \gamma) b &=& \sum_{r \in {\bf Z} + \nu} \frac{G_r}{2 \cdot z^{r+3/2}} \ . \end{eqnarray} As is well known, the fields with half-integer conformal weight have both $NS$ and $R$ sectors. To keep the expressions simple, we concentrate on the case of integer $\lambda$. The mode expansions and hermiticity properties are \begin{eqnarray} \gamma(z) = \sum_{n \in {\bf Z}} \frac{\gamma_n}{z^{n + 1-\lambda}} \ , \hspace{0.3 in} \gamma_n^{\dagger} = \gamma_{-n} \ , && \beta(z) = \sum_{n \in {\bf Z}} \frac{\beta_n}{z^{n+ \lambda}} \ , \hspace{0.3 in} \beta_{n}^{\dagger} = - \beta_{-n} \ ,\\ c(z) = \sum_{r \in {\bf Z}+ \nu} \frac{c_r}{z^{r+3/2-\lambda}} \ , \hspace{0.2 in} c_r^{\dagger} = c_{-r} \ , && b(z) = \sum_{r \in {\bf Z} + \nu} \frac{b_r}{z^{r+\lambda- 1/2}} \ , \hspace{0.2 in} b_{r}^{\dagger} = b_{-r} \ . \end{eqnarray} There are two possible values of $\nu$: $\nu = 1/2$ for the $NS$ sector and $\nu = 0$ for the $R$ sector. The mode expansions of the energy momentum tensors are \begin{eqnarray} && L_m^{\beta\gamma b c} = \sum_{n \in {\bf Z}}\Big( n - (1-\lambda)m \Big) \beta_{m-n}\gamma_n - \sum_{s \in {\bf Z} +\nu} \Big(s - (3/2 - \lambda )m \Big) b_{m-s} c_s + a\delta_{m,0} \ , \\ && G_r^{\beta\gamma b c} = \sum_{n \in {\bf Z}} \left( c_{r-n} \beta_n + \Big( n + 2 r (\lambda - 1)\Big) \gamma_n b_{r-n} \right) \ . \end{eqnarray} There is a normal ordering constant in each sector, $ a_R^{\beta\gamma b c} = \frac{4\lambda - 3}{8}$ and $a_{NS}^{\beta\gamma b c} = \frac{\lambda - 1}{2}$. \subsection{Possible Non-Relativistic Superstring Theories} \noindent It is interesting to construct a ``noncritical'' version of the non-relativistic superstring theory.
The central charge of the ghost part is $\hat{c}_{{\bf BC}} = -10$ and that of the matter CFT is $\hat{c}_{\bf \Sigma\Gamma} = 8\lambda - 6$. Thus, for consistency, the dimension $D$ of the spatial directions in target space is \begin{equation} D = 8(2 - \lambda). \end{equation} We summarize the interesting range of theories in the table. \bigskip \begin{table}[h] \begin{tabular}{c||c|c|c|c|c|c|c|c|c} \hline \hline $\lambda$ & \hspace{0.1 in}$\cdots$\hspace{0.1 in} & \hspace{0.1 in}2\hspace{0.1 in} &\hspace{0.1 in} $\frac{3}{2}$\hspace{0.1 in} &\hspace{0.1 in}1\hspace{0.1 in} & \hspace{0.1 in}$\frac{1}{2}$ \hspace{0.1 in} &\hspace{0.1 in} 0\hspace{0.1 in} &\hspace{0.1 in}- $\frac{1}{2}$\hspace{0.1 in} &\hspace{0.1 in} -1\hspace{0.1 in} & \hspace{0.1 in}$\cdots$\hspace{0.1 in} \\ \hline $\hat{c}_{\bf \Sigma\Gamma} = 8\lambda - 6$ & $\cdots$ & 10 & 6 & 2 & -2 & -6 & -10 & -14 & $\cdots$ \\ \hline $D = 8(2 - \lambda)$ & $\cdots$ & 0 & 4 & 8 & 12 & 16 & 20 & 24 & $\cdots$ \\ \hline \hline \end{tabular} \caption{Table for the superstring case. The conformal weight of the supersymmetric $\beta\gamma$ CFT and the number of spatial dimensions of target space are presented. For $\lambda > 2$, a geometric interpretation is not possible. As the parameter $\lambda$ decreases, the number of spatial dimensions grows linearly and without bound. } \label{table:fermionictable3} \end{table} \bigskip Here we comment on some immediate observations about these possible consistent ``noncritical'' non-relativistic superstring theories. These theories have the same actions and the $SO(1,1) \times SO(D)$ symmetries in addition to the Galilean symmetry. There exists an infinite range of possible consistent theories with a geometric interpretation, by which we mean that it is possible to have a positive number of spatial coordinates. It will be interesting to quantize them explicitly. We can divide them into two categories: (i) the integer $\lambda$ cases and (ii) the half-integer $\lambda$ cases, because there are two sectors for fields with half-integer conformal weight. For the integer $\lambda$ cases (i), with $D = 0, 8, 16, \cdots$, the commuting bosonic $\beta\gamma$ CFT has only one bosonic coordinate. From the explicit quantization of the previous section and from \cite{Kim:2007hb}, we know that these theories are relatively easy to quantize, and that the spacetime interpretation can be established. On the other hand, there are two commuting bosonic sectors, $NS$ and $R$, for the half-integer $\lambda$ cases (ii), with $D = 4, 12, 20, \cdots $. Of course, in case (ii) the zero modes of the $R$ sector of the $\beta\gamma$ CFT have a space and time interpretation. Case (ii) seems rather peculiar, and it looks harder to quantize; but these theories are expected to provide a different perspective on the space and time interpretation. The challenge of establishing the zero modes of the $\beta\gamma$ CFT in the new matter sector can easily be seen from the total normal ordering constant. As usual, the normal ordering constant of the $R$ sector is $0$, due to the cancellation between the bosonic and fermionic contributions. Those of the $NS$ sectors are \begin{eqnarray} a^{(i)}_{NS} = \frac{\lambda -2}{2} \ , & \hspace{0.4 in} & a^{(ii)}_{NS} = \frac{2\lambda -3}{4} \ . \end{eqnarray} Thus the total normal ordering constant of the $NS$ sector depends on the parameter $\lambda$, and there is a nontrivial mapping between the unit vertex operator $1$ and the corresponding state.
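The pattern in the table, together with the $\lambda$-dependent $NS$ normal ordering constants just quoted, follows directly from the formulas above. A minimal script reproducing these entries (no assumptions beyond the stated formulas):
\begin{verbatim}
from fractions import Fraction as F

# hat-c of the Sigma-Gamma matter CFT, spatial target dimension from
# anomaly cancellation, and NS normal ordering constants of the two
# series, exactly as derived in the text.
for lam in [F(2), F(3, 2), F(1), F(1, 2), F(0), F(-1, 2), F(-1)]:
    c_hat = 8 * lam - 6        # \hat{c}_{\Sigma\Gamma}
    D     = 8 * (2 - lam)      # spatial dimensions, D = 8(2 - lambda)
    a_i   = (lam - 2) / 2      # a_NS, integer-lambda series (i)
    a_ii  = (2 * lam - 3) / 4  # a_NS, half-integer series (ii)
    print(f"lambda={lam}: c_hat={c_hat}, D={D}, "
          f"a_NS(i)={a_i}, a_NS(ii)={a_ii}")
\end{verbatim}
For instance, the script reproduces $a^{(i)}_{NS} = -\frac{1}{2}$ at $\lambda = 1$ and $a^{(ii)}_{NS} = -\frac{1}{2}$ at $\lambda = \frac{1}{2}$, the two ``critical'' values discussed next.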
We can see that the case $\lambda = 1$, which we considered in the previous section, is critical in the sense that the normal ordering constant $a^{(i)}_{NS} = - \frac{1}{2}$ recovers that of the critical relativistic string theory. It is interesting to note that there is another ``critical'' case, namely case (ii) with $\lambda = \frac{1}{2}$. Thus the cases $\lambda = 1$ and $\lambda = \frac{1}{2}$ are tied together in a sense, and we expect their space and time interpretations to be rather similar. This observation extends to all the other cases: the cases $\lambda = n$ and $\lambda = n - \frac{1}{2}$ are tied together for integer $n$, since $a^{(i)}_{NS}(\lambda) = a^{(ii)}_{NS}(\lambda - \frac{1}{2})$. Quantization of the theory with $\lambda = \frac{1}{2}$ and comparison with the critical case $\lambda = 1$ will be very interesting. In the case $\lambda = 2$, with $D=0$, only the ${\bf \Sigma \Gamma}$ CFT and the ${\bf BC}$ CFT are present. Upon quantization, only the zero modes are present, without oscillator excitations; the theory is topological. Furthermore, there is a possible unification of these CFTs in a simple fashion, on which we comment at the end of this section. As explained in the previous paragraph, this case is tied with the $\lambda = \frac{3}{2}$ case, in the sense that the normal ordering constants are the same and thus the zero modes play similar roles. But $\lambda = \frac{3}{2}$ is not a ``topological'' case, because there are four additional spatial coordinates. \bigskip \noindent{\it unification of all the first order CFTs } \smallskip There is a curious possibility of an interesting ${\bf Z}_2$ graded algebra involving the nonzero conformal weight, the U(1) ghost number and the U(1) number of the matter ${\bf \Sigma\Gamma}$ CFT. We can tabulate the basic properties of the first order matter CFT and the ghost CFT. \bigskip \begin{table}[h] \begin{tabular}{|l|c|c|c||c||c|c|c|c|} \hline field & weight & $ U(1)^{\bf m}$ & $U(1)^{\bf gh}$ & \hspace{0.5 in} & field & weight &$ U(1)^{\bf m}$ &$ U(1)^{\bf gh}$ \\ \hline \hline $b_g$ & $\lambda_g$ & 0 & $-1$ & & $c_g$ & $1-\lambda_g$ & 0 & 1 \\ \hline $\beta_g$ & $\lambda_g - 1/2$ & 0 & $-1$ & & $\gamma_g$ & $3/2-\lambda_g$ & 0 & 1 \\ \hline $\beta$ & $\lambda$ & $-$1 & $0$ & & $\gamma$ & $1-\lambda$ & 1 & 0 \\ \hline $b$ & $\lambda -1/2$ & $-1$ & 0 & & $c$ & $3/2-\lambda$ & 1 & 0 \\ \hline \end{tabular} \caption{Table of the various properties of the first order matter CFT and the ghost CFT. We list the conformal weight, the U(1) charge of the matter $\beta\gamma$ CFT and the U(1) charge of the ghost CFT. } \label{table:fermionictable4} \end{table} From this table we can imagine that there are two grand supermultiplets ${\bf V}$ and ${\bf W}$, built with a new field $\Theta_{gh}$ which carries conformal weight, U(1) ghost charge and U(1) matter charge: \begin{eqnarray} {\bf V} = {\bf \Sigma} + \Theta_{gh} {\bf B} = b + \theta \beta + \Theta_{gh} (\beta_g + \theta b_g)= b + \Theta_{gh} \beta_g + \theta (\beta + \Theta_{gh} b_g), \\ {\bf W} = {\bf C} + \Theta_{gh} {\bf \Gamma} = c_g + \theta \gamma_g + \Theta_{gh} (-\gamma + \theta c) = c_g - \Theta_{gh} \gamma + \theta (\gamma_g + \Theta_{gh} c). \end{eqnarray} Investigating these grand multiplets a little further, one can read off that $\Theta_{gh}$ is an anticommuting field with conformal weight $\lambda - \lambda_g$, matter U(1) charge $-1$ and ghost number $1$.
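A short check of this weight assignment: matching the conformal weights of the two lowest components of ${\bf V}$, namely $b$ and $\Theta_{gh} \beta_g$, requires
\begin{equation}
h(\Theta_{gh}) + \Big( \lambda_g - \frac{1}{2} \Big) = \lambda - \frac{1}{2} \quad \Longrightarrow \quad h(\Theta_{gh}) = \lambda - \lambda_g \ .
\end{equation}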
${\bf V}$ is an anticommuting multiplet with conformal weight $\lambda -1/2$, U(1) matter charge $-1$ and ghost U(1) number $0$, whereas ${\bf W}$ is an anticommuting multiplet with conformal weight $1-\lambda_g $, U(1) matter charge $0$ and ghost U(1) number $1$. We comment on two cases of immediate interest. One is the $\lambda = 1$ case, in which the field $\Theta_{gh}$ has conformal weight $-1$; then all the fields have uniformly spaced conformal weights. This is the case we quantized in the previous section. For $\lambda = 2$, the field $\Theta_{gh}$ has vanishing conformal weight. This is the topological case, with only these two multiplets and no other matter sector. With these observations we can rewrite the holomorphic part of the superstring action in a very simple form, \begin{equation} S_{{\bf V}{\bf W}} = \int \frac{d^2 z d^2 \theta}{2\pi} d\Theta_{gh} \Big({\bf V} \bar{\bf D}_{\bar{\theta}} {\bf W} \Big) = \int \frac{d^2 z d^2 \theta}{2\pi} (\Sigma \bar{\bf D}_{\bar{\theta}} {\bf \Gamma} + \ {\bf B}\bar{\bf D}_{\bar{\theta}} {\bf C} ) \label{vwaction} \end{equation} Note that this action still contains the derivative of the form $\bar{\bf D}_{\bar{\theta}} = \partial_{\bar{\theta}} + \bar{\theta} \partial_{\bar{z}}$ and that we have not gauged the field $\Theta_{gh}$; it would be interesting to do so. \section{Non-Relativistic Strings with Higher Supersymmetry} Following Polchinski \cite{pol}, we would like to survey possible superconformal algebras and the non-relativistic superstring theories related to them. The basic idea is to find sets of holomorphic and antiholomorphic currents whose Laurent coefficients form a closed constraint algebra. This is motivated by the idea of enlarging the world sheet constraint algebra with supercurrents $T_F(z)$ and $\bar{T}_F(\bar{z})$. Here the constraint is the part of the symmetry singled out to be imposed on physical states, in the OCQ or BRST sense. We assume that there is only one $(2,0)$ constraint current, because the sum of the $\beta\gamma$, $bc$ and $X^i$ energy momentum tensors has a geometric interpretation in terms of conformal invariance. This is similar to the relativistic case, so the resulting constraint current algebra on the world sheet is the same as in the relativistic case. Concentrating on holomorphic currents whose conformal weights are half-integers less than or equal to 2,\footnote{ For the ghost CFT there are restrictions, as we mentioned. But there is no restriction on the matter $\beta\gamma$ or $bc$ CFT, because they are part of the $(2,0)$ constraint current and they form a consistent part of the algebra as long as the matter conformal weights sum up to satisfy the physical state conditions. } the possible algebras are very limited; they are given in the following table. \begin{table}[h] \begin{tabular}{c|c|c|c|c|c|c|c|c|c} \hline \hline & $n_2$ & $n_{3/2}$ & $n_1$ & $n_{1/2}$ & $n_0$ & $c_{gh}$ & $c^{\bf m}_{\beta\gamma, bc, \cdots}$ & symmetry & $T_F$ Rep.
\\ \hline \hline I&1 & 0 & 0 & 0 & 0 & $-26$ & 2(6$\lambda^2-6\lambda +1$) & & \\ \hline II&1 & 1 & 0 & 0 & 0 & $-15$ & 12 $\lambda -9$ & & \\ \hline III&1 & 2 & 1 & 0 & 0 & $-6$ & +6 & U(1) & $\pm 1$ \\ \hline IV&1 & 3 & 3 & 1 & 0 & 0 & 0 & SU(2) & {\bf 3} \\ \hline V&1 & 4 & 7 & 4 & 0 & 0 & $24(\lambda -2)$ & $SU(2)^2 \times U(1)$ & ({\bf 2}, {\bf 2}, 0) \\ \hline VI&1 & 4 & 6 & 4 & 1 & 0 & 0 & $SU(2)^2$ & ({\bf 2}, {\bf 2}) \\ \hline VII&1 & 4 & 3 & 0 & 0 & 12 & $36 - 24 \lambda$ & SU(2) & {\bf 2} \\ \hline \hline \end{tabular} \caption{Survey of possible string theories. The first five columns give the number of reparametrization currents of the spin indicated in the subscript of $n_{spin}$; $n_{3/2}$ is the number of supersymmetries. $c_{gh}$ is the total central charge of the supersymmetrized ghost CFT and $c^{\bf m}_{\beta\gamma, bc, \cdots}$ is the total central charge of the supersymmetrized $\beta\gamma$ CFT. The last two columns give the symmetry and the representation of the supercharge. } \label{table:fermionictable5} \end{table} Cases I and II were already treated, in the bosonic string theory \cite{Kim:2007hb} and in the previous section, respectively. These theories have been explicitly quantized and have the non-relativistic dispersion relation. Cases III, IV and VI are rather different from the other cases, because the supersymmetric ghost ${\bf BC}$ CFT and the ${\Sigma\Gamma}$ CFT both have central charges independent of $\lambda$, equal in magnitude and opposite in sign. Thus there is no room for spatial coordinates, but it is still possible to have some geometric interpretation from the matter ${\Sigma\Gamma}$ CFTs. In addition to case II, there are two further cases with an infinite number of possible string theories, cases V and VII. Both have four supercharges in the world sheet CFT. For case V, the central charge of the superconformal ghost CFTs is $0$ and the central charge of the matter ${\bf \Sigma\Gamma}$ CFTs is $24(\lambda - 2)$; thus for $\lambda \leq 2$ it is possible to have spatial $X$ CFTs. In the last case, VII, the central charge receives a positive contribution from the ghost CFTs and a negative contribution from the matter ${\Sigma\Gamma}$ CFTs; we can therefore make the parameter $\lambda$ large, and there is a corresponding string theory for each value. It will be interesting to quantize these sets of theories. \section{Conclusions} In this paper we have constructed a supersymmetric version of the recently proposed non-relativistic string theory. The non-relativistic superstring theory has a first order ${\bf \Sigma\Gamma}$ SCFT on top of the usual eight second order ${\bf X}$ SCFTs. The fermionic sector has an anticommuting matter $bc$ CFT in addition to the eight $\psi^i$ fields. The component fields $b$ and $c$ have conformal weight $1/2$. These can be transformed into the $\psi^0$ and $\psi^1$ fields, and the fermionic action is then the same as that of the relativistic superstring theory. The symmetry group is $SO(1,1) \times SO(8)$. We quantized the theory in an elementary fashion. In addition to the physical state conditions imposed by the energy momentum tensor, there are further conditions from the supercurrent. These give a non-relativistic analogue of the Dirac equation in the ground state of the $R$ sector. This equation can be solved with manifest $SO(8)$ symmetry by exploiting the $SO(1,1)$ symmetry.
The fermionic spectrum is non-chiral, because the non-relativistic Dirac equation connects the two irreducible spinor representations ${\bf 8}_c$ and ${\bf 8}_s$ of the $SO(8)$ group. For the closed string spectrum, modular invariance requires projecting out the ground state of the $NS$ sector. The spectrum of this theory is very similar to that of Type IIB superstring theory, except for the chirality properties and the energy dispersion relation. The one-loop consistency check is straightforward, and the theory is modular invariant.

We presented a noncritical version of the non-relativistic superstring theories by generalizing the conformal weight of the first order ${\bf \Sigma\Gamma}$ SCFT. It turns out that there is an infinite range of possible non-relativistic superstring theories, and we made some immediate observations about these possible consistent string theories. We further surveyed possible non-relativistic string theories with extended supersymmetry, utilizing the world sheet constraint algebra. The matter $\beta\gamma$ CFT (and its supersymmetric partners), combined with the $X$ CFT (and its partners), forms a $(2,0)$ constraint current (and its partners) so as to have a geometric interpretation. Thus the matter first order CFTs are not as severely constrained as the ghost sector. There are three infinite series of possible string theories: two with four supercharges and one with a single supercharge, which is the case considered in the present work. It will be interesting to quantize these noncritical non-relativistic string theories.

\section{Future Directions} Understanding cosmological singularities such as the Big Bang is an interesting and outstanding problem. It requires understanding time-dependent backgrounds in string theory, which are very difficult to analyze \cite{Liu:2002ft}. Perturbative string theory breaks down in spacetime regions where the string coupling becomes large. One clean example, the lightlike Linear Dilaton theory, was recently proposed in \cite{Craps:2005wd}.\footnote{There are some direct generalizations of this simple solution \cite{Li:2005sz}. We thank Professor Nobuyoshi Ohta for drawing our attention to these solutions.} The Dilaton is proportional to a light cone coordinate, $- X^+$, and the theory is defined as an exact CFT describing a string propagating in flat spacetime with string coupling $g_s = e^{- Q X^+}$. Thus the theory is weakly coupled at late times and strongly coupled at early times. At early times there is a true singularity, occurring at a finite affine parameter, which requires a matrix string description, as explained in \cite{Craps:2005wd}. It appears to be necessary to have a complete nonperturbative description of string theory in order to understand time-dependent backgrounds. There is an interesting nonperturbative formulation of noncritical M-theory in (2+1) dimensions using a non-relativistic Fermi liquid and its time-dependent solutions \cite{Horava:2005tt}. Earlier work on time-dependent backgrounds with closed string tachyon condensation can be found in (1+1)-dimensional noncritical string theory \cite{Karczmarek:2004ph}. On the other hand, there are very interesting developments which emphasize the role of perturbative string theory in the analysis of time-dependent backgrounds. It is claimed that a certain spacetime singularity can be replaced by a tachyon condensate phase within perturbative string theory \cite{McGreevy:2005ci}.
Very recent papers \cite{Horava:2007yh} argue, using alternative gauge choices to free the world sheet gravitino, that the decay of spacetime to nothing in string and M-theory should be addressed at weak string coupling, where the nonperturbative instanton instability is expected to turn into a perturbative tachyon instability. See also \cite{Hellerman:2007zz}. Similar considerations in supercritical string theories can be found in \cite{Aharony:2006ra, Hellerman:2006nx}.

It turns out that many interesting cosmological solutions have broken Lorentz symmetry, and it is interesting to consider these solutions with their global symmetries manifest. Furthermore, fundamental issues related to time, especially to ``emergent time'', are not well understood (see, {\it e.g.,} \cite{Seiberg:2006wf}). Thus it is worthwhile to consider alternative approaches, which can shed light on time-dependent backgrounds and on fundamental issues of time.\footnote{An example motivating a different approach to time can be seen in the low energy limits of open string theory with magnetic and electric $NS-NS$ B-fields. In the appropriate limits, the theory with an electric $NS-NS$ B-field reduces to noncommutative open string theory, while the theory with a magnetic $NS-NS$ B-field reduces to noncommutative Yang-Mills theory. This suggests that time is rather different from space, and it motivated the consideration of non-relativistic string theories in \cite{Kim:2007hb}.} Our current work and a previous paper \cite{Kim:2007hb}, motivated by earlier works \cite{Gomis:2000bd, Danielsson:2000gi, Danielsson:2000mu}, provide examples of such alternative approaches.

As we saw in the main body, the non-relativistic string theory shares many features with relativistic string theory. The difference between the two theories comes from the replacement of the $X^0$ and $X^1$ CFTs by the $\beta\gamma$ CFT. This effect is minimal, because these matter CFTs are part of the $(2,0)$ constraint current, which makes a geometric interpretation possible. As a result, the spectrum is very similar to that of Type IIB superstring theory. On the other hand, these non-relativistic string theories provide a very different perspective on time. Thus they appear to be ideal for investigating general issues related to time-dependent backgrounds with broken Lorentz symmetry, such as the lightlike Linear Dilaton theory and supercritical string theories.

We would like to comment on a few preliminary results concerning the correspondence between the critical non-relativistic string theory and the lightlike LDT.\footnote{This correspondence between the non-relativistic string theory and the lightlike Linear Dilaton theory was pointed out by Professor Petr Ho\v{r}ava. We are grateful for his careful and extensive suggestions on this correspondence and on various references.} These two theories have the same set of global symmetries, which can be checked with the identification $X^+ = t$ on the lightlike LDT side. In the lightcone gauge, the spectrum of the lightlike LDT can be checked to be the same as that of the non-relativistic string theory. These equivalences are encouraging enough to warrant investigating the exact mapping between the two theories. We hope to report these results in the near future.
\section*{Acknowledgments} It is a pleasure to thank Professor Ori Ganor for encouragement, Professor Petr Ho\v{r}ava for introducing crucial ideas and references, and Professor Ashvin Vishwanath for answering many questions related to the properties of non-relativistic systems. Their careful comments and extensive discussions were critical in making this work possible. I also thank Jordan Carlson, Sharon Jue and Stefan Leichenauer for reading and commenting on the manuscript. This work was supported in part by the Center of Theoretical Physics at UC Berkeley, and in part by the Director, Office of Science, Office of High Energy and Nuclear Physics, of the U.S. Department of Energy under Contract DE-AC02-05CH11231. \section*{Disclaimer} This document was prepared as an account of work sponsored by the United States Government. While this document is believed to contain correct information, neither the United States Government nor any agency thereof, nor The Regents of the University of California, nor any of their employees, makes any warranty, express or implied, or assumes any legal responsibility for the accuracy, completeness, or usefulness of any information, apparatus, product, or process disclosed, or represents that its use would not infringe privately owned rights. Reference herein to any specific commercial product, process, or service by its trade name, trademark, manufacturer, or otherwise, does not necessarily constitute or imply its endorsement, recommendation, or favoring by the United States Government or any agency thereof, or The Regents of the University of California. The views and opinions of authors expressed herein do not necessarily state or reflect those of the United States Government or any agency thereof or The Regents of the University of California. \section*{Appendix: Physical spectrum with $SO(7)$ symmetry} In this appendix we consider a relativistic approach to the spectrum of this non-relativistic string theory; it is interesting to compare the results with those in the main text. We have the $SO(1,1) \times SO(8)$ symmetry and we want to analyze the non-relativistic mass shell condition (\ref{massshellforns1}) and the non-relativistic Dirac equation (\ref{nrDiraceq2}), \begin{eqnarray} \frac{\alpha'}{4} \vec{k}^2 - k^\gamma p' = 0 \ , \label{a1} \\ \frac{1}{2^{1/2}} \Big( \alpha'^{1/2} k^i \psi_{0, i} - ( k^\gamma + p' ) \psi_{0, 0} + (k^\gamma - p') \psi_{0, 1} \Big) = 0 \ . \label{a2} \end{eqnarray} Rather than breaking the $SO(1,1)$ symmetry, we can go to a frame with $k^i = 0 $ for $i = 2, \cdots , 8$ and $k^9 \neq 0$, which preserves the $SO(1,1) \times SO(7)$ symmetry, and solve the two equations (\ref{a1}) and (\ref{a2}) there. From the quantization procedure we know that there are eight physical degrees of freedom. Only the $SO(7)$ symmetry is manifest in the first excited level of the $NS$ sector, which furnishes the vector representation ${\bf 7}$ of $SO(7)$. Where, then, is the one extra degree of freedom? It is a ``Dilaton'' originating from the conformal rescaling $SO(1,1)$, which transforms as a singlet under $SO(7)$. Thus the first excited level has eight degrees of freedom, transforming as ${\bf 1 + 7}$ under $SO(7)$ rotations. We can then solve the non-relativistic Dirac equation (\ref{a2}) by using the $SO(1,1)$ symmetry to pick particular values of $k^\gamma$ and $p'$. The remaining symmetry group $SO(1,1) \times SO(7)$ is then broken to $SO(7)$. The irreducible spinor representation of the $SO(7)$ group is, as is well known, the ${\bf 8}$.
Thus there are actually eight independent degrees of freedom in the ground state of the $R$ sector, and clearly the fermions cannot have any chiral property. We present a table for the holomorphic spectrum with $SO(7)$ symmetry. \begin{table}[h] \begin{tabular}{lcccccc} \hline \hline sector & $\hspace{0.5 in}$ & $SO(7)$ spin & $\hspace{0.5 in}$ & $-\frac{\alpha'}{4} \vec{k}^2 + k^\gamma p'$ \\ \hline $NS_0$ & & ${\bf 1}$ && -1/2 \\ NS & & ${\bf 1 + 7}$ && 0 \\ $R$ & & ${\bf 8}$ && 0 \\ \hline \hline \end{tabular} \caption{Spectrum of the holomorphic sector for the ground and first excited levels of the $NS$ sector and the ground state of the $R$ sector. ${\bf 7}$ and ${\bf 8}$ are the vector and spinor representations of $SO(7)$, respectively. } \end{table} It is straightforward to construct the non-relativistic closed superstring spectrum; it is presented below. We would like to make a few comments. Compared with the approach with manifest $SO(8)$ symmetry, the $SO(7)$ description of the physical spectrum is less efficient. Furthermore, it is not clear how modular invariance could be demonstrated at all in this formulation. The field content is very similar to that of a relativistic string theory with a circle compactification, but in that case there are discrete momentum modes and discrete winding modes in the twisted sector, whereas here we have only a continuous momentum, without a compact coordinate or a twisted sector. \bigskip \begin{table}[h] \begin{tabular}{lcccccc} \hline \hline sector & $\hspace{0.25 in}$ & $SO(7)$ spin & $\hspace{0.5 in}$ & dimensions \\ \hline ($NS_0$, $NS_0$) && ${\bf 1} \times {\bf 1}$ &= & ${\bf 1}$ \\ \hline ($NS$, $NS$) && $({\bf 1+ 7}) \times ({\bf 1+ 7})$ &= &${\bf 1} + ({\bf 7 + 7}) + ({\bf 1+ 21 + 27})$ \\ ($NS$, $R$) && $({\bf 1 + 7}) \times {\bf 8}$ &= & ${\bf 8} + ({\bf 8 + 48})$ \\ ($R$, $NS$) && ${\bf 8} \times ({\bf 1 + 7})$ &= & ${\bf 8} + ({\bf 8 + 48})$ \\ ($R$, $R$) && ${\bf 8} \times {\bf 8}$ &= & ${\bf 1} + ({\bf 7 + 21}) + ( {\bf 1 + 7 + 27})$ \\ \hline \hline \end{tabular} \caption{Closed superstring spectrum for the ground and first excited levels of the $NS$ sector and the ground state of the $R$ sector. ${\bf 1, 7, 21, 27}$ are tensor representations and ${\bf 8, 48}$ are spinor representations of $SO(7)$. } \end{table}
\section{Introduction} Acetylene constitutes a key ingredient in the production of large complex hydrocarbon molecules in the dense interstellar medium (Herbst 1995). Because acetylene has no permanent dipole moment, it lacks a rotational spectrum that could be observed at radio wavelengths; observations of interstellar acetylene have therefore been limited to mid-infrared studies of rovibrational bands, carried out from ground-based (e.g., Evans, Lacy \& Carr 1991; Carr et al. 1995) and space-based (e.g., Lahuis \& van Dishoeck 2000; Boonman et al. 2003; Lahuis et al. 2007) observatories. Acetylene has been detected in the gas phase - either in absorption (e.g., Carr et al. 1995; Lahuis \& van Dishoeck 2000) or in emission (e.g., Boonman et al. 2003, this paper) - mostly toward young stellar objects. C$_2$H$_2$ can be used as a tracer of warm (100 K to 1000 K) molecular gas along often complicated sightlines. C$_2$H$_2$ abundance estimates, which were sometimes a few orders of magnitude higher than the predictions of cold gas-phase steady-state chemical models, have led to a better understanding of the role that warm gas-phase chemistry (e.g., Doty et al. 2002; Rodgers \& Charnley 2001) and/or grain mantle processing (e.g., Ruffle \& Herbst 2000) can play in star-forming regions, both locally and in extra-galactic objects (Lahuis et al. 2007). In this Letter, we present the first detection of the $\nu_5$ band of acetylene (C$_2$H$_2$) at 13.7 $\mu$m toward the star-forming region Cepheus A East, using the Infrared Spectrograph (IRS) onboard the {\it Spitzer Space Telescope}. This is the first map of C$_2$H$_2$ obtained toward any interstellar gas cloud. Section 2 describes the observations and data analysis. Sections 3 and 4 compare the spatial distribution of C$_2$H$_2$ to those of gaseous CO$_2$ and H$_2$ $S$(2), and discuss the C$_2$H$_2$-emitting gas in the context of shock chemistry and local outflow activity. The presence of C$_2$H$_2$ on interstellar dust grains is also discussed in the context of cometary ice composition. \section{Observations and data analysis} Spectral maps were obtained with the IRS instrument onboard {\it Spitzer} as part of the Guaranteed Time Observer Program 113. 1$'$$\times$1$'$ square fields were observed with the short-low (SL), short-high (SH), and long-high (LH) modules, providing wavelength coverage from 5.2 to 25 $\mu$m. We obtained continuous spatial sampling by stepping the slit perpendicular and parallel to its long axis in steps of one-half its width and 4/5 its length, respectively. The data were processed with version 12 of the pipeline. We used the Spectroscopy Modeling Analysis and Reduction Tool (SMART, Higdon et al. 2004), along with locally developed routines (Neufeld et al. 2006), to extract wavelength- and flux-calibrated spectra and to generate spectral line intensity maps. To estimate the uncertainties in the C$_2$H$_2$ line intensities ($\nu_5$ band at 13.7 $\mu$m), we first calculated the standard deviation ($\sigma$) around our best fit to the local continuum for each point in the map. We then shifted the best-fit continuum by $\pm$1 $\sigma$ and generated the corresponding new estimates of the line intensities. The difference between the best-fit line intensities and the intensities obtained by shifting the best-fit continuum constitutes our $\pm$1 $\sigma$ errors.
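Although no software accompanies this Letter, the following minimal sketch (in Python with NumPy; all array and function names are illustrative and are not part of the actual reduction pipeline) shows one way to implement the continuum-shifting error estimate just described:
\begin{verbatim}
import numpy as np

def line_intensity_bounds(wave, flux, line_mask, cont_mask):
    """Best-fit line intensity and +/- 1 sigma bounds obtained by
    shifting the best-fit continuum, as described in the text.
    `line_mask`/`cont_mask` are boolean arrays selecting the line
    and the line-free continuum pixels (illustrative names)."""
    # Fit a low-order polynomial continuum to the line-free pixels.
    coeff = np.polyfit(wave[cont_mask], flux[cont_mask], 1)
    cont = np.polyval(coeff, wave)
    # Standard deviation of the residuals around the best-fit continuum.
    sigma = np.std(flux[cont_mask] - cont[cont_mask])
    # Integrate (flux - continuum) over the line for the nominal
    # continuum and for the continuum shifted by +/- 1 sigma.
    dw = np.gradient(wave)
    best = np.sum((flux - cont)[line_mask] * dw[line_mask])
    low  = np.sum((flux - (cont + sigma))[line_mask] * dw[line_mask])
    high = np.sum((flux - (cont - sigma))[line_mask] * dw[line_mask])
    # The differences from `best` constitute the +/- 1 sigma errors.
    return best, best - low, high - best
\end{verbatim}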
While extinction due to the long wavelength wing of the silicate feature (9.7 $\mu$m) certainly occurs, corrections to the C$_2$H$_2$ line intensities were not applied. As we will see below, C$_2$H$_2$ arises in a warm component located mostly in front of the cold quiescent gas traced by the silicates (Sonnentrucker et al. 2007, ApJ in press), making a direct estimate of the fraction of dust located in this warm component nearly impossible. Since our study also indicates that the silicates exhibit a homogeneous distribution over the spatial extent of the gaseous C$_2$H$_2$ emission, we are confident that the C$_2$H$_2$ intensity variations we see are not predominantly due to extinction effects. \section{Results} Figure 1 ({\it upper}) shows a summed IRS spectrum in the wavelength range containing acetylene at the spatial location corresponding to one peak in the C$_2$H$_2$ line emission intensity map (HW5/6 in Fig.~2). Note that frequent order mismatches around 14 $\mu$m preclude any reliable detection of the HCN $\nu_2$ band at 14.05 $\mu$m with our data. The striking similarities between the C$_2$H$_2$ maps presented here (Fig.~2) and the gaseous CO$_2$ maps we obtained previously (Sonnentrucker et al. 2006) strongly suggest that C$_2$H$_2$ arises in the same warm postshock gas component that exhibits gaseous CO$_2$. Because the C$_2$H$_2$ emission lines are a factor of $\sim$10 weaker than those of CO$_2$, we first searched for the spatial positions where C$_2$H$_2$ was detected at a $>$1.5 $\sigma$ level. Thirty-eight individual spatial positions fulfilled this criterion over the mapped region shown in Fig.~2. These are the spectra that we consider when comparing the C$_2$H$_2$ distribution with that of gas-phase CO$_2$ and H$_2$ $S$(2), a tracer of shock activity (see Fig.~3). We constrained the gas temperature in the selected C$_2$H$_2$-emitting regions by comparing synthetic profiles of the $\nu_5$ acetylene band, for temperatures ranging from 50 to 900 K, with the observed C$_2$H$_2$ $Q$-branch (13.71 $\mu$m) profile. We find that temperatures between 50 and 200 K best fit the 38 selected C$_2$H$_2$ line profiles (Fig.~1, {\it lower}). Note that significantly higher temperatures, such as those found by Lahuis \& van Dishoeck (2000) toward other massive young stellar objects (300-1000 K), are ruled out in Cepheus A East. Most importantly, our moderate 50-200 K range coincides with the temperature range found for gaseous CO$_2$ in that same region (Sonnentrucker et al. 2006), arguing in favor of a common spatial origin for the two species. To estimate the column density associated with the C$_2$H$_2$ emission, the dominant excitation mechanism needs to be identified. Accurate collisional rates for vibrational excitation of acetylene are not available to our knowledge. If the collisional rates for C$_2$H$_2$ vibrational excitation were similar to those of CO$_2$ (Boonman et al. 2003), densities higher than $10^8$ cm$^{-3}$ would be required to account for the observed emission. Such a high density would give rise to strong high-lying H$_2$O rotational lines that are, however, not detected (e.g., van den Ancker et al. 2000). Additionally, considering the low temperature in the C$_2$H$_2$-emitting gas component relative to the energy of the $\nu_5=1$ state ($E/k\approx1050$ K), as well as the rather low local hydrogen density inferred over this region ($n_{\rm{H}}$$\sim$ a few $\times$ 10$^{3}$ to a few $\times$ 10$^{7}$ cm$^{-3}$; Codella et al.
2005), it is quite unlikely that collisions are the dominant excitation mechanism. Radiative pumping by 13.7 $\mu$m continuum photons produced by warm dust local to C$_2$H$_2$ is also unlikely to produce the observed emission, since we previously determined that the radiation field in this region is dominated by dust close to the HW2 protostellar region (Sonnentrucker et al. 2006). Therefore, radiative pumping by 13.7 $\mu$m continuum photons emanating from the HW2 protostellar region appears to be the most likely excitation mechanism for the observed acetylene molecules, as we also found for CO$_2$ (Sonnentrucker et al. 2006). We computed the C$_2$H$_2$ column densities over the mapped region, following the prescription described in Sonnentrucker et al. (2006). Under these assumptions, the derived $N$(C$_2$H$_2$) values are proportional to the acetylene intensities and to the square of the angular separation from the exciting source, HW2 (e.g., Gonz\'alez-Alfonso et al. 2002). Figure 2 displays the intensity map ({\it upper}) and the derived column density map ({\it lower}) for C$_2$H$_2$ toward Cepheus A East. The gray contours show the distribution of NH$_3$ (1,1), a tracer of cold quiescent molecular gas (Torrelles et al. 1993), toward the region fully sampled by {\it Spitzer}. Like gas-phase CO$_2$, the $Q$-branch of gaseous C$_2$H$_2$ mainly traces the walls of the cavity carved by the {\it northeast} component of the outflow originating at HW2 (e.g., G\'omez et al. 1999), an outflow that is apparently responsible for the disruption of the quiescent clouds, as seen in the NH$_3$ distribution (e.g., Torrelles et al. 1993; Goetz et al. 1998; van den Ancker 2000). Intensity maxima for C$_2$H$_2$ are detected close to the HW6 radio continuum source, at the NE bridge that joins the Cep A-2 and Cep A-3 cloud cores (Torrelles et al. 1993), and along the western surface of the Cep A-3 cloud. Weaker emission is also found further from the NE bridge. The similarities in both the temperature and the spatial distribution of the C$_2$H$_2$- and CO$_2$-emitting gas add weight to the conjecture that both species arise from the same warm shocked component. We finally searched for correlations between the acetylene measurements and those obtained for gas-phase CO$_2$ and H$_2$ $S$(2), a tracer of shock activity, for the 38 positions where C$_2$H$_2$ was detected at a $>$1.5$\sigma$ level. Figure 3 compares the C$_2$H$_2$ intensity with that of gaseous CO$_2$ ({\it upper}), as well as the C$_2$H$_2$ column density with the CO$_2$ column density ({\it middle}) and the H$_2$ $S$(2) intensity ({\it lower}). In all cases, the good correlation found between acetylene, gaseous CO$_2$ and H$_2$ $S$(2) indicates that C$_2$H$_2$ does indeed arise in the warm shocked component that also contains gaseous CO$_2$. \section{Discussion} Acetylene was previously observed toward low-to-high mass star-forming regions, either in absorption (e.g., Lahuis \& van Dishoeck 2000) or in emission (e.g., Boonman et al. 2003), using the {\it Infrared Space Observatory} ({\it ISO}). Excitation temperatures ranging from $\sim$ 10 to 900 K and abundances with respect to H$_2$ ranging from a few $\times$ 10$^{-8}$ to a few $\times$ 10$^{-7}$ were derived.
Steady-state models of gas-phase chemistry in cold (10-50 K), dense (10$^3$-10$^5$ cm$^{-3}$) molecular clouds predict abundances for acetylene between a few $\times$ 10$^{-10}$ and 1 $\times$ 10$^{-8}$ with respect to H$_2$, depending on the role that neutral-neutral destruction reactions may play (Bettens, Lee \& Herbst 1995; Lee et al. 1996). Similar abundances are predicted by models of gas-grain chemistry in quiescent clouds, with the highest values obtained only after 10$^6$ years (Ruffle \& Herbst 2000). While such models could account for the observed abundances in the cold gas, they were unable to reproduce the enhancements observed toward much warmer gas components in those objects. Mechanisms such as C$_2$H$_2$ ice sublimation from grain mantles and/or C$_2$H$_2$ enhancement via warm gas-phase chemistry were then invoked (e.g., Carr et al. 1995; Doty et al. 2002; Boonman et al. 2003). For the NE outflow region in Cepheus A East, we derive C$_2$H$_2$/H$_2$ abundance ratios in the range 1 $\times$ 10$^{-9}$ to 4 $\times$ 10$^{-8}$, for an assumed H$_2$ column density of 1.5 $\times$ 10$^{22}$ cm$^{-2}$ (G\'omez et al. 1999). These values are averages along the observed sight-lines. The highest abundances are localized around, and to the south of, the NE position, as well as around the positions of the HW5/6 sources (see Fig.~2). This is precisely where the interaction between the {\it northeast} outflow and the ambient molecular clouds occurs, and it is in these regions that we observe strong H$_2$ $S$(2) emission, a tracer of warm shocked gas. Thus the spatial variation of the C$_2$H$_2$ abundance again strongly suggests an association with shock activity, perhaps as a result of (1) production in the gas phase via high temperature reactions, or (2) grain mantle sputtering. Models for chemistry in hot cores (Rodgers \& Charnley 2001; Doty et al. 2002) indicate that enhanced abundances of C$_2$H$_2$ ($\sim$ a few $\times$ 10$^{-8}$) are expected in warm regions with $T \ge$ 200 K. The good correlations between H$_2$ $S$(2), gaseous CO$_2$ and C$_2$H$_2$ indicate that such high temperatures were reached in the warm gas at the passage of the non-dissociative shock. However, the chemical pathways leading to the production of C$_2$H$_2$ are slow, and enhanced abundances only occur after $\sim$ 10$^4$ years, a time scale much greater than that expected for shock heating of the gas ($\sim$ 300 years; e.g., Kaufman \& Neufeld 1996). Hence, enhanced production of C$_2$H$_2$ by high temperature gas-phase chemistry is unlikely to be predominant in the observed region. The correlations shown in Fig.~3 thus argue in favor of grain mantle sputtering over gas-phase production as the origin of the C$_2$H$_2$ in Cepheus A East. This is the same production mechanism that we favored for gaseous CO$_2$ (Sonnentrucker et al. 2006). While models for the production of C$_2$H$_2$ in shocks are not available to our knowledge, our results further suggest that both C$_2$H$_2$ and CO$_2$ are released into the gas phase under very similar physical conditions. The gaseous $\rm C_2H_2/CO_2$ ratio is roughly constant, with a mean value of 0.08 for all sight-lines where we detected acetylene at the 1.5 $\sigma$ level. If this value reflects the composition of the grain mantle, and given a $N$(CO$_2$)$_{ice}$/$N$(H$_2$O)$_{ice}$ ratio in this source of 0.22 (Sonnentrucker et al. 2007, ApJ in press), then the required $N$(C$_2$H$_2$)$_{ice}$/$N$(H$_2$O)$_{ice}$ ratio is 0.02.
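Explicitly, under the stated assumption that the gaseous ratio reflects the grain mantle composition, this number follows from chaining the two measured ratios,
\begin{displaymath}
\left[ \frac{N({\rm C_2H_2})}{N({\rm H_2O})} \right]_{ice} \simeq \left. \frac{N({\rm C_2H_2})}{N({\rm CO_2})} \right|_{gas} \times \left[ \frac{N({\rm CO_2})}{N({\rm H_2O})} \right]_{ice} = 0.08 \times 0.22 \approx 0.02 \ .
\end{displaymath}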
This value is at least a factor of 4 larger than those derived toward other star-forming regions (0.1-0.5\%, Evans et al. 1991; Lahuis \& van Dishoeck 2000) and those predicted by theoretical models (0.1-0.5\%, Hasegawa \& Herbst 1993; Ruffle \& Herbst 2000), and at least a factor of 2 larger than the gaseous C$_2$H$_2$/H$_2$O ratios obtained in observations of cometary comae (0.1-0.9\%, Brooke et al. 1996; Weaver et al. 1999). We speculate that these discrepancies might result from (1) the destruction of CO$_2$ by reaction with atomic hydrogen in shocks faster than $\sim$ 30 km s$^{-1}$ (predicted by Charnley \& Kaufman 2000) and/or (2) a greater efficiency for the sputtering of C$_2$H$_2$ in slow shocks.\footnote{Although both these effects would be strongly dependent upon the shock velocity, the relative constancy of the $\rm C_2H_2/CO_2$ ratio would not necessarily require any fine tuning of the shock velocity. In reality, any sight-line typically samples an ensemble of shocks with a {\it range} of shock velocities, and the constancy of the $\rm C_2H_2/CO_2$ ratio would simply indicate that the admixture of shock velocities varies little from one sight-line to another (e.g. Neufeld et al.\ 2006).} In either case, the gaseous C$_2$H$_2$/CO$_2$ ratios we observed may exceed the solid C$_2$H$_2$/CO$_2$ ratio, and the $N$(C$_2$H$_2$)$_{ice}$/$N$(H$_2$O)$_{ice}$ ratio could be less than 0.02. Unfortunately, direct measurements of acetylene ice are not possible, the weak features expected from solid C$_2$H$_2$ being blended with much stronger features of CO and H$_2$O (Boudin et al. 1998). However, further observations of gaseous C$_2$H$_2$ at a higher signal-to-noise ratio would be very valuable as a probe of any variations in the $\rm C_2H_2/CO_2$ ratio, which might provide important clues to the shock physics. \acknowledgments This work is based on observations made with the {\it Spitzer Space Telescope}, which is operated by the Jet Propulsion Laboratory under a NASA contract. P.S. and D.A.N. acknowledge funding from LTSA program NAG-13114 to the Johns Hopkins University. We are grateful to J.M. Torrelles for providing us with the NH$_3$ maps. We thank the referee for helpful comments. {\it Facilities:} \facility{Spitzer (IRS)}.
\section{Introduction} \label{intro} Gas-rich dwarf galaxies are commonly classified into low surface-brightness dwarfs, called dwarf irregulars (dIrrs), and higher surface-brightness objects, usually called blue compact dwarf (BCD) galaxies. These classes of galaxies tend to have low metallicities, blue colors and complex, chaotic gas phases. A large fraction of these galaxies shows ongoing star formation (SF), or at least hints that this process has been quenched in the recent past. In this case, these objects are commonly referred to as {\it starburst} galaxies, and their gas consumption timescales are much shorter than the Hubble time (Kennicutt \cite{ken98}), making this a transient phase of their evolution. Owing to the energy released by stellar winds and supernovae (SNe), intense episodes of SF are also associated with the development of galactic winds, or at least of large-scale outflows. The broad distinction between these two phenomena lies in the final fate of the outwards-directed flow of gas: galactic winds generally exceed the escape velocity while outflows do not, so that the outflowing gas eventually falls back towards the center of the galaxy. Clear signatures of outflows are present in NGC1705 (Hensler et al. \cite{hen98}; Heckman et al. \cite{hek01}), NGC1569 (Martin, Kobulnicky \& Heckman \cite{mkh02}), NGC3079 (Cecil et al. \cite{cec01}), IZw18 (Martin \cite{m96}) and NGC3628 (Irwin \& Sofue \cite{is96}), among others. Perhaps the best examples of large-scale outflows driven by SN feedback are at large redshifts (Pettini et al. \cite{pet98}; Pettini et al. \cite{pet01}). Although it is not certain, in any of the above-mentioned objects, that the metals will definitely leave the parent galaxy, indirect hints of the ubiquity of galactic winds are given by the mass-metallicity relation (Tremonti et al. \cite{tre04}; Dav\'e, Finlator \& Oppenheimer \cite{dfo06}) and by effective yields (Garnett \cite{gar02}). From a theoretical point of view, the evolution of gas-rich dwarf galaxies has been studied through numerical simulations by several authors in the recent past. The overall picture is that the occurrence of large-scale outflows is initially driven by the thermal pressure of a very hot, highly pressurized gas and is favored by a flat distribution of the interstellar medium (ISM), which allows an easy vertical transport of material. However, since the transport of gas along the disk is very limited, outflows are not able to eject a significant fraction of the ISM, whereas the fraction of ejected metals can be very large (D'Ercole \& Brighenti \cite{db99}; MacLow \& Ferrara \cite{mf99}; Recchi, Matteucci \& D'Ercole \cite{rmd}, hereafter RMD). For NGC1569, Martin et al. (\cite{mkh02}) derived a supersolar metal content in the galactic wind from X-ray spectra, but also advocated mass loading of the wind by the ISM. Most of these studies, however, have focused on flows in homogeneous media, neglecting the multiphase nature of the ISM, although several attempts to perform multiphase hydrodynamical simulations have been made in the past, particularly with the so-called {\it chemodynamical} approach (Theis, Burkert \& Hensler \cite{tbh92}; Rieschick \& Hensler \cite{rh00}; Hensler, Theis \& Gallagher \cite{hen04}). The multiphase nature of the ISM, in particular its clumpiness, is observationally well established in dwarf galaxies (Cecil et al. \cite{cec01}; Cannon et al. \cite{cann05}; Leroy et al.
\cite{leroy06}) and it has a solid theoretical foundation going back to the seminal work of McKee \& Ostriker (\cite{mo77}). According to this model, the ISM is composed of a cold neutral phase (representing the cores of molecular clouds) confined by a warm medium (with temperatures of the order of 10$^4$ K); these two phases, which are in pressure equilibrium, are embedded in a hot, diluted intercloud medium (HIM), continuously produced by SN explosions and stellar winds. Sufficiently dense clouds can pierce the HIM without being swept up, so they can become embedded therein (Vieser \& Hensler \cite{vh07b}). At the interface between the clouds and the HIM, condensation-evaporation processes establish the final fate of a cloud and its impact on the development of a galactic wind. In two previous papers, we have studied the dynamical and chemical evolution of model galaxies similar to IZw18 (Recchi et al. \cite{rec04}, hereafter Paper I) and NGC1569 (Recchi et al. \cite{rec06}, hereafter Paper II). The main results can be briefly summarized as follows: \begin{itemize} \item Most of the analyzed models develop large-scale outflows. These outflows carry out of the galaxy mostly the chemical elements freshly produced during the most recent episodes of SF, with a large escape fraction for metals with delayed production (like Fe and N). \item Models with very short burst(s) of SF can cool and mix the newly formed metals on a very short timescale, whereas, when the SF history is more complex, most of the metals are either directly ejected from the galaxy through galactic winds or are confined in a medium that is too hot, and therefore cannot contribute to the chemical enrichment of the warm ionized medium observed through emission lines from the H\,{\sc ii}~ gas. \item Models with complex and long-lasting SF episodes reproduce the chemical composition and the abundance ratios of the above-mentioned galaxies much better than models with bursting SF. \end{itemize} In this paper we simulate models with structural parameters similar to IZw18 and NGC1569. We arbitrarily increase the gas density in some specific regions of the computational grid, in order to create a ``cloudy'' phase, and we address the question of how and to what extent a ``cloudy gas phase'' alters the former results. The clouds possess a specific density profile and can be either added at the beginning of the simulation or continuously created during the evolution of the model. We then analyze the differences between the dynamical and chemical evolution of these models and that of the models presented in Paper I and Paper II. We point out that, at variance with the above-mentioned works, in this paper we will not specifically look for the best initial setups and assumptions to reproduce the chemical and dynamical features of well-known objects; we will just stress the main variations produced by a clumpy initial setup. For this reason, we will also consider models which, in Paper I and II, failed to reproduce the observations of IZw18 and NGC1569. The paper is organized as follows: in Sect.~\ref{cloud} we briefly recall the evolution of a cloud embedded in a hot medium; in Sect.~\ref{model} we present the model and the assumptions adopted in the simulations. Results are presented in Sect.~\ref{results_fix} (models with clouds fixed at the beginning of the simulation) and in Sect.~\ref{results_inf} (continuous creation of clouds). Finally, a discussion and some conclusions are given in Sect.~\ref{discussion}.
\section{The dynamics of clouds embedded in a hot phase} \label{cloud} The ubiquitous coexistence of cool to warm clouds with the hot phase of the ISM has attracted broad attention in recent years to the interaction of such clouds with this tenuous and hot medium. The studies deal with three major effects on the evolution of clouds: the influence of heat conduction, of shock fronts, and of the dynamics of the flowing hot gas, e.g. with or without the presence of large-scale galactic outflows (Hartquist et al. \cite{hart86}; Murray et al. \cite{m93}; Ferrara \& Shchekinov \cite{ferr93}; Vietri, Ferrara \& Miniati \cite{v97}; Fragile et al. \cite{frag04}; Marcolini et al. \cite{marco05}; Vieser \& Hensler \cite{vh07b}, among others). Despite the large variety of adopted methodologies, astrophysical contexts and physical processes involved, and in spite of clearly defined problems, the conclusions are varied and not yet unique; they can be summarized as follows: \begin{itemize} \item Moving from the idealized situation treated in analytical thermal conduction studies (e.g. Cowie \& McKee \cite{cm77}) to saturated heat conduction and self-gravitating clouds can change the result from evaporation to condensation for the same state of the hot gas and the same cloud mass model (Vieser \& Hensler \cite{vh07a}). \item A clumpy medium embedded in a hot flow produces mass loading, namely the seeding of material ablated from the clouds into the global flow. It has been demonstrated that this kind of phenomenon helps in clarifying the X-ray emission in starburst galaxies like M82 (Suchkov et al. \cite{suc96}; Strickland \& Stevens \cite{ss00}; Marcolini et al. \cite{marco05}). \item A single cloud overrun by a shock wave is crushed within the so-called crushing time (i.e. the time needed for the internal forward shock to cross the cloud and reach its downstream surface) and is destroyed into smaller fragments if cooling dominates; vice versa, it evaporates in the parameter regime where thermal conduction dominates (Orlando et al. \cite{orl05}). \item In a complex of clouds, if the cloud separation transverse to the flow is smaller than some critical value (a few times the typical cloud radius, the exact value depending on the study), the clouds merge into a single structure before the hot flow destroys them. \item Thermal conduction helps in stabilizing the surface of a cloud, making it less susceptible to Kelvin-Helmholtz and Rayleigh-Taylor instabilities (Orlando et al. \cite{orl05}; Vieser \& Hensler \cite{vh07b}). It can also generate an inward-propagating shock wave able to compress the core of the cloud. \end{itemize} In the present work we do not intend to simulate in great detail the interaction of a cloud or of a cloud complex with a diffuse hot medium, as done by the previously cited authors. We instead simulate galaxy models similar to well observed and studied gas-rich dwarf galaxies, relaxing the hypothesis of a smooth initial gaseous distribution (as assumed in Paper I and Paper II), and analyze how the inclusion of a clumpy medium changes the thermal and chemical evolution of the ISM. Indeed, the resolution required to properly resolve the conductive fronts surrounding the clouds is of the order of 0.1 pc (Marcolini et al. \cite{marco05}; Vieser \& Hensler 2007a,b), which is extremely computationally demanding in a simulation in which the large-scale evolution of the galaxy, up to distances of several kpc, has to be followed.
\section{Model description} \label{model} \subsection{The numerical code} \label{model_numerics} The simulations are performed by means of a 2-D hydrocode in cylindrical coordinates based on a second-order upwind scheme (Bedogni \& D'Ercole \cite{bd86}). The hydro solver is coupled with routines able to follow in detail the chemical and dynamical feedback on the galaxy from SNeII, SNeIa and winds from intermediate-mass stars (IMS). The chemical evolution also has an impact on the dynamics of the system, owing to the assumption of a metallicity-dependent cooling function (B\"ohringer \& Hensler \cite{bh89}). We point out that, as in the previous papers, when we plot abundances or abundance ratios produced by our models, we exclude grid points at temperatures above 2 $\cdot$ 10$^4$ K. This is because gas at these temperatures would be undetectable with optical spectroscopy, and its metallicity could be inferred only through X-ray analysis, whose use in dwarf galaxies is still quite uncertain (Martin et al. \cite{mkh02}; Ott, Walter \& Brinks \cite{owb05}). The code has been described in detail in RMD, and newer implementations and improvements are reported in Paper I and Paper II; we therefore refer the reader to these papers for technicalities. Given the importance of thermal conduction for the shaping and the final fate of clouds embedded in a hot medium, we briefly recall the numerical method adopted to treat this physical phenomenon in our code. To solve the heat transport equation, the operator splitting method is adopted and the one-dimensional problem is solved through the Crank-Nicolson method (see also D'Ercole \& Brighenti \cite{db99}). A saturated heat flux (Cowie \& McKee \cite{cm77}) is adopted if the mean free path of the electrons is larger than the temperature scaleheight. \subsection{Model parameters} \label{model_parameters} The initial configurations of our models are aimed at reproducing the main structural parameters of two very well studied gas-rich dwarf galaxies: IZw18 and NGC1569. The initial setup is taken from our previous models and described in RMD (for IZw18) and in Paper II (for NGC1569). As described in Sect.~\ref{model_cloud}, the gaseous distribution is made clumpy, either by perturbing the initial setup or by adding clouds as the galaxy evolves, at the same rate as the SF. We also vary the initial mass function (IMF) of the stars. The models are identified through the notation XYZW, where X refers to the initial setup (I: setup similar to IZw18; N: setup similar to NGC1569) and Y indicates whether the clouds are put in the initial setup of the galaxy or are continuously created (B: clouds present from the beginning; C: continuous creation of clouds). The third index refers to the adopted IMF: S stands for the Salpeter index (x=1.35), A for a flatter-than-Salpeter index (x=0.95) and K for a steeper index (x=1.7). Finally, the yields from IMS are also allowed to change (R for Renzini \& Voli \cite{rv81}, V for van den Hoek \& Groenewegen \cite{vg97}, and M for Meynet \& Maeder \cite{mm02}). For instance, the model called IBSR starts with a setup aimed at simulating IZw18, includes the clouds from the beginning of the simulation, and assumes the yields of Renzini \& Voli (\cite{rv81}) for IMS and a Salpeter IMF. The yields from massive stars are taken from Woosley \& Weaver (\cite{ww95}), except for the M models where, for self-consistency, the yields of Meynet \& Maeder (\cite{mm02}) are assumed.
In this case, the upper mass is also 60 M$_\odot$, at variance with the 40 M$_\odot$ value adopted for the Woosley \& Weaver yields. For the SNeIa, the formulation of Matteucci \& Recchi (\cite{mr01}) has been adopted, with nucleosynthetic yields taken from the model W7 of Nomoto, Thielemann \& Yokoi (\cite{nty84}). The model parameters are summarized in Table~\ref{tab_models}. As in Paper I and Paper II, the SF is gasping (i.e. long episodes of SF are separated by short periods of quiescence). More details about the SF histories of the various models are given in the corresponding sections. We will refer to a model that is similar to one of the cloudy models described here, but has a homogeneous initial density distribution (described in detail either in Paper I or in Paper II), as a {\it diffuse model}. \begin{table*}[ht] \caption{Summary of model parameters} \label{tab_models} \begin{centering} \begin{tabular}{ccccc} \hline\hline \noalign{\smallskip} Model & Setup & cloud & IMF slope x & IMS yields$^*$ \\ & & creation & & \\ \noalign{\smallskip} \hline IBSR & IZw18 & no & 1.35 & RV81 \\ IBSV & IZw18 & no & 1.35 & VG97 \\ IBAV & IZw18 & no & 0.95 & VG97 \\ ICSV & IZw18 & yes & 1.35 & VG97 \\ ICKV & IZw18 & yes & 1.70 & VG97 \\ NCSM & NGC1569 & yes & 1.35 & MM02 \\ \hline \end{tabular} \end{centering} \medskip $^*$ RV81: Renzini \& Voli (\cite{rv81}); VG97: van den Hoek \& Groenewegen (\cite{vg97}); MM02: Meynet \& Maeder (\cite{mm02}). \end{table*} \subsection{Cloud description} \label{model_cloud} A random generator selects grid points in the galactic region to act as the cores of the clouds. In the models in which clouds are put ``ab initio'' into the setup (models identified by the second index ``B'', see Sect.~\ref{model_parameters}), the central density is fixed a priori (100 cm$^{-3}$) and the total number of clouds is 25. The exact number of clouds does not play a significant role; tests have also been performed with 50 or 75 clouds (reducing their masses accordingly) and this does not significantly affect the results. The probability of finding a cloud at a particular grid point is proportional to the gas density. Owing to the flattened distribution of the ISM at the beginning of the simulation (see RMD or Paper II), this also makes the clouds more likely to be found close to the disk. The density of the grid points outside the clouds is then reduced in order to recover a consistent final gaseous mass. In the models with continuous creation of clouds (identified by the second index ``C'') the cloud mass is fixed a priori: it is assumed that clouds are created at the same rate as the SF. As explained in Paper I, all the stars formed within an interval of time $\Delta t$ are treated as a single stellar population. After this interval of time (typically 10$^6$ yr), a cloud is created whose mass equals the mass of the single stellar population. Moreover, these clouds are given an initial infall velocity of 10 km s$^{-1}$. In all of these models, the clouds are approximately shaped according to a radial density profile $\rho_{\rm cl} \propto R_{\rm cl}^{-1.7}$, where $R_{\rm cl}$ is the distance from the center of the cloud and the exponent of the power-law density profile is taken from observations (de Heij, Braun \& Burton \cite{deh02}; Churchill, Vogt \& Charlton \cite{chu03}; Tatematsu et al. \cite{tate04}); a sketch of this procedure is given below. It is important to stress that, given the dimensionality and the symmetry of our numerical code, the clouds are not spherical, but ring-like structures.
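To make the recipe concrete, the following minimal sketch (in Python with NumPy) illustrates the density-weighted placement of the cloud cores and the imposition of the $R_{\rm cl}^{-1.7}$ profile; the cloud radius, the core-softening length and the mass renormalization are deliberately simplified illustrative choices, not the actual values or implementation used in our code:
\begin{verbatim}
import numpy as np

def add_clouds(rho, R, Z, n_clouds=25, rho_core=100.0, r_cl=50.0):
    """rho: 2-D gas density on the cylindrical (R, z) grid [cm^-3];
    R, Z: coordinate arrays of the same shape [pc]."""
    rng = np.random.default_rng()
    # Cloud cores drawn with probability proportional to the local gas
    # density, which concentrates them toward the dense disk.
    p = (rho / rho.sum()).ravel()
    cores = rng.choice(rho.size, size=n_clouds, replace=False, p=p)
    new_rho = rho.copy()
    in_cloud = np.zeros(rho.shape, dtype=bool)
    for idx in cores:
        r0, z0 = R.ravel()[idx], Z.ravel()[idx]
        d = np.hypot(R - r0, Z - z0)      # distance from the core [pc]
        d0 = r_cl / 20.0                  # softening of the central cusp
        prof = rho_core * (np.maximum(d, d0) / d0) ** (-1.7)
        mask = d <= r_cl
        new_rho[mask] = np.maximum(new_rho[mask], prof[mask])
        in_cloud |= mask
    # Reduce the diffuse density outside the clouds to keep the total
    # mass fixed (a full treatment must weight each cell by its volume
    # 2*pi*R*dR*dz; this is omitted here for brevity).
    excess = new_rho[in_cloud].sum() - rho[in_cloud].sum()
    new_rho[~in_cloud] *= 1.0 - excess / new_rho[~in_cloud].sum()
    return new_rho
\end{verbatim}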
The clouds are assumed to have a primordial chemical composition (i.e. no metals) and their temperature is set to 10$^3$ K. Since this is also the minimum temperature allowed for the gas (see Paper I and Paper II), the clouds are not in pressure equilibrium with the surrounding medium. In Sect.~\ref{results_IBRS_dyna} we offer arguments showing that this assumption is not unrealistic, and we briefly describe models in which it is relaxed and the clouds are put in pressure equilibrium with the surrounding medium. \section{Results of models with fixed clouds} \label{results_fix} \subsection{Model IBSR} \label{results_IBRS} \subsubsection{Dynamical evolution} \label{results_IBRS_dyna} As a prototypical model with a fixed complex of clouds, we use a setup similar to the model SR2 analyzed in Paper I. It therefore has the same SF history adopted therein, namely a long, moderate episode of SF lasting 270 Myr, a quiescent period of 10 Myr, and a more vigorous burst (5 times more intense than the first, long-lasting SF episode) lasting 5 Myr (Aloisi, Tosi \& Greggio \cite{alo99}). We randomly add 25 clouds, according to the procedure described in Sect.~\ref{model_cloud}. The initial gas distribution is shown in Fig.~\ref{setup}. It is worth recalling that, since the central density of the clouds is fixed a priori, each cloud has a different mass, the lightest ones being close to the center. In Fig.~\ref{masses} we plot the logarithmic distribution of the cloud masses. As we can see, the clouds span almost two orders of magnitude in mass, ranging from $\sim$ 10$^{3.5}$ to $\sim$ 10$^{5.5}$ M$_\odot$, with a mean value of 6.26 $\cdot$ 10$^4$ M$_\odot$. \begin{figure}[ht] \begin{center} \includegraphics[width=9cm]{recchifig1.eps} \caption{Density contours for model IBSR at the beginning of the simulation. The density scale (in g cm$^{-3}$) is on the right-hand strip.} \label{setup} \end{center} \end{figure} \begin{figure}[ht] \begin{center} \includegraphics[width=9cm]{recchifig2.eps} \caption{Histogram of the logarithmic distribution of the cloud masses (in M$_\odot$) for the model IBSR. The dashed line represents the mean value (6.26 $\cdot$ 10$^4$ M$_\odot$).} \label{masses} \end{center} \end{figure} The evolution of this model during the first $\sim$ 120 Myr is shown in Fig.~\ref{ibsr}. This model is able to develop a large-scale outflow on a timescale of the order of $\sim$ 75 Myr, only slightly delayed compared to the diffuse model SR2 (for comparison, see figs. 1 and 2 of Paper I). This model shows, however, a much more distorted density structure and much larger eddies and regions of thermal instability. As we shall see in more detail below, the clouds are destroyed on a relatively short timescale, but they nevertheless leave an imprint on the development of the outflow, which is strongly influenced by the patchiness of the medium in the central region of the galaxy. \begin{figure*}[ht] \begin{center} \vspace{-3.5cm} \includegraphics[width=\textwidth]{recchifig3.eps} \caption{Density contours and velocity fields for model IBSR at four different epochs (evolutionary times are labeled in the box on top of each panel). The logarithmic density scale (in g cm$^{-3}$) is given in the strip on top of the figure. In order to avoid confusion, velocities with values lower than 1/10 of the maximum value (indicated for each panel in the upper right box) are not drawn.
The same convention applies to Fig.~\ref{t_rho}.} \label{ibsr} \end{center} \end{figure*} To quantify this effect, we calculate the total thermal energy of model IBSR during the first $\sim$ 100 Myr and compare it with the value found for the model SR2 in Paper I. The thermal energy is calculated inside the region $R\leq 1$ kpc, $|z|\leq 730$ pc, which we have called the `galactic region' in RMD and which nearly coincides with the region where the stars are distributed. This comparison is shown in Fig.~\ref{eth}. As we can see, the total thermal energy budget is clearly affected by the presence of the clouds: $\sim$ 20\% less thermal energy is deposited into the system, i.e. the radiative losses increase by $\sim$ 20\%. Incidentally, we notice that this value of the total thermal energy (corresponding to the explosion energy of just a few SNe) is affected more by our assumption of a low thermalization efficiency of SNeII than by the radiative cooling of the superbubble. An extensive discussion of this debated parameter can be found in RMD and Paper I. In Fig.~\ref{radii} we also show the average radius of the superbubble for the two above-mentioned models. At the beginning, the cloudy model IBSR allows a slightly faster expansion of the superbubble. This is due to the fact that, in order to reproduce the same total mass, in a cloudy model the density of the diffuse medium has to be reduced. Moreover, the presence of clouds strongly distorts the shape of the supershell, and the highly pressurized gas inside the cavity can more easily find regions of lower pressure, where it can pierce the shell and break out. This creates the tongues visible in Fig.~\ref{ibsr}. As time goes by, the larger radiative losses experienced by the model IBSR slow down the expansion of the superbubble, and at $\sim$ 50 Myr the average superbubble radius of the diffuse model overtakes that of the cloudy one. \begin{figure}[ht] \begin{center} \includegraphics[width=9.5cm]{recchifig4.eps} \caption{Thermal energy (in units of 10$^{52}$ erg) for the model IBSR (solid line) and for the reference diffuse model SR2 (Paper I) (dashed line). } \label{eth} \end{center} \end{figure} \begin{figure}[ht] \begin{center} \includegraphics[width=9.5cm]{recchifig5.eps} \caption{Average superbubble radius (in kpc) for the model IBSR (solid line) and for the reference diffuse model SR2 (dashed line). } \label{radii} \end{center} \end{figure} We can better analyze the influence of clouds on the early development of galactic winds by zooming in on the central region of the computational grid during the first tens of Myr. A plot of the density profiles, velocity fields and temperature profiles of the model IBSR in the first $\sim$ 20 Myr is shown in Fig.~\ref{t_rho}. We first notice in this plot that the clouds tend to expand, even if they are not yet engulfed by the superbubble. This is because they are not put in pressure equilibrium with the surrounding ISM; their lifetime is therefore relatively short (a few tens of Myr), irrespective of the presence of a surrounding HIM. We have also run models in which the clouds are put in pressure equilibrium with the surrounding ISM. This has been obtained by simply reducing the temperature of the clouds to the value which compensates for the pressure of the ISM at the border of the cloud. These models show the same behavior as the other models once the clouds are engulfed by the superbubble cavity, namely the clouds are quickly evaporated.
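For definiteness, and assuming an ideal gas (our reading of the prescription above, with $n$ the number density), the pressure-equilibrium condition imposed in these test models reads
\[
n_{\rm cl}\,T_{\rm cl}=n_{\rm ISM}\,T_{\rm ISM}
\quad\Longrightarrow\quad
T_{\rm cl}=\frac{n_{\rm ISM}}{n_{\rm cl}}\,T_{\rm ISM}
\]
at the border of the cloud; since $n_{\rm cl}\gg n_{\rm ISM}$, this forces cloud temperatures well below the standard floor of 10$^3$ K.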
The clouds outside the superbubble show, of course, a different behavior, being more stable than the previously described ones, but this does not affect the global evolution of the model. Moreover, growing evidence is accumulating, both observationally (Ballesteros-Paredes, Hartmann \& V\'azquez-Semadeni \cite{balle99}; Hartmann, Ballesteros-Paredes \& Bergin \cite{hart01}; Hartmann \cite{hart03}) and theoretically (Elmegreen \cite{elme00}; Heitsch et al. \cite{hei06}), that molecular clouds are transient structures rather than well-defined objects in quasi-equilibrium states. They must therefore have a relatively short lifetime, as do our simulated clouds. Given the similarities between the behavior of equilibrium and non-equilibrium clouds, and given the more complex and slower computation required by equilibrium clouds, from now on we focus only on models in which the clouds are not in pressure equilibrium. The clouds create a very patchy temperature distribution in the first 10--12 Myr, but the evaporation process continues up to the moment (after $\sim$ 20 Myr) when the temperature inside the superbubble is almost uniform. We notice once more a supershell which, owing to the interaction with the clouds, strongly deviates from spherical geometry, allowing larger shears and eddies (and consequently a reduced thermal energy of the bubble), but also tongues and fingers through which it is easier for the gas to leak out of the superbubble. This is the reason why, after $\sim$ 100 Myr, some outflowing gas is already $\sim$ 3 kpc above the galactic disk (third panel of Fig.~\ref{ibsr}), although the average superbubble radius is at this stage smaller than for the diffuse model SR2 (Fig.~\ref{radii}). \begin{figure}[ht] \begin{center} \includegraphics[width=12.5cm]{recchifig6.eps} \caption{ Density contours and velocity fields (left panels) and temperature contours (right panels) for the central region of model IBSR at four different epochs (evolutionary times are labeled in the box on top of each right panel). Logarithmic scales are given on top of each column of panels. } \label{t_rho} \end{center} \end{figure} \subsubsection{Chemical evolution} \label{results_IBRS_chem} \begin{figure}[ht] \includegraphics[width=9.5cm]{recchifig7.eps} \caption{ Evolution of 12 + log (O/H) (top panel), log (C/O) (second panel), log (N/O) (third panel) and [O/Fe] (bottom panel) for a prototypical model with a fixed initial cloud complex (IBSR model) (solid line) and a prototypical model with continuous creation of clouds (ICSV model) (dashed line). These models are compared with a model of similar mass but smooth ISM distribution (dotted line). The superimposed shaded areas indicate the observed values found in the literature (if available), with relative error-bars.} \label{cno1} \end{figure} From a chemical point of view, the expected effect of the clouds is to dilute the hot metal-rich gas through the evaporation of the (metal-poor) clouds, allowing a reduction of the metallicity without altering the abundance ratios (K\"oppen \& Hensler \cite{kh05}). However, the reduced thermal energy and, consequently, the reduced escape fraction of metals from the galactic region (Sect.~\ref{results_IBRS_dyna}) should produce an {\it increase} of the metallicity of the galactic ISM. In this case, the abundance ratios are affected if the ejection efficiencies differ among the chemical species (here we simply define the ejection efficiency as the fraction of metals outside the galactic region compared to the total amount which has been synthesized).
To disentangle these two competing effects, we analyze the differences in the chemical evolution between diffuse and cloudy models. The comparison of 12+log(O/H), log(C/O), log(N/O) and [O/Fe] of the warm ionized phase is presented in Fig.~\ref{cno1}. In this plot we also show the evolution of a prototypical model with continuous creation of clouds (model ICSV, see Sect.~\ref{results_ICSV_chem}). The evolution of $\alpha$-elements is not significantly altered by the presence of clouds (the difference being always around $\sim$ 0.1 dex), but we notice non-negligible differences (of the order of 0.2 dex) in the log(N/O) abundance ratio. Indeed the diffuse model, due to the reduced radiative losses, attains a larger fraction of metals with temperatures above 2 $\cdot$ 10$^4$ K, which are therefore excluded from this plot. Nitrogen is mostly produced during the thermal pulsing phase by AGB stars of masses $\sim$ 4 -- 7 M$_\odot$, therefore with a delay compared to the prompt production of oxygen. Soon after its production, nitrogen is also more likely than oxygen to be located inside a hot cavity (carved by SNe). Moreover, as demonstrated in RMD, the ejection efficiency of nitrogen can be larger than that of the $\alpha$-elements, again favoring the decrease of N/O in the diffuse model, for which the development of a large-scale outflow occurs earlier. Incidentally, we notice that the assumption of a cloudy medium worsens the agreement between the predicted log (N/O) and the observations. However, as we have pointed out in Paper I, the Renzini \& Voli (\cite{rv81}) yields tend to overestimate the nitrogen production, and only the assumption of the Meynet \& Maeder (\cite{mm02}) set of yields can reconcile the predictions of the models with the observed abundance ratios in IZw18. We stress once again that the main goal of this paper is a study of the effect of a cloud complex on the dynamical and chemical evolution of a gas-rich dwarf galaxy rather than an attempt to exactly reproduce the chemical features of specific objects. \subsection{Model IBSV} \label{results_IBRV} \subsubsection{Dynamical results} \label{results_IBSV_dyna} The model IBSV differs from the model IBSR only in the assumed IMS yields (van den Hoek \& Groenewegen \cite{vg97} instead of Renzini \& Voli \cite{rv81}; see Table~\ref{tab_models}). Dynamically, the only difference is therefore a variation of the cooling rates, due to our assumed metallicity-dependent cooling function. It is interesting to quantify this effect through a direct comparison of models differing solely in the adopted nucleosynthetic yields. This comparison is shown in Fig.~\ref{snap_comp} at two evolutionary times: 50 Myr (lower panels) and 100 Myr (upper panels). Overall, the agreement between the dynamics of these two models is very good. However, the extremely non-linear character of superbubble evolution can be noticed in this plot: small differences in the energy budget of the model (due to different cooling rates) can produce significant changes in the dynamics. In particular, in model IBSR after 100 Myr some gas is already flowing out of the galaxy through a narrow nozzle, whereas in model IBSV the occurrence of a large-scale outflow is slightly delayed. At this point, the total mass of gas within the galaxy is $\sim$ 9\% larger in model IBSV, a small but non-negligible difference. \subsubsection{Chemical results} \label{results_IBSV_chem} The comparison of log(C/O) and log(N/O) in models IBSR and IBSV is shown in Fig.~\ref{cn_rv_vg}.
In this plot we do not show the evolution of oxygen because its production in IMS is negligible; since the prescriptions for the yields from massive stars are the same (Woosley \& Weaver \cite{ww95}), the two models show no significant differences in oxygen. The yields of van den Hoek \& Groenewegen (\cite{vg97}) produce substantially less carbon and nitrogen than the Renzini \& Voli (\cite{rv81}) yields (Chiappini, Romano \& Matteucci \cite{crm03}). The difference is particularly significant for N, whose production is approximately halved, resulting in a log(N/O) $\sim$ 0.3 dex lower than in the model IBSR. The same difference has been produced by diffuse models with different IMS yields (e.g. Recchi et al. \cite{rec02}; Paper I), indicating that the different dynamics of the cloudy model does not significantly affect this behavior. \begin{figure*}[ht] \begin{center} \hspace{-2.6cm}\includegraphics[width=16cm]{recchifig8.eps} \caption{ Density contours for model IBSR (left panels) and model IBSV (right panels) at two evolutionary times: 50 Myr (lower panels) and 100 Myr (upper panels). The density scale (in g cm$^{-3}$) is on the right-hand strip. } \label{snap_comp} \end{center} \end{figure*} \subsection{Varying IMF: model IBAV} \label{results_flatter} As shown in Table~\ref{tab_models}, a model similar to IBSV but with a flatter IMF is also considered. This model has the same SF history considered so far (i.e. the one derived from the work of Aloisi et al. \cite{alo99}), but the energy injection rate is much larger. To be more precise, the total energy release is $\sim$ 2.5 times larger than in IBSV. Since the gas binding energy remains the same, in spite of the larger radiative losses due to the cloud--HIM interactions, this energy release is enough to unbind all the gas initially present in the galactic region (an ellipsoid of dimensions $\sim$ 1000 $\times$ 730 pc, see RMD) in $\sim$ 250 Myr. This complete blow-away does not happen in IBSV, where a large-scale outflow occurs in the polar direction but most of the gas close to the disk remains in the galactic region. \begin{figure}[ht] \includegraphics[width=9.5cm]{recchifig9.eps} \caption{ Evolution of log (C/O) (upper panel) and log (N/O) (lower panel) for models IBSR (solid line) and IBSV (dotted line).} \label{cn_rv_vg} \end{figure} \section{Results of models with continuously created clouds.} \label{results_inf} As described in Sect.~\ref{model_cloud}, in this set of models we produce a cloud every $\Delta t$ yr (typical value 10$^6$ yr), with a mass equal to the total amount of gas turned into stars in the same interval of time. In the framework of simple closed-box models of chemical evolution, this case (infall rate equal to the SF rate) is called {\it extreme infall} (Larson \cite{lar72}) and leads to the simple expression for the metallicity $Z = y_Z [1 - e^{-(\mu^{-1} - 1)}]$, where $\mu$ is the gas mass fraction and $y_Z$ is the total yield (i.e. the ratio between the total mass of newly formed metals and the amount of mass locked up in low-mass stars and remnants). The relaxation of the instantaneous recycling approximation and the inclusion of dynamical effects (winds and the mixing and cooling of metals) change this picture, but K\"oppen \& Edmunds (\cite{ke99}) demonstrated that the ratio between the infall rate and the SF rate is the determining factor in the chemical evolution of galaxies.
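For orientation (a standard comparison with the closed-box solution $Z = y_Z \ln(1/\mu)$, not a result of the papers quoted above): in the extreme-infall case the metallicity saturates at the yield, $Z \to y_Z$ as $\mu \to 0$, whereas the closed-box metallicity grows without bound. For instance, at a gas fraction $\mu = 0.5$,
\[
Z_{\rm inf} = y_Z\left[1-e^{-(2-1)}\right]\simeq 0.63\,y_Z\,, \qquad
Z_{\rm box} = y_Z \ln 2 \simeq 0.69\,y_Z\,,
\]
so the dilution by unpolluted infalling gas keeps the metallicity below the closed-box value at the same gas fraction.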
The clouds are given an infall velocity of 10 km s$^{-1}$ along the polar direction, their location in the computational grid is randomly chosen and their profile is again $\rho_{\rm cl} \propto R_{\rm cl}^{-1.7}$; in this case, however, the central density is constrained by the total mass of the cloud and by its location, rather than being the same for all clouds. \subsection{Model ICSV} \label{results_ICSV} \subsubsection{Dynamical results} \label{results_ICSV_dyna} As a prototype of this group of models, we use a setup similar to IBSV, the only difference being the mechanism of cloud formation. Given the assumed SF history, during the first episode the clouds have a mass of 6 $\cdot$ 10$^3$ M$_\odot$, which increases to 3 $\cdot$ 10$^4$ M$_\odot$ during the last burst. Snapshots of the evolution of this model in the first $\sim$ 55 Myr are shown in Fig.~\ref{9s}. In this figure the shocks created by the clouds in their descent towards the galactic disk are quite evident, in particular in the bottom row of panels. A bow shock is created around each cloud, and a reverse shock is generated downstream, leaving an underdense region behind the cloud. The structure is also highly Kelvin-Helmholtz unstable. The timescale for the growth of Kelvin-Helmholtz instabilities is approximately \begin{equation} t_{\rm K-H} = {q^{0.5} \over {k v_{\rm inf}}}, \end{equation} \noindent (Chandrasekhar \cite{cha61}), where $q$ is the ratio between the cloud and the intercloud densities, $v_{\rm inf}$ is the infall velocity of the cloud (relative to the local ISM) and $k$ is the wavenumber of the unstable mode. The most unstable modes are the ones with $k \sim r_{\rm c}^{-1}$ (where $r_{\rm c}$ is the radius of the cloud), leading to a $t_{\rm K-H}$ between 10 and 20 Myr with our parameters (depending on the size of the clouds, which is not constant; see the numerical estimate below). This is therefore also the timescale for the fragmentation of the cloud and its mixing with the local ISM. At later times, a non-negligible probability exists, given our simplified assumptions, that a cloud is created directly inside the expanding superbubble. An example is visible in the upper right panel of Fig.~\ref{9s} (at (R, z) $\sim$ (20, 200) pc). In this case, given the much larger density ratio $q$ (between 10$^4$ and 10$^5$), the Kelvin-Helmholtz timescale becomes longer than the timespan covered by our simulations (a few hundred Myr). The cloud is instead ablated by the flow of gas pushed by the exploding SNe (creating mass loading) and evaporated by the high temperature of the cavity (a few 10$^6$ K) on a timescale of the order of a few tens of Myr. Occasionally, clouds are created close enough in space and time that they mutually interact. In this case, the clouds form coherent structures like the ones described by Poludnenko, Frank \& Blackman (\cite{polu02}) before being evaporated. \begin{figure*}[ht] \begin{center} \includegraphics[width=\textwidth]{recchifig10.eps} \caption{ Density contours for the warm gas for model ICSV at 9 evolutionary times (labeled in Myr at the top right corner of each panel). The density scale (in g cm$^{-3}$) is on the right-hand strip. } \label{9s} \end{center} \end{figure*} Due to the combined effect of thermal evaporation and bow shocks, the thermal energy budget is smaller still (by $\sim$ 15\%) than for model IBSR (shown in Fig.~\ref{eth}).
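Returning to the Kelvin-Helmholtz timescale quoted above, a rough plausibility check can be made with illustrative values that are not stated explicitly in the text: a density contrast $q \approx 100$ (as appropriate for a $\sim$ 100 cm$^{-3}$ cloud in a $\sim$ 1 cm$^{-3}$ ambient medium) and a cloud radius $r_{\rm c} \approx 15$ pc give
\[
t_{\rm K-H} \simeq \frac{\sqrt{q}\, r_{\rm c}}{v_{\rm inf}}
\simeq \frac{10 \times 15\ {\rm pc}}{10\ {\rm km\,s^{-1}}}
\simeq \frac{150\ {\rm pc}}{10.2\ {\rm pc\,Myr^{-1}}}
\simeq 15\ {\rm Myr},
\]
consistent with the 10 -- 20 Myr interval quoted for $r_{\rm c}$ in the range 10 -- 20 pc.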
Owing to the larger radiative losses and to the ram pressure of the infalling clouds, a galactic wind develops after $\sim$ 130 Myr, delayed compared to model IBSR. However, as pointed out in Sect.~\ref{results_IBRS_dyna}, the infall of clouds can help in structuring and fingering the supershell, creating escape routes for the hot gas. Indeed, the ram pressure of the infalling clouds is $p_{\rm ram} = \rho_{\rm ISM} \cdot v_{\rm inf}^2$ $\sim$ 10$^{-11}$ erg cm$^{-3}$, of the order of the ram pressure of the expanding supershell and larger than the thermal pressure of the hot cavity. Moreover, it is worth pointing out that, for this set of models, the total gaseous mass is kept constant by the assumption that the cloud creation rate balances the SF rate (see Sect.~\ref{model_cloud}), at variance with model IBSR, in which a fraction of the gas (at a rate of 6 $\cdot$ 10$^{-3}$ M$_\odot$ yr$^{-1}$, see Paper I) is continuously turned into stars. \subsubsection{Chemical results} \label{results_ICSV_chem} In Paper II (sect. 4.7) we analyzed the effect of the infall (along the polar direction) of a single very large and very massive cloud. In that case, the development of an outflow is completely hampered and all the metals freshly produced by the ongoing SF are trapped inside the galactic region, preventing the development of differential outflows, a natural outcome of this kind of simulation. As we have seen, in model ICSV this ``cap'' effect is less significant: it helps reduce the total thermal energy available to drive a large-scale outflow, but the clouds are dissolved on a timescale of the order of a few tens of Myr. Moreover, as seen in Sect.~\ref{results_ICSV_dyna}, they can pierce the supershell, creating funnels for the free flow of the hot, high-pressure gas. They can therefore (slightly) delay the development of an outflow, but they cannot prevent it. Consequently, the chemical evolution of this model is strongly affected by the dilution effect of the clouds. In particular, the process of cloud dissolution described in Sect.~\ref{results_ICSV_dyna} continuously mixes the metals with pristine gas. In our simulations, this effect turns out to be much larger than the ``cap'' effect of the infalling clouds. The chemical evolution of model ICSV is shown in Fig.~\ref{cno1}. The final oxygen abundance is $\sim$ 0.3 dex smaller than that attained by model IBSR. Even more important is the fact that, after $\sim$ 90 Myr, the oxygen abundance mildly but steadily decreases as a function of time. This is due to the fact that the selective loss of metals has not been suppressed and that the continuous creation (and subsequent disruption) of clouds mixes the ISM with unpolluted gas. The effect on the N/O abundance ratio is also very significant (a difference of $\sim$ 0.6 -- 0.7 dex), but we stress once again that in this case the main reason for the difference is the choice of IMS yields. As we can see from Fig.~\ref{cn_rv_vg}, the final log(N/O) of model IBSV is $\sim$ 0.3 -- 0.4 dex larger than that of model ICSV. We also tested a model with a larger total mass of gas at the beginning of the simulation. The setup of this model is similar to that of model SV3 of Paper I (i.e. a total initial mass of 3 $\times$ 10$^7$ M$_\odot$ instead of the standard value of 1.7 $\times$ 10$^7$ M$_\odot$), but we apply to it the same procedure of continuous creation of clouds analyzed for model ICSV.
The slightly lower density contrast parameter $q$ does not significantly affect the overall cloud-ISM interaction process and the dissolution timescale of the clumps but, due to the larger ISM pressure, the development of a large-scale outflow is largely delayed: it occurs only $\sim$ 250 Myr after the beginning of the SF process. \subsection{Model ICKV} \label{results_ICKV} \subsubsection{Dynamical results} \label{results_ICKV_dyna} This model has the same setup and SF history as model ICSV, the only difference being a steeper (x=1.7) IMF. Consequently, the energy return rate is much smaller than in the above-considered model (by a factor of $\sim$ 2) and, since the binding energy of the gas is the same, the reduced power of the burst has deep consequences for the development of a large-scale outflow. In this model, a break-out of the superbubble with a consequent outflow of gas happens at around $\sim$ 180 Myr, but its intensity is very mild and the continuous infall of clouds is sufficient to suppress its further development and to close the funnel. The final structure (after $\sim$ 300 Myr) is an elongated ellipsoid of $\sim$ 700 $\times$ 220 pc, with just some traces of gas which have managed to leak out and flow freely, mainly along the polar direction. At later times no SF occurs anymore, therefore the process of cloud formation is also suppressed. Although the input of energy is continuous in these models (due to the contribution of SNeIa), the produced energy is not enough to break out again, therefore the supershell tends to recede towards the center of the galaxy (see Recchi \& Hensler \cite{rh06}). Only $\sim$ 3\% of the gas produced during the two episodes of SF leaves the galactic region by the end of the simulation. \subsubsection{Chemical results} \label{results_ICKV_chem} The chemical evolution is characterized by a strong bias towards low- and intermediate-mass stars, therefore the production of $\alpha$-elements is strongly reduced. The oxygen abundance smoothly increases during the first $\sim$ 180 Myr (approximately up to the break-out), then it stays almost constant. As stressed in Sect.~\ref{results_ICSV_chem}, this is mainly due to the dilution effect of the clouds (as we have seen, the differential loss of metals is very limited in this model). The final abundance is 12 + log (O/H) $\simeq$ 6.4, $\sim$ 0.3 dex less than in model ICSV. Although in this model the number of stars in the interval 4 -- 7 M$_\odot$ (i.e. the main producers of primary nitrogen) is also reduced, the final log (N/O) is much larger ($\sim$ 0.4 -- 0.5 dex) than in model ICSV. Finally, since both massive and low-mass stars contribute to the final carbon abundance, the final log (C/O) does not deviate much from the value found in model ICSV, being only $\sim$ 0.1 dex larger. \subsection{Model NCSM} \label{results_NCSM} Model NCSM has an initial setup aimed at reproducing the gross characteristics of NGC1569. For this model we assume, according to the work of Angeretti et al. (\cite{ang05}), three episodes of SF: the most recent one occurred between 37 and 13 Myr ago, at a rate of 0.13 M$_\odot$ yr$^{-1}$; an intermediate episode commenced 150 Myr ago and finished 40 Myr ago, at a rate of 0.04 M$_\odot$ yr$^{-1}$; and an older episode of SF ended 300 Myr ago (therefore implying 150 Myr of inactivity between this episode and the intermediate one) and commenced 600 Myr ago, at a rate of 0.05 M$_\odot$ yr$^{-1}$.
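For orientation, these rates and durations imply a total astrated mass of roughly
\[
M_* \simeq \left(0.13 \times 24 + 0.04 \times 110 + 0.05 \times 300\right)\times 10^{6}\ {\rm M}_\odot \simeq 2.2\times 10^{7}\ {\rm M}_\odot\,,
\]
i.e. about one tenth of the total mass inside the galaxy quoted below; this back-of-the-envelope figure is our own estimate and is not a number taken from Angeretti et al. (\cite{ang05}).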
The setup is the same as for the model NGC -- 5 described in Paper II, namely the total mass inside the galaxy is 1.8 $\times$ 10$^8$ M$_\odot$, but we add continuously created clouds. At variance with the models ICSV and ICKV, the clouds are created every 5 Myr, therefore their masses are between 2 and 6.5 $\times$ 10$^5$ M$_\odot$, not far from the values actually observed in the complex of H\,{\sc i}~ clouds spiraling around NGC1569 (M\"uhle et al. \cite{mue05}) and significantly larger than the ones considered in the previous models with continuous creation of clouds. Moreover, following the assumption made in Paper II about the nucleosynthetic prescriptions, in this case we also take the yields (for both massive stars and IMS) from Meynet \& Maeder (\cite{mm02}). \subsubsection{Dynamical results} \label{results_NCSM_dyna} \begin{figure*}[ht] \begin{center} \includegraphics[width=\textwidth]{recchifig11.eps} \caption{ Same as Fig.~\ref{9s} but for model NCSM. For reference, the density contours of model NGC -- 5 (presented in Paper II) are shown in the upper right panel.} \label{ncsm} \end{center} \end{figure*} The dynamical evolution of this model is affected by the larger masses of the clouds and by their lower creation rate. The formation of clouds close enough in space and time to interact mutually is therefore less frequent. The evolution of the model in the first $\sim$ 100 Myr is shown in Fig.~\ref{ncsm}. The impact of these clouds on the development of large-scale outflows is stronger than in the above-presented cases. We can clearly see the collision of a cloud with the expanding supershell in the lower right panel of this figure. These massive clouds also lead to a stronger evaporation rate, therefore they significantly affect the energy budget of the model. The total thermal energy is $\sim$ 35 -- 40\% smaller than the one attained by model NGC -- 5 (depending on the evolutionary time). Again, as we noticed in Sect.~\ref{results_ICSV_dyna}, in spite of the reduced superbubble power, the supershell-cloud interaction can create holes in the supershell through which highly pressurized gas can escape. This is visible for instance in the central right and in the upper central panels of Fig.~\ref{ncsm}. At $\sim$ 100 Myr the size of the superbubble is only slightly smaller than the one attained at the same evolutionary time by model NGC -- 5. This demonstrates once again that the superbubble luminosity is just one factor, in many cases not the leading one, in determining the development and the shape of large-scale outflows; the density distribution and the interaction with clouds are also very important factors. \subsubsection{Chemical results} \label{results_NCSM_chem} Similarly to what is seen in Sect.~\ref{results_ICSV_chem}, the dilution effect of the clouds overcomes the ``cap'' effect associated with the delayed development of an outflow. Therefore, model NCSM shows a lower metallicity compared to model NGC -- 5. In particular the oxygen abundance, after an initial phase of $\sim$ 250 Myr in which it grows steadily, remains in the range 12 + log(O/H) $\sim$ 7.6 -- 7.7, therefore 0.3 -- 0.4 dex below the value attained by model NGC -- 5. Similarly to the diffuse model, the last intense burst of SF has some effect on the chemical evolution of this model, resulting in an increase of $\sim$ 0.1 -- 0.2 dex. The log(N/O) is also reduced, by $\sim$ 0.2 -- 0.3 dex, compared to the corresponding diffuse model.
Since we noticed in Paper II that the results of model NGC -- 5 matched well the observed chemical composition of NGC1569 (taken from Kobulnicky \& Skillman \cite{ks97}), we can point out that the inclusion of a cloud complex {\it worsens} the agreement between model results and observations. In order to match the observed chemical composition, one should therefore reduce the total initial mass, diminishing the gas fraction and thereby increasing the metallicity. Playing with this parameter is allowed by the present uncertainties about the total mass of NGC1569 (Stil \& Israel \cite{si02}; M\"uhle et al. \cite{mue03}) but, as already pointed out in the Introduction, our main focus is not the quest for the best setup able to reproduce the chemical composition of specific objects, but the study of the effect of a cloud complex and of the differences with respect to a model in which the gaseous distribution is smooth. \section{Discussion and conclusions} \label{discussion} In this paper we have computed the chemical and dynamical evolution of model galaxies with structural parameters similar to IZw18 and NGC1569, but in which a complex of clouds has been added, either by perturbing the initial gaseous distribution or by continuously creating clouds, at a rate equal to the SF rate and with an infall velocity of 10 km s$^{-1}$ along the polar direction. The main focus of our work has been the comparison of these models with those presented in previous publications, in which similar setups with a smooth distribution of gas were considered. We have seen that the clouds are subject to a variety of disruptive phenomena: evaporation (when embedded in a hot medium), formation of shocks, development of instabilities (in particular the Kelvin-Helmholtz instability) and expansion due to the larger pressure compared to the surrounding interstellar medium. The average lifetime of the clouds is therefore relatively short, depending on the cloud size (which is not constant in our simulations) but being of the order of a few tens of Myr. In spite of their transient nature, the clouds leave a significant imprint on the dynamical and chemical evolution of dwarf galaxies. The clouds, when they evaporate inside the superbubble, produce mass loading, increase the mean density of the cavity and, therefore, enhance the radiative losses (which are proportional to the square of the density). This results in a significant decrease of the total thermal energy (of the order of $\sim$ 20 -- 40\% compared to the diffuse models, depending on the assumptions), and therefore less energy is available to drive the development of a large-scale outflow. On the other hand, the relative motion of supershell and clouds, in particular when the infall motion of the clouds is considered, can structure and pierce the expanding supershell, creating holes and fingers in it. These holes destroy the spherical symmetry initially present and favor the escape of the highly pressurized gas contained in the cavity. Therefore, in spite of the reduced thermal energy budget, the creation of large-scale outflows is not suppressed but, in most of the explored cases, only slightly delayed. Complex structures and fingers are indeed relatively common features in galaxies showing large-scale outflows, like NGC1800 (Hunter \cite{hun96}), NGC4214 (MacKenty et al. \cite{mack00}) or NGC1705 (Heckman et al. \cite{hek01}).
The pressure inside the cavity is reduced compared to diffuse models; the total amount of ejected pristine gas is therefore in any case very small (smaller than in the models with a smooth gas distribution), and the direction-averaged size of the supershell turns out to be smaller than in diffuse models. But the piercing of the supershell can lead to an ejection efficiency of freshly produced metals as high as that attained by diffuse models. This has, of course, important consequences for the chemical evolution of these objects. Since the differential winds are not suppressed, the diminished thermal energy of these models does not imply an increase of the metals inside the galactic region. On the other hand, the dilution effect of clouds plays a dominant role in determining the final metallicity of our model galaxies. Since the clouds have a primordial chemical composition, their destruction and mixing with the surrounding medium reduce the total metallicity without altering the abundance ratios. This produces a final metallicity $\sim$ 0.2 -- 0.4 dex smaller than in the corresponding diffuse models. We have examined the effect of a different choice of the IMF slope and of the nucleosynthetic set of yields (in particular for what concerns intermediate-mass stars). Flatter-than-Salpeter IMF slopes lead to an excessive production of energy, able to unbind most of the gas before the end of the simulation. On the other hand, in models with a steeper IMF the development of large-scale outflows is almost completely suppressed. Different sets of intermediate-mass star yields affect in particular the log(N/O) ratio. Renzini \& Voli (\cite{rv81}) yields tend to overestimate the primary production of nitrogen; compared to the results of models implementing the van den Hoek \& Groenewegen (\cite{vg97}) yields, the predicted log(N/O) differs by $\sim$ 0.3 dex. Due to the assumption of a metallicity-dependent cooling function, the dynamics is also affected by the choice of the nucleosynthetic prescriptions. Our main results can be briefly summarized as follows: \begin{itemize} \item The clouds suffer instabilities (in particular Kelvin-Helmholtz), formation of shocks and evaporation; their lifetimes are therefore limited to a few tens of Myr. \item In spite of that, they are able to increase the mean density of the cavity, provoking a reduction of the total thermal energy by $\sim$ 20 -- 40\% compared with a diffuse model. \item The cloud-supershell interaction leads to strong structuring and piercing of the shell (in particular for models with continuous creation of infalling clouds), allowing the venting out of metals in spite of the reduced thermal energy. The development of large-scale outflows is therefore generally delayed, but the ejection efficiency of metals remains unchanged. \item From a chemical point of view, the effect of the clouds is to significantly reduce the total metallicity of the galaxies, without altering the abundance ratios. \end{itemize} \begin{acknowledgements} We warmly thank the referee, Peter Berczik, for his suggestions, which much improved the final version of this paper. S.R. acknowledges generous financial support from the Alexander von Humboldt Foundation and the Deutsche Forschungsgemeinschaft (DFG) under grant HE 1487/28-1. \end{acknowledgements}
\title{\huge An application of Mirror extensions \\} \author{ {\sc Feng Xu}\footnote{Supported in part by NSF.}\\ Department of Mathematics\\ University of California at Riverside\\ Riverside, CA 92521\\ E-mail: {\tt [email protected]}} \begin{document} \date{} \maketitle \begin{abstract} In this paper we apply our previous results on mirror extensions to realize three modular invariants constructed by A. N. Schellekens via holomorphic conformal nets with central charge equal to $24$.\par 2000 MSC: 81R15, 17B69. \end{abstract} \newpage \section{Introduction} Partition functions of chiral rational conformal field theories (RCFT) are modular invariant (cf. \cite{Z}). However, there are examples of ``spurious'' modular invariants which do not correspond to any RCFT (cf. \cite{BE4}, \cite{SY} and \cite{FSS}). It is therefore an interesting question to decide which modular invariants can be realized in RCFT. For many interesting modular invariants this question was raised, for example, in \cite{Sch} and more recently in \cite{EW}. For results on related questions, see \cite{BE3}, \cite{BE4}, \cite{Reh1}, \cite{KL}, \cite{KL2} and \cite{KLPR} for a partial list. \par In this paper we examine the holomorphic modular invariants with central charge 24 constructed by A. N. Schellekens in \cite{Sch}. Besides modular invariance, A. N. Schellekens showed that his modular invariants pass an impressive list of checks based on trace identities, which strongly suggests that they can be realized in chiral RCFT. Some of Schellekens's modular invariants were constructed using level-rank duality. In \cite{Xm} we proved a general theorem on mirror extensions (cf. Th. \ref{mainmirror}) which included modular invariants from level-rank duality (cf. \S\ref{lr2}). It is therefore an interesting question to see if mirror extensions can provide chiral RCFT realizations of some of Schellekens's modular invariants. Our main result in this paper is to show that three of Schellekens's modular invariants can be realized by holomorphic conformal nets (cf. Th. \ref{main}): these nets are constructed by simple current extensions (cf. \S\ref{simpleextension}) of mirror extensions. Our results strongly suggest that there should be Vertex Operator Algebras which realize these modular invariants. We expect our methods to apply to other modular invariants in the literature, especially when level-rank duality plays a role.\par This paper is organized as follows: after a preliminary section on nets, mirror extensions and simple current extensions, we examine three of Schellekens's modular invariants in \cite{Sch}, and obtain realizations of these invariants as simple current extensions of three mirror extensions. We end with two conjectures about holomorphic conformal nets with central charge 24 which are motivated by \cite{FLM} and \cite{Sch}, and we hope that these conjectures will stimulate further research. \section{Preliminaries} \subsection{Preliminaries on sectors} Given an infinite factor $M$, the {\it sectors of $M$} are given by $$\text{Sect}(M) = \text{End}(M)/\text{Inn}(M),$$ namely $\text{Sect}(M)$ is the quotient of the semigroup of the endomorphisms of $M$ modulo the equivalence relation: $\rho,\rho'\in \text{End}(M),\, \rho\thicksim\rho'$ iff there is a unitary $u\in M$ such that $\rho'(x)=u\rho(x)u^*$ for all $x\in M$.
$\text{Sect}(M)$ is a $^*$-semiring (there are an addition, a product and an involution $\rho\rightarrow \bar\rho$) equivalent to the Connes correspondences (bimodules) on $M$ up to unitary equivalence. If ${\rho}$ is an element of $\text{End}(M)$ we shall denote by $[{\rho}]$ its class in $\text{Sect}(M)$. We define $\text{Hom}({\rho},{\rho}')$ between the objects ${\rho},{\rho}'\in {\mathrm {End}}(M)$ by \[ \text{Hom}({\rho},{\rho}')\equiv\{a\in M: a{\rho}(x)={\rho}'(x)a \ \forall x\in M\}. \] We use $\langle \lambda , \mu \rangle$ to denote the dimension of $\text{\rm Hom}(\lambda , \mu )$; it can be $\infty$, but it is finite if $\lambda,\mu$ have finite index. See \cite{J1} for the definition of index in the type $II_1$ case, which initiated the subject, and \cite{PP} for the definition of index in general. Also see \S2.3 of \cite{KLX} for an exposition. $\langle \lambda , \mu \rangle$ depends only on $[\lambda ]$ and $[\mu ]$. Moreover, if $\nu$ has finite index, then $\langle \nu \lambda , \mu \rangle = \langle \lambda , \bar \nu \mu \rangle $ and $\langle \lambda\nu , \mu \rangle = \langle \lambda , \mu \bar \nu \rangle $, which follows from Frobenius duality. $\mu $ is a subsector of $\lambda $ if there is an isometry $v\in M$ such that $\mu(x)= v^* \lambda(x)v, \forall x\in M.$ We will also use the following notation: if $\mu $ is a subsector of $\lambda $, we will write $\mu \prec \lambda $ or $\lambda \succ \mu $. A sector is said to be irreducible if it has only one subsector. \subsection{Local nets} By an interval of the circle we mean an open connected non-empty subset $I$ of $S^1$ such that the interior of its complement $I'$ is not empty. We denote by ${\cal I}$ the family of all intervals of $S^1$. A {\it net} ${\cal A}$ of von Neumann algebras on $S^1$ is a map \[ I\in{\cal I}\to{\cal A}(I)\subset B({\cal H}) \] from ${\cal I}$ to von Neumann algebras on a fixed separable Hilbert space ${\cal H}$ that satisfies: \begin{itemize} \item[{\bf A.}] {\it Isotony}. If $I_{1}\subset I_{2}$ belong to ${\cal I}$, then \begin{equation*} {\cal A}(I_{1})\subset{\cal A}(I_{2}). \end{equation*} \end{itemize} If $E\subset S^1$ is any region, we shall put ${\cal A}(E)\equiv\bigvee_{E\supset I\in{\cal I}}{\cal A}(I)$, with ${\cal A}(E)=\mathbb C$ if $E$ has empty interior (the symbol $\vee$ denotes the generated von Neumann algebra). The net ${\cal A}$ is called {\it local} if it satisfies: \begin{itemize} \item[{\bf B.}] {\it Locality}. If $I_{1},I_{2}\in{\cal I}$ and $I_1\cap I_2=\varnothing$ then \begin{equation*} [{\cal A}(I_{1}),{\cal A}(I_{2})]=\{0\}, \end{equation*} where brackets denote the commutator. \end{itemize} The net ${\cal A}$ is called {\it M\"{o}bius covariant} if in addition it satisfies the following properties {\bf C,D,E,F}: \begin{itemize} \item[{\bf C.}] {\it M\"{o}bius covariance}. There exists a non-trivial strongly continuous unitary representation $U$ of the M\"{o}bius group ${\rm\textsf{M\"ob}}$ (isomorphic to $PSU(1,1)$) on ${\cal H}$ such that \begin{equation*} U(g){\cal A}(I) U(g)^*\ =\ {\cal A}(gI),\quad g\in {\rm\textsf{M\"ob}},\ I\in{\cal I}. \end{equation*} \item[{\bf D.}] {\it Positivity of the energy}. The generator of the one-parameter rotation subgroup of $U$ (conformal Hamiltonian), denoted by $L_0$ in the following, is positive. \item[{\bf E.}] {\it Existence of the vacuum}. There exists a unit $U$-invariant vector $\Omega\in{\cal H}$ (vacuum vector), and $\Omega$ is cyclic for the von Neumann algebra $\bigvee_{I\in{\cal I}}{\cal A}(I)$.
\end{itemize} By the Reeh-Schlieder theorem $\Omega$ is cyclic and separating for every fixed ${\cal A}(I)$. The modular objects associated with $({\cal A}(I),\Omega)$ have a geometric meaning \[ \Delta^{it}_I = U(\Lambda_I(2\pi t)),\qquad J_I = U(r_I)\ . \] Here $\Lambda_I$ is a canonical one-parameter subgroup of ${\rm\textsf{M\"ob}}$ and $U(r_I)$ is an antiunitary acting geometrically on ${\cal A}$ as a reflection $r_I$ on $S^1$. This implies {\em Haag duality}: \[ {\cal A}(I)'={\cal A}(I'),\quad I\in{\cal I}\ , \] where $I'$ is the interior of $S^1\smallsetminus I$. \begin{itemize} \item[{\bf F.}] {\it Irreducibility}. $\bigvee_{I\in{\cal I}}{\cal A}(I)=B({\cal H})$. Indeed ${\cal A}$ is irreducible iff $\Omega$ is the unique $U$-invariant vector (up to scalar multiples). Also, ${\cal A}$ is irreducible iff the local von Neumann algebras ${\cal A}(I)$ are factors. In this case they are either ${\mathbb C}$ or III$_1$-factors with separable predual in Connes' classification of type III factors. \end{itemize} By a {\it conformal net} (or diffeomorphism covariant net) ${\cal A}$ we shall mean a M\"{o}bius covariant net such that the following holds: \begin{itemize} \item[{\bf G.}] {\it Conformal covariance}. There exists a projective unitary representation $U$ of ${\mathrm {Diff}}(S^1)$ on ${\cal H}$ extending the unitary representation of ${\rm\textsf{M\"ob}}$ such that for all $I\in{\cal I}$ we have \begin{gather*} U(\varphi){\cal A}(I) U(\varphi)^*\ =\ {\cal A}(\varphi.I),\quad \varphi\in{\mathrm {Diff}}(S^1), \\ U(\varphi)xU(\varphi)^*\ =\ x,\quad x\in{\cal A}(I),\ \varphi\in{\mathrm {Diff}}(I'), \end{gather*} \end{itemize} where ${\mathrm {Diff}}(S^1)$ denotes the group of smooth, positively oriented diffeomorphisms of $S^1$ and ${\mathrm {Diff}}(I)$ the subgroup of diffeomorphisms $\varphi$ such that $\varphi(z)=z$ for all $z\in I'$. Note that by Haag duality we have $U(\varphi)\in {\cal A}(I), \forall \varphi\in {\mathrm {Diff}} (I).$ Hence the following definition makes sense: \begin{definition}\label{virnet} If ${\cal A}$ is a conformal net, the Virasoro subnet of ${\cal A}$, denoted by ${\mathrm {Vir}}_{\cal A}$, is defined as follows: for each interval $I\in {\cal I}$, ${\mathrm {Vir}}_{\cal A}(I)$ is the von Neumann algebra generated by $U(\varphi)\in {\cal A}(I), \forall \varphi\in {\mathrm {Diff}} (I).$ \end{definition} A (DHR) representation $\pi$ of ${\cal A}$ on a Hilbert space ${\cal H}$ is a map $I\in{\cal I}\mapsto \pi_I$ that associates to each $I$ a normal representation of ${\cal A}(I)$ on $B({\cal H})$ such that \[ \pi_{\widetilde I}\!\restriction\!{\cal A}(I)=\pi_I,\quad I\subset\widetilde I, \quad I,\widetilde I\subset{\cal I}\ . \] $\pi$ is said to be M\"obius (resp. diffeomorphism) covariant if there is a projective unitary representation $U_{\pi}$ of ${\rm\textsf{M\"ob}}$ (resp. ${\mathrm {Diff}}(S^1)$) on ${\cal H}$ such that \[ \pi_{gI}(U(g)xU(g)^*) =U_{\pi}(g)\pi_{I}(x)U_{\pi}(g)^* \] for all $I\in{\cal I}$, $x\in{\cal A}(I)$ and $g\in {\rm\textsf{M\"ob}}$ (resp. $g\in{\mathrm {Diff}}(S^1)$). By definition an irreducible conformal net is in fact an irreducible representation of itself, and we will call this representation the {\it vacuum representation}.\par Let $G$ be a simply connected compact Lie group. By Th. 3.2 of \cite{FG}, the vacuum positive energy representation of the loop group $LG$ (cf. \cite{PS}) at level $k$ gives rise to an irreducible conformal net denoted by {\it ${{\cal A}}_{G_k}$}. By Th.
3.3 of \cite{FG}, every irreducible positive energy representation of the loop group $LG$ at level $k$ gives rise to an irreducible covariant representation of ${{\cal A}}_{G_k}$. \par Given an interval $I$ and a representation $\pi$ of ${\cal A}$, there is an {\em endomorphism of ${\cal A}$ localized in $I$} equivalent to $\pi$; namely ${\rho}$ is a representation of ${\cal A}$ on the vacuum Hilbert space ${\cal H}$, unitarily equivalent to $\pi$, such that ${\rho}_{I'}=\text{id}\restriction{\cal A}(I')$. We now define the statistics. Given the endomorphism ${\rho}$ of ${\cal A}$ localized in $I\in{\cal I}$, choose an equivalent endomorphism ${\rho}_0$ localized in an interval $I_0\in{\cal I}$ with $\bar I_0\cap\bar I =\varnothing$ and let $u$ be a local intertwiner in ${\mathrm {Hom}}({\rho},{\rho}_0)$, namely $u\in {\mathrm {Hom}}({\rho}_{\widetilde I},{\rho}_{0,\widetilde I})$ with $I_0$ following $I$ clockwise inside $\widetilde I$, an interval containing both $I$ and $I_0$. The {\it statistics operator} $\epsilon ({\rho},\rho):= u^*{\rho}(u) = u^*{\rho}_{\widetilde I}(u) $ belongs to ${\mathrm {Hom}}({\rho}^2_{\widetilde I},{\rho}^2_{\widetilde I})$. We will call $\epsilon ({\rho},\rho)$ the positive or right braiding and $\widetilde\epsilon ({\rho},\rho):=\epsilon ({\rho},\rho)^*$ the negative or left braiding. The {\em statistics parameter} $\lambda_{\rho}$ can be defined in general. In particular, assume ${\rho}$ to be localized in $I$ and ${\rho}_I\in\text{End}({\cal A}(I))$ to be irreducible with a conditional expectation $E: {\cal A}(I)\to {\rho}_I({\cal A}(I))$; then \[ \lambda_{\rho}:=E(\epsilon) \] depends only on the sector of ${\rho}$. The {\em statistical dimension} $d_{{\rho}}$ and the {\it univalence} $\omega_{\rho}$ are then defined by \[ d_{{\rho}} = |\lambda_{\rho}|^{-1}\ ,\qquad \omega_{\rho} = \frac{\lambda_{\rho}}{|\lambda_{\rho}|}\ . \] The {\em conformal spin-statistics theorem} (cf. \cite{GL2}) shows that \[ \omega_{\rho} = e^{i 2\pi L_0({\rho})}\ , \] where $L_0({\rho})$ is the conformal Hamiltonian (the generator of the rotation subgroup) in the representation ${\rho}$. \par Let $\{[\lambda], \lambda\in {\cal L} \}$ be the finite set of all equivalence classes of irreducible, covariant, finite-index representations of an irreducible local conformal net ${\cal A}$. We will denote the conjugate of $[\lambda]$ by $[{\bar \lambda}]$ and the identity sector (corresponding to the vacuum representation) by $[1]$ if no confusion arises, and let $N_{\lambda\mu}^\nu = \langle [\lambda][\mu], [\nu]\rangle $. Here $\langle \mu,\nu\rangle$ denotes the dimension of the space of intertwiners from $\mu$ to $\nu$ (denoted by $\text {\rm Hom}(\mu,\nu)$). We will denote by $\{T_e\}$ a basis of isometries in $\text {\rm Hom}(\nu,\lambda\mu)$. The univalence of $\lambda$ and the statistical dimension of $\lambda$ (cf. \S2 of \cite{GL1}) will be denoted by $\omega_{\lambda}$ and $d(\lambda)$ (or $d_{\lambda}$) respectively. The following equation is called the {\it monodromy equation} (cf. \cite{Rehs}):\par \begin{equation}\label{monodromy} \epsilon (\mu, \lambda) \epsilon (\lambda, \mu)T_e= \frac{\omega_\nu}{\omega_\lambda\omega_\mu} T_e \end{equation} where $\epsilon (\mu, \lambda)$ is the unitary braiding operator. \par We make the following definitions for convenience: \begin{definition}\label{localset} Let $\lambda,\mu$ be (not necessarily irreducible) representations of ${\cal A}$.
$H(\lambda,\mu):=\varepsilon(\lambda,\mu)\varepsilon(\mu,\lambda).$ We say that $\lambda$ is local with $\mu$ if $H(\lambda,\mu)=1.$ \end{definition} \begin{definition}\label{localsystem} Let $\Gamma$ be a set of DHR representations of ${\cal A}.$ If $\Gamma$ is an abelian group with multiplication given by composition and $d_\lambda=1, \omega_\lambda=1, \forall \lambda\in \Gamma,$ then $\Gamma$ is called {\it a local system of automorphisms}. \end{definition} The following lemma will be useful to check whether a set is a local system of automorphisms. \begin{lemma}\label{checklocal} (1) Assume that $[\mu]=\sum_{1\leq i\leq n} [\mu_i]$ and $\lambda, \mu_i, i=1,...,n$ are representations of ${\cal A}.$ Then $H(\lambda,\mu)=1$ if and only if $H(\lambda, \mu_i)=1$ for all $1\leq i\leq n$;\par (2) If $H(\lambda,\mu)=1$ and $H(\lambda,\nu)=1$, then $H(\lambda, \mu\nu)=1$;\par (3) If $\lambda_1,...,\lambda_n$ generate a finite abelian group $\Gamma$ under composition, $\omega_{\lambda_i}=1, 1\leq i\leq n,$ and $H (\lambda_i,\lambda_j)=1, 1\leq i,j\leq n, $ then $\Gamma$ is a local system of automorphisms. \end{lemma} \trivlist \item[\hskip \labelsep{\bf Proof\ }] (1) and (2) follow from \cite{Rehren} or Lemma 3.8 of \cite{BE3}. As for (3), we proceed by induction on $n.$ If $n=1,$ then $\varepsilon(\lambda_1,\lambda_1)= \omega_{\lambda_1}=1$ since $\varepsilon(\lambda_1,\lambda_1)$ is a scalar, and it follows that $\omega_{\lambda_1^i}= \varepsilon(\lambda_1,\lambda_1)^{i^2}=1, \forall i\geq 1.$ Assume that (3) has been proved for $n-1$, and let $\mu$ be in the abelian group generated by $\lambda_1,...,\lambda_{n-1}.$ Since $H(\mu, \lambda_n^k)=1$ for any integer $k$ by (2) and the assumption, repeated application of (2) and the monodromy equation (\ref{monodromy}) gives $\omega_{\mu\lambda_n^k} = \omega_{\mu}\omega_{\lambda_n^k}=1$ by the induction hypothesis. This proves (3).\null\hfill\qed\endtrivlist \par Next we recall some definitions from \cite{KLM}. Recall that ${{\cal I}}$ denotes the set of intervals of $S^1$. Let $I_1, I_2\in {{\cal I}}$. We say that $I_1, I_2$ are disjoint if $\bar I_1\cap \bar I_2=\varnothing$, where $\bar I$ is the closure of $I$ in $S^1$. When $I_1, I_2$ are disjoint, $I_1\cup I_2$ is called a 1-disconnected interval in \cite{Xjw}. Denote by ${{\cal I}}_2$ the set of unions of 2 disjoint elements of ${{\cal I}}$. Let ${{\cal A}}$ be an irreducible M\"{o}bius covariant net. For $E=I_1\cup I_2\in{{\cal I}}_2$, let $I_3\cup I_4$ be the interior of the complement of $I_1\cup I_2$ in $S^1$, where $I_3, I_4$ are disjoint intervals. Let $$ {{\cal A}}(E):= {{\cal A}}(I_1)\vee {{\cal A}}(I_2), \quad \hat {{\cal A}}(E):= ({{\cal A}}(I_3)\vee {{\cal A}}(I_4))'. $$ Note that ${{\cal A}}(E) \subset \hat {{\cal A}}(E)$. Recall that a net ${{\cal A}}$ is {\it split} if ${{\cal A}}(I_1)\vee {{\cal A}}(I_2)$ is naturally isomorphic to the tensor product of von Neumann algebras ${{\cal A}}(I_1)\otimes {{\cal A}}(I_2)$ for any disjoint intervals $I_1, I_2\in {{\cal I}}$. ${{\cal A}}$ is {\it strongly additive} if ${{\cal A}}(I_1)\vee {{\cal A}}(I_2)= {{\cal A}}(I)$ where $I_1\cup I_2$ is obtained by removing an interior point from $I$. \begin{definition}\label{rational} \cite{KLM, LX} A M\"{o}bius covariant net ${{\cal A}}$ is said to be completely rational if ${{\cal A}}$ is split, strongly additive, and the index $[\hat {{\cal A}}(E): {{\cal A}}(E)]$ is finite for some $E\in {{\cal I}}_2.$ The value of the index $[\hat {{\cal A}}(E): {{\cal A}}(E)]$ (it is independent of $E$ by Prop.
5 of \cite{KLM}) is denoted by $\mu_{{{\cal A}}}$ and is called the $\mu$-index of ${{\cal A}}$. \end{definition} Note that, by results in \cite{LX}, every irreducible, split, local conformal net with finite $\mu$-index is automatically strongly additive. Also note that if ${\cal A}$ is completely rational, then ${\cal A}$ has only finitely many irreducible covariant representations by \cite{KLM}. \par \begin{definition}\label{holomorphic} A M\"{o}bius covariant net ${\cal A}$ is called holomorphic if ${\cal A}$ is completely rational and $\mu_{\cal A}=1,$ i.e., ${\cal A}$ has only one irreducible representation, which is the vacuum representation. \end{definition} Let ${\cal B}$ be a M\"{o}bius (resp. conformal) net. ${\cal B}$ is called a M\"{o}bius (resp. conformal) extension of ${\cal A}$ if there is a map \[ I\in{\cal I}\to{\cal A}(I)\subset {\cal B}(I) \] that associates to each interval $I\in {\cal I}$ a von Neumann subalgebra ${\cal A}(I)$ of ${\cal B}(I)$, which is isotonic \[ {\cal A}(I_1)\subset {\cal A}(I_2), I_1\subset I_2, \] and M\"{o}bius (resp. diffeomorphism) covariant with respect to the representation $U$, namely \[ U(g) {\cal A}(I) U(g)^*= {\cal A}(g.I) \] for all $g\in {\rm\textsf{M\"ob}}$ (resp. $g\in {\mathrm {Diff}}(S^1)$) and $I\in {\cal I}$. ${\cal A}$ will be called a M\"{o}bius (resp. conformal) subnet of ${\cal B}.$ Note that by Lemma 13 of \cite{L1}, for each $I\in {\cal I}$ there exists a conditional expectation $E_I: {\cal B}(I)\rightarrow {\cal A}(I)$ such that $E_I$ preserves the vector state given by the vacuum of ${\cal B}$. \begin{definition}\label{ext} Let ${\cal A}$ be a M\"{o}bius covariant net. A M\"{o}bius covariant net ${\cal B}$ on a Hilbert space ${\cal H}$ is an extension of ${\cal A}$ if there is a DHR representation $\pi$ of ${\cal A}$ on ${\cal H}$ such that $\pi({\cal A})\subset {\cal B}$ is a M\"{o}bius subnet. The extension is irreducible if $\pi({\cal A}(I))'\cap {\cal B}(I) = {\mathbb C} $ for some (and hence all) intervals $I$, and is of finite index if $\pi({\cal A}(I))\subset {\cal B}(I)$ has finite index for some (and hence all) intervals $I$. The index will be called the index of the inclusion $\pi({\cal A})\subset {\cal B}$ and will be denoted by $[{\cal B}:{\cal A}].$ If $\pi$, as a representation of ${\cal A}$, decomposes as $[\pi]= \sum_\lambda m_\lambda[\lambda]$ where $m_\lambda$ are non-negative integers and $\lambda$ are irreducible DHR representations of ${\cal A}$, we say that $[\pi]= \sum_\lambda m_\lambda[\lambda]$ is the spectrum of the extension. For simplicity we will write $\pi({\cal A})\subset {\cal B}$ simply as ${\cal A}\subset {\cal B}$. \end{definition} \begin{lemma}\label{indexab} Let ${\cal A}$ be completely rational, and let the M\"{o}bius covariant net ${\cal B}$ be an irreducible extension of ${\cal A}$. Then ${\cal A}\subset{\cal B}$ has finite index, ${\cal B}$ is completely rational and $$\mu_{\cal A}= \mu_{\cal B} [{\cal B}:{\cal A}]^2.$$ \end{lemma} \trivlist \item[\hskip \labelsep{\bf Proof\ }] That ${\cal A}\subset{\cal B}$ has finite index follows from Prop. 2.3 of \cite{KL}, and the rest follows from Prop. 24 of \cite{KLM}.
\null\hfill\qed\endtrivlist \begin{lemma}\label{extconformal} Let ${\cal A}$ be a conformal net, and let the M\"{o}bius covariant net ${\cal B}$ be an extension of ${\cal A}$ with index $[{\cal B}:{\cal A}]< \infty.$ Then ${\cal B}$ is a conformal net.\end{lemma} \trivlist \item[\hskip \labelsep{\bf Proof\ }] Denote by $\pi$ the vacuum representation of ${\cal B}.$ Denote by $\bold G$ the universal cover of ${\rm\textsf{M\"ob}}$. By definition $g\in {\bold G}\rightarrow U_\pi(g)$ is a representation of ${\bold G}$ which implements the M\"{o}bius covariance of $\pi\!\restriction\! {\cal A}.$ On the other hand, by \S2 of \cite{AFK} there is a representation $g\in {\bold G}\rightarrow V_\pi(g)$ which implements the M\"{o}bius covariance of $\pi\!\restriction\! {\cal A},$ and $V_\pi(g)\in \bigvee_{I\in {\cal I}} \pi({\mathrm {Vir}}_{\cal A}(I)),$ where ${\mathrm {Vir}}_{\cal A}$ is defined in definition \ref{virnet}. Since by assumption $\pi\!\restriction\! {\cal A}$ has finite index, by Prop. 2.2 of \cite{GL1} we have $U_\pi(g)=V_\pi(g), \forall g\in {\bold G}.$ Hence ${\mathrm {Vir}}_{\cal A}\subset {\cal B}$ verifies the condition in definition 3.1 of \cite{Ca}, and by Prop. 3.7 of \cite{Ca} the lemma is proved. \null\hfill\qed\endtrivlist The following is Th. 4.9 of \cite{LR} (cf. \S2.4 of \cite{KL}), which is also used in \S4.2 of \cite{KL}: \begin{proposition}\label{qlocal} Let ${\cal A}$ be a M\"{o}bius covariant net, ${\rho}$ a DHR representation of ${\cal A}$ localized on a fixed $I_0$ with finite statistics, which contains ${\mathrm {id}}$ with multiplicity one, i.e., there is a (unique up to phase) isometry $w\in{\mathrm {Hom}}({\mathrm {id}},{\rho}).$ Then there is a M\"{o}bius covariant net ${\cal B}$ which is an irreducible extension of ${\cal A}$ if and only if there is an isometry $w_1\in{\mathrm {Hom}}({\rho},{\rho}^2)$ which solves the following equations: \begin{align} w_1^* w & = w_1^* {\rho}(w) \in {\mathbb R_+} \label{a}\\ w_1w_1& = {\rho}(w_1) w_1 \label{b} \\ \epsilon({\rho},{\rho}) w_1 & = w_1 \label{c} \end{align} \end{proposition} \begin{remark}\label{outer} Let ${\cal A}\subset {\cal B}$ be as in Prop.~\ref{qlocal}. If $U$ is a unitary on the vacuum representation space of ${\cal A}$ such that ${\mathrm {Ad}}_U {\cal A}(I)= {\cal A}(I), \forall I,$ then it is easy to check that $({\mathrm {Ad}}_U \rho{\mathrm {Ad}}_U^*, {\mathrm {Ad}}_U (w_1), {\mathrm {Ad}}_U (w))$ verifies the equations in Prop. \ref{qlocal}, and determines a M\"{o}bius covariant net ${\mathrm {Ad}}_U({\cal B})$. The spectrum of ${\cal A}\subset{\mathrm {Ad}}_U({\cal B})$ (cf. definition \ref{ext}) is ${\mathrm {Ad}}_U \rho{\mathrm {Ad}}_U^*$, which may be different from $\rho$, but ${\mathrm {Ad}}_U({\cal B})$ is isomorphic to ${\cal B}$ by definition. \end{remark} \subsection{Induction}\label{ind} Let ${\cal B}$ be a M\"obius covariant net and ${\cal A}$ a subnet. We assume that ${\cal A}$ is strongly additive and ${\cal A}\subset {\cal B}$ has finite index. Fix an interval $I_0\in{\cal I}$ and a canonical endomorphism (cf. \cite{LR}) $\gamma$ associated with ${\cal A}(I_0)\subset{\cal B}(I_0)$. Then we can choose for each $I\subset{\cal I}$ with $I\supset I_0$ a canonical endomorphism $\gamma_{I}$ of ${\cal B}(I)$ into ${\cal A}(I)$ in such a way that $\gamma_{I}\!\restriction\!{\cal B}(I_0)=\gamma_{I_0}$ and $\rho_{I_1}$ is the identity on ${\cal A}(I_1)$ if $I_1\in{\cal I}_0$ is disjoint from $I_0$, where $\rho_{I}\equiv\gamma_{I}\!\restriction\!{\cal A}(I)$.
Given a DHR endomorphism $\lambda$ of ${\cal A}$ localized in $I_0$, the inductions $\alpha_{\lambda},\alpha_{\lambda}^{-}$ of $\lambda$ are the endomorphisms of ${\cal B}(I_0)$ given by \[ \alpha_{\lambda}\equiv \gamma^{-1}\cdot{\mathrm {Ad}}\varepsilon(\lambda,\rho)\cdot\lambda\cdot\gamma \ , \quad \alpha_{\lambda}^{-}\equiv \gamma^{-1}\cdot{\mathrm {Ad}}\tilde{\varepsilon}(\lambda,\rho)\cdot\lambda\cdot\gamma \] where $\varepsilon$ (resp. $\tilde\varepsilon$) denotes the right braiding (resp. left braiding) (cf. Cor. 3.2 of \cite{BE1}). In \cite{Xb} a slightly different endomorphism was introduced, and the relation between the two was given in \S2.1 of \cite{X3m}. Note that ${\mathrm {Hom}}( \alpha_\lambda,\alpha_\mu):=\{ x\in {\cal B}(I_0) \mid x \alpha_\lambda(y)= \alpha_\mu(y)x, \forall y\in {\cal B}(I_0)\} $ and ${\mathrm {Hom}}( \lambda,\mu):=\{ x\in {\cal A}(I_0) \mid x \lambda(y)= \mu(y)x, \forall y\in {\cal A}(I_0)\} .$ The following is Lemma 3.6 of \cite{BE3} and Lemma 3.5 of \cite{BE1}: \begin{lemma}\label{aa'} $${\mathrm {Hom}} (\alpha_\lambda^{}, \alpha_\mu^{-}) = \{ T \in {\cal B}(I_0) \mid \gamma(T) \in {\mathrm {Hom}} (\rho \lambda, \rho\mu) ,\ \varepsilon(\mu,\rho)\varepsilon(\rho,\mu)\gamma(T)=\gamma(T)\}. $$ \end{lemma} As a consequence of Lemma \ref{aa'} we have the following Prop. 3.23 of \cite{BE1} (also cf. the proof of Lemma 3.2 of \cite{Xb}): \begin{lemma}\label{a=a'} $[\alpha_\lambda^{}]=[\alpha_\lambda^{-}]$ iff $\varepsilon(\lambda, \rho) \varepsilon( \rho,\lambda)=1$. \end{lemma} The following follows from Lemma 3.4 and Th. 3.3 of \cite{Xb} (also cf. \cite{BE1}): \begin{lemma}\label{3.3} (1) $[\lambda]\rightarrow [\alpha_\lambda], [\lambda]\rightarrow [\alpha_\lambda^{-}]$ are ring homomorphisms;\par (2) $\langle \alpha_\lambda,\alpha_\mu\rangle = \langle \lambda \rho, \mu\rangle.$ \end{lemma} \subsection{Local simple current extensions}\label{simpleextension} \begin{proposition}\label{simple} (1) Assume that ${\cal B}$ is a M\"{o}bius extension of ${\cal A}$ of finite index with spectrum $[\pi]=\sum_{\lambda \in {\mathrm {exp}}} m_\lambda[\lambda].$ Let $\Gamma:=\{ \lambda\mid \lambda\in {\mathrm {exp}}\}.$ Assume that $d_\lambda=1, \forall \lambda\in {\mathrm {exp}}.$ Then $\Gamma$ is a local system of automorphisms;\par (2) If $\Gamma$ is a finite local system of automorphisms of ${\cal A}$, then there is a M\"{o}bius extension ${\cal B}$ of ${\cal A}$ with spectrum $[\pi]=\sum_{\lambda \in \Gamma} [\lambda].$ \end{proposition} \trivlist \item[\hskip \labelsep{\bf Proof\ }] Ad (1): By assumption we have $\alpha_\lambda^{}\succ 1, \forall \lambda\in \Gamma.$ By Lemma 3.10 of \cite{BE3}, $\omega_\lambda=1.$ Since $d_\lambda=d_{\alpha_\lambda}=1,$ it follows that $[\alpha_\lambda]=[\alpha_\lambda^{-}]=[1].$ Note that $\lambda\in \Gamma$ iff $[\alpha_\lambda]=[1]$, and it follows that $\Gamma$ is an abelian group with multiplication given by composition. By Lemma \ref{a=a'} and Lemma \ref{checklocal}, (1) is proved.\par (2) It follows from Prop. 5.5 of \cite{Rehren} (also cf. Th. 5.2 of \cite{DHR}) that there is a M\"{o}bius extension ${\cal B}$ of ${\cal A}$ with spectrum $[\pi]=\sum_{\lambda \in \Gamma} [\lambda].$ \null\hfill\qed\endtrivlist \begin{remark} (1) We will use the notation ${\cal B}={\cal A}\ltimes \Gamma$ for the extension in Prop. \ref{simple}. (2) One can extend the above proposition to the case when ${\cal B}$ is not local but verifies twisted locality. Such extensions have been used for example in \cite{KLPR}.
\end{remark} \subsection{Mirror extensions}\label{mirrorextension} In this section we recall the mirror construction as given in \S3 of \cite{Xm}. Let ${\cal B}$ be a completely rational net and ${\cal A}\subset {\cal B}$ be a subnet which is also completely rational. \begin{definition}\label{coset} Define a subnet $\widetilde{\cal A}\subset {\cal B}$ by $\widetilde{\cal A}(I):= {\cal A}(I)'\cap {\cal B}(I), \forall I\in {\cal I}.$ \end{definition} We note that since ${\cal A}$ is completely rational, it is strongly additive, and so we have $\widetilde{\cal A}(I)= (\vee_{J\in {\cal I}}{\cal A}(J))'\cap {\cal B}(I), \forall I\in {\cal I}.$ The following lemma then follows directly from the definition: \begin{lemma}\label{cosetnet} The restriction of $\widetilde{\cal A}$ to the Hilbert space $ \overline{ \vee_I\widetilde{\cal A}(I)\Omega}$ is an irreducible M\"{o}bius covariant net. \end{lemma} The net $\widetilde{\cal A}$ as in Lemma \ref{cosetnet} will be called the {\it coset} of ${\cal A}\subset {\cal B}$. See \cite{Xcos} for a class of cosets from loop groups. \par The following definition generalizes the definition in \S3 of \cite{Xcos}: \begin{definition}\label{cofinite} ${\cal A}\subset {\cal B}$ is called cofinite if the inclusion $\widetilde{\cal A}(I)\vee {\cal A}(I) \subset {\cal B}(I)$ has finite index for some interval $I$. \end{definition} The following is Prop. 3.4 of \cite{Xm}: \begin{proposition}\label{rationalc} Let ${\cal B}$ be completely rational, and let ${\cal A}\subset{\cal B}$ be a M\"{o}bius subnet which is also completely rational. Then ${\cal A}\subset {\cal B}$ is cofinite if and only if $\tilde{\cal A}$ is completely rational. \end{proposition} Let ${\cal B}$ be completely rational, and let ${\cal A}\subset{\cal B}$ be a M\"{o}bius subnet which is also completely rational. Assume that ${\cal A}\subset {\cal B}$ is cofinite. We will use $\sigma_i,\sigma_j,...$ (resp. $\lambda, \mu,...$) to label irreducible DHR representations of ${\cal B}$ (resp. ${\cal A}$) localized on a fixed interval $I_0$. Since $ \widetilde{\cal A}$ is completely rational by Prop. \ref{rationalc}, $\widetilde{\cal A}\otimes {\cal A}$ is completely rational, and so every irreducible DHR representation $\sigma_i$ of ${\cal B}$, when restricted to $\widetilde{\cal A}\otimes {\cal A}$, decomposes as a direct sum of representations of $ \widetilde{\cal A}\otimes {\cal A}$ of the form $(i,\lambda)\otimes \lambda$ by Lemma 27 of \cite{KLM}. Here $(i,\lambda)$ is a DHR representation of $\widetilde {\cal A}$ which may not be irreducible, and we use the tensor notation $(i,\lambda)\otimes \lambda$ to represent a DHR representation of $ \widetilde{\cal A}\otimes {\cal A}$ which is localized on $I_0$ and defined by $$ (i,\lambda)\otimes \lambda (x_1\otimes x_2)= (i,\lambda)(x_1)\otimes \lambda(x_2), \forall x_1\otimes x_2\in \widetilde{\cal A}(I_0) \otimes {\cal A}(I_0). $$ We will also identify $\widetilde {\cal A}$ and ${\cal A}$ with subnets of $ \widetilde{\cal A}\otimes {\cal A}$ in the natural way. We note that when no confusion arises, we will use $1$ to denote the vacuum representation of a net. \begin{definition}\label{normal} A M\"{o}bius subnet ${\cal A}\subset{\cal B}$ is normal if $\widetilde {\cal A}(I)'\cap {\cal B}(I)= {\cal A}(I)$ for some $I$. \end{definition} The following is implied by Lemma 3.4 of \cite{Reh1} (also cf.
Page 797 of \cite{Xcos2}): \begin{lemma}\label{normal1} Let ${\cal B}$ be completely rational, and let ${\cal A}\subset{\cal B}$ be a M\"{o}bius subnet which is also completely rational. Assume that ${\cal A}\subset {\cal B}$ is cofinite. Then the following conditions are equivalent: \par (1) ${\cal A}\subset{\cal B}$ is normal; \par (2) $(1,1)$ is the vacuum representation of $\widetilde {\cal A}$ and $(1,\lambda) $ contains $(1,1)$ if and only if $\lambda=1$. \par \end{lemma} The following is part of Proposition 3.7 of \cite{Xm}: \begin{proposition}\label{mirror} Let ${\cal B}$ be completely rational, and let ${\cal A}\subset{\cal B}$ be a M\"{o}bius subnet which is also completely rational. Assume that ${\cal A}\subset {\cal B}$ is cofinite and normal. Then:\par (1) Let $\gamma$ be the restriction of the vacuum representation of ${\cal B}$ to $\widetilde{\cal A}\otimes {\cal A}$. Then $[\gamma]= \sum_{\lambda\in {\mathrm {exp}}} [(1,\lambda)\otimes \lambda]$, where each $(1,\lambda)$ is irreducible;\par (2) Let $\lambda\in {\mathrm {exp}}$ be as in (1). Then $[\alpha_{(1,\lambda)\otimes 1}] = [\alpha_{1\otimes \bar\lambda}]$, and $[\lambda]\rightarrow [\alpha_{1\otimes \lambda}]$ is a ring isomorphism, where the $\alpha$-induction is with respect to $\widetilde{\cal A}\otimes {\cal A} \subset {\cal B}$ as in subsection \ref{ind}; moreover the set ${\mathrm {exp}}$ is closed under fusion; \par (3) Let $[\rho]= \sum_{\lambda\in {\mathrm {exp}}} m_\lambda[\lambda]$ where $m_\lambda = m_{\bar\lambda}\geq 0, \forall \lambda$, and $ [(1,\rho)]= \sum_{\lambda\in {\mathrm {exp}}} m_\lambda[(1,\lambda)]$. Then there exists a unitary element $T_\rho \in {\mathrm {Hom}}(\alpha_{(1,\rho)\otimes 1}, \alpha_{1\otimes \rho})$ such that $$ \varepsilon((1,\rho),(1,\rho)) T_\rho^* \alpha_{1\otimes \rho}(T_\rho^*) =T_\rho^* \alpha_{1\otimes \rho}(T_\rho^*) \tilde\varepsilon(\rho,\rho) ;$$ \par (4) Let $\rho$, $(1,\rho)$ be as in (3). Then \begin{align*} {\mathrm {Hom}}(\rho^n, \rho^m) & = {\mathrm {Hom}}(\alpha_{1\otimes \rho^n},\alpha_{1\otimes \rho^m}), \\ {\mathrm {Hom}}((1,\rho)^n, (1,\rho)^m) & = {\mathrm {Hom}}(\alpha_{(1,\rho)^n\otimes 1}, \alpha_{(1,\rho)^m\otimes 1}), \forall n,m\in {\mathbb N}. \end{align*} \end{proposition} Define $\Delta_0:=\{ \lambda\mid [\lambda]=\sum_i{[\lambda_i]}, \lambda_i\in {\mathrm {exp}}\}.$ Assume $\mu_i \in \Delta_0, i=1,...,n.$ For each $[\mu_i]=\sum_j m_{ij}[\lambda_j],$ choose DHR representations $M(\mu_i)$ of $\widetilde{\cal A}$ such that $[M(\mu_i)]=\sum_j m_{ij}[(1,\lambda_j)].$ Let $T_i\in {\mathrm {Hom}}(\alpha_{ M(\mu_i)\otimes 1},\alpha_{1\otimes \mu_i})$ be a unitary element (not necessarily unique up to phase when $\mu_i$ is not irreducible) as given in (3) of Prop. \ref{mirror}. Define $$ T_{i_1i_2...i_k}:=\alpha_{1\otimes \mu_{i_1}...\mu_{i_{k-1}}}(T_{i_k})... \alpha_{1\otimes \mu_{i_1}\mu_{i_2}}(T_{i_3}) \alpha_{1\otimes \mu_{i_1}}(T_{i_2})T_{i_1}\in {\mathrm {Hom}}(\alpha_{ M(\mu_{i_1})...M(\mu_{i_k})\otimes 1}, \alpha_{1\otimes \mu_{i_1}...\mu_{i_k}}). $$ For each $S\in {\mathrm {Hom}}(\mu_{i_1}...\mu_{i_k}, \mu_{j_1}...\mu_{j_m})$ we define $M(S):= T^*_{j_1...j_m} S T_{i_1...i_k}.$ \begin{lemma}\label{M} Assume that $S_1, T\in {\mathrm {Hom}}(\lambda,\mu), S_2\in {\mathrm {Hom}}(\nu,\lambda)$ where $\lambda,\mu$ are products of elements from $\{ \mu_1,...,\mu_n\}.$ If $\nu=\mu_{i_1}...\mu_{i_k}$ we denote $M(\mu_{i_1})... M(\mu_{i_k})$ by $M(\nu).$ Then: $$ M(S_1S_2)=M(S_1)M(S_2), \quad M(\nu (T))= M(\nu)(M(T)), \quad M(\varepsilon(\mu_i,\mu_j))= \tilde{\varepsilon}(M(\mu_i),M(\mu_j)).
$$ \end{lemma} \trivlist \item[\hskip \labelsep{\bf Proof\ }] The first two identities follow directly from the definitions. The third follows from (3) of Prop. 2.3.1 of \cite{X3m}, as does (3) of Prop. \ref{mirror}. \null\hfill\qed\endtrivlist The following is Th. 3.8 of \cite{Xm}: \begin{theorem}\label{mainmirror} Let ${\cal B}$ be completely rational, and let ${\cal A}\subset{\cal B}$ be a M\"{o}bius subnet which is also completely rational. Assume that ${\cal A}\subset {\cal B}$ is cofinite and normal, and let ${\mathrm {exp}}$ be as in (1) of Prop. \ref{mirror}. Assume that ${\cal C}$ is an irreducible M\"{o}bius extension of ${\cal A}$ with spectrum $[\rho]=\sum_{\lambda\in {\mathrm {exp}}} m_\lambda [\lambda], m_\lambda\geq 0.$ Then there is an irreducible M\"{o}bius extension $\widetilde {\cal C}$ of $\widetilde{\cal A}$ with spectrum $[(1,\rho)]=\sum_{\lambda\in {\mathrm {exp}}} m_\lambda [(1,\lambda)]$. Moreover $\widetilde {\cal C}$ is completely rational. \end{theorem} \begin{remark}\label{mc} Due to (5) of Prop. 3.7 of \cite{Xm}, the extension $\widetilde {\cal A}\subset\widetilde {\cal C}$ as given in Th. \ref{mainmirror} will be called the mirror or the conjugate of ${\cal A}\subset {\cal C}$. \end{remark} By Lemma \ref{indexab} and Th. \ref{mainmirror} we have: \begin{corollary}\label{mirrorindex} Let ${\widetilde {\cal C}} $ be the mirror extension as given in Th. \ref{mainmirror}. Then $\frac{\mu_{\widetilde{{\cal C}}}}{\mu_{\widetilde{{\cal A}}}}= \frac{\mu_{{{\cal C}}}}{\mu_{{\cal A}}}.$ \end{corollary} The mirror extension $\widetilde {\cal A}\subset\widetilde{\cal C}$ is constructed as follows: let $(\rho,w,w_1)$ be associated with the extension ${\cal A}\subset {\cal C}$ as given in Prop. \ref{qlocal}. Then the extension $\widetilde {\cal A}\subset\widetilde{\cal C}$ is given by $(M(\rho), M(w), M(w_1))$, where the map $M$ is defined before Lemma \ref{M}. Let $\mu,\nu\in \Delta_0.$ Consider now inductions with respect to ${\cal A}\subset {\cal C}$ and $\widetilde {\cal A}\subset\widetilde{\cal C}.$ \begin{proposition}\label{mirrora} Assume that $\mu,\nu\in \Delta_0$, $M(\rho)=(1,\rho), M(\mu)=(1,\mu), M(\nu)=(1,\nu).$ Then $$ \langle \alpha_{\mu}, \alpha_{\nu}\rangle =\langle \alpha_{M(\mu)}, \alpha_{M(\nu)}\rangle, \quad \langle \alpha_{\mu}, \alpha_{\nu}^{-}\rangle =\langle \alpha_{M(\mu)}, \alpha_{M(\nu)}^{-}\rangle. $$ \end{proposition} \trivlist \item[\hskip \labelsep{\bf Proof\ }] By Lemma \ref{3.3} we have $$ \langle \alpha_{\mu}, \alpha_{\nu}\rangle = \langle\rho \mu,\nu\rangle, \quad \langle \alpha_{M(\mu)}, \alpha_{M(\nu)}\rangle = \langle M(\rho) M(\mu),M(\nu)\rangle, $$ and by Prop. \ref{mirror} these multiplicities agree, which proves the first equality. By Lemma \ref{aa'} we have $$ {\mathrm {Hom}} (\alpha_{\mu}, \alpha_{\nu}^{-}) = \{ T\in {\cal C}(I_0)\mid \gamma (T)\in {\mathrm {Hom}} (\rho\mu, \rho\nu), \varepsilon(\nu,\rho)\varepsilon(\rho,\nu)\gamma (T)= \gamma(T)\}. $$ By \cite{LR}, $\gamma ({\cal C}(I_0))= \{ x\in {\cal A}(I_0)\mid x= w_1^* \rho(x) w_1 \}.$ It follows that $\langle \alpha_{\mu}, \alpha_{\nu}^{-}\rangle$ is equal to the dimension of the following vector space: $$\{ T'\in {\cal A}(I_0) \mid T'\in {\mathrm {Hom}} (\rho\mu, \rho\nu), \varepsilon(\nu,\rho)\varepsilon(\rho,\nu) T'= T', T'= w_1^* \rho(T') w_1 \}.
$$ Applying the map $M$ to the above vector space and using Lemma \ref{M}, we see that $\langle \alpha_{\mu}, \alpha_{\nu}^{-}\rangle$ is equal to the dimension of the following vector space: \begin{align*} \{ T'\in \widetilde{\cal A}(I_0) \mid T'\in {\mathrm {Hom}} (M(\rho)M(\mu), M(\rho)M(\nu)), \tilde{\varepsilon}(M(\nu),M(\rho))\tilde{\varepsilon}(M(\rho),M(\nu)) T'= T',\\ T'= M(w_1)^* M(\rho)(T') M(w_1) \}. \end{align*} Since $\tilde{\varepsilon}(M(\nu),M(\rho))\tilde{\varepsilon}(M(\rho),M(\nu))= (\varepsilon(M(\nu),M(\rho))\varepsilon(M(\rho),M(\nu)))^*$, we conclude that $\langle \alpha_{\mu}, \alpha_{\nu}^{-}\rangle$ is equal to the dimension of the following vector space: \begin{align*} \{ T'\in \widetilde{\cal A}(I_0) \mid T'\in {\mathrm {Hom}} (M(\rho)M(\mu), M(\rho)M(\nu)), {\varepsilon(M(\nu),M(\rho))}{\varepsilon(M(\rho),M(\nu))} T'= T', \\ T'= M(w_1)^* M(\rho)(T') M(w_1) \} \end{align*} which is equal to $\langle \alpha_{ M(\mu)},\alpha_{M(\nu)}^-\rangle$ by Lemma \ref{aa'}. \null\hfill\qed\endtrivlist \subsection{A series of normal extensions}\label{lr2} Let $G= SU(n)$. We denote by $LG$ the group of smooth maps $f: S^1 \mapsto G$ under pointwise multiplication. The diffeomorphism group of the circle $\text{\rm Diff} S^1 $ is naturally a subgroup of $\text{\rm Aut}(LG)$ with the action given by reparametrization. In particular the group of rotations $\text{\rm Rot}S^1 \simeq U(1)$ acts on $LG$. We will be interested in the projective unitary representations $\pi : LG \rightarrow U(H)$ that are both irreducible and have positive energy. This means that $\pi $ should extend to $LG\ltimes \text{\rm Rot}\ S^1$ so that $H=\oplus _{n\geq 0} H(n)$, where the $H(n)$ are the eigenspaces for the action of $\text{\rm Rot}S^1$, i.e., $r_\theta \xi = {\mathrm {exp}}(i n \theta)\xi$ for $\xi \in H(n)$, and $\text{\rm dim}\ H(n) < \infty $ with $H(0) \neq 0$. It follows from \cite{PS} that for a fixed level $k$, which is a positive integer, there is only a finite number of such irreducible representations, indexed by the finite set $$ P_{++}^{k} = \bigg \{ \lambda \in P \mid \lambda = \sum _{i=1, \cdots , n-1} \lambda _i \Lambda _i , \lambda _i \geq 0\, , \sum _{i=1, \cdots , n-1} \lambda _i \leq k \bigg \} $$ where $P$ is the weight lattice of $SU(n)$ and the $\Lambda _i$ are the fundamental weights. We will write $\lambda=(\lambda_1,...,\lambda_{n-1}), \lambda_0= k-\sum_{1\leq i\leq n-1} \lambda_i$, and refer to $\lambda_0,...,\lambda_{n-1}$ as the components of $\lambda.$ We will use $k\Lambda_0$ or simply $1$ to denote the trivial representation of $SU(n)$. For $\lambda , \mu , \nu \in P_{++}^{k}$, define $N_{\lambda \mu}^\nu = \sum _{\delta \in P_{++}^{k} }S_\lambda ^{(\delta)} S_\mu ^{(\delta)} S_\nu ^{(\delta*)}/S_{\Lambda_0}^{(\delta)}$ where $S_\lambda ^{(\delta)}$ is given by the Kac-Peterson formula: $$ S_\lambda ^{(\delta)} = c \sum _{w\in S_n} \varepsilon _w {\mathrm {exp}} (iw(\delta) \cdot \lambda 2 \pi /n) $$ where $\varepsilon _w = \text{\rm det}(w)$ and $c$ is a normalization constant fixed by the requirement that $(S_\mu^{(\delta)})$ is an orthonormal system. It is shown in \cite{Kac2}, p. 288, that the $N_{\lambda \mu}^\nu $ are non-negative integers. Moreover, define $ Gr(C_k)$ to be the ring whose basis consists of the elements of $ P_{++}^{k}$, with structure constants $N_{\lambda \mu}^\nu $.
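The following is a minimal numerical illustration of this Verlinde-type formula for the structure constants $N_{\lambda\mu}^\nu$, specialized to $SU(2)_k$, where the $S$-matrix has the well-known closed form $S_{ab}=\sqrt{2/(k+2)}\,\sin\big(\pi(a+1)(b+1)/(k+2)\big)$, $a,b=0,\ldots,k$; for $SU(n)$ one would instead evaluate the Kac-Peterson sum above. The sketch (in Python, assuming NumPy) is ours and only illustrates the formula; it is not part of the construction in the text.
\begin{verbatim}
import numpy as np

def fusion_su2(k):
    """N[a, b, c] for SU(2)_k from the Verlinde-type formula
    N_{ab}^c = sum_d S_{ad} S_{bd} S_{cd}^* / S_{0d} (S is real here)."""
    m = k + 2
    a = np.arange(k + 1)
    S = np.sqrt(2.0 / m) * np.sin(np.pi * np.outer(a + 1, a + 1) / m)
    N = np.einsum('ad,bd,cd->abc', S, S, S / S[0])
    return np.rint(N).astype(int), N

Nint, Nfloat = fusion_su2(10)
assert np.allclose(Nint, Nfloat, atol=1e-9)   # entries are integers ...
assert (Nint >= 0).all()                      # ... and non-negative
# truncated Clebsch-Gordan rule at level 10: (1) x (1) = (0) + (2)
print([c for c in range(11) if Nint[1, 1, c] == 1])   # -> [0, 2]
\end{verbatim}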
The natural involution $*$ on $ P_{++}^{k}$ is defined by $\lambda \mapsto \lambda ^* =$ the conjugate of $\lambda $ as a representation of $SU(n)$.\par We shall also denote $S_{\Lambda _0}^{(\Lambda)}$ by $S_1^{(\Lambda )}$. Define $d_\lambda = \frac {S_1^{(\lambda )}}{S_1^{(\Lambda _0)}}$. We shall call $(S_\nu ^{(\delta )})$ the $S$-matrix of $LSU(n)$ at level $k$. \par We shall encounter the $\Bbb Z_n$ group of automorphisms of this set of weights, generated by $$ J : \lambda = (\lambda_1, \lambda_2, \cdots , \lambda_{n-1}) \rightarrow J(\lambda) = ( k - \lambda_1 -\cdots -\lambda_{n-1}, \lambda_1, \cdots , \lambda_{n-2}). $$ We will identify $J$ with $k\Lambda_1$ in the following. Define ${\mathrm{col}}(\lambda) = \sum_i i\,\lambda_i $. The central element $ {\mathrm {exp}} \frac{2\pi i}{n}$ of $SU(n)$ acts on the representation of $SU(n)$ labeled by $\lambda$ as ${\mathrm {exp}}( \frac{2\pi i\, {\mathrm{col}}(\lambda)}{n})$. ${\mathrm{col}}(\lambda)$ modulo $n$ will be called the color of $\lambda.$ The irreducible positive energy representations of $ L SU(n)$ at level $k$ give rise to an irreducible conformal net ${\cal A}_{SU(n)_k}$ (cf. \cite{KLX}) and its covariant representations. ${\cal A}_{SU(n)_k}$ is completely rational (cf. \cite{W} and \cite{Xjw}), and $\mu_{{\cal A}_{SU(n)_k}}= \frac{1}{(S_{1}^{(\Lambda_0)})^2}$ by \cite{Xjw}. We will use $\lambda=(\lambda_1,...,\lambda_{n-1})$ to denote irreducible representations of ${\cal A}_{SU(n)_k}$ and also the corresponding endomorphisms of $M={\cal A}_{SU(n)_k}(I).$ All the sectors $[\lambda]$ with $\lambda$ irreducible generate the fusion ring of ${\cal A}_{SU(n)_k}.$ \par For $\lambda$ irreducible, the univalence $\omega_\lambda$ is given by an explicit formula (cf. 9.4 of \cite{PS}). Let us first define $h_\lambda = \frac {c_2(\lambda)}{k+n}$ where $c_2(\lambda)$ is the value of the Casimir operator on the representation of $SU(n)$ labeled by the dominant weight $\lambda$. $h_\lambda$ is usually called the conformal dimension. Then we have $\omega_\lambda = {\mathrm {exp}}({2\pi i}\, h_\lambda)$. The conformal dimension of $\lambda=(\lambda_1,...,\lambda_{n-1})$ is given by \begin{equation}\label{cdim} h_\lambda= \frac{1}{2n(k+n)}\sum_{1\leq i\leq n-1} i(n-i) \lambda_i^2 + \frac{1}{n(k+n)}\sum_{1\leq j< i\leq n-1}j (n-i)\lambda_j\lambda_i + \frac{1}{2(k+n)}\sum_{1\leq j\leq n-1} j(n-j) \lambda_j \end{equation} \par
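As a quick sanity check of \eqref{cdim}, the following short transcription (in Python; the helper name and the test weights are ours) recomputes two values used later: the $SU(2)_k$ specialization $h=\lambda_1(\lambda_1+2)/(4(k+2))$, and the value $77/80$ quoted (mod integers) for $h_\sigma$ in \S\ref{m1} below.
\begin{verbatim}
from fractions import Fraction

def h(lam, n, k):
    """Conformal dimension h_lambda for SU(n)_k, transcribing (cdim);
    lam = (lambda_1, ..., lambda_{n-1})."""
    lam = [0] + list(lam)          # 1-indexed components
    s1 = sum(i * (n - i) * lam[i] ** 2 for i in range(1, n))
    s2 = sum(j * (n - i) * lam[j] * lam[i]
             for i in range(1, n) for j in range(1, i))
    s3 = sum(j * (n - j) * lam[j] for j in range(1, n))
    return (Fraction(s1, 2 * n * (k + n)) + Fraction(s2, n * (k + n))
            + Fraction(s3, 2 * (k + n)))

# SU(2)_10: h reduces to lambda_1(lambda_1 + 2)/(4(k + 2))
assert h((3,), 2, 10) == Fraction(3 * 5, 4 * 12)
# SU(10)_2, weight Lambda_0 + Lambda_3: h = 77/80 (cf. h_sigma below)
print(h(tuple(1 if i == 3 else 0 for i in range(1, 10)), 10, 2))  # 77/80
\end{verbatim}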
Let $G \subset H$ be inclusions of compact simple Lie groups. $LG \subset LH$ is called a conformal inclusion if the level 1 projective positive energy representations of $LH$ decompose as a finite number of irreducible projective representations of $LG$. $LG \subset LH$ is called a maximal conformal inclusion if there is no proper subgroup $G'$ of $H$ containing $G$ such that $LG \subset LG'$ is also a conformal inclusion. A list of maximal conformal inclusions can be found in \cite{GNO}. \par Let $H^0$ be the vacuum representation of $LH$, i.e., the representation of $LH$ associated with the trivial representation of $H$. Then $H^0$ decomposes as a direct sum of irreducible projective representations of $LG$ at level $K$. $K$ is called the Dynkin index of the conformal inclusion. We shall write the conformal inclusion as $G_K\subset H_1$. Note that it follows from the definition that ${\cal A}_{H_1}$ is an extension of ${\cal A}_{G_K}$. We will be interested in the following conformal inclusion: $$ L(SU(m)_n \times SU(n)_m) \subset \ L \ SU(nm) $$ In the classification of conformal inclusions in \cite{GNO}, the above conformal inclusion corresponds to the Grassmannian $SU(m+n)/SU(n)\times SU(m)\times U(1)$.\par Let $\Lambda_0$ be the vacuum representation of $LSU(nm)$ on the Hilbert space $H^0$. The decomposition of $\Lambda_0$ under $L(SU(m) \times SU(n))$ is known; see, e.g., \cite{ABI}. To describe such a decomposition, let us prepare some notation. We use $\dot S$ to denote the $S$-matrices of $SU(m)$, and $\ddot S$ to denote the $S$-matrices of $SU(n)$. The level $n$ (resp. $m$) weights of $LSU(m)$ (resp. $LSU(n)$) will be denoted by $\dot \lambda$ (resp. $\ddot \lambda$). \par We start by describing $\dot P_+^n$ (resp. $\ddot P_+^m$), i.e., the highest weights of level $n$ of $LSU(m)$ (resp. of level $m$ of $LSU(n)$). $\dot P_+^n$ is the set of weights $$ \dot \lambda = \widetilde k_0 \dot \Lambda_0 + \widetilde k_1 \dot \Lambda_1 + \cdots + \widetilde k_{m-1} \dot \Lambda_{m-1} $$ where the $\widetilde k_i$ are non-negative integers such that $$ \sum_{i=0}^{m-1} \widetilde k_i = n $$ and $\dot \Lambda_i = \dot \Lambda_0 + \dot \omega_i$, $1 \leq i \leq m-1$, where the $\dot \omega_i$ are the fundamental weights of $SU(m)$. Instead of $\dot \lambda$ it will be more convenient to use $$ \dot \lambda + \dot \rho = \sum_{i=0}^{m-1} k_i \dot \Lambda_i $$ with $k_i = \widetilde k_i + 1$ and $\sum_{i=0}^{m-1} k_i = m + n$. Due to the cyclic symmetry of the extended Dynkin diagram of $SU(m)$, the group $\Bbb Z_m$ acts on $\dot P_+^n$ by $$ \dot \Lambda_i \rightarrow \dot \Lambda_{(i+ \dot \mu)\mod m}, \quad \dot \mu \in \Bbb Z_m. $$ Let $\Omega_{m,n} = \dot P_+^n / \Bbb Z_m$. Then there is a natural bijection between $\Omega_{m,n}$ and $\Omega_{n,m}$ (see \S2 of \cite{ABI}). The idea is to draw a circle and divide it into $m+n$ arcs of equal length. To each partition $\sum_{0\leq i\leq m-1} k_i = m+n$ there corresponds a ``slicing of the pie'' into $m$ successive parts with angles $2\pi k_i/(m+n)$, drawn with solid lines. We choose this slicing to be clockwise. The complementary slicing in broken lines (the lines which are not solid) defines a partition of $m+n$ into $n$ successive parts, $\sum_{0\leq i\leq n-1} l_i = m+n$. We choose the latter slicing to be counterclockwise, and it is easy to see that such a slicing corresponds uniquely to an element of $\Omega_{n,m}$. \par We parameterize the bijection by a map $$ \beta : \dot P_+^n \rightarrow \ddot P_+^m $$ as follows. Set $$ r_j = \sum^m_{i=j} k_i, \quad 1 \leq j \leq m $$ where $k_m \equiv k_0$. The sequence $(r_1, \ldots , r_m)$ is decreasing, $m + n = r_1 > r_2 > \cdots > r_m \geq 1$. Take the complementary sequence $(\bar r_1, \bar r_2, \ldots , \bar r_n)$ in $\{ 1, 2, \ldots , m+n \}$ with $\bar r_1 > \bar r_2 > \cdots > \bar r_n$. Put $$ s_j = m + n + \bar r_n - \bar r_{n-j+1}, \quad 1 \leq j \leq n. $$ Then $m + n = s_1 > s_2 > \cdots > s_n \geq 1$. The map $\beta$ is defined by $$ (r_1, \ldots , r_m) \rightarrow (s_1, \ldots , s_n). $$
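Since the map $\beta$ is completely combinatorial, it admits a direct transcription; the following sketch (in Python; the function name and the example partition are ours) computes $(s_1,\ldots,s_n)$ from the shifted partition $(k_0,\ldots,k_{m-1})$.
\begin{verbatim}
def beta(ks, m, n):
    """Level-rank map beta: shifted partition (k_0,...,k_{m-1}) with
    sum m+n  ->  decreasing sequence (s_1,...,s_n)."""
    assert len(ks) == m and sum(ks) == m + n
    k = list(ks) + [ks[0]]                    # convention k_m = k_0
    r = [sum(k[j:m + 1]) for j in range(1, m + 1)]   # r_j = k_j+...+k_m
    rbar = sorted(set(range(1, m + n + 1)) - set(r), reverse=True)
    return [m + n + rbar[n - 1] - rbar[n - j] for j in range(1, n + 1)]

# m = n = 2: the vacuum weight of SU(2) at level 2 has
# (k_0, k_1) = (3, 1), giving r = (4, 3) and s = (4, 3):
print(beta([3, 1], 2, 2))                     # -> [4, 3]
\end{verbatim}
Recovering the dual weight $\ddot\lambda$ from $(s_1,\ldots,s_n)$ uses the same shifted-partition convention in the other direction.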
The following lemma summarizes what we will use: \begin{lemma}\label{lr} (1) Let $\dot Q$ be the root lattice of $SU(m)$, $\dot \Lambda_i, \ 0 \leq i \leq m-1$, its fundamental weights and $\dot Q_i = (\dot Q + \dot \Lambda_i) \cap \dot P_+^n$. Let $\Lambda \in {\Bbb Z_{mn}}$ denote a level 1 highest weight of $SU(mn)$ and $\dot \lambda \in \dot Q_{\Lambda \,\text{\rm mod}\, m}$. Then there exists a unique $\ddot \lambda \in \ddot P_+^m$ with $\ddot \lambda = \mu \beta(\dot \lambda)$ for some unique $\mu \in \Bbb Z_n$ such that $H_{\dot \lambda} \otimes H_{\ddot \lambda}$ appears once and only once in $H^\Lambda$. The map $\dot \lambda \rightarrow \ddot \lambda = \mu \beta(\dot \lambda)$ is one-to-one. Moreover, $H^\Lambda$, as a representation of $L(SU(m) \times SU(n))$, is a direct sum of all such $H_{\dot \lambda} \otimes H_{\ddot \lambda}$;\par (2) $\mu_{{\cal A}_{SU(n)_m}}= \frac{n}{m} \mu_{{\cal A}_{SU(m)_n}};$ \par (3) The subnets ${\cal A}_{{SU}(n)_m}\subset {\cal A}_{{SU}(nm)_1}$ are normal and cofinite. The set ${\mathrm {exp}}$ as in (1) of Prop. \ref{mirror} consists of the elements of $P_{++}^{n+m}$ which belong to the root lattice of ${SU}(n)$. \end{lemma} \trivlist \item[\hskip \labelsep{\bf Proof\ }] (1) is Th. 1 of \cite{ABI}. (2) follows from Th. 4.1 of \cite{Xjw}. (3) is Lemma 4.1 of \cite{Xm}. \null\hfill\qed\endtrivlist \section{Schellekens's modular invariants and their realizations by conformal nets} In this section we examine three modular invariants constructed by A. N. Schellekens in \cite{Sch} which are based on level-rank duality. These are entries 18, 27, and 40 in the table of \cite{Sch}. Our goal in this section is to show that they can be realized by conformal nets, as an application of the mirror extensions of section \ref{mirrorextension}. For simplicity, in this section we will use $G_k$ to denote the corresponding conformal net ${\cal A}_{G_k}$ when no confusion arises. \par \subsection{Three mirror extensions} \subsubsection{$\widetilde{SU(10)_2}$}\label{m1} $\widetilde{SU(10)_2}$ is the simplest nontrivial example of a mirror extension, obtained by applying Theorem \ref{mainmirror} to $SU(2)_{10}\subset Spin(5)_1$ and $SU(2)_{10}\times SU(10)_2\subset SU(20)_1$. By Cor. \ref{mirrorindex} and Lemma \ref{lr}, $$ \mu_{\widetilde{SU(10)_2}}=20. $$ Consider the induction for $SU(10)_2\subset \widetilde{SU(10)_2}.$ By Th. 5.7 of \cite{BEK} the matrix $Z_{\lambda\mu}=\langle \alpha_\lambda, \alpha_\mu^{-}\rangle$ commutes with the $S$ and $T$ matrices of $SU(10)_2.$ Such matrices are classified in \cite{Gan}, and it follows that there are $15$ irreducible representations of $\widetilde{SU(10)_2}$, given as follows: $\alpha_{J}^i, 0\leq i\leq 9$, and $\alpha_{J}^j\sigma, 0\leq j\leq 4.$ The fusion rules are determined by the following relations: $$ [\bar\sigma]= [\alpha_{J}^2 \sigma], \quad [\alpha_{J}^5 \sigma]=[\sigma], \quad [\sigma\bar{\sigma}]= [1]+[\alpha_J^5] $$ The restrictions of these representations to $SU(10)_2$ are given as follows: $$ [\alpha_{J}^i\!\restriction\! \ ]= [J^i(2\Lambda_0)]+[J^i(\Lambda_3+\Lambda_7)], 0\leq i\leq 9; \quad [\alpha_{J}^j\sigma\!\restriction\! \ ]=[J^j(\Lambda_0+\Lambda_3)]+[J^j(\Lambda_5+\Lambda_8)] , 0\leq j\leq 4. $$ It follows that modulo integers the conformal dimensions are given as $$ h_{\alpha_{J}^i}= \frac{i(10-i)}{10}, 0\leq i\leq 9 , \quad h_\sigma= \frac{77}{80}, \quad h_{\alpha_J\sigma}= \frac{25}{16}, \quad h_{\alpha_J^2\sigma}= \frac{157}{80}, \quad h_{\alpha_J^3\sigma}= \frac{173}{80}=h_{\alpha_J^4\sigma} . $$ \begin{remark} The modular tensor category (cf. \cite{Tu}) arising from the representations of $\widetilde{SU(10)_2}$ as given above seems to have been unknown before. It will be interesting to understand our construction from a categorical point of view.
\end{remark} The following simple lemma will be used later: \begin{lemma}\label{so} ${\cal A}_{Spin(n)_1}$ is a completely rational net whose irreducible representations are in one-to-one correspondence with the irreducible level 1 representations of $LSpin(n).$ When $n$ is odd there are three irreducible representations $1, \mu_0,\mu_1$ with index $1,1, 2$ respectively and fusion rules $[\mu_1^2]=[1]+[\mu_0];$ when $n=4k+2, k\in {\mathbb N}$, the fusion rules are those of ${\mathbb Z}_4; $ when $n=4k, k\in {\mathbb N}$, the fusion rules are those of ${\mathbb Z}_2\times {\mathbb Z}_2.$ \end{lemma} \trivlist \item[\hskip \labelsep{\bf Proof\ }] By Th. 3.10 of \cite{Bo} it is enough to prove that $\mu_{{\cal A}_{Spin(n)_1}} =4.$ When $n=5$ this follows from the conformal inclusion $SU(2)_{10}\subset Spin(5)_1$ and Lemma \ref{indexab}. Consider the inclusion $SO(n-2)\times U(1)\subset SO(n).$ Note that the fundamental group of $SO(n-2)$ is ${\mathbb Z}_2.$ It follows that loops with even winding numbers in $LU(1)$ can be lifted to $LSpin(n),$ and we have a conformal inclusion $LSpin(n-2)_1\times LU(1)_4\subset LSpin(n)_1.$ Since $\mu_{{\cal A}_{U(1)_4}}=4$ by \S 3 of \cite{X3m}, and the index of ${\cal A}_{Spin(n-2)_1}\times {\cal A}_{U(1)_4}\subset {\cal A}_{Spin(n)_1}$ is checked to be $2$, by induction one can easily prove the lemma for all odd $n$. When $n$ is even we use the conformal inclusion ${\cal A}_{SU(n/2)_1}\times {\cal A}_{U(1)_{2n}}\subset {\cal A}_{Spin(n)_1}$ with index $n/2$. Note that $\mu_{{\cal A}_{SU(n/2)_1}}= n/2, \mu_{{\cal A}_{U(1)_{2n}}}=2n$ by \S 3 of \cite{X3m}, and by Lemma \ref{indexab} we have $\mu_{{\cal A}_{Spin(n)_1}} =4.$ \null\hfill\qed\endtrivlist \subsubsection{$\widetilde{SU(9)_3}$}\label{m2} $\widetilde{SU(9)_3}$ is an extension of $SU(9)_3$ obtained by applying Th. \ref{mainmirror} to $SU(3)_9\subset (E_6)_1$ and $SU(3)_9\times SU(9)_3\subset SU(27)_1.$ By Cor. \ref{mirrorindex} and Lemma \ref{lr}, $ \mu_{\widetilde{SU(9)_3}}= 9. $ Recall the branching rules for $SU(3)_9\subset (E_6)_1$ (we use $1_0$ to denote the vacuum representation of $(E_6)_1$ and $1_+, 1_-$ the other two irreducible representations of $(E_6)_1$): $$ [1_0\!\restriction\! \ ]= \sum_{0\leq i\leq 2} ([\dot{J}^i(9\Lambda_0)] +[\dot{J}^i(\Lambda_0+4\Lambda_1+4\Lambda_2)]), \quad [1_+\!\restriction\! \ ]=[1_-\!\restriction\! \ ]= \sum_{0\leq i\leq 2} [\dot{J}^i(5\Lambda_0+2\Lambda_1+2\Lambda_2)] $$ where $\dot J:=9\Lambda_1.$ Consider inductions with respect to $$ SU(9)_3\subset \widetilde{SU(9)_3} .$$ By Th. \ref{mainmirror} and Lemma \ref{lr} the vacuum representation of $\widetilde{SU(9)_3}$ restricts to the representation $$ \sum_{0\leq i\leq 2} ([{J}^{3i}(3\Lambda_0)] +[J^{3i}(\Lambda_3+\Lambda_7+\Lambda_8)]) $$ of $SU(9)_3.$ Since $J$ is local with the above representation, by Lemma \ref{a=a'} $\alpha_J$ is a DHR representation of $\widetilde{SU(9)_3},$ and $[\alpha_J^3]=[1].$ One can determine the remaining irreducible representations of $\widetilde{SU(9)_3}$ by using \cite{Gan} as in \S\ref{m1}. Here we give a different approach which will be useful in \S\ref{m3}. We note that $M(\dot{J})= J^3$ and $M(\dot {J}^i(5\Lambda_0+2\Lambda_1+2\Lambda_2))= J^{3i}(\Lambda_4+\Lambda_6+\Lambda_8), i=0,1,2,$ by Lemma \ref{lr}, where $M$ is defined as before Lemma \ref{M}. By Prop. \ref{mirrora} we have $$ \langle \alpha_{\Lambda_4+\Lambda_6+\Lambda_8}, \alpha_{\Lambda_4+\Lambda_6+\Lambda_8}^{-}\rangle= 2.
$$ It follows that there are two irreducible DHR representations $\tau_1,\tau_2$ of $\widetilde{SU(9)_3}$ such that $[\alpha_{\Lambda_4+\Lambda_6+\Lambda_8}]\succ [\tau_1]+[\tau_2],$ and $\tau_1,\tau_2$ are the only two irreducible subsectors of $\alpha_{\Lambda_4+\Lambda_6+\Lambda_8}$ which are DHR representations. For $i=1,2$ we have $\langle \tau_i, \alpha_\mu^{-}\rangle\leq \langle \alpha_{\Lambda_4+\Lambda_6+\Lambda_8}, \alpha_\mu^- \rangle.$ Note that if the color of $\mu$ is nonzero, then $\langle \alpha_{\Lambda_4+\Lambda_6+\Lambda_8}, \alpha_\mu^-\rangle=0$ by Lemma \ref{3.3}, since $\Lambda_4+\Lambda_6+\Lambda_8$ has color $0.$ If $\mu$ has color $0$, by Lemma \ref{lr} and Prop. \ref{mirrora}, $$ \langle \alpha_{\Lambda_4+\Lambda_6+\Lambda_8}, \alpha_\mu^-\rangle $$ is nonzero only when $\mu=J^{3j}(\Lambda_4+\Lambda_6+\Lambda_8), j=0,1,2. $ It follows that $$ \langle \tau_i, \alpha_\mu \rangle =1 $$ when $\mu=J^{3j}(\Lambda_4+\Lambda_6+\Lambda_8), j=0,1,2,$ and $$ \langle \tau_i, \alpha_\mu \rangle =0 $$ when $\mu\neq J^{3j}(\Lambda_4+\Lambda_6+\Lambda_8), j=0,1,2.$ Hence the restrictions of $\tau_i$ to $SU(9)_3$ are given as follows: $$ [\tau_i\!\restriction\! \ ]=\sum_{0\leq j\leq 2}[J^{3j}(\Lambda_4+\Lambda_6+\Lambda_8)] $$ It follows that the index of $\tau_i, i=1,2, $ is one, and since $$ [(\alpha_J \tau_i)\!\restriction\! \ ]=\sum_{0\leq j\leq 2}[J^{3j+1}(\Lambda_4+\Lambda_6+\Lambda_8)], $$ it follows that $[\alpha_J\tau_i]\neq [\tau_i].$ Hence the irreducible representations of $\widetilde{SU(9)_3}$ are given by $$ 1, \alpha_J, \alpha_J^2, \alpha_J^i \tau_k, 0\leq i\leq 2, k=1,2.$$ These representations generate an abelian group of order $9,$ which must be either ${\mathbb Z}_3\times {\mathbb Z}_3$ or ${\mathbb Z}_9$. Note that by Lemma \ref{3.3} $$\langle \alpha_J, \tau_i^k\rangle \leq \langle \alpha_J, \alpha_{\Lambda_4+\Lambda_6+\Lambda_8}^k\rangle =0, \forall k\geq 0, $$ since $J$ has color $3$ while $\Lambda_4+\Lambda_6+\Lambda_8$ has color $0;$ it follows that these representations generate the abelian group ${\mathbb Z}_3\times {\mathbb Z}_3.$ Modulo integers the conformal dimensions of $\tau_k, \alpha_J$ are given by $$ h_{\alpha_J}= \frac{4}{3}, h_{\tau_k}= \frac{7}{3}, h_{\alpha_J^2}= \frac{7}{3}, h_{\alpha_J\tau_k}= \frac{11}{3}, h_{\alpha_J^2\tau_k}= \frac{14}{3}, k=1,2. $$ \subsubsection{$\widetilde{SU(8)_4}$}\label{m3} From the conformal inclusion $Spin(6)_8\subset Spin(20)_1$ and $Spin(6)\simeq SU(4)$ we obtain the conformal inclusion $SU(4)_8\subset Spin(20)_1.$ For simplicity we use $(0), (5/4)_1, (5/4)_2, (1/2)$ to denote the irreducible representations of $Spin(20)_1$ with conformal dimensions $0, 5/4,5/4, 1/2$ respectively. By comparing conformal dimensions, the branching rules for $SU(4)_8\subset Spin(20)_1$ are given by: \begin{align*} [(0)\!\restriction\! \ ]& =\sum_{0\leq i\leq 3} ([\dot{J}^i]+[\dot{J}^i(4\Lambda_0+\Lambda_1+2\Lambda_2+\Lambda_3)]),\\ [(5/4)_1\!\restriction\! \ ]&=[(5/4)_2\!\restriction\! \ ]=\sum_{0\leq i\leq 3} [\dot{J}^i(3\Lambda_0+\Lambda_1+\Lambda_2+3\Lambda_3)], \\ [(1/2)\!\restriction\! \ ]&=\sum_{0\leq i\leq 3} ([\dot{J}^i(6\Lambda_0+2\Lambda_2)]+[\dot{J}^i(3\Lambda_0+3\Lambda_2+2\Lambda_3)]). \end{align*} Note that all the representations appearing above have color $0.$\par $\widetilde{SU(8)_4}$ is the extension of $SU(8)_4$ obtained by applying Th.
\ref{mainmirror} to $SU(4)_8\subset Spin(20)_1$ and $SU(4)_8\times SU(8)_4\subset SU(32)_1.$ By Lemma \ref{lr} the spectrum of $SU(8)_4\subset \widetilde{SU(8)_4}$ is given by $$ \sum_{0\leq i\leq 3} ([{J}^{2i}]+[{J}^{2i}(\Lambda_0+\Lambda_4+\Lambda_5+\Lambda_7)]). $$ By Lemma \ref{so} and Lemma \ref{indexab}, $\mu_{\widetilde{SU(8)_4}}= 8.$ By using Prop. \ref{mirrora} similarly as in \S\ref{m2} we obtain all the irreducible representations of $\widetilde{SU(8)_4}$ as follows: $$ 1, \alpha_J, (3/4)_1, (3/4)_2, (1/2), \alpha_J (3/4)_1, \alpha_J (3/4)_2, \alpha_J (1/2). $$ These representations restrict to $SU(8)_4$ as follows: \begin{align*} [\alpha_J\!\restriction\! \ ]& =\sum_{0\leq i\leq 3} ([{J}^{2i+1}]+[{J}^{2i+1}(\Lambda_0+\Lambda_4+\Lambda_5+\Lambda_7)]),\\ [(\alpha_J^j(3/4)_k)\!\restriction\! \ ]& =\sum_{0\leq i\leq 3} [{J}^{2i+j}(\Lambda_0+\Lambda_3+\Lambda_6+\Lambda_7)], \quad j=0,1, \ k=1,2;\\ [(\alpha_J^j(1/2))\!\restriction\! \ ]& =\sum_{0\leq i\leq 3} ([{J}^{2i+j}(2\Lambda_0+ \Lambda_3+\Lambda_5)]+[{J}^{2i+j}(2\Lambda_5+2\Lambda_7)]), \quad j=0,1. \end{align*} The conformal dimensions modulo integers are as follows: $$ h_{\alpha_J}= 7/4, \quad h_{(3/4)_k}= 3/4, \ k=1,2, \quad h_{(1/2)}=1/2, \quad h_{\alpha_J(3/4)_1}= h_{\alpha_J(3/4)_2}= 5/2, \quad h_{\alpha_J(1/2)}=9/4, $$ which explains our notation. The irreducible representations of $\widetilde{SU(8)_4}$ generate an abelian group of order $8$ under composition, so the abelian group is ${\mathbb Z}_2\times {\mathbb Z}_2\times {\mathbb Z}_2, {\mathbb Z}_2\times {\mathbb Z}_4 $ or ${\mathbb Z}_8.$ By Lemma \ref{3.3}, $\langle \alpha_J, (3/4)_k^j\rangle = \langle \alpha_J, (1/2)^j\rangle= 0, k=1,2,\forall j\geq 0,$ since the restriction of $\alpha_J$ to $SU(8)_4$ has color $4$ while the restrictions of $(3/4)_k, (1/2)$ to $SU(8)_4$ have color $0$; it follows that ${\mathbb Z}_8$ is impossible. Note that the conjugate of $(1/2)$ has conformal dimension $1/2$, and it must be $(1/2)$, so $[(1/2)^2]= [1].$ To rule out the possibility of ${\mathbb Z}_2\times {\mathbb Z}_4 ,$ note that this can only happen when the order of $(3/4)_1$ is $4$, and we must have $[(1/2)]=[(3/4)_1^2], [(3/4)_2]=[(3/4)_1^3].$ By the monodromy equation we have $$ \varepsilon((3/4)_1,(3/4)_1)^2= 1, \quad \varepsilon((3/4)_1, (1/2))\varepsilon((1/2),(3/4)_1)=-1. $$ On the other hand, by Lemma 4.4 of \cite{Rehren} we have $$ \varepsilon((3/4)_1, (1/2))\varepsilon((1/2),(3/4)_1)= \varepsilon((3/4)_1, (3/4)_1^2)\varepsilon((3/4)_1^2,(3/4)_1)=\varepsilon((3/4)_1,(3/4)_1)^4=1, $$ a contradiction. It follows that the irreducible representations of $\widetilde{SU(8)_4}$ generate ${\mathbb Z}_2\times {\mathbb Z}_2\times {\mathbb Z}_2$ under composition, and we have $$ \overline{[(3/4)_1]}= [(3/4)_1], \quad [(1/2)(3/4)_1]=[(3/4)_2]. $$ \subsection{Further extensions by simple currents} \subsubsection{No. 40 of \cite{Sch}}\label{40} The modular invariant No. 40 in \cite{Sch} suggests that we look for simple current extensions of $\widetilde{SU(10)_2}\times SU(5)_1\times SO(7)_1.$ For simplicity we use $y^i=\Lambda_i, 0\leq i\leq 4, $ to denote the irreducible representations of $SU(5)_1.$ Note that $h_{y^2} =3/5.$ We use $(1/2), (7/16)$ to denote the irreducible representations of $SO(7)_1$ with conformal dimensions $1/2, 7/16.$ Note that the indices of $(1/2), (7/16)$ are $1, 2$ respectively. By \S\ref{m1} the conformal dimension of $u=(\alpha_J, y^2, (1/2))$ is $h_{\alpha_J}+ h_{y^2}+ 1/2= 2.$ It follows that $u^i, 0\leq i\leq 9,$ is a local system of automorphisms. By Prop.
\ref{simple} there is a M\"{o}bius extension ${\cal D}=(\widetilde{SU(10)_2}\times SU(5)_1\times SO(7)_1)\ltimes {\mathbb Z}_{10}$ of $\widetilde{SU(10)_2}\times SU(5)_1\times SO(7)_1.$ By Cor. \ref{mirrorindex} and Lemma \ref{so}, $\mu_{\cal D}= 4.$ Consider now the inductions for $$ \widetilde{SU(10)_2}\times SU(5)_1\times SO(7)_1\subset {\cal D}. $$ By using the formulas for conformal dimensions in \S\ref{m1} one checks easily that $$H((\sigma,y^3,(7/16)), u) =H((1,1,(1/2)),u)=1.$$ By Lemma \ref{a=a'} we conclude that $\alpha_{(\sigma,y^3,(7/16))}, \alpha_{(1, 1, (1/2))}$ are DHR representations of ${\cal D}$ with index $2, 1$ respectively. Note that by Lemma \ref{3.3} $$ \langle \alpha_{(\sigma,y^3,(7/16))}, \alpha_{(\sigma,y^3,(7/16))}\rangle =\sum_{0\leq i\leq 9}\langle (\sigma,y^3,(7/16)), (\sigma,y^3,(7/16))u^i\rangle = 2, $$ where in the last step we have used $[\sigma \alpha_J^5]=[\sigma].$ It follows that $[\alpha_{(\sigma,y^3,(7/16))}]= [\delta_1]+[\delta_2].$ Since $\mu_{\cal D}=4,$ the list of irreducible representations is given by $$ 1,\alpha_{(1, 1, (1/2))}, \delta_1, \delta_2. $$ Modulo integers the conformal dimensions are $h_{\delta_1}= h_{\delta_2}=0, h_{\alpha_{(1, 1, (1/2))}}=1/2.$ These representations generate an abelian group of order $4$. To rule out ${\mathbb Z}_4$, note that $[\alpha_{(1, 1, (1/2))}^2]=[1].$ Without loss of generality we may assume that $\delta_1$ has order $4$. Then we must have $[\delta_1^2]=[\alpha_{(1, 1, (1/2))}], [\delta_1^3]=[\delta_2]. $ By the monodromy equation we have $\varepsilon(\delta_1,\delta_1)^2= -1, \varepsilon(\delta_1, \delta_2)\varepsilon(\delta_2, \delta_1)=1.$ On the other hand, by Lemma 4.4 of \cite{Rehren} we have $ \varepsilon(\delta_1, \delta_2)\varepsilon(\delta_2, \delta_1)=\varepsilon(\delta_1,\delta_1)^6 = -1,$ a contradiction. In particular we have $[\delta_1^2]=[1].$ Hence $1, \delta_1$ is a local system of automorphisms, and by Prop. \ref{simple} we conclude that there is a further extension ${\cal D}\ltimes {\mathbb Z}_2$ of ${\cal D}$. By Lemma \ref{indexab} we have $\mu_{{\cal D}\ltimes {\mathbb Z}_2}=1,$ i.e., ${\cal D}\ltimes {\mathbb Z}_2$ is holomorphic. The spectrum of $SU(10)_2\times SU(5)_1\times Spin(7)_1\subset {\cal D}\ltimes {\mathbb Z}_2$ is given by entry 40 in the table of \cite{Sch}: \begin{equation*} \sum_{0\leq i\leq 9} ([(J^i, y^{2i}, (1/2)^i)]+[(J^i(\Lambda_3+\Lambda_7), y^{2i}, (1/2)^i)]+ [(J^i(\Lambda_3+\Lambda_6), y^{2i+4}, (7/16))]) \end{equation*} \subsubsection{No. 27 of \cite{Sch}}\label{27} No. 27 in the table of \cite{Sch} suggests that we look for simple current extensions of $\widetilde{SU(9)_3}\times SU(3)_1\times SU(3)_1.$ Label the irreducible representations of $SU(3)_1$ by their conformal dimensions as $1, (1/3)_1, (1/3)_2.$ Set $x_1=(\alpha_J, (1/3)_1, (1/3)_1)$ and $x_2=(\tau_1, (1/3)_1, (1/3)_2).$ By using the formulas for conformal dimensions in \S\ref{m2} and Lemma \ref{checklocal} it is easy to check that the set $\{ x_1^i x_2^j, 0\leq i,j\leq 2\}$ is a local system of automorphisms. Hence by Prop. \ref{simple} there is a M\"{o}bius extension ${\cal D}_1=(\widetilde{SU(9)_3}\times SU(3)_1\times SU(3)_1) \ltimes ({\mathbb Z}_3\times{\mathbb Z}_3)$ of $\widetilde{SU(9)_3}\times SU(3)_1\times SU(3)_1$ with spectrum $\sum_{0\leq i,j\leq 2}[x_1^i x_2^j].$ By Lemma \ref{indexab} $\mu_{{\cal D}_1}= 1,$ so ${\cal D}_1$ is holomorphic. A small arithmetic cross-check of the conformal dimensions of the simple currents used here and in \S\ref{40} is sketched below.
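Locality of the above systems requires in particular that each current have trivial univalence, so its total conformal dimension must be an integer; the following sketch (in Python; purely illustrative, re-adding fractions already quoted in \S\ref{m1}, \S\ref{m2}, \S\ref{40} and this subsection) verifies this necessary condition.
\begin{verbatim}
from fractions import Fraction as F

# No. 40: u = (alpha_J, y^2, (1/2)), with h_{alpha_J} = 9/10 in
# SU(10)_2-tilde, h_{y^2} = 3/5 and h_{(1/2)} = 1/2:
h_u = F(9, 10) + F(3, 5) + F(1, 2)
assert h_u == 2

# No. 27: x1 = (alpha_J, (1/3)_1, (1/3)_1), x2 = (tau_1, (1/3)_1, (1/3)_2),
# with h_{alpha_J} = 4/3 and h_{tau_1} = 7/3 in SU(9)_3-tilde:
h_x1 = F(4, 3) + F(1, 3) + F(1, 3)    # = 2
h_x2 = F(7, 3) + F(1, 3) + F(1, 3)    # = 3
assert h_x1.denominator == 1 and h_x2.denominator == 1
\end{verbatim}
Integrality of these dimensions is of course only a necessary part of locality; the vanishing of the mutual monodromies is checked via Lemma \ref{checklocal} as in the text.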
The spectrum of ${SU(9)_3}\times SU(3)_1\times SU(3)_1\subset {\cal D}_1$ is given by (entry (27) of \cite{Sch}): \begin{align*} \sum_{ 0\leq i\leq 8} & ([(J^i, (1/3)_1^i, (1/3)_1^i)] + [(J^i(\Lambda_4 +\Lambda_6+\Lambda_8), (1/3)_1^{i-1}, (1/3)_1^{i+1})] \\ &+ [(J^i(\Lambda_4 +\Lambda_6+\Lambda_8), (1/3)_1^{i+1}, (1/3)_1^{i-1})] + [(J^i(\Lambda_3+\Lambda_7+\Lambda_8), (1/3)_1^i, (1/3)_1^i)]) \end{align*} \begin{remark} One can choose other local systems of automorphisms which generate ${\mathbb Z}_3\times {\mathbb Z}_3.$ For example, one such choice is the local system of automorphisms given by $x_1'^i {x_2'}^j,0\leq i,j\leq 2,$ with $x_1'=(\alpha_J, (1/3)_1, (1/3)_2), x_2'=(\tau_1, (1/3)_1, (1/3)_1).$ However, by remark \ref{outer} it is easy to check that the corresponding extension is simply ${\mathrm {Ad}}_U({\cal D}_1)$, which is isomorphic to ${\cal D}_1$, where ${\mathrm {Ad}}_U$ implements the outer automorphism of the last factor of $SU(3)_1.$ A similar statement holds for the other choices of local systems of automorphisms which generate ${\mathbb Z}_3\times {\mathbb Z}_3.$ \end{remark} \subsubsection{No. 18 of \cite{Sch}}\label{18} No. 18 in the table of \cite{Sch} suggests that we look for simple current extensions of $\widetilde{SU(8)_4}\times SU(2)_1\times SU(2)_1\times SU(2)_1.$ As before we label the non-vacuum representation $(1/4)$ of $SU(2)_1$ by its conformal dimension. Set $z_1=(\alpha_J, (1/4),0,0), z_2=((3/4)_1, 0, (1/4),0), z_3= ((3/4)_2, 0,0, (1/4)).$ Then by the formulas for conformal dimensions and fusion rules in \S\ref{m3} one checks easily that $H( z_i, z_j)=1, 1\leq i,j\leq 3.$ Hence $\{ z_1,z_2,z_3\}$ generates an abelian group ${\mathbb Z}_2\times {\mathbb Z}_2\times {\mathbb Z}_2$ which is a local system of automorphisms by Lemma \ref{checklocal}. By Prop. \ref{simple} we conclude that there is a M\"{o}bius extension ${\cal D}_2:=(\widetilde{SU(8)_4}\times SU(2)_1\times SU(2)_1\times SU(2)_1)\ltimes ({\mathbb Z}_2\times {\mathbb Z}_2\times {\mathbb Z}_2).$ By Lemma \ref{indexab} we have $\mu_{{\cal D}_2}=1,$ i.e., ${\cal D}_2$ is holomorphic. The spectrum of ${SU(8)_4}\times SU(2)_1\times SU(2)_1\times SU(2)_1\subset {\cal D}_2$ is given by (entry (18) of \cite{Sch}): \begin{align*} \sum_{0\leq i\leq 7} ([(J^i, (1/4)^i,0,0)]+ [(J^i(\Lambda_0+\Lambda_4+\Lambda_5+\Lambda_7), (1/4)^i,0,0)] \\ +[(J^i(2\Lambda_5+2\Lambda_7), (1/4)^i,(1/4),(1/4))] +[(J^i(2\Lambda_0+\Lambda_3+\Lambda_5), (1/4)^i,(1/4),(1/4))] \\ + [(J^i(\Lambda_0+\Lambda_3+\Lambda_6+\Lambda_7), (1/4)^i,0,(1/4))] + [(J^i(\Lambda_0+\Lambda_3+\Lambda_6+\Lambda_7), (1/4)^i, (1/4),0)]) \end{align*} \subsubsection{The main theorem} By Lemma \ref{extconformal}, ${\cal D}\ltimes {\mathbb Z}_2, {\cal D}_1, {\cal D}_2$ as constructed in \S\ref{40}, \S\ref{27} and \S\ref{18} are in fact conformal nets, since they contain conformal subnets with finite index, and in summary we have proved the following: \begin{theorem}\label{main} There are holomorphic conformal nets (with central charge $24$) which are conformal extensions of $SU(10)_2\times SU(5)_1\times Spin(7)_1$, $SU(9)_3\times SU(3)_1\times SU(3)_1$ and ${SU(8)_4}\times SU(2)_1\times SU(2)_1\times SU(2)_1$, with spectrum given by the representations at the end of \S\ref{40}, \S\ref{27} and \S\ref{18} respectively. \end{theorem} \subsection{Two conjectures} The holomorphic conformal net corresponding to $V^\natural$ of \cite{FLM} was constructed in \cite{KL2}.
This net can also be constructed, using the results of \cite{DX}, as a simple current ${\mathbb Z}_2$ extension of a ${\mathbb Z}_2$ orbifold conformal net associated with the Leech lattice given in \cite{DX}. Our first conjecture is an analogue of the conjecture in \cite{FLM} for $V^\natural$: \begin{conjecture}\label{conj1} Up to isomorphism there exists a unique holomorphic conformal net with central charge $24$ and no elements of weight one. \end{conjecture} Our second conjecture is motivated by the results of \cite{Sch}: \begin{conjecture}\label{conj2} Up to isomorphism there exist only finitely many holomorphic conformal nets with central charge $24$. \end{conjecture} Note that if one can obtain a theorem like the theorem in \S2 of \cite{Sch} in the setting of conformal nets, then modulo Conjecture \ref{conj1}, Conjecture \ref{conj2} is reduced to showing that, up to equivalence, there are only finitely many conformal extensions of a given completely rational net, and this should be true in view of the results of \cite{IK}. However, new methods have to be developed to carry through this idea.
\section{Introduction} The largest scale velocity fields in the solar photosphere consist of the rotation profile and a meridional flow pattern. Basically, the differential rotation is described as an integral of the zonal component $v_\varphi$ of the studied flow field. The integrated flow field may be obtained using a spectroscopic method, using tracer-type measurements, or using helioseismic inversions. The last of these allows one to measure the solar rotation not only as a function of heliographic latitude, but also as a function of depth. From helioseismic inversions we know that throughout the convective envelope, the rotation rate decreases monotonically toward the poles by about 30~\%. Angular velocity contours at mid-latitudes are nearly radial. Near the surface at the top of the convection zone there is a layer of large radial shear in the angular velocity. At low and mid-latitudes there is an increase in the rotation rate immediately below the photosphere which persists down to $r \sim 0.95~R_\odot$. The angular velocity variation across this layer is roughly 3~\% of the mean rotation rate, and according to the helioseismic analysis of \cite{2002SoPh..205..211C} the angular velocity $\omega$ decreases within this layer approximately as $r^{-1}$, where $r$ is the radial coordinate. At higher latitudes, the situation is less clear. For an overview of solar differential rotation measurements see \cite{1985SoPh..100..141S} or a more recent review by \cite{2000SoPh..191...47B}. The \emph{torsional oscillations}, in which narrow bands of faster than average rotation, interpreted as zonal flows, migrate towards the solar equator during the sunspot cycle, were discovered by \cite{1980ApJ...239L..33H}. Later research \citep{2001ApJ...559L..67A} found that there exist two different branches of torsional oscillations. At latitudes below about 40\,$^\circ$, the bands propagate equatorward, but at higher latitudes they propagate poleward. The low-latitude bands are about 15\,$^\circ$ wide in latitude. The flows were studied in surface Doppler measurements \citep{2001ApJ...560..466U}, and also using local helioseismology \citep{1997ApJ...482L.207K}. The surface pattern of torsional oscillations penetrates deep into the convection zone, possibly to its base, as suggested by \cite{2002Sci...296..101V}. The amplitude of the angular velocity variation is about 2--5~nHz, which is roughly 1~\% of the mean rotational rate (5--10~m\,s$^{-1}$). A direct comparison between different techniques inferring the surface zonal flow pattern \citep{2006SoPh..235....1H} showed that the results are sufficiently coherent. The surface magnetic activity corresponds well with the torsional oscillation pattern -- the magnetic activity belt tends to lie on the poleward side of the faster-rotating low-latitude bands. The magnetic activity migrates towards the equator with the low-latitude bands of the torsional oscillations as the sunspot cycle progresses \citep{2004ApJ...603..776Z}. Some studies \citep[e.\,g.][]{2002ApJ...575L..47B} suggest that meridional flows may diverge out from the activity belts, with the equatorward and poleward flows well correlated with the faster and slower bands of torsional oscillations.
In a recent theoretical study by \cite{2007ApJ...655..651R} it was suggested that the poleward-propagating high-latitude branch of the torsional oscillations can be explained as a response of the coupled differential rotation/meridional flow system to periodic forcing in midlatitudes of either mechanical (Lorentz force) or thermal nature. The equatorward-propagating low-latitude branch is most likely not a consequence of the mechanical forcing alone, but rather of thermal origin. The axisymmetric flow in the meridional plane is generally known as the \emph{meridional circulation}. The meridional circulation in the solar envelope is much weaker than the differential rotation, making it relatively difficult to measure. Two principal methods are widely used to measure the meridional flow: feature tracking and direct Doppler measurement. There are several difficulties complicating the measurement of the meridional flow using tracers. Sunspots and filaments do not provide sufficient temporal and spatial resolution for such studies. Sunspots also cover only low latitudinal belts and do not provide any information about the flow at higher latitudes. Doppler measurements do not suffer from the problems associated with tracer-type measurements; however, they introduce another type of noise. It is difficult to separate the meridional flow signal from the variation of the Doppler velocity from the disc centre to the limb. The parameters of the meridional flow obtained using different techniques show large discrepancies. It is generally assumed that the solar meridional flow in the close subphotospheric layers is directed poleward, with one cell per hemisphere. Such a flow is also produced by early global hydrodynamical simulations such as that of \cite{1982ApJ...256..316G}. As reviewed by \cite{1996ApJ...460.1027H}, the surface or near sub-surface velocities of the meridional flow are generally in the range 1--100~m\,s$^{-1}$; the most often measured values lie within the range of 10--20~m\,s$^{-1}$. The flow often has a complex latitudinal structure with both poleward and equatorward flows, multiple cells, and large asymmetries with respect to the equator. \cite{2004ApJ...603..776Z} used time-distance helioseismology to infer the properties of the meridional flow in the years 1996--2002. They found meridional flows of the order of 20~m\,s$^{-1}$, which remained poleward during the whole period of observations. In addition to the poleward meridional flows observed at the solar minimum, extra meridional circulation cells of flows converging toward the activity belts are found in both hemispheres, which may imply plasma downdrafts in the activity belts. These converging flow cells migrate toward the solar equator together with the activity belts as the solar cycle evolves. \cite{2002ApJ...575L..47B} measured the meridional flow (and torsional oscillations) using time-distance helioseismology and found the residual meridional flow showing divergent flow patterns around the solar activity belts below a depth of 18~Mm. The most complete maps of the torsional oscillations and the meridional flow available at present have been constructed on the basis of Mt.~Wilson daily magnetograms \citep[see][]{ulrich90}. The measurements cover more than 20 years (since 1986) and the results obtained using this very homogeneous material agree well with the properties described above. The modern dynamo flux-transport models use the meridional flow and the differential rotation as observational input.
In the models by Dikpati et al. (\citeauthor{2006ApJ...638..564D} \citeyear{2006ApJ...638..564D} or \citeauthor{2006ApJ...649..498D} \citeyear{2006ApJ...649..498D}) the return meridional flow at the base of the convection zone is calculated from the continuity equation. They found a turnover time of the single meridional cell of 17--21~years. The meridional flow is assumed to be essential for the dynamo action, the global magnetic field reversal, and the forecast of future solar cycles. Many relations between the differential rotation profile and the phase of the progressing solar cycle are known -- see e.\,g. \cite{2003SoPh..212...23J} or \cite{2005ApJ...626..579J} -- showing for example different properties of the differential rotation profile in the odd and even solar cycles. The rotation of sunspots in relation to their morphological type was studied e.\,g. by \cite{1986AA...155...87B}, who found that more evolved types of sunspots (E, F, G and H types) rotate slower than less evolved types. \cite{2004SoPh..221..225R} investigated the Greenwich Photoheliographic Results for the years 1874--1976 and found clear evidence for the deceleration of the sunspots in the photosphere with their evolution. \cite{2002aprm.conf..427H} found that the leading part of a complex sunspot group rotates about 3~\% faster than the following part. The dependence of the rotation of sunspots on their size and position in the bipolar region was investigated by \cite{1994SoPh..151..213D}. They explained the observed behaviour through a subtle interplay between the forces of magnetic buoyancy and drag, coupled with the role of the Coriolis force acting on rising flux tubes. This dynamics of rising flux tubes also explains the faster rotation of smaller sunspots. On average, sunspots rotate about 5~\% faster than the surrounding plasma. In the theoretical study \citep{2004SoPh..220..333B}, based on 3-D numerical simulations of compressible convection under the influence of rotation and magnetic fields in spherical shells, the author stated that in the presence of magnetic fields the Maxwell stresses may oppose the Reynolds stresses, and therefore the angular momentum is transported more to the poles than without the presence of magnetic fields. As a consequence, the rotation profile is more differential in periods of lowered magnetic activity, which leads to an increase of the rotation rate at low latitudes. This behaviour was observed in many studies, e.\,g. \cite{1990ApJ...357..271H}. The subject of this work is a verification of the performance of the method described in \cite{svanda06} (hereafter Paper~I) on real data and the investigation of the long-term properties of the flows at the largest scales obtained with this method. We shall also discuss the influence of magnetic fields on the measured zonal flow in the equatorial region. This topic will be studied in more detail in one of the next papers in the series. \begin{figure*} \centering \resizebox{\textwidth}{!}{\includegraphics{fig01.ps}} \caption{Mean meridional flow in time and heliographic latitude. It can be clearly seen that for almost all the processed measurements a simple model of one meridional cell per hemisphere would be sufficient. However, some local departures from this simple picture can be noticed on both hemispheres.} \label{fig:meridional} \end{figure*} \begin{figure*} \centering \resizebox{\textwidth}{!}{\includegraphics{fig02.ps}} \caption{Torsional oscillations.
The residuals of the mean zonal flow with respect to its parabolic fit, displayed in time and heliographic latitude. In the period of weak magnetic activity the pattern of belts propagating towards the equator is very clear. In the periods of stronger magnetic activity the flow field is influenced by the local motions in active regions and therefore the pattern of torsional oscillations is not clearly seen.} \label{fig:torsional} \end{figure*} \section{Data processing} The horizontal photospheric velocity fields are calculated using the local correlation tracking (LCT) algorithm \citep{1986ApOpt..25..392N} applied to series of processed full-disc dopplergrams measured by the Michelson Doppler Imager \citep[MDI; ][]{1995SoPh..162..129S} on-board the Solar and Heliospheric Observatory (SOHO). In the dopplergrams, the supergranular pattern is tracked in order to obtain the properties of the velocity field on larger scales. For this study we have processed all suitable data measured by MDI. The instrument observed full-disc dopplergrams approximately two months each year at a high cadence of one frame per minute. These \emph{Dynamic campaigns} provide suitable material for our method. Between May 23, 1996 and May 22, 2006 we have 806 days covered by high-cadence measurements. In each of these days, two 24-hour averages sampled every 12 hours were calculated. On some days, MDI had significant gaps in the measurements, so in such cases we did not have enough homogeneous material to process. Therefore, of all the days in the \emph{Dynamic campaigns}, 502 were useful for our analysis, and we calculated 1004 full-disc horizontal velocity fields. The processing of almost 3~TB of primary data took several months using fast computers running in the network of the W.~W.~Hansen Laboratory, Stanford University. The data were processed using the technique described in detail in Paper~I. We give just a brief summary here. The method processes 24-hour series of MDI full-disc dopplergrams, containing 1\,440 frames. The one-day series first undergoes the removal of noise and disturbing effects. From all frames, the line-of-sight component of the Carrington rotation is subtracted and the effect of perspective is corrected. The frames are transformed so that the heliographic latitude of the disc centre $b_0=0$ and the position angle of the solar rotation axis $P=0$. Then, the $p$-modes of the solar oscillations are removed using a weighted average \citep[see][]{1988SoPh..117....1H}. The weights have a Gaussian form given by the formula: \begin{equation} w(\Delta t)=e^{-\frac{(\Delta t)^2}{2a^2}}-e^{-\frac{b^2}{2a^2}}\left(1+\frac{b^2-(\Delta t)^2}{2a^2} \right), \end{equation} where $\Delta t$ is the time distance of a given frame from the central one (in minutes), $b=16$~minutes and $a=8$~minutes. We sample the averaged images at intervals of 15 minutes. The filter suppresses the solar oscillations in the 2--4~mHz frequency band by a factor of more than five hundred. The processing of the averaged frames consists of two main steps. In the first main step the mean zonal velocities are calculated and, on the basis of the expansion into Fay's formula $\omega=c_0+c_1 \sin^2 b + c_2\sin^4 b$, the differential rotation is removed. In the second main step, the LCT algorithm with an enhanced sensitivity is applied. Finally, the differential rotation (obtained in the first step) is added to the vector velocity field obtained in the second main step. Both main steps can be divided into several sub-steps, which are mostly common to both; before listing them, we illustrate the temporal filtering step with a short numerical sketch.
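The following sketch (in Python, assuming NumPy; the one-minute cadence and the test frequency are illustrative assumptions, not values prescribed by the pipeline) evaluates the weights above with $a=8$ and $b=16$ minutes and estimates the attenuation of a five-minute oscillation.
\begin{verbatim}
import numpy as np

a, b = 8.0, 16.0                     # filter parameters in minutes

def w(dt):
    """Weight of a frame at time distance dt (minutes) from the centre;
    note w(b) = 0 and w'(b) = 0, so the window closes smoothly."""
    return (np.exp(-dt**2 / (2 * a**2))
            - np.exp(-b**2 / (2 * a**2)) * (1 + (b**2 - dt**2) / (2 * a**2)))

dt = np.arange(-b, b + 1)            # one-minute cadence, |dt| <= b
weights = w(dt)
weights /= weights.sum()             # unit response at zero frequency

f = 3.3e-3                           # Hz, inside the 2-4 mHz p-mode band
response = abs(np.sum(weights * np.exp(-2j * np.pi * f * dt * 60.0)))
print(f"suppression at 3.3 mHz: {1.0 / response:.0f}x")
\end{verbatim}
The quoted suppression of the 2--4~mHz band by more than a factor of five hundred can be checked in this way across the whole band.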
\begin{enumerate} \item The data series containing 96 averaged frames is ``derotated'' using the Carrington rotation rate in the first step and the calculated differential rotation in the second step. \item The derotated data are transformed into the Sanson-Flamsteed coordinate system to remove the geometrical distortion caused by the projection onto the disc. The Sanson-Flamsteed (also known as sinusoidal) pseudo-cylindrical projection conserves areas and is therefore suitable for the preparation of the data used by the LCT. \item The remapped data undergo $k$-$\omega$ filtering \citep[e.~g.][]{1989ApJ...336..475T} with a cut-off velocity of 1\,500~m\,s$^{-1}${} to suppress the noise coming from the evolutionary changes of supergranules and the numerical noise, and to partially remove the ``blind spot'' (an effect at the centre of the disc, where the supergranular structures are almost invisible in dopplergrams due to the prevailing horizontality of their internal velocity field). \item Finally, the LCT is applied: the lag between the correlated frames is 4~hours, the correlation window has a \emph{FWHM} of 60\arcsec, the measure of correlation is the sum of absolute differences, and the nine-point method is used to calculate the subpixel displacement. The calculated velocity field is averaged over the period of one day. \item The resulting velocity field is corrected using the formula \begin{equation} v_{\rm cor}=1.13\,v_{\rm calc}, \end{equation} where $v_{\rm calc}$ is the magnitude of the velocities coming from the LCT and $v_{\rm cor}$ the corrected magnitude. The directions of the vectors are unchanged by the correction. The calibration formula was obtained from tests on the synthetic data (see Paper~I). Finally, the $v_x$ component is corrected for the data-processing bias of $-15$~m\,s$^{-1}${}, also determined in Paper~I. \end{enumerate} \noindent In this study, we were interested only in the properties of the mean zonal and meridional components. Therefore, from each two-component horizontal velocity field the mean zonal and meridional components, depending only on heliographic latitude, were calculated as the longitudinal average of the flow map, using 135 longitudinal degrees around the central meridian. As stated in Paper~I, the accuracy of each velocity vector is 15~m\,s$^{-1}${} for velocities under 100~m\,s$^{-1}${} and 25~m\,s$^{-1}${} for velocities above 100~m\,s$^{-1}$. These inaccuracies have the character of a random error; therefore, for the mean zonal and meridional components the accuracy is in the worst case 1~m\,s$^{-1}$. The performance of the method was verified by \cite{2007astro.ph..1717S}. The results of comparisons between the technique used in the present study and time-distance helioseismology show that both methods match reasonably well. The calculated surface flows may be biased by projection effects, although the tests on the synthetic data (Paper~I) did not show any signs of them. \cite{2006ApJ...644..598H} showed that the apparent superrotation of structures tracked in dopplergrams, reported by many studies, can be explained as a projection effect. However, this bias would produce a systematic error, which should influence neither the period analysis nor the relative motions of the active regions with respect to their surroundings. \section{Results} \subsection{Long-term properties} For the study of the long-term evolution of the surface flows, maps containing the mean zonal and meridional components were calculated.
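These mean components follow from the calibrated flow maps by simple numerical operations. The following minimal sketch (in Python) illustrates three of them -- the $p$-mode filter weights of the equation above, the calibration of step~(v), and the longitudinal averaging. All names, the array layout, the truncation of the filter window at $|\Delta t|=b$ (where the weight vanishes), and the sign convention of the bias correction are our assumptions, not code from Paper~I.
\begin{verbatim}
import numpy as np

def pmode_filter_weights(dt, a=8.0, b=16.0):
    # Temporal-filter weights (Hathaway 1988); dt, a, b in minutes.
    # The weight vanishes at |dt| = b and is truncated beyond.
    dt = np.asarray(dt, dtype=float)
    w = (np.exp(-dt**2 / (2 * a**2))
         - np.exp(-b**2 / (2 * a**2)) * (1 + (b**2 - dt**2) / (2 * a**2)))
    return np.where(np.abs(dt) <= b, w, 0.0)

def calibrate_lct(vx, vy, gain=1.13, vx_bias=-15.0):
    # Scale the LCT velocity magnitudes by 1.13 (directions are
    # unchanged) and remove the zonal bias (sign convention assumed).
    return gain * vx - vx_bias, gain * vy

def mean_components(vx_map, vy_map, lon, lon0, half_width=67.5):
    # Longitudinal average over 135 degrees around the central
    # meridian lon0; the maps are (latitude, longitude) arrays.
    sel = np.abs(lon - lon0) <= half_width
    return vx_map[:, sel].mean(axis=1), vy_map[:, sel].mean(axis=1)
\end{verbatim}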
The maps of the mean meridional component in time and heliographic latitude are shown in Fig.~\ref{fig:meridional}. We can clearly see that on the northern hemisphere the flow towards the northern pole dominates, while on the southern hemisphere the flow towards the southern pole prevails. The ``zero line'', the boundary between the flow polarities, is not located exactly on the solar equator and seems to be shifted to the south in the period of increased solar activity (2001 and 2002). In agreement with \cite{1997AA...319..683M} and \cite{1998SoPh..183..263C}, we found the meridional flow to be stronger by about 10~m\,s$^{-1}${} in periods of increased solar activity than in periods of lower magnetic activity. A similar map was made in the same way for the zonal component. The mean equatorial zonal velocity for all the data is 1900~m\,s$^{-1}$. For all the processed data the dependence on latitude is close to a parabolic shape, whose parameters change slowly in time. The residua of the zonal velocity with respect to its parabolic fit given by \begin{equation} v_b=a_0+a_1 b + a_2 b^2, \end{equation} where $b$ is the heliographic latitude and $v_b$ the mean zonal velocity at the given latitude, were calculated in order to see whether we are able to detect the torsional oscillations in our measurements. As displayed in Fig.~\ref{fig:torsional}, the method clearly reveals the torsional oscillations as an excess of the mean zonal velocity with respect to the zonal velocity in the neighbourhood. The behaviour of the torsional oscillations is in agreement with their usual description -- the excess in magnitude is of the order of 10~m\,s$^{-1}$; they start at the beginning of the solar cycle at high latitudes and propagate towards the equator as the 11-year cycle progresses. However, with our method the visibility of the torsional oscillations decreases with increasing solar activity. In periods of strong activity the belts are not so clearly visible, since the large-scale velocity field and its parabolic fit are strongly influenced by the presence of magnetic regions. However, the torsional-oscillation belts still remain visible when the mean zonal component is symmetrised with respect to the solar equator. We did not focus on a study of the meridional flow or the torsional oscillations as functions of time and latitude; we just used them to check the ability and performance of our method. \subsection{Periods in the mean components} The mean zonal and meridional components in the equatorial area (averaged in the belt $b=-5\,^\circ$ to $+5\,^\circ$) were analysed in order to examine the periods contained in the data. Since the data are far from equidistant, we cannot use a simple harmonic analysis, so the \emph{Stellingwerf method} \citep{1978ApJ...224..953S} was applied. It works on the principle of phase dispersion minimization. The method folds the data for every searched period into a phase diagram. The phase diagram is then divided into a few (mostly ten) parts, and for every part the mean dispersion is calculated. If the studied period is real, the data points group along a periodic curve and the dispersion in each part of the phase diagram is smaller than the dispersion of the whole data series. The normalized parameter $\theta \in \left(0,1\right>$ describes the quality of a given period.
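For reference, a minimal sketch of this statistic (in Python; we use a simplified pooled-variance variant of Stellingwerf's estimator, and all names and the synthetic test are ours):
\begin{verbatim}
import numpy as np

def pdm_theta(t, y, period, n_bins=10):
    # Phase dispersion statistic: pooled variance within phase bins
    # divided by the variance of the whole series; values well
    # below 1 flag a convincing period.
    phase = (t / period) % 1.0
    sigma2 = np.var(y, ddof=1)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    num, den = 0.0, 0
    for lo, hi in zip(edges[:-1], edges[1:]):
        yj = y[(phase >= lo) & (phase < hi)]
        if yj.size > 1:
            num += (yj.size - 1) * np.var(yj, ddof=1)
            den += yj.size - 1
    return (num / den) / sigma2

# Synthetic test: sparse, non-equidistant sampling of a 657-day
# signal plus noise, scanned over trial periods.
rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0.0, 3650.0, 800))
y = np.sin(2 * np.pi * t / 657.0) + 0.5 * rng.standard_normal(t.size)
periods = np.linspace(300.0, 2000.0, 500)
theta = np.array([pdm_theta(t, y, p) for p in periods])
print(periods[np.argmin(theta)])   # recovers a period near 657 days
\end{verbatim}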
We cannot completely exclude an influence of the position and orientation of the solar disc (the position angle of the rotation axis $P$ and the heliographic latitude of the disc centre $b_0$) on the calculated flow fields, so we also put the series of both parameters, sampled on the same dates for which our measurements of the surface flows exist, through the period analysis. Using the same method, we also checked for periods caused by the sampling of the data. The periodograms are displayed in Fig.~\ref{fig:periodograms}. \begin{figure*} \centering \resizebox{0.49\textwidth}{!}{\includegraphics{fig03a.eps}} \resizebox{0.49\textwidth}{!}{\includegraphics{fig03b.eps}}\\ \resizebox{0.49\textwidth}{!}{\includegraphics{fig03c.eps}} \resizebox{0.49\textwidth}{!}{\includegraphics{fig03d.eps}}\\ \caption{Periodograms determined using the Stellingwerf method. The parameter $\theta$ signifies the normalized phase variation. Upper left: Periodogram of the mean equatorial zonal velocity. Upper right: Periodogram of the mean equatorial meridional velocity. Bottom left: Periodogram of the sampled heliographic latitude of the centre of the solar disc. Bottom right: Periodogram related to the sampling of the data.} \label{fig:periodograms} \end{figure*} We conclude that we did not detect any significant period in the available data set. As can be seen in Fig.~\ref{fig:periodograms}, there exist unconvincing signs of periods detected in the real data (the values of the parameter $\theta$ are quite high, which means that the periods are probably not significant) which are not present in the control data set. The values of these suspicious periods are 657~days (1.80~years) in the meridional component and 1712~days (4.69~years) in the zonal component. We note that the 1.8-year period was also detected by \cite{2004AA...418L..17K}. It is claimed to be related to a possible Rossby wave $r$-mode signature in the photosphere with azimuthal order $m \sim 50$ reported by \cite{2000Natur.405..544K}, but lately disputed e.\,g. by \cite{2006SPD....37.3002W}. The period estimate for such an $r$-mode is close to 1.8 years. According to \cite{2005AA...438.1067K}, such a periodicity was observed in the total magnetic flux only on the southern hemisphere from 1997 to 2003. The coupling between the zonal flow and the meridional circulation could transfer the signal of the $r$-mode motion to the mean meridional component. The detected suspicious periods may not be of solar origin. The sparse data set suffers from aliasing caused by the poor coverage of the studied interval. To confirm the periods, a far more homogeneous data set is needed. This may be a task for the upcoming space-borne experiment, the Helioseismic and Magnetic Imager (HMI), which will be a successor of MDI. The detected periodicities are absent in the Mt.~Wilson torsional oscillation time series, which is a far more homogeneous material than the one used in this study. \cite{1990ApJ...351..309S} also did not find any time variations in their study tracking features in the low-resolution dopplergrams covering 20 years of Mt.~Wilson observations homogeneously. All the arguments written above led us to regard the detected periods only as suspicious, as they cannot be confirmed from the current data set. \subsection{Relation to the magnetic activity} We also investigated the coupling between the equatorial zonal velocity (the average equatorial solar rotation) and the solar activity in the near-equatorial area (the belt of heliographic latitudes from $-10\,^\circ$ to $+10\,^\circ$).
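The quantitative comparisons in this subsection reduce to Pearson correlations of the two series and, later in the subsection, of their temporal derivatives. A minimal sketch (in Python; all names are ours, and the sketch ignores the complication of the large observing gaps when differentiating):
\begin{verbatim}
import numpy as np

def activity_correlations(t, v_eq, area):
    # Pearson correlation of the mean equatorial zonal velocity
    # with the near-equatorial sunspot area, for the values and
    # for their temporal derivatives; t in days, non-equidistant.
    rho_values = np.corrcoef(v_eq, area)[0, 1]
    rho_deriv = np.corrcoef(np.gradient(v_eq, t),
                            np.gradient(area, t))[0, 1]
    return rho_values, rho_deriv
\end{verbatim}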
The average equatorial zonal velocity incorporates the average rotation of the supergranular network and also the motion of degenerated supergranules, influenced by the local magnetic field, with respect to their non-magnetic vicinity. The indices of solar activity were extracted from the daily reports made by the \emph{Space Environment Center of the National Oceanic and Atmospheric Administration (SEC NOAA)}. Only the days for which the measurements of horizontal flows exist were taken into account. As the index of activity we considered the total area of sunspots in the near-equatorial belt and also their type. \begin{figure} \resizebox{0.5\textwidth}{!}{\includegraphics{fig04.ps}} \caption{Mean zonal equatorial velocity versus the sunspot area in the near-equatorial belt. We decided to divide the data into two regimes along the velocity axis. Although the division is arbitrary, we believe that it is supported by the theory of the dynamical disconnection of sunspots from their roots.} \label{fig:activity_velocity} \end{figure} \begin{figure*}[!] \resizebox{\textwidth}{!}{\includegraphics{fig05.ps}} \caption{Sunspot area sampled at the times when the measurements of the horizontal flows exist. Two regimes of the near-equatorial belt rotation are displayed. Diamonds denote the ``fast'' rotating equatorial belts, crosses the ``scattered'' group.} \label{fig:velocity_regimes} \end{figure*} First of all, we computed the correlation coefficient $\rho$ between the mean equatorial zonal velocity and the sunspot area in the near-equatorial belt and obtained a value of $\rho=-0.17$. We cannot conclude that there is any linear relation between these two indices. The dependence of the two quantities is plotted in Fig.~\ref{fig:activity_velocity}. We can clearly find two different regimes, divided at a velocity of approximately 1890~m\,s$^{-1}$. In one regime (77~\% of the cases) the equatorial belt rotates about 60~m\,s$^{-1}${} faster than the Carrington rotation ($1910 \pm 9$~m\,s$^{-1}$; hereafter the ``fast group''), while in the other one (23~\%) the rotation rate is scattered around the Carrington rate ($1860 \pm 20$~m\,s$^{-1}$; hereafter the ``scattered group''). The division into the two suggested groups using the speed criterion is arbitrary. If only two groups exist, they certainly overlap, and only a very detailed study could resolve their members. One may also see more than two groups in Fig.~\ref{fig:activity_velocity}. The arguments for the division into just two groups will follow. For neither regime does a typical sunspot area exist. The distribution of the two regimes in time is displayed in Fig.~\ref{fig:velocity_regimes}. The data in the periods of larger solar activity (the years 2001 and 2002, which are also the only years when the data cover two Carrington rotations continuously) show that the two regimes alternate with a period of one Carrington rotation. The histogram of the mean zonal equatorial velocity has a similar, i.~e. bimodal, character to Fig.~\ref{fig:activity_velocity}, with a larger second peak, because such a histogram is constructed not only from the belts containing magnetic activity but also from the belts where no magnetic activity was detected. The mean equatorial rotation for all the data is 1900~m\,s$^{-1}$; it is 1896~m\,s$^{-1}${} in the presence of sunspots and 1904~m\,s$^{-1}${} on days without sunspots in the equatorial region. Such a bimodal velocity distribution is in disagreement with the results obtained by time-distance helioseismology by \cite{2004ApJ...607L.135Z}.
In that work the authors found that the stronger the magnetic field, the faster the magnetic element rotates. They observed a rotation about 70~m\,s$^{-1}${} faster than the average for magnetic areas with magnetic fields stronger than 600~G. It might be possible that the size of the magnetic area and its magnetic field strength play different roles in influencing the plasma motions. However, \cite{2005AA...436.1075M} showed that the size of the magnetic area and the maximum magnetic field strength or the total magnetic flux correlate quite well. \begin{figure*}[!] \resizebox{0.48\textwidth}{!}{\includegraphics{fig06.ps}} \resizebox{0.505\textwidth}{!}{\includegraphics{fig07.ps}} \caption{Left: Distribution of the two equatorial rotation modes in the year 2002. Right: Derivatives of the mean zonal velocity (solid curve) and of the sunspot area in the near-equatorial region (dashed curve) in the year 2002. The two quantities correlate with each other quite nicely.} \label{fig:gradients2002} \end{figure*} \begin{figure*}[!] \resizebox{0.48\textwidth}{!}{\includegraphics{fig08.ps}} \resizebox{0.48\textwidth}{!}{\includegraphics{fig09.ps}} \caption{Active region morphological types (left) and the number of active regions in the near-equatorial belt, distributed as functions of time and sunspot area.} \label{fig:spottypes2002} \end{figure*} Detailed studies of the sunspot drawings obtained from the Patrol Service of the Ond\v{r}ejov Observatory and from the Mt.~Wilson Observatory drawings archive revealed that in the ``fast'' group new or growing young active regions were present in the equatorial belt. On the contrary, in the ``scattered'' group decaying or recurrent active regions prevailed in the equatorial area. The deceleration of a sunspot group with its evolution was noticed e.\,g. by \cite{2004SoPh..221..225R}. Moreover, our results suggest that the new and rapidly growing sunspots in the studied sample (March to May 2001 and April to June 2002) move with the same velocity. This behaviour could be explained by the emergence of the local magnetic field from a confined subphotospheric layer. According to the rough estimate of \cite{2002SoPh..205..211C}, the speed of $1910 \pm 9$~m\,s$^{-1}${} corresponds to the layer at $0.946 \pm 0.008\ R_\odot$ where the angular velocity of rotation suddenly changes. During the evolution, the magnetic field is disrupted by the convective motions. An interesting behaviour is displayed by the alternation of the ``fast'' and ``scattered'' regimes (see Fig.~\ref{fig:gradients2002}) with the period of one Carrington rotation. It suggests that active regions in the equatorial region emerge in groups. We have to keep in mind that our study provides only rough information, owing to the averaging of all effects in the equatorial belt. The observed behaviour could be a manifestation of the disconnection of the magnetic field lines from the base of the surface shear during the evolution of a growing sunspot group. This behaviour was studied theoretically by \cite{2005AA...441..337S}. They suggested the dynamical disconnection of bipolar sunspot groups from their magnetic roots deep in the convection zone by upflow motions within three days after the emergence of the new sunspot group. The motion of sunspots changes during those three days from ``active'' to ``passive''. The active mode is characterised by motions considerably faster with respect to a non-magnetic origin.
The passive mode mostly means a deceleration of the sunspot motions, which are then influenced only by the shallow surface plasma dynamics. The theory of the disconnection of sunspot groups from their magnetic roots supports the division of the data set into two groups. As an example, we selected the active region NOAA~9368 (Fig.~\ref{fig:group_evolution}) to show the behaviour of the large-scale velocities in time. We see that the leading part of the active region rotates faster than the surroundings on the first day of observation, and the whole group slows down over the next two days. An inspection of the details of the behaviour of selected active regions in the whole data set will be the subject of ongoing studies. Dividing the equatorial belt into 10 sectors helped in the investigation of the behaviour of the different active region types. The mean zonal velocities in the sectors containing the studied active regions, sorted according to the morphological type of the active region, are summarized in Table~\ref{tab:sunspot_types}. It can be clearly seen that, on average, more evolved active regions rotate more slowly than less evolved or young active regions, which is in agreement with e.~g. \cite{1986AA...155...87B}. We have also replotted Fig.~\ref{fig:activity_velocity} using the segmented equatorial belt and the different active region types and obtained a very similar result. We did not find any particular behaviour for the various active region types (see Fig.~\ref{fig:spottypes2002}). \begin{table}[b] \caption{Mean synodic rotation velocities of the different active region types and the average equatorial rotation of all the data. The measured speed values may be systematically biased by the projection effect \citep[see][]{2006ApJ...644..598H}.} \centering \begin{tabular}{cc} \hline \hline Sunspot & Mean rotation \\ type & [m\,s$^{-1}$]\\ \hline A & $1893 \pm 48$ \\ B & $1893 \pm 49$ \\ C & $1890 \pm 71$ \\ D & $1880 \pm 73$ \\ E & $1880 \pm 50$ \\ F & $1874 \pm 73$ \\ H & $1872 \pm 51$ \\ \hline Average & $1900$ \\ \hline \end{tabular} \label{tab:sunspot_types} \end{table} We have also focused on how the presence of the magnetic active areas influences the average flow field. Since we found that the direct correlation is weak, due to the existence of the two different regimes, we decided to study the temporal changes of both quantities. The aim is to study whether an emerging active region in the near-equatorial belt influences the average equatorial rotation. We computed numerical derivatives of the total sunspot area in the near-equatorial belt and of the average zonal equatorial flow. We found that the correlation coefficient between the two data series is $\rho=0.36$, and it is higher for the ``fast group'' ($\rho=0.41$) than for the ``scattered group'' ($\rho=0.24$). The correlation is higher in periods of increased magnetic activity in the equatorial belt. For example, for the data of the year 2001 the correlation coefficient is $\rho_{2001}=0.58$ and for the year 2002 it is $\rho_{2002}=0.52$; see Fig.~\ref{fig:gradients2002}. In both particular cases, the correlation is higher for the ``fast regime'' ($\rho \sim 0.7$) than for the second group. \begin{figure} \resizebox{0.49\textwidth}{!}{\includegraphics{fig10.ps}} \caption{Case study: Evolution of the flows in and around the active region NOAA~9368 on March 6, March 7, and March 8, 2001. The leading polarity rotates significantly faster than the following one and than the non-magnetic surroundings on the first day. The whole group slows down in the following two days.
As the background image, the MDI magnetogram smoothed to the resolution of the measured flow field is used. The magnetic field intensities are displayed in the range from $-$1800 (black) to $+$1800 (white) Gauss on a linear scale.} \label{fig:group_evolution} \end{figure} It is important to note that the LCT in our method measures basically the motions of supergranules influenced by the magnetic field, not of the spots that are recorded in the solar activity index used. Therefore, the results can be biased by the fact that the presence of a magnetic field does not necessarily mean the presence of a sunspot in the photosphere. We think that, despite an apparent disagreement, our results can be valid and in agreement with the results published earlier. Basically, as described e.\,g. by \cite{1990ApJ...357..271H} and explained by the model of \cite{2004SoPh..220..333B}, the solar rotation at lower latitudes is slower in the presence of a magnetic field. This is summarized in Table~\ref{tab:sunspot_types} and displayed in Fig.~\ref{fig:activity_velocity}. In most cases, ``spotty'' equatorial belts seem to rotate more slowly than the average for the whole data series. However, it is clear that emerging active regions in most cases increase the rotation rate. This is in agreement with a generally accepted statement found first by \cite{1970SoPh...12...23H} and \cite{1978ApJ...219L..55G}. The relation, obtained using a linear fit to our data set, can be described by the equation \begin{equation} \Delta v \sim 0.2\, \Delta A_{\rm sunspots}\ {\rm m\,s^{-1}}, \end{equation} where $\Delta v$ is the change of the equatorial rotation speed with respect to the Carrington rotation and $\Delta A_{\rm sunspots}$ is the relative change of the sunspot area in the equatorial belt (in 10$^{-6}$ of the solar hemisphere). We estimate that strong local magnetic areas rotate a few tens of m\,s$^{-1}${} faster than their non-magnetic surroundings. The difference in the behaviour of the flow fields in the regions occupied by the magnetic field and in their vicinity was also studied e.\,g. by \cite{1992ApJ...393..782T}, using the high-resolution data obtained at the Swedish Vacuum Solar Telescope on La Palma, Canary Islands. The authors found that the magnitude of the horizontal velocities measured by LCT on the granular scale is larger in the regions of the quiet Sun than in an active-region plage. The high-resolution velocity fields are of a different nature than the large-scale ones studied by our method. Flow fields on granular scales are mostly chaotic due to the turbulent behaviour. In the regions occupied by the magnetic field, the motions become more organized, the chaotic component is suppressed, and therefore the amplitude of the horizontal velocity is generally lower. \cite{1990ApJ...351..309S} tracked the features in the low-resolution dopplergrams measured at the Mt.~Wilson 46~m tower telescope over a period of 20~years. They interpreted the detected flows as the velocity of the supergranular network, although the spatial resolution was lower than the size of individual supergranules. They found two regimes in the rotation of the photosphere -- the quiet Sun and active regions. The results showed that the regions occupied by the magnetic field display a slower rotation than their non-magnetic vicinity. In general, the results of that study are in agreement with the results of the current one.
\citeauthor{1990ApJ...351..309S} found a mean rotational rate of magnetic regions of (1864$\pm$1)~m\,s$^{-1}$, which, within the statistical errors, agrees with our findings. However, the synodic equatorial rotation of all Doppler structures measured by \citeauthor{1990ApJ...351..309S} is (1924$\pm$6)~m\,s$^{-1}$, which is more than 1~\% faster than the average rate measured in the present study. We can probably explain this disagreement by the different resolution of the data used. In the study of \cite{1990ApJ...351..309S}, the full solar disc was sampled in a 34$\times$34 pixel array, while in the current one the size of the solar disc was 1000$\times$1000 pixels. We assume that this faster rotational rate is due to the effect of undersampling of the Doppler structures in quiet regions. The results of our study should not be influenced by this effect, since the supergranular structures are well resolved in the data. \section{Conclusions} We have verified that the method developed and tested using the synthetic data (Paper~I) is suitable for application to the real data obtained by MDI on board SoHO, and perhaps also to the data that will be produced by its successor, the Helioseismic and Magnetic Imager (HMI) on board the Solar Dynamics Observatory (SDO). HMI will have a higher resolution and will cover a larger time span than two months each year. We verified that the long-term evolution of the horizontal velocity fields measured using our method is in agreement with their generally accepted properties. During the period analysis of the equatorial area we found two suspicious periods in the real data which are not present in the control data set containing the inclination of the solar axis towards the observer -- a quantity that can bias the results systematically and periodically by a few m\,s$^{-1}$. The periods of 1.8~years and 4.7~years need to be confirmed using a more homogeneous data set. We also found that the presence of a local magnetic field generally speeds up the region occupied by the magnetic field. However, we cannot conclude that this behaviour depends on the type of the sunspots. We can generally say that the more evolved types of active regions rotate more slowly than the young ones; however, the variance of the typical rotation rate is much larger than the differences between the rates for the individual types. We have found that the distribution of the active region rotation is bimodal. The faster-rotating cases correspond to new and growing active regions. Their almost constant rotation speed suggests that they emerge from the base of the surface radial shear at $0.95\ R_\odot$. The decaying and recurrent regions rotate more slowly, with a wider scatter in their velocities. This behaviour suggests that during their evolution sunspots lose the connection to their magnetic roots. Both regimes alternate with a period of approximately one Carrington rotation in the years 2001 and 2002, which suggests that new active regions emerge in groups and may have a linked evolution. \begin{acknowledgements} The authors of this paper were supported by the Czech Science Foundation under grants 205/03/H144 (M.~\v{S}.) and 205/04/2129 (M.~K.), by the Grant Agency of the Academy of Sciences of the Czech Republic under grant IAA 3003404 (M.~\v{S}.) and by ESA-PECS under grant No. 8030 (M.~\v{S}.).
The Astronomical Institute of the Academy of Sciences works within the Research project AV0Z10030501 of the Academy of Sciences, and the Astronomical Institute of Charles University within the Research Program MSM0021620860 of the Ministry of Education. The MDI data were kindly provided by the SoHO/MDI consortium. SoHO is a project of international cooperation between ESA and NASA. The authors would like to acknowledge the staff of the W.~W.~Hansen Experimental Physics Laboratory, whose computer resources were used during the data processing. We thank the referee, Roger~K.~Ulrich, whose useful comments significantly improved the quality of the paper. \end{acknowledgements}
\section{Introduction} The structural coloration of living organisms is currently receiving much attention \cite{zi-nas-03,Vigneron-pre-2006-magpie,Parker-nat-2001,welch-pre-2006,Parker-nat-2003,Ghiradella-sci-1972,Tada-ao-1998,Yoshioka-prsl-2004,Lawrence-ao-2002,Vukusic-ao-2002,Argyros-mi-2002,Vertezy-ma-2004}. The keys to the visual effects occurring in insects -- mainly butterflies and beetles -- are being progressively revealed, and the relationship between the cuticle's nanomorphology and its optical properties is being described ever more accurately, often with the support of physical modelling and numerical simulations. In a few cases, artificial structures which mimic the natural functions \cite{Vigneron-pre-vit-06} have consolidated our understanding of the colouration mechanisms. However, some families of insects, though visually appealing, have been studied relatively little. In this work, we report on the analysis of the iridescence of the wings of a giant wasp, \textit{Megascolia procer javanensis} (Betrem \& Bradley 1964). Wasps are winged insects which belong to the order Hymenoptera, which also includes ants and bees. Although some wasps have developed social behaviours, the vast majority of the 200,000 species are solitary. Wasps can be found in most regions of both hemispheres. Some are solid black or dark blue, but most of them display conspicuous red, orange, or yellow markings. The wings are opaque or transparent. \textit{Megascolia procer javanensis} is a large and robust insect, about 5~cm in length, which belongs to the small family of Scoliidae \cite{Osten-lbb-2000} (see Fig. \ref{fig1}). Organisms in this family have long been observed and studied in relation to their parasitic behaviour, in particular by the French naturalist Jean-Henri Fabre \cite{Fabre-se-1891}. Members of the Scoliidae are indeed external parasites of Scarabaeid larvae, which means that they are able to sting and thereby paralyse a grub \cite{Vereecken-nfg-2003}, lay an egg on it, and leave it in the soil, so that the developing larva will feed on the grub. The specimen under study here \cite{Betrem-zm-1964-1,Betrem-zm-1964-2} originates from the island of Java. The body of this insect is slightly hairy, and the wings show a large number of parallel, longitudinal wrinkles (see Fig. \ref{fig2}). Moreover, in this particular species, the wings appear black and mostly opaque, with iridescent green to bluish-green reflections visible at increasing viewing angles. \begin{figure}[t] \centerline{\ \includegraphics[width=8.0 cm]{fig1.eps}} \caption{(Colour online) A collected specimen of a male \textit{Megascolia procer javanensis} (Hymenoptera). Note the dark wings of this Scoliid, showing bluish reflections.} \label{fig1} \end{figure} The objective of the present paper is to clarify the relationship between the physical structure of the wings, as revealed by scanning electron microscopy (SEM), and their optical properties. The next sections will show that the wings can be modelled by a thin optical layer, probably made of chitin, covering a simple chitin/melanin mixture substrate. In order to confirm this interpretation, the experimental spectra of the scattered light will be compared with the results of numerical simulations based on the thin-layer model. \begin{figure}[b] \centerline{\ \includegraphics[width=8.0 cm]{fig2.eps}} \caption{(Colour online) The wings of \textit{Megascolia procer} are highly sophisticated organs, balancing low inertia, strength and optimized aerodynamics.
Note the rippled surface, which produces a wavy cross-section of the bearing membrane.} \label{fig2} \end{figure} \section{Nanomorphology} Wasp wings are made from a material containing chitin, proteins and melanin, just like the insect's cuticle. A scanning electron microscope image of a wing, fractured in the direction normal to its surface, reveals that the wings are covered by a thin layer of an unknown medium, with a measured average thickness of $300$~nm (see Fig. \ref{fig3}). A similar uniform layer coverage has also been observed in other insects, such as dragonflies \cite{Hooper-oe-2006}. The bulk of the wing, below the thin layer, is structured as a multilayer, likely to improve the mechanical strength. The thickness of the layers varies along the length of the wing (from about $400$~nm to $1$~$\mu$m), plausibly to provide a variable flexibility. Except for its melanin content, which leads to opacity, this multilayer is probably not directly involved in any blue-green colouring process, because it has a layer thickness above $400$~nm with an average refractive index above $1.5$, making it a Bragg mirror that essentially reflects at wavelengths longer than about $\lambda~=~1200$~nm: well in the infrared. A harmonic of such a resonance could, in principle, be found in the visible, near $600$~nm (orange-red), but this is actually not observed. These findings imply that the multilayer is too weakly contrasted and/or too absorbent to produce multiple interface scattering. Another reason to rule out the bulk wing structure as a possible origin of the iridescence is that the layer thickness varies along the wing. If such a structure were selectively reflecting, its central colour would vary drastically along the length of the wing, and this, again, is not observed. The blue-green hue of the reflection is very uniform on the forewing and hindwing surfaces. As Fig. \ref{fig3} shows, the surface of the wing is slightly corrugated, with randomly distributed rounded protrusions. A typical distance between the protrusions' centres is 1.2~$\mu$m, which is also their diameter. The protrusions then form a disordered field of touching islands, with an overall thickness of about $80$~nm. This structure, again, should not seriously impact the colour production, but could be expected to broaden the reflection in both the spectral and the emergence-angle domains. \begin{figure}[t] \centerline{\ \includegraphics[width=8.0 cm]{fig3.eps}} \caption{\textit{Megascolia procer javanensis} wing section (scanning electron microscopy picture) and the model used for the simulations.} \label{fig3} \end{figure} \section{Optical properties of the wing membrane} The reflection factor of the wing membrane was measured for several incidence angles, in the specular geometry. For this purpose, a piece of wing was cut from a dry specimen and glued on a black substrate. The reflection spectra were obtained using an Avaspec 2048/2 fibre-optic spectrophotometer, and the reflected light was compared with a diffuse PTFE-based white reference tile. This normalization produces the ``reflection factor'' shown in Fig. \ref{fig4}. This quantity is closely related to the reflectance, which expresses the reflected power in units of the incident power. In the spectral range of interest, due to the flat response of the white standard, these quantities differ only by a normalization factor. The results of the measurements are given in Fig.
\ref{fig4}, where the red curves (experimental results) describe the spectral response of a wing under varying incidences. All these measurements were performed on a forewing, in a specular geometry (with the emergence angle equal to the incidence angle, both measured from the normal to the wing surface). The incidence plane was directed along the length of the wing. At normal incidence, the backscattering measurement reveals oscillations with reflection maxima at wavelengths near 325~nm, 505~nm and 1015~nm. The broad lineshape of these reflection bands is reminiscent of single-slab resonances, except for the increase of the reflection maxima. The overall effect is a lack of reflected intensity between 600 and 800~nm, which basically covers the orange-red colorimetric region. This is consistent with the blue-green colouration of the wing. The smooth oscillations of the reflectance spectrum, as a function of the wavelength, indicate the interference of a small number of waves, and this is consistent with the interference occurring in the thin layer and with a low refractive-index contrast at the substrate interface. When the angle of incidence is increased, the spectrum is slightly blue-shifted, as one would expect from this single-slab interference mechanism. It is important to know the refractive indices in order to make more precise predictions and to build an accurate model of the wing's optical behaviour. This is the subject of the next section. \section{Detailed modelling} \label{model} Electron microscopy reveals a wing made of a stack of ``mechanical'' (thin and rigid) slabs covered by a biopolymer layer. The thickness of the mechanical layers varies along the wing, while the thickness of the upper surface layer is constant. On the other hand, the hue selection is constant over the whole surface of the wings, which suggests that the multilayer stack should be considered a homogeneous and very absorbent substrate, and the upper surface layer (of thickness about $300$~nm) the optical filter. The upper layer is transparent, and its dielectric function is likely to be close to that of chitin (indeed, in ethanol, with a refractive index of $1.4$, close enough to that of chitin, the iridescence of the wasp's wing is reversibly suppressed, as Fig. \ref{fig5} shows). \begin{figure}[t] \centerline{\ \includegraphics[width=5.5 cm]{fig4.eps}} \caption{(Colour online) Wing reflection as a function of the wavelength for various incidence angles. Experimental data (solid line) and simulation results (dashed line).} \label{fig4} \end{figure} In the context of a single optical slab model, the reflection coefficient $\mathcal{R}$ is given by \begin{equation} \mathcal{R}=\left| \frac{r_1+r_2e^{2i\beta }}{1+r_1r_2e^{2i\beta }}\right| ^2 \label{1} \end{equation} where $r_1$ ($r_2$) is the Fresnel reflection coefficient of the air/slab interface (of the slab/substrate interface), and \begin{equation} \beta =\frac{2\pi }{\lambda} h\sqrt{n_L^2-\sin ^2i}\,, \label{2} \end{equation} where $h$ is the slab thickness, $n_L$ the refractive index of the upper layer, $\lambda$ the wavelength of the incident light and $i$ the angle of incidence. In this context, the condition for constructive interference in the reflected beam under the incidence $i$ is that the incident wavelength verifies (see Fig. \ref{fig6}) \begin{equation} \lambda _{\max }=\frac{2h\sqrt{n_L^2-\sin ^2i}}{m} \label{3} \end{equation} where $m$ is a positive integer.
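Equations (1) and (2) are straightforward to evaluate numerically. The following minimal sketch (in Python, for s-polarised light, with constant indices taken for simplicity instead of the dispersive substrate index introduced below; all names and parameter choices are ours) reproduces the qualitative behaviour of the reflectance spectrum:
\begin{verbatim}
import numpy as np

def slab_reflectance(lam, h=286e-9, n_L=1.76, n_S=1.89, inc_deg=0.0):
    # Single-slab reflectance of Eq. (1); lam in metres.
    i = np.radians(inc_deg)
    kz0 = np.cos(i)                        # air side
    kzL = np.sqrt(n_L**2 - np.sin(i)**2)   # inside the layer
    kzS = np.sqrt(n_S**2 - np.sin(i)**2)   # inside the substrate
    r1 = (kz0 - kzL) / (kz0 + kzL)         # air/slab Fresnel (s-pol.)
    r2 = (kzL - kzS) / (kzL + kzS)         # slab/substrate Fresnel
    beta = 2 * np.pi / lam * h * kzL       # Eq. (2)
    r = (r1 + r2 * np.exp(2j * beta)) / (1 + r1 * r2 * np.exp(2j * beta))
    return np.abs(r)**2

lam = np.linspace(300e-9, 1100e-9, 400)
R0 = slab_reflectance(lam)                 # normal incidence
R45 = slab_reflectance(lam, inc_deg=45.0)  # blue-shifted spectrum
\end{verbatim}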
By contrast, the condition for destructive interference in the reflected beam is that the incident wavelength verifies \begin{equation} \lambda _{\min }=\frac{4h\sqrt{n_L^2-\sin ^2i}}{2m+1}\,. \label{4} \end{equation} The wavelengths of the main reflection maxima and minima for several incidences $i$ are known from the experiment. Using Eqs. (3) and (4) it is then possible to fit both the overlayer thickness $h$ and its refractive index $n_L$. The precise dispersion of the upper-layer material remains unknown, and it can be neglected in the present case. One gets $h=286$~nm (in agreement with the SEM assessment) and $n_L=1.76$ (in accordance with the above hypothesis). Though the basic molecule of chitin is well defined -- (C$_8$H$_{13}$O$_5$N)$_n$ -- the full composition of the hard surfaces of arthropods is highly variable, and the refractive index of the ``chitinous'' exoskeleton of insects and other classes of animals is not universal: values from 1.52 to as much as 2 have been reported in various studies. The melanin content, which increases the opacity, also causes an increase of the refractive index, a correlation which can be expected from Kramers-Kronig causal constraints. An average index of 1.76 gives a material which is less refractive than what we will find in the wing substrate, so that we will refer to the overlayer material as ``chitin'', emphasizing the relative lack of melanin in this layer. \begin{figure}[t] \centerline{\ \includegraphics[width=7.5 cm]{fig5.eps}} \caption{(Colour online) (a) In this upper panel, the wing on the left has been kept dry as a reference, and the wing on the right is covered with a macroscopic layer of liquid ethanol (refractive index 1.4). The wet wing loses its blue-green iridescence and shows a dark-brown appearance, due to the strong attenuation of the refractive-index contrast at the outer interface. (b) When the ethanol is removed by evaporation, the wing returns to its original iridescent appearance.} \label{fig5} \end{figure} The refractive index of the opaque melanin-loaded chitin which makes up the flexible wing substrate also calls for attention. We first note that the single-slab overlayer model implies that the maxima of the reflection amplitude (neglecting absorption) reach the values \begin{equation} \mathcal{R}_{\max }=\left( \frac{r_1+r_2}{1+r_1r_2}\right) ^2 \label{5} \end{equation} whereas the minima of the amplitude take the values \begin{equation} \mathcal{R}_{\min }=\left( \frac{r_1-r_2}{1-r_1r_2}\right) ^2 \label{6} \end{equation} Since $r_1$ must be constant (as the dispersion of $n_L$ is neglected in this model), only a strong dispersion of the substrate refractive index $n_S$ can make $\mathcal{R}_{\max }$ and $\mathcal{R}_{\min }$ (through $r_2$) vary with the wavelength $\lambda$. An experimental assessment of $\mathcal{R}_{\max }$ and $\mathcal{R}_{\min }$ allows one to fit the real part $n_S^{\prime }$ of the complex index in the following way: \begin{equation} n_S^{\prime }=\left\{ \begin{array}{c} 1.89\text{ if }\lambda \leq 400~\text{nm} \\ 1.45\times 10^6~\text{m}^{-1}\,\lambda +1.31\text{ if }\lambda \geq 400~\text{nm} \end{array} \right. \label{7} \end{equation} (with $\lambda$ expressed in metres). An imaginary part must also be considered, as melanin involves absorption. Following Albuquerque et al. \cite{Albuquerque-Ebj-2006}, for visible wavelengths, this imaginary part will be assigned the value $n_S^{\prime \prime }\sim 0.02$.
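With the dispersion of Eq. (7) and the absorption $n_S^{\prime\prime}$, the sketch given after Eq. (3) extends directly to a complex substrate index (again an illustration only, reusing \texttt{slab\_reflectance} from the previous sketch, and not the coupled-modes code used for Fig. \ref{fig4}):
\begin{verbatim}
import numpy as np

def substrate_index(lam):
    # Real part from Eq. (7) (lam in metres) plus the constant
    # imaginary part ~0.02 adopted for the melanin absorption.
    n_real = np.where(lam <= 400e-9, 1.89, 1.45e6 * lam + 1.31)
    return n_real + 0.02j

lam = np.linspace(300e-9, 1100e-9, 400)
R_disp = slab_reflectance(lam, n_S=substrate_index(lam))
\end{verbatim}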
The transmission coefficient of a 5-$\mu$m thick slab with this absorption response will not exceed 21\% at 800~nm and will be even smaller at shorter wavelengths. \begin{figure}[t] \centerline{\ \includegraphics[width=7.0 cm]{fig6.eps}} \caption{Schematic representation of the wing structure, showing how the reflection takes place through the multiple paths of light.} \label{fig6} \end{figure} Fig. \ref{fig4} also shows the results of calculations (black curves) based on the above refractive indices. This calculation uses a simple one-dimensional coupled-modes theory that combines a scattering-matrix formalism with a plane-wave representation of the fields. This method is well known \cite{Vigneron-spie-2006} and does not need a detailed description. In Fig. \ref{fig4}, the experimental and theoretical results are shown and compared for a few specific angles of incidence (0$^o$, 15$^o$, 30$^o$, 45$^o$ and 60$^o$). The measured and computed locations and line widths of the reflection bands correlate satisfactorily. The thin-layer model is thus clearly consistent with the observed reflection factor. The location of the maxima and minima can easily be understood from a thin-film interference model. Indeed, given the progression of the refractive indices ($1$ outside, $n_L$ in the film, $n_S$ in the substrate), constructive interference under the incidence $i$ will occur for incident wavelengths that verify Eq. (3), just as destructive interference will occur for incident wavelengths that verify Eq. (4) (see Fig. \ref{fig6}). For instance, at normal incidence, this simple formula predicts the following destructively reflected wavelengths: 671~nm, 403~nm, ... These clearly agree with the spectral observations (656~nm, 404~nm, ...) and match the spectral calculations. The blue-shift of the maxima and minima of the reflection coefficient with the increase in the angle of incidence is well predicted. The damping of the maxima and minima is also well described, justifying the introduced dispersion of the melanin/chitin mixture. In addition, the calculated curves do not present significant differences if $n_S^{\prime \prime }$ is set equal to zero (not shown). As a consequence, the melanin absorption does not play a significant role in the present context, except for providing a dark background which makes it easier to perceive the coloured iridescence. The colour perceived from the reflected intensity can be described by chromaticity coordinates, which can be calculated in the framework of standard human colorimetry. Fig. 7 shows the colour trajectories under the D65 illuminant, in the xy CIE 1931 chromaticity standard, as a function of the angle of incidence, for both the theoretical and the experimental reflectance curves. \begin{figure}[t] \centerline{\ \includegraphics[width=8.2 cm]{fig7.eps}} \caption{(Colour online) Colorimetric trajectories of the light reflected from the wing of the wasp, for varying incidence angles in the range from 0 to 60$^{o}$. The solid line is from the experimental data; the dashed line is from the model calculation.} \label{fig7} \end{figure} The fine details perceived in the lineshape of the $m=1$ reflection, near $\lambda$~= 1006~nm, are not properly accounted for by the monolayer model. The appearance of structures on the lineshape of this interference fringe could be due to secondary interferences through the full thickness of the wing, but the fundamental would then appear far in the infrared, and the harmonics in the visible range should be weak, due to the strong melanin absorption.
We suggest that these structures are associated with the surface roughness of the overlayer. As seen above, the wing surface presents a roughness on a typical length scale of 1200~nm and, via the decay to and from guided modes (Fano resonances) \cite{Sarrazin-prb-2003}, this can cause diffraction and add spectral oscillations on the short-wavelength side of the fringe. With a 1200~nm ``horizontal'' grating on such a structure, diffraction can change the reflected wavelength by about $\Delta \lambda \approx 100$~nm, consistent with the observed spectral location of the structure (see, for instance, in Fig. 4 the weak peak near 980~nm for $\theta=15^o$). At the moment, however, theory seems unable to accurately predict the observed intensities of these fringe perturbations, so that the question -- beyond the scope of the present paper -- should be given some further attention. \section{Conclusion} We have shown that the iridescence of the wings of \textit{Megascolia procer javanensis} can be reasonably well understood as resulting from the interference of light in a thin optical chitin layer covering an absorbing chitin/melanin structure. This wasp is equipped with opaque wings which contain a high concentration of melanin. The black background defined by this chitin/melanin structure allows for a particularly visible structural blue-green colouration, generated by an extremely simple device using a minimal number of interfering waves: a constant-thickness overlayer covering all four wings. This is among the most elementary interference filters and, in spite of its simplicity, it turns out to be very effective. It is interesting to note that, in a very different context, evolution has produced similar structures in birds: the domestic pigeon \cite{Yin-pre-2006}, which displays some feathers with green iridescence and others with violet iridescence, uses the same strategy. In the bird, the ``active'' overlayer is the cortex of the barbules. \begin{acknowledgments} This investigation was conducted with the support of the European NEST STREP BioPhot project, under contract no. 12915. The use of the Namur Interuniversity Scientific Computing Facility (Namur-ISCF) is acknowledged. This work has also been partly supported by the European Regional Development Fund (ERDF) and the Walloon Regional Government under the ``PREMIO'' INTERREG IIIa project. M.R. was supported by the Belgian Fund for Industrial and Agronomic Research (FRIA). V.W. acknowledges the support of the Belgian National Fund for Scientific Research (FNRS). \end{acknowledgments}
\section{Introduction} The SUSY flavour problem has traditionally been used to justify different departures from the ``natural'' gravity-mediated MSSM setting. However, in this talk we will take a different point of view and we will show that the so-called {\it supersymmetric flavour problem}\/ does not really exist or, more exactly, that it cannot be detached from the {\it Standard Model flavour problem}\/. In fact, a correct solution to the Standard Model flavour problem will probably also pass unscathed all the stringent constraints on flavour changing neutral currents after the inclusion of the MSSM soft sector. The supersymmetric flavour problem is usually stated as follows: the SUSY soft-breaking terms have a completely different origin from the Yukawa couplings in the superpotential, and we have no information on their structure. In principle, we could expect all the entries in the soft-breaking matrices to be $O(1)$ in any basis, and in particular in the basis where the Yukawa couplings are diagonal. In this situation, FCNC and $CP$-violation observables would receive too large contributions from loops involving SUSY particles, and this disagrees strongly with the stringent phenomenological bounds on these processes. As formulated above, we can only agree with this statement; however, it is trivial to reformulate it in terms of the Yukawa couplings of the superpotential: we have no theoretical guidance to build the Yukawa couplings. If we had to write an SM Lagrangian ignoring the measured quark and lepton masses and mixings, any flavour structure would be possible, and in fact we would naturally expect all the different entries in the Yukawa matrices to be $O(1)$. Clearly this would never agree with the observed fermion masses and mixing angles. Therefore we have to conclude that there is a much stronger flavour problem in the SM than in the MSSM. The real {\bf flavour problem} is simply our inability to understand the complicated structures in the quark and lepton Yukawa couplings and, likewise, the soft-breaking flavour structures in the MSSM. At this point we have to emphasize that the presence of new physics, as for instance supersymmetry, is not a problem for flavour but, on the contrary, a necessary tool to advance in our understanding of the flavour problem. In the framework of the Standard Model, all the information we can extract on flavour consists of the Yukawa eigenvalues (quark and lepton masses) and the left-handed misalignment between up and down quarks (the CKM matrix) or leptons (the MNS matrix), and this is not enough to determine the full structure of the Yukawa matrices. However, in supersymmetric extensions of the SM, the new interactions can provide additional information on the physics of flavour which will be fundamental to improve our knowledge of flavour. In the following we show that finding a solution to the ``SM'' flavour problem will also solve the so-called ``supersymmetric flavour problem'' to a sufficient degree. \section{Flavour symmetries} The flavour structure associated with the SM Yukawas is very special: a strong hierarchy in the couplings and a peculiar structure of the mixing matrices. In a truly fundamental theory we would expect all dimensionless couplings to be $O(1)$, and thus these small couplings must be explained. The basic idea of flavour symmetries is to use a spontaneously broken family symmetry, in analogy with the gauge sector, to generate these couplings.
A scalar vev breaking the flavour symmetry, normalised by a large mediator mass, provides a small expansion parameter that enters with different powers in the fermion Yukawa couplings \cite{Froggatt:1978nt}. In the limit of exact symmetry the Yukawa couplings are forbidden, and only when the symmetry is broken do these couplings appear as functions of the small vevs. Similarly, in a supersymmetric theory, the flavour symmetry applies both to the fermion and to the sfermion sectors. Therefore, the structures in the soft-breaking matrices and in the Yukawa couplings are related. The starting point in our analysis is then the texture of the Yukawa couplings. However, the complete texture of the Yukawa matrices cannot be fixed through Standard Model interactions. Still, it is reasonable to assume that the smallness of the CKM mixing angles is due to the smallness of the off-diagonal elements in the Yukawa matrices with respect to the corresponding diagonal elements. Then we can fix the elements above the diagonal, corresponding to the left-handed mixings, but not the elements below the diagonal \cite{Roberts:2001zy}. Therefore, we can consider two complementary situations, which we call symmetric and asymmetric Yukawa textures. In the symmetric textures we make the additional simplifying assumption of choosing the matrices to be symmetric. Note that this situation is not unusual in many flavour models \cite{King:2001uz,King:2003rf,Ross:2004qn} as well as in GUT theories. Asymmetric textures are also common in simple Abelian flavour symmetries with a single flavon field \cite{Leurer:1992wg,Dudas:1995yu}. The simplest example is provided by a $U(1)$ flavour symmetry, as originally considered by Froggatt and Nielsen \cite{Froggatt:1978nt}, which generates an asymmetric texture. As an example, we can assign the three generations of SM fields the charges $Q_i=(3,2,0)$, $d_i^c=(1,0,0)$, $u_i^c=(3,2,0)$, with a single flavon field of charge $-1$. The vev of the flavon field, normalised to the mass of the heavy mediator fields $M_{\rm fl}$, is $\epsilon = v/M_{\rm fl} \ll 1$. The superpotential of this model is: \begin{eqnarray} W_{\rm Y} = Q_i d^c_j H_1 \left(\frac{\theta}{M_{\rm fl}}\right)^{q_i+d_j} + Q_i u^c_j H_2 \left(\frac{\theta}{M_{\rm fl}}\right)^{q_i+u_j} \end{eqnarray} where unknown $O(1)$ coefficients have been suppressed for clarity. Then we have \begin{equation} Y_u= \left( \matrix{ \epsilon^6&\epsilon^5&\epsilon^3 \cr \epsilon^5&\epsilon^4&\epsilon^2 \cr \epsilon^3&\epsilon^2&1 } \right) ~~~~,~~~~ Y_d = \left( \matrix{ \epsilon^4&\epsilon^3&\epsilon^3 \cr \epsilon^3&\epsilon^2&\epsilon^2 \cr \epsilon&1&1 } \right) . \end{equation} The soft masses are couplings $\phi^{\dagger }\phi$, clearly invariant under any symmetry, and therefore always allowed. Hence, diagonal soft masses are allowed and unsuppressed in the limit of unbroken symmetry. Assuming that the diagonal masses of different generations are equal in the symmetric limit\footnote{Unlike in the case of non-Abelian symmetries, this is not guaranteed by the symmetry, but it is still possible in some cases, like dilaton domination in gravity-mediation models.}, the universality is then broken by the flavon vevs. Any combination of two MSSM scalar fields $\phi_i$ and an arbitrary number of flavon vevs invariant under the symmetry will contribute to the soft masses: \begin{eqnarray} {\cal L}_{m^2}& =& m_0^2 \left(\phi_1^* \phi_1 + \phi_2^* \phi_2 + \phi_3^* \phi_3 \right.
\nonumber \\ &+& \left.\left(\frac{\langle\theta\rangle}{M_{\rm fl}}\right)^{q_j-q_i}~ \phi_i^* \phi_j + {\rm h.c.} \right). \end{eqnarray} Thus, the structure of the right-handed down-squark mass matrix in this model is: \begin{eqnarray} M^2_{\tilde{D}_R} \simeq \left( \begin{array}{ccc} 1 & {\epsilon} & {\epsilon} \\ {\epsilon} & 1 & 1\\ {\epsilon} & 1 & 1 \end{array} \right) m_0^2 \, . \end{eqnarray} In this case, we would expect large mixings between the second and third generations of the right-handed down sfermions. Notice, however, that this simple model is already ruled out by the stringent constraints in the 1--2 sector unless the sfermions are very heavy. Symmetric textures are obtained, for instance, from a spontaneously broken $SU(3)$ family symmetry. The basic features of this symmetry are the following. All left-handed fermions ($\psi_i$ and $\psi^c_i$) are triplets under $SU(3)_{fl}$. To allow for the spontaneous symmetry breaking of $SU(3)$ it is necessary to add several new scalar fields, which are either triplets ($\overline{\theta}_{3}$, $\overline{\theta}_{23}$, $\overline{\theta}_{2}$) or antitriplets ($\theta_{3}$, $\theta_{23}$). We assume that $SU(3)_{fl}$ is broken in two steps. The first step occurs when $\theta_3$ and $\bar \theta_{3}$ get a large vev, breaking $SU(3)$ to $SU(2)$. Subsequently, a smaller vev of $\theta_{23}$ and $\bar \theta_{23}$ breaks the remaining symmetry. After this breaking we obtain the effective Yukawa couplings through the Froggatt-Nielsen mechanism \cite{Froggatt:1978nt} by integrating out the heavy fields. In fact, to reproduce the measured masses and mixings, the large third-generation Yukawa couplings require a $\theta_3, \bar \theta_{3}$ vev of the order of the mediator scale, $M_f$, while $\theta_{23}/M_f$ and $\bar \theta_{23}/M_f$ have vevs of order $\varepsilon=0.05$ in the up sector and $\bar \varepsilon=0.15$ in the down sector, with different mediator scales in the two sectors. Moreover, in the minimization of the scalar potential it is possible to ensure that the fields $\theta_{23}$ and $\bar \theta_{23}$ get equal vevs in the second and third components. In this model, CP is spontaneously broken by the flavon vevs, which are complex, generating the observed CP violation in the CKM matrix. The basic structure of the Yukawa superpotential is then given by: \begin{eqnarray} W_{\rm Y} &=& H\psi _{i}\psi _{j}^{c} \left[ \theta _{3}^{i}\theta _{3}^{j}+\theta _{23}^{i}\theta _{23}^{j}\right.\nonumber\\&+&\left. \epsilon ^{ikl}\overline{\theta }_{23,k}\overline{\theta }_{3,l}\theta _{23}^{j}\left( \theta _{23}\overline{\theta _{3}}\right) +\dots \right]. \end{eqnarray} This structure is quite general for the different $SU(3)$ models we can build; for additional details we refer to \cite{Ross:2004qn,King:2001uz,King:2003rf}. The Yukawa textures are then symmetric and, suppressing $O(1)$ coefficients, read: \begin{eqnarray} \label{fit} Y_d\propto\left( \begin{array}{ccc} 0 & \bar \varepsilon^{3} & {\ \bar \varepsilon^{3}} \\ \bar \varepsilon^{3} & {\bar \varepsilon^{2}} & {\ \bar \varepsilon^{2}} \\ {\ \bar \varepsilon^{3}} & {\ \bar \varepsilon^{2}} & 1 \end{array} \right),~~~~~~ Y_u\propto \left( \begin{array}{ccc} 0 & {\ \varepsilon^{3}} & {\ \varepsilon^{3}} \\ {\ \varepsilon^{3}} & {\ \varepsilon^{2}} & \varepsilon^{2} \\ {\ \varepsilon^{3}} & \varepsilon^{2} & 1 \end{array} \right ) \, . \end{eqnarray} In the same way, after $SU(3)$ breaking the scalar soft masses deviate from exact universality.
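Before turning to the soft masses, a quick numerical illustration of the hierarchies encoded in Eq.~(\ref{fit}) may be useful (a sketch only: all the suppressed $O(1)$ coefficients are set to unity). The singular values of the textures, evaluated at $\varepsilon=0.05$ and $\bar\varepsilon=0.15$, directly display the fermion mass hierarchy:
\begin{verbatim}
import numpy as np

eps, eps_bar = 0.05, 0.15   # expansion parameters (up / down sectors)

Y_u = np.array([[0.0,    eps**3, eps**3],
                [eps**3, eps**2, eps**2],
                [eps**3, eps**2, 1.0   ]])
Y_d = np.array([[0.0,        eps_bar**3, eps_bar**3],
                [eps_bar**3, eps_bar**2, eps_bar**2],
                [eps_bar**3, eps_bar**2, 1.0       ]])

# Singular values approximate the Yukawa eigenvalues
print(np.linalg.svd(Y_u, compute_uv=False))  # ~ (1, eps^2, eps^4)
print(np.linalg.svd(Y_d, compute_uv=False))  # ~ (1, eps_bar^2, eps_bar^4)
\end{verbatim}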
First, we must notice that a mass term $\psi _{i}^{\dagger }\psi _{i}$ is invariant under any symmetry and hence gives rise to a common contribution for the whole family triplet. However, $SU(3)$ breaking terms give rise to important corrections \cite{Ross:2004qn,Ross:2002mr}. Any invariant combination of flavon fields can also contribute to the sfermion masses. Including these corrections, the leading contributions to the sfermion mass matrices are: \begin{eqnarray} (M^2_{\tilde f})^{ij}&=& m_0^2\left(\delta ^{ij} +\frac{1}{M_f^{2}}\left[\theta _{3}^{i\dagger }\theta _{3}^{j} +\theta _{23}^{i\dagger }\theta_{23}^{j}\right]\right. \nonumber \\ &+& \left.\frac{1}{M_f^4}(\epsilon ^{ikl}\overline{\theta }_{3,k} \overline{\theta }_{23,l})^{\dagger }(\epsilon ^{jmn} \overline{\theta }_{3,m}\overline{\theta }_{23,n})\right) , \end{eqnarray} where $f$ labels the $SU(2)$ doublets or the up and down singlets, with $M_f=M_L, M_u, M_d$ respectively. For instance, the down squark and charged slepton mass matrices, after running to the electroweak scale and in the basis of diagonal charged lepton Yukawas (the so-called SCKM basis), are \begin{eqnarray} M^2_{\tilde{D}_R} &\simeq& 6\, M_{1/2}^2\,{\bf {\hbox{1\kern-.8mm l}}} + \left( \begin{array}{ccc} 1 + \bar \varepsilon^3 & \bar\varepsilon^3 & \bar\varepsilon^3 \\ \bar\varepsilon^3 & 1 + \bar\varepsilon^2 & \bar \varepsilon^2 \\ \bar\varepsilon^3 & \bar\varepsilon^2 & 1 + \bar \varepsilon \end{array} \right) m_0^2 \nonumber \\ M^2_{\tilde{D}_L} &\simeq& 6\, M_{1/2}^2\,{\bf {\hbox{1\kern-.8mm l}}} + \left( \begin{array}{ccc} 1 + \varepsilon^3 & \varepsilon^2 \bar \varepsilon & \varepsilon^2 \bar \varepsilon + c_{\rm run}\,\bar \varepsilon^3 \\ \varepsilon^2 \bar \varepsilon & 1 + \varepsilon^2 & \varepsilon^2 + c_{\rm run}\,\bar \varepsilon^2 \\ \varepsilon^2 \bar \varepsilon + c_{\rm run}\,\bar \varepsilon^3 & \varepsilon^2 + c_{\rm run}\,\bar \varepsilon^2 & 1 + \bar \varepsilon \end{array} \right) m_0^2 \nonumber \\ M^2_{\tilde{E}_R} &\simeq& 0.15\, M_{1/2}^2\,{\bf {\hbox{1\kern-.8mm l}}} + \left( \begin{array}{ccc} 1 + \bar \varepsilon^3 & \frac{\bar\varepsilon^3}{3}e^{i \alpha} & \bar\varepsilon^3 e^{i \beta}\\ \frac{\bar\varepsilon^3}{3} e^{-i \alpha} & 1 + \bar\varepsilon^2 & \bar \varepsilon^2 e^{i \omega} \\ \bar\varepsilon^3 e^{-i \beta} & \bar\varepsilon^2 e^{-i \omega} & 1 + \bar \varepsilon \end{array} \right) m_0^2 \nonumber \\ M^2_{\tilde{E}_L} &\simeq& 0.5\, M_{1/2}^2\,{\bf {\hbox{1\kern-.8mm l}}} + \left( \begin{array}{ccc} 1 + \varepsilon^3 & \frac{\varepsilon^2 \bar \varepsilon}{3} & \varepsilon^2 \bar \varepsilon + c_{\rm run}\,\bar \varepsilon^3 \\ \frac{\varepsilon^2 \bar \varepsilon}{3} & 1 + \varepsilon^2 & \varepsilon^2 + 3 c_{\rm run}\,\bar \varepsilon^2 \\ \varepsilon^2 \bar \varepsilon + c_{\rm run}\,\bar \varepsilon^3 & \varepsilon^2 + 3 c_{\rm run}\,\bar \varepsilon^2 & 1 + \bar \varepsilon \end{array} \right) m_0^2 \, , \label{soft2} \end{eqnarray} where we include a contribution from the RGE evolution of the sfermion masses with a coefficient $c_{\rm run}$, typically of order $0.1$, which in these cases is more important than the ``tree level'' contributions.
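For orientation, the numerical size of these entries can be read off directly; a minimal sketch (assuming $\varepsilon =0.05$, $\bar{\varepsilon}=0.15$, $c_{\rm run}=0.1$ and all omitted $O(1)$ coefficients set to one): \begin{verbatim}
# Off-diagonal soft-mass entries in units of m_0^2 (orders of magnitude).
eps, eb, c_run = 0.05, 0.15, 0.1
print(eb**3)                    # (M2_DR)_12 ~ 3.4e-3
print(eb**2)                    # (M2_DR)_23 ~ 2.3e-2
print(eps**2*eb + c_run*eb**3)  # (M2_DL)_13 ~ 7.1e-4
\end{verbatim} In the last entry the RGE piece $c_{\rm run}\bar{\varepsilon}^{3}$ is of the same order as the tree-level $\varepsilon ^{2}\bar{\varepsilon}$ term, making quantitative the remark above on the importance of the running contributions.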
Therefore we can see that the ``natural'' structures in the soft mass matrices for the symmetric Yukawas are different from those in the asymmetric case, and this provides a chance to distinguish the two Yukawa structures through an analysis of the flavour structures in the soft SUSY-breaking sector. As said above, in this $SU(3)$ flavour model CP is only broken spontaneously, by the flavon vevs, below the Planck scale. In this way all the terms in the K\"ahler potential, which give rise to the soft masses and to the $\mu$ term through the Giudice-Masiero mechanism, are real before the breaking of the flavour symmetry. After the breaking of the flavour symmetry, $O(1)$ phases appear in the Yukawa matrices and in the off-diagonal elements of the soft mass matrices. In fact, even after the breaking of the flavour and CP symmetries, $\mu$ receives complex corrections only at the two-loop level and therefore remains real to a very good approximation \cite{Ross:2004qn}. Similarly, the diagonal elements of the trilinear terms are also real at leading order in the SCKM basis. In this way electric dipole moments (EDMs) are under control and the SUSY CP problem is solved. Nevertheless, off-diagonal phases in the soft mass matrices do contribute to the EDMs. For instance, there is a contribution to the electron EDM of the form $d_e \propto m_\tau\, \mu \tan \beta \; {\rm Im}[ \delta^{e_R}_{13}\, \delta^{e_L}_{31}]$. In figure \ref{fig:EDM} we show the expected contributions to the electron EDM, assuming that the phases in the off-diagonal elements are $O(1)$ and that the lepton Yukawas have CKM-like mixings \cite{Masiero:2002jn}. We can see that in this model reaching a sensitivity of $10^{-29}$ e~cm in the electron EDM will allow us to explore a significant region of the parameter space, even for intermediate values of $\tan \beta$ \cite{WIP}. \begin{figure} \includegraphics[scale=.45]{scan_flav10.ps} \includegraphics[scale=.45]{scan_flav30.ps} \caption{Values of $d_e$ in the $M_0$--$M_{1/2}$ plane for $\tan \beta =10,30$ and $A_0=0$. The hatched region corresponds to the reach of the future MEG experiment on $\mu\to e \gamma$ in the same model.} \label{fig:EDM} \end{figure} \section{Conclusions} The flavour problem in supersymmetric extensions of the SM is deeply related to the origin of flavour in the Yukawa matrices. It is natural to think that the same mechanism generating the flavour structures in the Yukawa couplings is responsible for the structure of the SUSY soft-breaking terms. In this way, finding a solution to the ``flavour problem'' of the SM can also provide a solution to the SUSY flavour problem. In fact, the analysis of the new supersymmetric interactions can provide additional information on the physics of flavour, which will be fundamental to improving our understanding of it. We have seen that measuring the flavour structures in the soft masses can help us to ``measure'' the right-handed mixings in the Yukawa matrices. As an example, in an $SU(3)$ flavour model where the SUSY CP problem is also solved, we have shown the expected values of the electron EDM associated with flavour non-diagonal SUSY phases.
\section{Introduction} One of the most interesting examples where quantum field theory might provide some guiding rules for the search for new physics could be that of the origin of internal symmetry patterns in particle physics owing to space-time properties at very small distances. In this connection, relativistic or Lorentz invariance seems to play a special role with respect to the observed internal local symmetries. The old idea \cite{bjorken} that spontaneous Lorentz invariance violation (SLIV) may lead to an alternative theory of QED, with the photon as a massless vector Nambu-Goldstone boson, still remains extremely attractive in numerous theoretical contexts \cite{book} (for some later developments, see the papers \cite{cfn}). At the same time, Lorentz violation on its own has attracted considerable attention in recent years as an interesting phenomenological possibility appearing in various quantum field and string theories [4-9]. Actually, the SLIV idea is in accordance with superstring theory, particularly with the observation that relativistic invariance could be spontaneously violated in superstrings \cite{alan1}. The first models realizing the SLIV conjecture were based on the four-fermion (current-current) interaction, where the gauge field appears as a fermion-antifermion pair composite state \cite{bjorken}, in complete analogy with the massless composite scalar field in the original Nambu-Jona-Lasinio model \cite{NJL}. Unfortunately, owing to the lack of a starting gauge invariance in such models and the composite nature of the Goldstone modes which appear, it is hard to explicitly demonstrate that these modes really combine into a massless vector boson as a gauge field candidate. Actually, one must make a precise tuning of parameters, including a cancellation between terms of different orders in the $1/N$ expansion (where $N$ is the number of fermion species involved), in order to achieve the massless photon case (see, for example, the last paper in \cite{bjorken}). Rather, there are in general three separate massless Goldstone modes, two of which may mimic the transverse photon polarizations, while the third one must be appropriately suppressed. In this connection, a more instructive laboratory for the SLIV consideration proves to be a simple class of QED type models [11-14] having from the outset a gauge invariant form. In these models the spontaneous Lorentz violation is realized through the nonlinear dynamical constraint $A_{\mu }A^{\mu }=n_{\nu }n^{\nu }M^{2}$ (where $n_{\nu }$ is a properly oriented unit Lorentz vector, $n_{\nu }n^{\nu }=\pm 1$, while $M$ is the proposed SLIV scale) imposed on the starting vector field $A_{\mu }$, in much the same way as it occurs for the corresponding scalar field in the nonlinear $\sigma$-model for pions \cite{GL}. Note that a correspondence with the nonlinear $\sigma$-model for pions may be somewhat suggestive, in view of the fact that pions are the only presently known Goldstones and their theory, chiral dynamics \cite{GL}, is given by the nonlinearly realized chiral $SU(2)\times SU(2)$ symmetry rather than by an ordinary linear $\sigma$-model. The above constraint means in essence that the vector field $A_{\mu }$ develops a constant background value $<A_{\mu }(x)>\,=n_{\mu }M$ and the Lorentz symmetry $SO(1,3)$ formally breaks down to $SO(3)$ or $SO(1,2)$, depending on the time-like ($n_{\nu }n^{\nu }>0$) or space-like ($n_{\nu }n^{\nu }<0$) nature of SLIV.
This allows one to explicitly demonstrate that gauge theories, both Abelian and non-Abelian, can be interpreted as spontaneously broken theories [11-14], although the physical Lorentz invariance still remains intact. However, the question naturally arises of whether a gauge symmetry is necessary to start with. If so, this would in some sense depreciate the latter approach as compared with those of the original composite models \cite{bjorken}, where a gauge symmetry was hoped to be derived (while this has not yet been achieved). Remarkably, as we will see, it happens that one does not need to postulate the starting gauge invariance separately, when considering the nonlinear $\sigma$-model type spontaneous Lorentz violation in the framework of an arbitrary relativistically invariant Lagrangian for elementary vector and matter fields, which are proposed only to possess some global internal symmetry. In the present article we start by a priori only assuming a global symmetry but no gauge invariance, taking all the terms in the Lagrangian allowed by Lorentz invariance. With such a Lagrangian, the vector field $A_{\mu }$ typically develops a non-zero vacuum expectation value, \begin{equation} <A_{\mu }(x)>=n_{\mu }M. \label{Avev} \end{equation} In the limit analogous to the approximation of the linear $\sigma$-model by the nonlinear $\sigma$-model, we get the nonlinear constraint\footnote{Actually, some way to appreciate a possible origin for the supplementary condition (\ref{con}) might be the inclusion of a ``standard'' quartic vector field potential $U(A_{\mu })=-\frac{m_{A}^{2}}{2}A^{2}+\frac{\lambda _{A}}{4}(A^{2})^{2}$ in the vector field Lagrangian, as can be motivated to some extent \cite{alan1} from superstring theory. This potential inevitably causes the spontaneous violation of Lorentz symmetry in a conventional way, much as an internal symmetry violation is caused in a linear $\sigma$ model for pions \cite{GL}. As a result, one has a massive ``Higgs'' mode (with mass $\sqrt{2}m_{A}$) together with massless Goldstone modes associated with the photon. Furthermore, just as in the pion model, one can go from the linear model for the SLIV to the non-linear one by taking the limit $\lambda_{A}\rightarrow \infty$, $m_{A}^{2}\rightarrow \infty$ (while keeping the ratio $m_{A}^{2}/\lambda _{A}$ finite). This immediately leads to the constraint (\ref{con}) for the vector potential $A_{\mu }$ with $n^{2}M^{2}=m_{A}^{2}/\lambda _{A}$, as follows from its equation of motion. Another motivation for the nonlinear vector field constraint (\ref{con}) might be an attempt to avoid an infinite self-energy for the electron in classical electrodynamics, as was originally suggested by Dirac \cite{dir} and extended later to various vector field theory cases \cite{vent}.} \begin{equation} A^{2}=n^{2}M^{2}\qquad (A^{2}\equiv A_{\mu }A^{\mu },\quad n^{2}\equiv n_{\nu }n^{\nu }). \label{con} \end{equation} In this paper we shall simply postulate that the existence of the constraint (\ref{con}) is to be upheld by adjusting the parameters of the Lagrangian. We then show that the SLIV conjecture, which is related to the condensation of a generic vector field or vector field multiplet, happens by itself to be powerful enough to impose gauge invariance, provided that we allow the corresponding Lagrangian density to be adjusted to ensure self-consistency without losing too many degrees of freedom.
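As a quick consistency check of the limiting procedure in the footnote, varying the quartic potential gives \begin{equation*} \frac{\partial U}{\partial A^{\mu }}=\left( -m_{A}^{2}+\lambda _{A}A^{2}\right) A_{\mu }=0\quad \Longrightarrow \quad A^{2}=\frac{m_{A}^{2}}{\lambda _{A}}, \end{equation*} so that taking $m_{A}^{2},\lambda _{A}\rightarrow \infty$ at fixed ratio freezes the radial (``Higgs'') excitation and leaves precisely the constraint (\ref{con}) with $n^{2}M^{2}=m_{A}^{2}/\lambda _{A}$; this is merely the footnote's argument spelled out in one line.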
Due to the Lorentz violation, this theory acquires on its own a gauge-type invariance, which gauges the starting global symmetry of the interacting vector and matter fields involved. In essence, the gauge invariance (with a proper gauge-fixing term) appears as a necessary condition for these vector fields not to be superfluously restricted in degrees of freedom. In fact, the crucial equations (\ref{id}) and (\ref{id1}) below express the relations needed to reduce the number of independent equations among the equations of motion and the constraint (\ref{con}). But notice that we are not assuming gauge invariance to derive equations (\ref{id}) and (\ref{id1}); our philosophy is to derive gauge invariance, not to put it in. Due to the constraint (\ref{con}), the true vacuum in the theory is chosen by the Lorentz violation, SLIV. The self-consistency problem, to which we adjusted the couplings in the Lagrangian, might have been avoided by using a Lagrange multiplier associated with the constraint (\ref{con}). However, it is rather the philosophy of the present article to look for consistency of the equations of motion and the constraint without introducing such a Lagrange multiplier. In the next Sec.~2 we consider the global Abelian symmetry case, which eventually appears as ordinary QED taken in a nonlinear gauge. While such a model for QED was considered before on its own [11-14], we actually derive it now using the pure SLIV conjecture. Then in Sec.~3 we generalize our consideration to the global non-Abelian internal symmetry case and come to a conventional Yang-Mills theory with that symmetry automatically gauged. Specifically, we will see that in a theory with a symmetry group $G$ having $D$ generators, not only the pure Lorentz symmetry $SO(1,3)$ but also the larger accidental symmetry $SO(D,3D)$ of the Lorentz violating vector field constraint happens to be spontaneously broken. As a result, although the pure Lorentz violation still generates only one true Goldstone vector boson, the accompanying pseudo-Goldstone vector bosons related to the $SO(D,3D)$ breaking also come into play, properly completing the whole gauge field multiplet of the internal symmetry group taken. Remarkably, they appear to be strictly massless as well, being protected by the simultaneously generated non-Abelian gauge invariance. When expressed in terms of the pure Goldstone vector modes these theories, both Abelian and non-Abelian, look essentially nonlinear and contain Lorentz and $CPT$ violating couplings. However, due to cancellations, they appear to be physically indistinguishable from the conventional QED and Yang-Mills theories. On the other hand, their generic, SLIV induced, gauge invariance could of course be broken by some high-order operators stemming from very short gravity-influenced distances, which would lead to physical Lorentz violation. This and some other of our conclusions are discussed in the final Sec.~4. \section{Abelian theory} Suppose first that there is only one vector field $A_{\mu }$ and one complex matter field $\psi $, a charged fermion or scalar, in a theory given by a general Lorentz invariant Lagrangian $L(A,\psi )$ with the corresponding global $U(1)$ charge symmetry imposed.
Before proceeding further, note first that, while a conventional variation principle requires the equation of motion \begin{equation} \frac{\partial L}{\partial A_{\mu }}-\partial _{\nu }\frac{\partial L}{\partial (\partial _{\nu }A_{\mu })}=0 \label{eqm} \end{equation} to be satisfied, the vector field $A_{\mu }$, both massive and massless, still contains one superfluous component, which is usually eliminated by imposing some supplementary condition. Such a condition is typically obtained by taking the 4-divergence of the Euler equation (\ref{eqm}). For the massive QED case (with the gauge invariant $F_{\mu \nu }F^{\mu \nu }$ form for the vector field kinetic term) it is known to be the spin-1 or Lorentz condition $\partial _{\mu }A^{\mu }=0$, while for conventional massless QED many other conditions (gauges) may alternatively be taken. Let us now subject the vector field $A_{\mu }(x)$ in a general Lagrangian $L(A_{\mu},\psi )$ to the SLIV constraint (\ref{con}), which presumably chooses the true vacuum in the theory. Once the SLIV constraint is imposed, any extra supplementary condition is no longer possible, since this would superfluously restrict the number of degrees of freedom of the vector field, which is inadmissible. In fact, a further reduction in the number of independent $A_{\mu }$ components would make it impossible to set the required initial conditions in the appropriate Cauchy problem and, in quantum theory, to choose self-consistent equal-time commutation relations\footnote{For example, the need for more than two degrees of freedom is well known for a massive vector field and for quantum electrodynamics. In the massive vector field case there are three physical spin-1 states to be described by $A_{\mu }$, whereas for QED, apart from the two physical (transverse) photon spin states, one formally needs one more component in $A_{\mu }$ ($A_{0}$ or $A_{3}$) as the Lagrange multiplier needed to get the Gauss law. So, in both cases only one component of $A_{\mu }$ may be eliminated.} \cite{ogi3}. It is also well known \cite{GL} that there is no way to construct a massless field $A_{\mu }$, which transforms properly as a 4-vector, as a linear combination of creation and annihilation operators for helicity $\pm 1$ states. Under this assumption of not getting too many constraints\footnote{The fact that there is a threat of too many supplementary conditions (an inconsistency) is because we have chosen not to put a Lagrange multiplier term for the constraint (\ref{con}) into Eq.~(\ref{eqm}). Had we explicitly introduced such a Lagrange multiplier term, $F(x)(A^{2}-n^{2}M^{2})$, into the Lagrangian $L$, the equation of motion for the vector field $A_{\mu }$ would have changed, so that the 4-divergence of this equation would now determine the Lagrange multiplier function $F(x)$ rather than satisfy the identity (\ref{id}) appearing below.}, we shall now derive gauge invariance. Since the 4-divergence of the vector field Euler equation (\ref{eqm}) should vanish when the equations of motion are used, this divergence must be expressible as a sum over the equations of motion multiplied by appropriate quantities.
This implies that, without using the equations of motion but still using the constraint (\ref{con}), we have an identity for the vector and matter (fermion, for definiteness) fields of the following type: \begin{eqnarray} \partial _{\mu }\left( \frac{\partial L}{\partial A_{\mu }}-\partial _{\nu }\frac{\partial L}{\partial (\partial _{\nu }A_{\mu })}\right) &\equiv &\left( \frac{\partial L}{\partial A_{\mu }}-\partial _{\nu }\frac{\partial L}{\partial (\partial _{\nu }A_{\mu })}\right) (c)A_{\mu }+ \notag \\ &&+\left( \frac{\partial L}{\partial \psi }-\partial _{\nu }\frac{\partial L}{\partial (\partial _{\nu }\psi )}\right) (it)\psi + \label{id} \\ &&+\overline{\psi }(-it)\left( \frac{\partial L}{\partial \overline{\psi }}-\partial _{\nu }\frac{\partial L}{\partial (\partial _{\nu }\overline{\psi })}\right) . \notag \end{eqnarray} Here the coefficients $c$ and $t$ of the Eulerians on the right-hand side (which vanish by themselves when the equations of motion are fulfilled) are dimensionless constants whose particular values are conditioned by the starting Lagrangian $L(A_{\mu },\psi )$, taken, for simplicity, with renormalisable coupling constants. This identity (\ref{id}) implies the invariance of $L$ under the vector and fermion field local transformations whose infinitesimal form is given by\footnote{Actually, one can confirm this proposition by expanding the action with the transformed Lagrangian density $\int d^{4}xL(A^{\prime },\psi ^{\prime })$ in terms of functional derivatives and then using the identity (\ref{id}).} \begin{equation} \delta A_{\mu }=\partial _{\mu }\omega +c\omega A_{\mu },\text{ \ \ }\delta \psi =it\omega \psi \label{trans} \end{equation} where $\omega (x)$ is an arbitrary function, restricted only by the requirement to conform with the nonlinear constraint (\ref{con}). Conversely, the identity (\ref{id}) in its turn follows from the invariance of the Lagrangian $L$ under the transformations (\ref{trans}). Both the direct and converse assertions are in fact particular cases of Noether's second theorem \cite{noeth}. Apart from this invariance, one now has to confirm that the transformations (\ref{trans}) in fact form an Abelian symmetry group. Constructing the corresponding Lie bracket operation $(\delta _{1}\delta _{2}-\delta _{2}\delta _{1})$ for two successive vector field variations we find that, while the fermion transformation in (\ref{trans}) is an ordinary Abelian local one with zero Lie bracket, for the vector field transformations there appears a non-zero result \begin{equation} (\delta _{1}\delta _{2}-\delta _{2}\delta _{1})A_{\mu }=c(\omega _{1}\partial _{\mu }\omega _{2}-\omega _{2}\partial _{\mu }\omega _{1}) \label{SL} \end{equation} unless the coefficient $c=0$. Note also that for non-zero $c$ the variation of $A_{\mu }$ given by (\ref{SL}) is an essentially arbitrary vector function. Such a freely varying $A_{\mu }$ is only consistent with a trivial Lagrangian (i.e.\ $L={\rm const}$). Thus, in order to have a non-trivial Lagrangian, it is necessary to have $c=0$, and the theory then possesses an Abelian local symmetry\footnote{We will see below (Sec.~3) that non-zero $c$-type coefficients appear in the non-Abelian internal symmetry case, resulting eventually in a Yang-Mills gauge invariant theory.}.
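The Lie bracket computation (\ref{SL}) is elementary but sign-prone, so a mechanical check may be useful. The following minimal sketch (Python with \textsc{sympy}) reduces all fields to functions of a single coordinate $x$, a toy simplification adequate for the algebra, and reproduces the bracket up to the ordering convention adopted for $\delta _{1}\delta _{2}$: \begin{verbatim}
import sympy as sp

# Toy one-dimensional reduction of the Abelian variation
# delta_w A = w' + c*w*A; only A varies, the gauge functions do not.
x, c, eps = sp.symbols('x c epsilon')
A  = sp.Function('A')(x)
w1 = sp.Function('omega1')(x)
w2 = sp.Function('omega2')(x)

def vary(expr, w):
    # first-order variation of expr under A -> A + eps*(w' + c*w*A)
    dA = sp.diff(w, x) + c*w*A
    return sp.diff(expr.subs(A, A + eps*dA), eps).subs(eps, 0)

comm = vary(vary(A, w2), w1) - vary(vary(A, w1), w2)
print(sp.simplify(comm))   # c*(omega2*omega1' - omega1*omega2')
\end{verbatim}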
Thus we have shown how the choice of a true vacuum conditioned by the SLIV constraint (\ref{con}) enforces the modification of the Lagrangian $L$, so as to convert the starting global $U(1)$ charge symmetry into the local one (\ref{trans}). Otherwise, the theory would superfluously restrict the number of degrees of freedom of the vector field, which would be inadmissible. This SLIV induced local Abelian symmetry (\ref{trans}) now allows the Lagrangian $L$ to be determined in full. For a minimal theory with renormalisable coupling constants, it is in fact the conventional QED Lagrangian which we eventually come to: \begin{equation} L(A_{\mu },\psi )=-\frac{1}{4}F_{\mu \nu }F^{\mu \nu }+\overline{\psi }(i\gamma \partial -m)\psi -eA_{\mu }\overline{\psi }\gamma ^{\mu }\psi \label{lagr1} \end{equation} with the SLIV constraint $A^{2}=n^{2}M^{2}$ imposed on the vector field $A_{\mu }$. In the derivation made, we were only allowed to use gauge transformations consistent with the constraint (\ref{con}), which now plays the role of a gauge-fixing term for the resulting gauge invariant theory\footnote{As indicated in refs.~\cite{nambu,dir}, the SLIV constraint equation for the corresponding finite gauge function $\omega (x)$, $(A_{\mu }+\partial_{\mu }\omega)(A^{\mu} + \partial^{\mu}\omega)=n^{2}M^{2}$, appears to be mathematically equivalent to the classical Hamilton-Jacobi equation of motion for a charged particle. Thus, this equation should have a solution for some class of gauge functions $\omega (x)$, inasmuch as there is a solution to the classical problem.} (\ref{lagr1}). Note that a quartic potential $U(A_{\mu})$ of the type discussed in footnote 1 would give vanishing contributions on both sides of Eq.~(\ref{id}), when the nonlinear constraint (\ref{con}) with the SLIV scale $M^2$ given in the footnote is imposed. Furthermore, the contribution of such a potential to the Lagrangian (\ref{lagr1}) would then reduce to an inessential constant. One can rewrite the Lagrangian $L(A_{\mu },\psi )$ in terms of the physical photons, now identified as the SLIV generated vector Goldstone bosons. For this purpose let us take the following handy parameterization for the vector potential $A_{\mu }$ in the Lagrangian $L$: \begin{equation} A_{\mu }=a_{\mu }+\frac{n_{\mu }}{n^{2}}(n\cdot A)\qquad (n\cdot A\equiv n_{\nu }A^{\nu }) \label{par} \end{equation} where $a_{\mu }$ is the pure Goldstonic mode satisfying \begin{equation} n\cdot a=0,\qquad (n\cdot a\equiv n_{\nu }a^{\nu }) \label{sup} \end{equation} while the effective ``Higgs'' mode (or the $A_{\mu }$ component in the vacuum direction) is given by the scalar product $n\cdot A$. Substituting this parameterization (\ref{par}) into the vector field constraint (\ref{con}), one comes to the equation for $n\cdot A$: \begin{equation} n\cdot A =(M^{2}-n^{2}a^{2})^{\frac{1}{2}}=M-\frac{n^{2}a^{2}}{2M}+O(1/M^{2}) \label{constr1} \end{equation} where $a^{2}=a_{\mu }a^{\mu }$; we take, for definiteness, the positive sign for the square root and expand it in powers of $a^{2}/M^{2}$. Putting then the parameterization (\ref{par}), with the SLIV constraint (\ref{constr1}), into our basic gauge invariant Lagrangian (\ref{lagr1}), one comes to the truly Goldstonic model for QED.
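The expansion (\ref{constr1}) can likewise be checked mechanically; a one-line sketch (Python with \textsc{sympy}, keeping $n^{2}$ as a symbol since it may be of either sign): \begin{verbatim}
import sympy as sp

M = sp.symbols('M', positive=True)
n2, a2 = sp.symbols('n2 a2')        # n^2 = +-1, a2 = a_mu a^mu
print(sp.series(sp.sqrt(M**2 - n2*a2), a2, 0, 2))
# -> M - a2*n2/(2*M) + O(a2**2)
\end{verbatim}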
This model might seem unacceptable since it contains, among other terms, the inappropriately large Lorentz violating fermion bilinear $eM\overline{\psi }(\gamma \cdot n/n^{2})\psi $, which appears when the expansion (\ref{constr1}) is applied to the fermion current interaction term in the Lagrangian $L$ (\ref{lagr1}). However, due to the local invariance of the Lagrangian (\ref{lagr1}), this term can be gauged away by making an appropriate redefinition of the fermion field according to \begin{equation} \psi \rightarrow e^{ieM(x\cdot n/n^{2})}\psi \label{red} \end{equation} through which the $eM\overline{\psi }(\gamma \cdot n/n^{2})\psi $ term is exactly cancelled by an analogous term stemming from the fermion kinetic term. So, one eventually arrives at the essentially nonlinear SLIV Lagrangian for the Goldstonic $a_{\mu }$ field (taken to first order in $a^{2}/M^{2}$) \begin{eqnarray} L(a_{\mu },\psi ) &=&-\frac{1}{4}f_{\mu \nu }f^{\mu \nu }-\frac{1}{2}\delta (n\cdot a)^{2}-\frac{1}{4}f_{\mu \nu }h^{\mu \nu }\frac{n^{2}a^{2}}{M}+ \label{NL} \\ &&+\overline{\psi }(i\gamma \partial -m)\psi -ea_{\mu }\overline{\psi }\gamma ^{\mu }\psi +\frac{en^{2}a^{2}}{2M}\overline{\psi }(\gamma \cdot n)\psi . \notag \end{eqnarray} We have denoted its field strength tensor by $f_{\mu \nu }=\partial _{\mu }a_{\nu }-\partial _{\nu }a_{\mu }$, while $h_{\mu \nu }=n^{\mu }\partial ^{\nu }-n^{\nu }\partial ^{\mu }$ is a new SLIV oriented differential tensor acting on the infinite series in $a^{2}$ coming from the expansion of the effective ``Higgs'' mode (\ref{constr1}), of which we have only kept the first order term $-n^{2}a^{2}/2M$ throughout the Lagrangian $L(a_{\mu },\psi )$. We have also explicitly introduced the orthogonality condition $n\cdot a=0$ into the Lagrangian through the second term, which can be treated as the gauge-fixing term (taking the limit $\delta \rightarrow \infty $). Furthermore, we have retained the notation $\psi $ for the redefined fermion field. This nonlinear QED model was first studied on its own by Nambu long ago \cite{nambu}. As one can see, the model contains the massless vector Goldstone boson modes (keeping the massive ``Higgs'' mode frozen) and, in the limit $M\rightarrow \infty $, is indistinguishable from conventional QED taken in the general axial (temporal or pure axial) gauge. So, for the part of the Lagrangian $L(a_{\mu},\psi )$ given by the zero-order terms in $1/M$, the spontaneous Lorentz violation simply corresponds to a non-covariant gauge choice in an otherwise gauge invariant (and Lorentz invariant) theory. Remarkably, all the other (first and higher order in $1/M$) terms in $L(a_{\mu},\psi )$ (\ref{NL}), though Lorentz and $CPT$ violating by themselves, appear not to cause physical SLIV effects, due to strict cancellations in the physical processes involved. So, the nonlinear constraint (\ref{con}) applied to the standard QED Lagrangian (\ref{lagr1}) appears in fact to be a possible gauge choice, while the $S$-matrix remains unaltered under such a gauge convention. This conclusion was first confirmed at the tree level \cite{nambu} and recently extended to the one-loop approximation \cite{ac}. All the one-loop contributions to the photon-photon, photon-fermion and fermion-fermion interactions violating Lorentz invariance were shown to cancel each other exactly, in the manner observed earlier for the simplest tree-order diagrams.
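The cancellation behind the redefinition (\ref{red}) can be made explicit in one line: under a local phase $\psi \rightarrow e^{i\alpha (x)}\psi$ the fermion kinetic term shifts as \begin{equation*} \overline{\psi }i\gamma ^{\mu }\partial _{\mu }\psi \;\rightarrow \;\overline{\psi }i\gamma ^{\mu }\partial _{\mu }\psi -(\partial _{\mu }\alpha )\overline{\psi }\gamma ^{\mu }\psi , \end{equation*} so the choice $\alpha =eM(x\cdot n/n^{2})$, for which $\partial _{\mu }\alpha =eM\,n_{\mu }/n^{2}$, produces exactly $-eM\overline{\psi }(\gamma \cdot n/n^{2})\psi$ and removes the large bilinear quoted above.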
This suggests that the vector field constraint $A^{2}=n^{2}M^{2}$, having been treated as a nonlinear gauge choice at the tree (classical) level, remains just a gauge condition when quantum effects are taken into account as well. To summarize, let us recall the steps made in the derivation above. We started with the most general Lorentz invariant Lagrangian $L(A_{\mu},\psi )$, proposing only a global internal $U(1)$ symmetry for the charged matter fields involved. The requirement for the vector field equations of motion to be compatible with the true vacuum chosen by the SLIV (\ref{con}) led us to the necessity for the identity (\ref{id}) to be satisfied by the Lagrangian $L$. According to Noether's second theorem \cite{noeth}, this identity implies the invariance of the Lagrangian $L$ under the $U(1)$ charge gauge transformations of all the interacting fields. And, finally, this local symmetry allows us to completely establish the underlying theory, which appears to be standard QED (\ref{lagr1}) taken in the nonlinear gauge (\ref{con}), or the nonlinear $\sigma$ model-type QED in a general axial gauge, both preserving physical Lorentz invariance. \section{Non-Abelian theory} Now we extend our discussion to the non-Abelian global internal symmetry case for a general Lorentz invariant Lagrangian $\mathcal{L}(\boldsymbol{A}_{\mu },\boldsymbol{\psi })$ for the vector and matter fields involved. This symmetry is given by a general group $G$ with $D$ generators $t_{\alpha }$, \begin{equation} \lbrack t_{\alpha },t_{\beta }]=ic_{\alpha \beta \gamma }t_{\gamma },\text{ \ }Tr(t_{\alpha }t_{\beta })=\delta _{\alpha \beta }\text{ \ \ }(\alpha ,\beta ,\gamma =0,1,...,D-1) \label{com} \end{equation} where $c_{\alpha \beta \gamma }$ are the structure constants of $G$. The corresponding vector fields, which transform according to the adjoint representation of $G$, are given in the matrix form $\boldsymbol{A}_{\mu }=\boldsymbol{A}_{\mu }^{\alpha }t_{\alpha }$. The matter fields (fermions or scalars) are, for definiteness, taken in the fundamental representation column $\boldsymbol{\psi }^{\sigma }$ ($\sigma =0,1,...,d-1$) of $G$. Let us again, as in the Abelian case above, subject the vector field multiplet $\boldsymbol{A}_{\mu }^{\alpha }(x)$ to a SLIV constraint of the form \begin{equation} Tr(\boldsymbol{A}_{\mu }\boldsymbol{A}^{\mu })=\boldsymbol{n}^{2}M^{2},\text{ \ \ }\boldsymbol{n}^{2}\equiv \boldsymbol{n}_{\mu }^{\alpha }\boldsymbol{n}^{\mu ,\alpha }=\pm 1, \label{CON} \end{equation} that presumably chooses the true vacuum in the theory. Here, as usual, we sum over repeated indices. This covariant constraint is not only the simplest one, but in fact the only possible SLIV condition that can be written for the vector field multiplet $\boldsymbol{A}_{\mu }^{\alpha }$ without superfluously restricting it (see the discussion below).
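As an elementary illustration of the normalization (\ref{com}), here is a minimal numerical sketch for $G=SU(2)$; the representation $t_{\alpha }=\sigma _{\alpha }/\sqrt{2}$ used below is an illustrative choice of ours, for which $c_{\alpha \beta \gamma }=\sqrt{2}\,\epsilon _{\alpha \beta \gamma }$: \begin{verbatim}
import numpy as np

# SU(2) sketch: with t_a = sigma_a/sqrt(2) one gets Tr(t_a t_b) = delta_ab
# and [t_a, t_b] = i c_abc t_c with c_abc = sqrt(2)*eps_abc.
s = [np.array([[0, 1], [1, 0]], complex),
     np.array([[0, -1j], [1j, 0]], complex),
     np.array([[1, 0], [0, -1]], complex)]
t = [m/np.sqrt(2) for m in s]

eps = np.zeros((3, 3, 3))
for a, b, g in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[a, b, g], eps[b, a, g] = 1.0, -1.0

for a in range(3):
    for b in range(3):
        assert np.isclose(np.trace(t[a] @ t[b]).real, float(a == b))
        rhs = 1j*np.sqrt(2)*sum(eps[a, b, g]*t[g] for g in range(3))
        assert np.allclose(t[a] @ t[b] - t[b] @ t[a], rhs)
print("Tr(t_a t_b) = delta_ab and c_abc = sqrt(2) eps_abc verified")
\end{verbatim}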
Although we only propose the $SO(1,3)\times G$ invariance of the Lagrangian $\mathcal{L}(\boldsymbol{A}_{\mu},\boldsymbol{\psi })$, the chosen SLIV constraint (\ref{CON}) in fact possesses a much higher accidental symmetry $SO(D,3D)$, determined by the dimensionality $D$ of the $G$ adjoint representation to which the vector fields $\boldsymbol{A}_{\mu }^{\alpha }$ belong\footnote{Actually, in the same way as in the Abelian case (footnote 1), such a SLIV constraint (\ref{CON}) might be related to the minimisation of some $SO(D,3D)$ invariant vector field potential $\mathcal{U}(\boldsymbol{A}_{\mu })=-\frac{m_{A}^{2}}{2}\,Tr(\boldsymbol{A}_{\mu }\boldsymbol{A}^{\mu })+\frac{\lambda _{A}}{4}[Tr(\boldsymbol{A}_{\mu }\boldsymbol{A}^{\mu })]^{2}$ followed by taking the limit $m_{A}^{2}\rightarrow \infty$, $\lambda _{A}\rightarrow \infty$ (while keeping the ratio $m_{A}^{2}/\lambda _{A}$ finite). Notably, the inclusion into this potential of another possible, while less symmetrical, four-linear self-interaction term of the type $(\lambda _{A}^{\prime }/4)Tr(\boldsymbol{A}_{\mu }\boldsymbol{A}^{\mu }\boldsymbol{A}_{\nu }\boldsymbol{A}^{\nu })$ would lead, as one can easily confirm, to an unacceptably large number ($4D$) of vector field constraints at the potential minimum.}. This symmetry is indeed spontaneously broken at a scale $M$, \begin{equation} <\boldsymbol{A}_{\mu }^{\alpha }(x)>\text{ }=\boldsymbol{n}_{\mu }^{\alpha }M \label{vev} \end{equation} with the vacuum direction now given by the `unit' rectangular matrix $\boldsymbol{n}_{\mu }^{\alpha }$, describing simultaneously both of the generalized SLIV cases, time-like ($SO(D,3D)\rightarrow SO(D-1,3D)$) or space-like ($SO(D,3D)\rightarrow SO(D,3D-1)$), depending on the sign of $\boldsymbol{n}^{2}\equiv \boldsymbol{n}_{\mu }^{\alpha }\boldsymbol{n}^{\mu ,\alpha }=\pm 1$. This matrix has in fact only one non-zero element in both cases, subject to the appropriate $SO(D,3D)$ rotation. It is, specifically, $\boldsymbol{n}_{0}^{0}$ or $\boldsymbol{n}_{3}^{0}$, provided that the vacuum expectation value (\ref{vev}) is developed along the $\alpha =0$ direction in the internal space and along the $\mu =0$ or $\mu =3$ direction, respectively, in the ordinary four-dimensional one. As we shall soon see, in response to each of these two breakings, side by side with the one true vector Goldstone boson corresponding to the spontaneous violation of the actual $SO(1,3)\times G$ symmetry of the Lagrangian $\mathcal{L}$, $D-1$ vector pseudo-Goldstone bosons (PGBs), related to the breaking of the accidental $SO(D,3D)$ symmetry of the constraint (\ref{CON}) per se, are also produced\footnote{Note that in total there appear $4D-1$ pseudo-Goldstone modes, complying with the number of broken generators of $SO(D,3D)$, both for time-like and space-like SLIV. Of these $4D-1$ pseudo-Goldstone modes, $3D$ modes correspond to the $D$ three-component vector states, as will be shown below, while the remaining $D-1$ modes are scalar states which will be excluded from the theory. In fact $D-r$ actual scalar Goldstone bosons (where $r$ is the rank of the group $G$), arising from the spontaneous violation of $G$, are contained among these excluded scalar states.}. Remarkably, in contrast to the familiar scalar PGB case \cite{GL}, the vector PGBs remain strictly massless, being protected by the simultaneously generated non-Abelian gauge invariance.
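The mode counting in the last footnote can be verified directly, using only $\dim SO(p,q)=n(n-1)/2$ with $n=p+q$; a minimal sketch: \begin{verbatim}
def dim_so(n):           # dim SO(p,q), n = p + q
    return n*(n - 1)//2

for D in (1, 3, 8):      # e.g. U(1), SU(2), SU(3) adjoints
    broken = dim_so(4*D) - dim_so(4*D - 1)   # SO(D,3D) -> SO(D-1,3D)
    assert broken == 4*D - 1 == 3*D + (D - 1)
    print(D, broken)     # = 3D vector modes + (D-1) scalar modes
\end{verbatim}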
Together with the above true vector Goldstone boson, they just complete the whole gauge field multiplet of the internal symmetry group $G$. Let us now turn to the possible supplementary conditions which can be imposed on the vector fields in the general Lagrangian $\mathcal{L}(\boldsymbol{A}_{\mu },\boldsymbol{\psi })$, in order to finally establish its form. While generally $D$ supplementary conditions may be imposed on the vector field multiplet $\boldsymbol{A}_{\mu }^{\alpha }$, one of them in the case considered is in fact the SLIV constraint (\ref{CON}). One might think that the other conditions would appear by taking the 4-divergences of the equations of motion \begin{equation} \frac{\partial \mathcal{L}}{\partial \boldsymbol{A}_{\mu }^{\alpha }}-\partial _{\nu }\frac{\partial \mathcal{L}}{\partial (\partial _{\nu }\boldsymbol{A}_{\mu }^{\alpha })}=0, \label{eqmI} \end{equation} which are determined by a variation of the Lagrangian $\mathcal{L}$. The point is, however, that due to the $G$ symmetry this operation would lead, on an equal footing, to $D$ independent conditions, thus giving in total, together with the basic SLIV constraint (\ref{CON}), $D+1$ constraints for the vector field multiplet $\boldsymbol{A}_{\mu }^{\alpha }$, which is inadmissible. Therefore, as in the Abelian case above, the 4-divergences of the Euler equations (\ref{eqmI}) should not produce supplementary conditions at all once the SLIV occurs. This means again that such 4-divergences should be arranged to vanish (though still keeping the global $G$ symmetry) either identically or as a result of the equations of motion for the vector and matter fields (fermion fields for definiteness), thus implying that, in the absence of these equations, there must hold a general identity of the type \begin{eqnarray} \partial _{\mu }\left( \frac{\partial \mathcal{L}}{\partial \boldsymbol{A}_{\mu }^{\alpha }}-\partial _{\nu }\frac{\partial \mathcal{L}}{\partial (\partial _{\nu }\boldsymbol{A}_{\mu }^{\alpha })}\right) &\equiv &\left( \frac{\partial \mathcal{L}}{\partial \boldsymbol{A}_{\mu }^{\beta }}-\partial _{\nu }\frac{\partial \mathcal{L}}{\partial (\partial _{\nu }\boldsymbol{A}_{\mu }^{\beta })}\right) C_{\alpha \beta \gamma }\boldsymbol{A}_{\mu }^{\gamma }+ \notag \\ &&+\left( \frac{\partial \mathcal{L}}{\partial \boldsymbol{\psi }}-\partial _{\nu }\frac{\partial \mathcal{L}}{\partial (\partial _{\nu }\boldsymbol{\psi })}\right) (iT_{\alpha })\boldsymbol{\psi }+ \label{id1} \\ &&+\overline{\boldsymbol{\psi }}(-iT_{\alpha })\left( \frac{\partial \mathcal{L}}{\partial \overline{\boldsymbol{\psi }}}-\partial _{\nu }\frac{\partial \mathcal{L}}{\partial (\partial _{\nu }\overline{\boldsymbol{\psi }})}\right) . \notag \end{eqnarray} The coefficients $C_{\alpha \beta \gamma }$ and $T_{\alpha }$ of the Eulerians on the right-hand side of the identity (\ref{id1}) can readily be identified with the structure constants $c_{\alpha \beta \gamma }$ and generators $t_{\alpha }$ (\ref{com}) of the group $G$. This follows because the right-hand side of the identity (\ref{id1}) must transform in the same way as the left-hand side, which transforms as the adjoint representation of $G$. Note that these coefficients consist of dimensionless constants corresponding to the starting `minimal' Lagrangian $\mathcal{L}(\boldsymbol{A}_{\mu },\boldsymbol{\psi })$, which is taken, for simplicity, with renormalisable coupling constants.
According to Noether's second theorem \cite{noeth}, the identity (\ref{id1}) again means the invariance of $\mathcal{L}$ under the vector and fermion field local transformations having the infinitesimal form \begin{equation} \delta \boldsymbol{A}_{\mu }^{\alpha }=\partial _{\mu }\omega ^{\alpha }+C_{\alpha \beta \gamma }\omega ^{\beta }\boldsymbol{A}_{\mu }^{\gamma },\text{ \ \ }\delta \boldsymbol{\psi }=iT_{\alpha }\omega ^{\alpha }\boldsymbol{\psi } \label{trans1} \end{equation} where the $\omega ^{\alpha }(x)$ are arbitrary functions, restricted only, again as in the Abelian case above, by the requirement to conform with the corresponding nonlinear constraint (\ref{CON}). Note that the existence of the starting global $G$ symmetry in the theory is important for our consideration, since without such a symmetry the basic identity (\ref{id1}) would be written with arbitrary coefficients $C_{\alpha \beta \gamma }$ and $T_{\alpha }$. Then this basic identity might be required for only some particular vector field $\boldsymbol{A}_{\mu }^{\alpha _{0}}$ rather than for the entire set $\boldsymbol{A}_{\mu }^{\alpha }$. This would eventually lead to the previous pure Abelian theory case just for this $\boldsymbol{A}_{\mu }^{\alpha _{0}}$ component, leaving aside all the others. It is precisely the existence of the starting global symmetry $G$ that ensures a non-Abelian group-theoretical solution for the local transformations (\ref{trans1}) in the theory. So, we have shown that in the non-Abelian internal symmetry case, as well as in the Abelian case, the imposition of the SLIV constraint (\ref{CON}) converts the starting global symmetry $G$ into the local one $G_{loc}$. Otherwise, the theory would superfluously restrict the number of degrees of freedom of the vector field multiplet $\boldsymbol{A}_{\mu }^{\alpha }$, which would certainly not be allowed. This SLIV induced local non-Abelian symmetry (\ref{trans1}) now completely determines the Lagrangian $\mathcal{L}$, following the standard procedure (see, for example, \cite{rabi}). For a minimal theory with renormalisable coupling constants, this corresponds in fact to a conventional Yang-Mills type Lagrangian \begin{equation} \mathcal{L}(\boldsymbol{A}_{\mu },\boldsymbol{\psi })=-\frac{1}{4}\,Tr(\boldsymbol{F}_{\mu \nu }\boldsymbol{F}^{\mu \nu })+\overline{\boldsymbol{\psi }}(i\gamma \partial -m)\boldsymbol{\psi }+g\overline{\boldsymbol{\psi }}\boldsymbol{A}_{\mu }\gamma ^{\mu }\boldsymbol{\psi } \label{nab} \end{equation} (where $\boldsymbol{F}_{\mu \nu }=\partial _{\mu }\boldsymbol{A}_{\nu }-\partial _{\nu }\boldsymbol{A}_{\mu }-ig[\boldsymbol{A}_{\mu },\boldsymbol{A}_{\nu }]$ and $g$ stands for the universal coupling constant of the theory) with the SLIV constraint (\ref{CON}) imposed. These constrained gauge fields $\boldsymbol{A}_{\mu }^{\alpha }$ contain, as we directly confirm below, one true Goldstone and $D-1$ pseudo-Goldstone vector bosons, corresponding to the spontaneous violation of the accidental $SO(D,3D)$ symmetry of the constraint (\ref{CON}).
Actually, as in the Abelian case above, after the explicit use of the corresponding SLIV constraint (\ref{CON}), which is so far the only supplementary condition for the vector field multiplet $\boldsymbol{A}_{\mu }^{\alpha }$, one can identify the pure Goldstone field modes $\boldsymbol{a}_{\mu }^{\alpha }$ as follows: \begin{equation} \boldsymbol{A}_{\mu }^{\alpha }=\boldsymbol{a}_{\mu }^{\alpha }+\frac{\boldsymbol{n}_{\mu }^{\alpha }}{\boldsymbol{n}^{2}}(\boldsymbol{n}\cdot \boldsymbol{A}),\text{ \ }\boldsymbol{n}\cdot \boldsymbol{a}\equiv \boldsymbol{n}_{\mu }^{\alpha }\boldsymbol{a}^{\mu ,\alpha }=0. \label{sup'} \end{equation} At the same time an effective ``Higgs'' mode (i.e., the $\boldsymbol{A}_{\mu }^{\alpha }$ component in the vacuum direction $\boldsymbol{n}_{\mu }^{\alpha }$) is given by the product $\boldsymbol{n}\cdot \boldsymbol{A}\equiv \boldsymbol{n}_{\mu }^{\alpha }\boldsymbol{A}^{\mu ,\alpha }$, determined by the SLIV constraint \begin{equation} \boldsymbol{n}\cdot \boldsymbol{A}=\left[ M^{2}-\boldsymbol{n}^{2}\boldsymbol{a}^{2}\right] ^{\frac{1}{2}}=M-\frac{\boldsymbol{n}^{2}\boldsymbol{a}^{2}}{2M}+O(1/M^{2}), \label{constr''} \end{equation} where $\boldsymbol{a}^{2}=\boldsymbol{a}_{\nu }^{\alpha }\boldsymbol{a}^{\nu ,\alpha }$. As earlier in the Abelian case, we take the positive sign for the square root and expand it in powers of $\boldsymbol{a}^{2}/M^{2}$. Note that, apart from the pure vector fields, the general Goldstonic modes $\boldsymbol{a}_{\mu }^{\alpha }$ contain $D-1$ scalar fields, $\boldsymbol{a}_{0}^{\alpha ^{\prime }}$ or $\boldsymbol{a}_{3}^{\alpha ^{\prime }}$ ($\alpha ^{\prime }=1,...,D-1$), for the time-like ($\boldsymbol{n}_{\mu }^{\alpha }=n_{0}^{0}g_{\mu 0}\delta ^{\alpha 0}$) or space-like ($\boldsymbol{n}_{\mu }^{\alpha }=n_{3}^{0}g_{\mu 3}\delta ^{\alpha 0}$) SLIV, respectively. They can be eliminated from the theory if one imposes appropriate supplementary conditions on the $\boldsymbol{a}_{\mu }^{\alpha }$ fields, which are still free of constraints. Using their overall orthogonality (\ref{sup'}) to the physical vacuum direction $\boldsymbol{n}_{\mu }^{\alpha }$, one can formulate these supplementary conditions in terms of a general axial gauge for the entire $\boldsymbol{a}_{\mu }^{\alpha }$ multiplet \begin{equation} n\cdot \boldsymbol{a}^{\alpha }\equiv n_{\mu }\boldsymbol{a}^{\mu ,\alpha }=0,\text{ \ }\alpha =0,1,...,D-1. \label{sup''} \end{equation} Here $n_{\mu }$ is the unit Lorentz vector, analogous to the one introduced in the Abelian case, which is now oriented in Minkowski space-time so as to be parallel to the vacuum matrix\footnote{For such a choice the simple identity $\boldsymbol{n}_{\mu }^{\alpha }\equiv \frac{n\cdot \boldsymbol{n}^{\alpha }}{n^{2}}n_{\mu }$ holds, showing that the rectangular vacuum matrix $\boldsymbol{n}_{\mu }^{\alpha }$ has the factorized ``two-vector'' form.} $\boldsymbol{n}_{\mu }^{\alpha }$. As a result, apart from the ``Higgs'' mode excluded earlier by the orthogonality condition (\ref{sup'}), all the other scalar fields are also eliminated, and only the pure vector fields, $\boldsymbol{a}_{i}^{\alpha }$ ($i=1,2,3$) or $\boldsymbol{a}_{\mu ^{\prime }}^{\alpha }$ ($\mu ^{\prime }=0,1,2$) for time-like or space-like SLIV respectively, are left in the theory.
Clearly, the components $\boldsymbol{a}_{i}^{\alpha =0}$ and $\boldsymbol{a}_{\mu ^{\prime }}^{\alpha =0}$ correspond to the Goldstone boson, for each type of SLIV respectively, while all the others (for $\alpha =1,...,D-1$) are vector PGBs. We now show that these Goldstonic vector fields, denoted generally as $\boldsymbol{a}_{\mu }^{\alpha }$ but with the supplementary conditions (\ref{sup''}) understood, appear truly massless in the SLIV inspired gauge invariant Lagrangian $\mathcal{L}$ (\ref{nab}) subject to the SLIV constraint (\ref{CON}). Actually, substituting the parameterization (\ref{sup'}) with the SLIV constraint (\ref{constr''}) into the Lagrangian (\ref{nab}), one is led to a highly nonlinear Yang-Mills theory in terms of the pure Goldstonic modes $\boldsymbol{a}_{\mu }^{\alpha }$. However, as in the Abelian case above, one should first use the local invariance of the Lagrangian $\mathcal{L}$ to gauge away the apparently large Lorentz violating terms, which appear in the theory in the form of fermion and vector field bilinears. As one can readily see, they stem from the expansion (\ref{constr''}) when it is applied to the couplings $g\overline{\boldsymbol{\psi }}\boldsymbol{A}_{\mu }\gamma ^{\mu }\boldsymbol{\psi }$ and $-\frac{1}{4}g^{2}Tr([\boldsymbol{A}_{\mu },\boldsymbol{A}_{\nu }]^{2})$, respectively, in the Lagrangian (\ref{nab}). Analogously to the Abelian case, we make the appropriate redefinitions of the fermion ($\boldsymbol{\psi }$) and vector ($\boldsymbol{a}_{\mu }\equiv \boldsymbol{a}_{\mu }^{\alpha }t_{\alpha }$) field multiplets: \begin{equation} \boldsymbol{\psi }\rightarrow U(\omega )\boldsymbol{\psi },\text{ \ \ }\boldsymbol{a}_{\mu }\rightarrow U(\omega )\boldsymbol{a}_{\mu }U(\omega )^{\dagger },\text{ \ }U(\omega )=e^{igM(x\cdot \boldsymbol{n}^{\alpha }/\boldsymbol{n}^{2})t_{\alpha }}. \label{red1} \end{equation} Since the phase of the transformation matrix $U(\omega )$ is linear in the space-time coordinate, the following equalities evidently hold: \begin{equation} \partial _{\mu }U(\omega )=igM\boldsymbol{n}_{\mu }U(\omega )=igMU(\omega )\boldsymbol{n}_{\mu },\text{ \ \ }\boldsymbol{n}_{\mu }\equiv \boldsymbol{n}_{\mu }^{\alpha }t_{\alpha }. \end{equation} One can readily confirm that the above-mentioned Lorentz violating terms are thereby cancelled by the analogous bilinears stemming from the kinetic terms. So, the final Lagrangian for the Goldstonic Yang-Mills theory takes the form (to first order in $\boldsymbol{a}^{2}/M^{2}$) \begin{eqnarray} \mathcal{L}(\boldsymbol{a}_{\mu }^{\alpha },\boldsymbol{\psi }) &=&-\frac{1}{4}Tr(\boldsymbol{f}_{\mu \nu }\boldsymbol{f}^{\mu \nu })-\frac{1}{2}\boldsymbol{\delta }(n\cdot \boldsymbol{a}^{\alpha })^{2}+\frac{1}{4}Tr(\boldsymbol{f}_{\mu \nu }\boldsymbol{h}^{\mu \nu })\frac{\boldsymbol{n}^{2}\boldsymbol{a}^{2}}{M}+ \notag \\ &&+\overline{\boldsymbol{\psi }}(i\gamma \partial -m)\boldsymbol{\psi }+g\overline{\boldsymbol{\psi }}\boldsymbol{a}_{\mu }\gamma ^{\mu }\boldsymbol{\psi }-\frac{g\boldsymbol{n}^{2}\boldsymbol{a}^{2}}{2M}\overline{\boldsymbol{\psi }}(\gamma \cdot \boldsymbol{n})\boldsymbol{\psi }.
\label{nab3} \end{eqnarray} Here the tensor $\boldsymbol{f}_{\mu \nu }$ is, as usual, $\boldsymbol{f}_{\mu \nu }=\partial _{\mu }\boldsymbol{a}_{\nu }-\partial _{\nu }\boldsymbol{a}_{\mu }-ig[\boldsymbol{a}_{\mu },\boldsymbol{a}_{\nu }]$, while $\boldsymbol{h}_{\mu \nu }$ is a new SLIV oriented tensor of the type \begin{equation*} \boldsymbol{h}_{\mu \nu }=\boldsymbol{n}_{\mu }\partial _{\nu }-\boldsymbol{n}_{\nu }\partial _{\mu }+ig([\boldsymbol{n}_{\mu },\boldsymbol{a}_{\nu }]-[\boldsymbol{n}_{\nu },\boldsymbol{a}_{\mu }]) \end{equation*} acting on the infinite series in $\boldsymbol{a}^{2}$ coming from the expansion of the effective ``Higgs'' mode (\ref{constr''}), of which we have only kept the first order term $-\boldsymbol{n}^{2}\boldsymbol{a}^{2}/2M$ throughout the Lagrangian $\mathcal{L}(\boldsymbol{a}_{\mu }^{\alpha },\boldsymbol{\psi })$. We have explicitly introduced the (axial) gauge-fixing term into the Lagrangian, corresponding to the supplementary conditions (\ref{sup''}) imposed. We have also retained the original notations for the fermion and vector fields after the transformations (\ref{red1}). The theory we have derived here is in essence a generalization of the nonlinear QED model \cite{nambu} to the non-Abelian case. As one can see, this theory contains the massless vector Goldstone and pseudo-Goldstone boson multiplet $\boldsymbol{a}_{\mu }^{\alpha }$ gauging the starting global symmetry $G$ and, in the limit $M\rightarrow \infty$, is indistinguishable from conventional Yang-Mills theory taken in a general axial gauge. So, for the part of the Lagrangian $\mathcal{L}(\boldsymbol{a}_{\mu }^{\alpha },\boldsymbol{\psi })$ given by the zero-order terms in $1/M$, the spontaneous Lorentz violation again simply corresponds to a non-covariant gauge choice in an otherwise gauge invariant (and Lorentz invariant) theory. Furthermore, one may expect that, as in the nonlinear QED model \cite{nambu}, all the first and higher order terms in $1/M$ in $\mathcal{L}$ (\ref{nab3}), though Lorentz and $CPT$ violating by themselves, do not cause physical SLIV effects, due to the mutual cancellation of their contributions to the physical processes involved. Recent tree level calculations \cite{cjm} related to the Lagrangian $\mathcal{L}(\boldsymbol{a}_{\mu }^{\alpha },\boldsymbol{\psi })$ seem to confirm this proposition. Therefore, the SLIV constraint (\ref{CON}) applied to a starting general Lagrangian $\mathcal{L}(\boldsymbol{A}_{\mu }^{\alpha },\boldsymbol{\psi })$, while generating the true Goldstonic vector field theory for the non-Abelian charge-carrying matter, is not likely to manifest itself in a physically Lorentz violating way. \section{Conclusion} The spontaneous Lorentz violation realized through a nonlinear vector field constraint of the type $A^{2}=M^{2}$ (where $M$ is the proposed scale for Lorentz violation) is shown to generate massless vector Goldstone bosons gauging the starting global internal symmetries involved, in both the Abelian and the non-Abelian symmetry case. The gauge invariance, as we have seen, follows directly from a general variation principle and Noether's second theorem \cite{noeth}, as a necessary condition for these bosons not to be superfluously restricted in degrees of freedom once the true vacuum in the theory is chosen by the SLIV constraint.
It should be stressed that we can of course only achieve this derivation of gauge invariance by allowing all the coupling constants in the Lagrangian density to be determined by the requirement of avoiding any extra restriction imposed on the vector field(s) beyond the SLIV constraint. Actually, this derivation excludes ``wrong'' couplings in the vector field Lagrangian, which would otherwise distort the final Lorentz symmetry broken phase with unphysical extra states, including ghost-like ones. Note that this procedure might, in some sense, be inspired by string theory, where the coupling constants are just vacuum expectation values of the dilaton and moduli fields \cite{string}. So, the adjustment of coupling constants in the Lagrangian would mean, in essence, a certain choice of the vacuum configurations of these fields, which are thus correlated with the SLIV. Another important point for this gauge symmetry derivation is that we followed our philosophy of imposing the SLIV constraints, (\ref{con}) and (\ref{CON}) respectively, without adding a Lagrange multiplier term, as one might have imagined should come with these constraints. Had we done so, the equations of motion would have changed and the Lagrange multiplier might have picked up the inconsistency, which we instead required to be solved in the Abelian case by Eq.~(\ref{id}) and in the non-Abelian case by Eq.~(\ref{id1}). In the Abelian case a massless vector Goldstone boson appears, which is naturally associated with the photon. In the non-Abelian case it was shown that the pure Lorentz violation still generates just one genuine Goldstone vector boson. However, the SLIV constraint (\ref{CON}) manifests a larger accidental $SO(D,3D)$ symmetry, which is not shared by the Lagrangian $\mathcal{L}$. The spontaneous violation of this $SO(D,3D)$ symmetry generates $D-1$ pseudo-Goldstone vector bosons which, together with the genuine Goldstone vector boson, complete the whole gauge field multiplet of the internal symmetry group $G$. Remarkably, these vector bosons all appear to be strictly massless, as they are protected by the simultaneously generated non-Abelian gauge invariance. These theories, both Abelian and non-Abelian, though essentially nonlinear, appear to be physically indistinguishable from the conventional QED and Yang-Mills theories, due to their generic, SLIV enforced, gauge invariance. One can actually see that it is just this gauge invariance that ensures that our theories do not have unreasonably large (proportional to the SLIV scale $M$) Lorentz violation in the fermion and vector field interaction terms. It appears also to ensure that all the physical Lorentz violating effects, even those suppressed by this SLIV scale, are non-observable. In this connection, the only way for physical Lorentz violation to appear would be if the above gauge invariance were somehow broken at very small distances. One could imagine how such a breaking might occur. Only gauge invariant theories provide, as we have learned, the needed number of degrees of freedom for the interacting vector fields once the SLIV occurs. Note that a superfluous restriction on a vector (or any other) field would make it impossible to set the required initial conditions in the appropriate Cauchy problem and, in quantum theory, to choose self-consistent equal-time commutation relations \cite{ogi3}.
One could expect, however, that gravity could in general hinder the setting of the required initial conditions at extra-small distances. Eventually this would manifest itself in a violation of the above gauge invariance in a theory through some high-order operators stemming from the gravity-influenced area, which could lead to physical Lorentz violation. We may return to this interesting possibility elsewhere. \section*{Acknowledgments} We would like to thank Rabi Mohapatra for useful discussions and comments.
\section{Introduction} Let $(M,g)$ be a Riemannian manifold and let $\lambda_1$ denote the first eigenvalue of the Laplace-Beltrami operator on $M$. If we assume that $M$ is of dimension $2$ and has volume $1$, it is well known, by a theorem of Yang-Yau, that $\lambda_1$ is a bounded function of the metric $g$ on $M$. One can ask if there is a Riemannian metric which achieves $$ \mbox{Sup}\{\lambda_1(g)\,|\, g \, \text{is a Riemannian metric,} \, \text{vol}(g)=1\}. $$ For $S^2$, this metric is known to be the Fubini-Study metric. In \cite{n}, Nadirashvili studies the same problem for $\bT^2$. He defines the notion of a $\lambda_1$-critical metric, which is, roughly speaking, a critical point for the function $\lambda_1(g)$. Note that $\lambda_1$ is not a differentiable function of $g$ in general, so this definition requires some care. We will say more on this ahead. For higher-dimensional Riemannian manifolds, El Soufi-Ilias, generalising a result of Nadirashvili, prove the following characterisation of $\lambda_1$-critical metrics. \begin{theorem}[El Soufi-Ilias, Nadirashvili] A Riemannian metric $g$ on $M$ is critical for $\lambda_1$ iff $g$ admits a set of eigenfunctions $\{f_a, a=0,\cdots, N\}$ for $\lambda_1(g)$ such that $F=(f_0,\cdots,f_N)$ embeds $M$ into $S^N$, with $g=F^*g_{FS}$ and $F(M)$ minimal in $S^N$. \end{theorem} Therefore $\lambda_1$-critical metrics yield minimal submanifolds of spheres. We are interested in the more symmetric case when $(M,g)$ admits an isometric group action by a group $G$. In \cite{cde}, Colbois-Dryden-El Soufi introduce the notion of $\lambda_1^G$-critical invariant metrics, where $\lambda_1^G$ is the smallest positive eigenvalue of the Laplacian restricted to $G$-invariant eigenfunctions. Again this notion is subtle, as $\lambda_1^G$ is not in general a differentiable function of the invariant metric, but it is analogous to the notion introduced by Nadirashvili. They prove the following theorem. \begin{theorem}[Colbois-Dryden-El Soufi] If $G$ has dimension greater than $1$ then $M$ admits no $G$-invariant metric which is critical for $\lambda_1^G$. \end{theorem} Given a group character $\chi$, it is easy to generalize the above notions to the setting of $\chi$-equivariant functions. These are functions $f:M\rightarrow \bC$ that satisfy $f(h\cdot x)=\chi(h)f(x)$ for all $x\in M$, $h\in G$. We then have a notion of equivariant first eigenvalue $\lambda_1^\chi$ and of $\lambda_1^\chi$-critical metric. More specifically, we are interested in the case of toric manifolds. These are symplectic manifolds $(M^{2n},\omega)$ admitting a Hamiltonian $\bT^n$-action. Symplectic toric manifolds always admit a large family of compatible integrable $\bT^n$-invariant complex structures, thus they carry several K\"ahler structures (see \cite{g}, \cite{a}). In fact, for a fixed $\omega$, toric K\"ahler structures in the class $[\omega]$ are very well understood: they are parametrised by a subset of the set of continuous functions on the moment polytope of $(M,\omega,\bT^n)$, which we denote by $\mbox{Spot}(M,\omega,\bT^n)$ and which we will describe carefully in the next section. We want to think of $\lambda_1^{\bT}$ as a function on $\mbox{Spot}$. That is, we want to consider only toric {\it K\"ahler} metrics in the class $[\omega]$. Because we are not considering all $\bT^n$-invariant metrics, the results in \cite{cde} do not apply to our setting (except in dimension $2$). There has recently been an interest in considering spectral problems in the realm of K\"ahler geometry.
In \cite{ajk} the authors define $\lambda_1$-extremal K\"ahler metrics on a K\"ahler manifold as those which are critical for $\lambda_1$ restricted to the space of K\"ahler metrics in a given class. We will define an analogous notion of criticality in our setting. More specifically, given a toric K\"ahler manifold, we are looking for torus-invariant K\"ahler metrics which are critical for $\lambda_1^{\bT}$. In this note our goal is to prove the following theorems. \begin{theorem}\label{alld} Let $(M,\omega,g,\bT^n)$ be a toric K\"ahler manifold. Then, there are no analytic toric K\"ahler structures compatible with $\omega$ and in the class $[\omega]$ which are critical for $\lambda_1^{\bT}$. \end{theorem} Any $k\in \bZ$ corresponds to an $S^1$-character. We will prove the following. \begin{theorem}\label{s2} Let $k$ be an integer. There are no $\lambda_1^k$-critical $S^1$-invariant metrics on $S^2.$ \end{theorem} When $k=1$ this is a consequence of the Colbois-Dryden-El Soufi theorem from above. We would like to be able to remove the analyticity assumption. It is known, due to results of Morrey, that solutions to elliptic systems of PDEs with analytic coefficients are themselves analytic. We will see that critical toric K\"ahler metrics and their eigenfunctions for the smallest eigenvalue are solutions to a system of PDEs whose coefficients are analytic. Unfortunately, the system is not elliptic. This paper is organised in the following way: in section \ref{back} we give some background on $\lambda_1$-critical metrics and on toric K\"ahler geometry; in section \ref{crit} we use the techniques developed to deal with criticality in the Riemannian case and adapt them to our setting so as to extract a useful characterisation of $\lambda_1^{\bT}$-critical metrics. We then use this characterisation to derive our main theorems in section \ref{proof}. The last section is somewhat independent of the rest of the paper. There, we show that there is an obvious system of PDEs satisfied by the pair (toric K\"ahler metric, corresponding eigenfunctions), but the system is nowhere elliptic. \noindent \textbf{Acknowledgements.} I would like to thank Christine Breiner and Heather Macbeth for many illuminating conversations about $\lambda_1$-critical metrics. \section{Background}\label{back} \subsection{$\lambda_1$-critical metrics}\label{back_crit} Let $(M,g)$ be a Riemannian manifold. To fix conventions, our Laplacian is given by $\Delta=d^*d$ and has positive eigenvalues. In coordinates $x_i$ on $M$ write $g=g_{ij}dx_i\otimes dx_j$. The Laplacian of a function $f$ on $M$ is given by \begin{equation}\label{lap_in_coord} \Delta f=-\frac{1}{\sqrt{d\varpi}}\frac{\partial}{\partial x_i}\left(\sqrt{d\varpi}\,g^{ij}\frac{\partial f}{\partial x_j}\right), \end{equation} where $g^{ij}$ denote the entries of the inverse of the matrix $\{g_{ij}\}$ and $d\varpi=\det{g_{ij}}$. The smallest positive eigenvalue of the Laplacian is called the first eigenvalue and is denoted by $\lambda_1(M,g)$. If we fix $M$, then $\lambda_1$ can be seen as a function on the space of all Riemannian metrics on $M$. It is not a differentiable function of $g$, but it is Lipschitz. In fact, given a one-parameter family of Riemannian metrics $g_t$ on $M$ with $g_0=g$, analytic in $t$, if $\lambda_1(g)$ is a multiple eigenvalue then $\lambda_1$ may become non-differentiable at $g$.
Despite this, there are real-valued functions $\Lambda_{0,t}, \cdots, \Lambda_{N,t}$ and one-parameter families of functions on $M$, $f_{0,t}, \cdots, f_{N,t}$, satisfying $$ \Delta f_{l,t}= \Lambda_{l,t} f_{l,t}, \quad l=0,\cdots, N, $$ and such that $\lambda_1(g_t)=\min\{\Lambda_{l,t},\, l=0,\cdots, N\}$, so that the function $\lambda_1(g_t)$ has a right and a left derivative $$ \frac{d\lambda_1(g_t)}{dt}(0^+)=\min \left\{\frac{d\Lambda_{l,t}} {dt}(0), \, l=0,\cdots, N \right\}, $$ $$ \frac{d\lambda_1(g_t)}{dt}(0^-)=\max \left\{\frac{d\Lambda_{l,t}} {dt}(0), \, l=0,\cdots, N \right\}. $$ \begin{definition} The metric $g$ is $\lambda_1$-critical if for any $1$-parameter family of metrics $g_t$ analytic in $t$ $$ \frac{d\lambda_1(g_t)}{dt}(0^-)\cdot \frac{d\lambda_1(g_t)}{dt}(0^+)<0. $$ \end{definition} (See \cite{n} and \cite{ei} for more details.) \subsection{Toric Geometry} We will try to be brief and assume some familiarity with the subject. For more details see \cite{g} and \cite{a}. \begin{definition} A K\"ahler manifold $(M,\omega,g)$, where $\omega$ is a symplectic form and $g$ is a Riemannian metric, is said to be toric if it admits an isometric, Hamiltonian $\bT^n$-action. \end{definition} In this case there is a moment map associated to the action, $\phi:M\rightarrow (\mbox{Lie}(\bT^n))^*\simeq \bR^n$, and the moment map image $P$ is a convex polytope of a special type (a Delzant polytope). In particular it can be written in the form $$ P=\left\{x\in \bR^n: x\cdot \nu_k-c_k>0, \, k=1,\cdots, d\right\} $$ and at every vertex there is an $SL(n,\bZ)$ transformation taking a neighbourhood of that vertex into a neighbourhood of $0$ in $$ \left\{x\in \bR^n: x_k>0, \, k=1,\cdots, n\right\}. $$ There is an open dense set in $M$, which we denote by $M^0$, where $\bT^n$ acts freely, and there is an equivariant symplectomorphism $\psi: M^0\rightarrow P\times \bT^n$ whose first factor is given by the moment map $\phi$. Here the $\bT^n$-action on $P\times \bT^n$ is given by the usual $\bT^n$-action on the second factor. Said differently, there are $\bT^n$-equivariant Darboux coordinates $(x,\theta)$ on $M^0$. We refer to these as action-angle coordinates. Given a polytope in $\bR^n$ of Delzant type one can construct from it a toric K\"ahler manifold $M_P$ in a canonical manner (see \cite{g}). It was shown by Delzant that in fact $P$ determines $(M,\omega)$ up to symplectomorphism. Abreu showed there is an effective way to parametrize all compatible $\bT^n$-invariant K\"ahler metrics. \begin{definition}\label{spot} Let $P$ be a Delzant polytope. A function $s\in \mathcal{C}^\infty(P)$ is called a symplectic potential if \begin{itemize} \item ${\rm{ Hess\,}} s$ is positive definite, \item $s-\sum_{k=1}^d \left(x\cdot \nu_k-c_k\right)\log(x\cdot \nu_k-c_k)$ is smooth on $\bar{P}$, \item ${\rm{ Hess\,}} s$ when restricted to each face of $P$ is positive definite. \end{itemize} We denote the set of all such functions by $\mbox{Spot}(P)$. \end{definition} One can associate to each $s\in \mbox{Spot}(P)$ a K\"ahler structure $g_s$ whose corresponding K\"ahler metric in action-angle coordinates can be written as $$ s_{ij}\,dx_i\otimes dx_j+s^{ij}\,d\theta_i\otimes d\theta_j, $$ where $s_{ij}$ denote the entries of ${\rm{ Hess\,}} s$ and $s^{ij}$ those of its inverse. In fact it can be shown that all toric K\"ahler structures arise this way. The K\"ahler structure constructed in \cite{g} is called the Guillemin K\"ahler structure. Its symplectic potential is $$ s_G=\sum_{k=1}^d \left(\left(x\cdot \nu_k-c_k\right)\log(x\cdot \nu_k-c_k)-\left(x\cdot \nu_k-c_k\right)\right).
$$ We make use of the following very elementary fact. \begin{fact} Smooth $\bT^n$-invariant functions on a toric K\"ahler manifold $M$ are in one-to-one correspondence with smooth functions on the closure $\bar{P}$ of the moment polytope of $M$. \end{fact} \begin{proof} We denote the space of smooth $\bT^n$-invariant functions by $\mathcal{C}^\infty_T(M)$. Denote the moment map for the $\bT^n$-action by $\phi$. Given an invariant function $F$ on $M$, set $f$ to be $f(x)=F(\phi^{-1}(x))$. This is well defined because $\phi(p)=\phi(q)$ implies $p$ and $q$ are in the same $\bT^n$-orbit and $F$ is invariant. Conversely, given $f\in \mathcal{C}^\infty(P)$, we define $F=f\circ \phi$. \end{proof} Similarly we have: \begin{fact} Continuous $\bT^n$-equivariant complex functions on a toric K\"ahler manifold $M$ are in one-to-one correspondence with continuous complex functions on the closure $\bar{P}$ of the moment polytope of $M$ that vanish on $\partial P$. \end{fact} \begin{proof} Characters of $\bT^n$ can be identified with elements of $\bZ^n$. Given $k\in\bZ^n$ we denote the space of continuous $k$-equivariant functions by $\mathcal{C}_k(M)$. We start by noting that if $F:M\rightarrow\bC$ is $k$-equivariant for $k\ne 0$ then $F$ vanishes on points with non-trivial isotropy. Indeed, if $p$ is a point where $\bT^n$ does not act freely, i.e.\ if $\phi(p)\in \partial P$, then for $e^{{\bf{ i}}\theta}$ non-trivial in the stabiliser group of $p$ we have both $F(e^{{\bf{ i}}\theta}\cdot p)=F(p)$ and $F(e^{{\bf{ i}}\theta}\cdot p)=e^{{\bf{ i}} k\cdot\theta}F(p)$, so that $F(p)=0.$ Let $\psi: M^0\rightarrow P\times \bT^n$ denote the action-angle coordinates map. If $f$ is a function on $P$, we define a $k$-equivariant function on $M^0$ by setting $F\circ\psi^{-1}(x,\theta)=f(x)e^{{\bf{ i}} k\cdot \theta}$. If $f$ vanishes on $\partial P$ we can extend $F$ by continuity to $M$ to be zero on $M\setminus M^0$. Conversely, given a $k$-equivariant $F$, define $f$ on $P$ by $f(x)=F\circ\psi^{-1}(x,0)$ and extend by $0$ to the boundary. As we have seen, $F$ vanishes on $M \setminus M^0$ and $\phi(M\setminus M^0)=\partial P$, so that $f$ is continuous on $\partial P$. \end{proof} \subsection{Equivariant spectrum on toric manifolds} Let $(M,g)$ be a Riemannian manifold with an isometric $G$-action. Let $\chi$ be a group character and let $\mathcal{C}^\chi(M)$ denote the set of continuous $\chi$-equivariant functions, $$ \mathcal{C}^\chi(M)=\{F\in \mathcal{C}(M,\bC):F(h\cdot p)=\chi(h)F(p),\,\forall h\in G\}. $$ The Laplacian induced from $g$ commutes with the $G$-action because $G$ acts by isometries, hence it restricts to $\mathcal{C}^\chi(M)\cap \mathcal{C}^\infty(M)$ for any given character of the group $G$. \begin{definition} Let $(M,g)$ be a Riemannian manifold with an isometric $G$-action. The $\chi$-equivariant first eigenvalue is the smallest positive eigenvalue of $\Delta_{|\mathcal{C}^\chi(M)\cap \mathcal{C}^\infty(M)}$, i.e. $$ \lambda_1^\chi(M,g, G)=\mbox{Inf}\left\{\frac{\int_M|dF|^2d\varpi_g}{\int_M|F|^2d\varpi_g}, \, F\in \mathcal{C}^\chi(M)\cap \mathcal{C}^\infty(M)\right\}, $$ where for the trivial character the infimum is taken over functions of zero mean. \end{definition} When $\chi$ is the trivial character we often write $\lambda_1^\chi=\lambda_1^G.$ We will be using these notions in the setting of toric K\"ahler manifolds, and we will think of $\lambda_1^k$ as a function of the symplectic potential inducing the K\"ahler metric, i.e., given $(M,\omega,\bT^n)$ symplectic toric with moment polytope $P$ and given $k\in \bZ^n$, we consider $$ \lambda_1^k:\mbox{Spot}(P)\rightarrow \bR^+ $$ and its variations.
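To fix ideas, here is the simplest example, computed directly from the formulas above. Take $n=1$ and $P=\,]-1,1[$, the moment polytope of $S^2$. The Guillemin potential is $$ s_G(x)=(1+x)\log(1+x)+(1-x)\log(1-x)-2, \qquad s_G''(x)=\frac{2}{1-x^2}, $$ so ${\rm{ Hess\,}} s_G$ is positive on $P$ and blows up at the boundary, while $1/s_G''=\frac{1-x^2}{2}$ extends smoothly to $\bar{P}$ and vanishes at $\pm 1$; the associated metric in action-angle coordinates is $s_G''\,dx^2+(1/s_G'')\,d\theta^2$, which (up to normalisation conventions) is the round metric. This vanishing of $1/s''$ at $\partial P$, which by the boundary conditions of Definition \ref{spot} holds for any $s\in\mbox{Spot}(P)$, is precisely what is used in the proof of Theorem \ref{s2} below.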
Given $k\in \bZ^n$, if $F$ is $k$-equivariant, it can be written in action-angle coordinates as $f(x)e^{{\bf{ i}} k\cdot \theta}$, so that from equation (\ref{lap_in_coord}) we have \begin{equation} \Delta F=-e^{{\bf{ i}} k\cdot \theta}\left(\frac{\partial}{\partial x_i}\left(s^{ij}\frac{\partial f}{\partial x_j}\right)-f k_ik_js_{ij}\right). \end{equation} Note that because $(x,\theta)$ are Darboux coordinates, $d\varpi=1$. The space of $k$-equivariant eigenfunctions for $\lambda_1^k$, which we denote by $E_1^k$ (or $E_1^\bT$ if $k=0$, in the invariant case), can be identified with a subset of $\mathcal{C}^\infty(P)$. Namely, if $k\ne 0$, $$ E_1^k\simeq\left\{f\in \mathcal{C}^\infty(P):\frac{\partial}{\partial x_i}\left(s^{ij}\frac{\partial f}{\partial x_j}\right)-f k^t{\rm{ Hess\,}} (s)k=-\lambda_1^k f, \, f=0 \,\text{on} \,\partial P\right\} $$ and $$ E_1^\bT\simeq\left\{f\in \mathcal{C}^\infty(P):\frac{\partial}{\partial x_i}\left(s^{ij}\frac{\partial f}{\partial x_j}\right)=-\lambda_1^{\bT} f\right\}. $$ In the invariant case, we often identify $f\in\mathcal{C}(P)$ with the associated eigenfunction on $M$, i.e., we confuse $f$ with $f\circ\phi$ and we write $\Delta f$ to mean $-\frac{\partial}{\partial x_i}\left(s^{ij}\frac{\partial f}{\partial x_j}\right).$ \section{Critical $\lambda_1^{\bT}$, $\lambda_1^k$ metrics}\label{crit} In this section we fix a toric symplectic manifold $(M,\omega,\bT^n)$ with moment polytope $P$. The first goal is to define critical metrics for the invariant/equivariant first eigenvalue. This is almost exactly a repetition of subsection \ref{back_crit}. To avoid the repetition and give a more unified treatment of the equivariant extremization problem and the classical extremization problem, we could have used the framework developed by Macbeth in \cite{mb}. This would involve showing that the measure described in the main theorem there is of a special type, because the spaces $E_1^k$ and $E_1^\bT$ are finite dimensional. Instead we will go through the argument in subsection \ref{back_crit} again. We want to define critical values for $\lambda_1^k:\mbox{Spot}(P)\rightarrow \bR^+$, but as in the Riemannian case discussed in subsection \ref{back_crit}, $\lambda_1^k:\mbox{Spot}(P)\rightarrow \bR^+$ is not a differentiable function at all points. Given a one-parameter family $s_t$ in $\mbox{Spot}(P)$ with $s_0=s$, analytic in $t$, there are real-valued functions $\Lambda_{0,t}, \cdots, \Lambda_{N,t}$ and one-parameter families of functions on $P$, $f_{0,t}, \cdots, f_{N,t}$, satisfying $$ \Delta f_{l,t}+f_{l,t}k^t{\rm{ Hess\,}} (s_t) k= \Lambda_{l,t} f_{l,t}, \quad l=0,\cdots, N, $$ and such that $\lambda_1^k(s_t)=\min\{\Lambda_{l,t},\, l=0,\cdots, N\}$, so that the function $\lambda_1^k$ has a right and a left derivative $$ \frac{d\lambda_1^k(s_t)}{dt}(0^+)=\min \left\{\frac{d\Lambda_{l,t}} {dt}(0), \, l=0,\cdots, N \right\}, $$ $$ \frac{d\lambda_1^k(s_t)}{dt}(0^-)=\max \left\{\frac{d\Lambda_{l,t}} {dt}(0), \, l=0,\cdots, N \right\}. $$ \begin{definition} The symplectic potential $s$ is $\lambda_1^k$-critical if for any $1$-parameter family of symplectic potentials $s_t$, analytic in $t$, $$ \frac{d\lambda_1^k(s_t)}{dt}(0^-)\cdot \frac{d\lambda_1^k(s_t)}{dt}(0^+)<0. $$ \end{definition} Setting $\delta s$ to be $\frac{d s_t}{dt}(0)$, we write $d\lambda_1^k(f_l,\delta s)=\frac{d\Lambda_{l,t}}{dt}(0).$ In fact, we can define $d\lambda_1^k(f,\delta s)$ for any $f\in E_1^k$ as follows.
Consider the Riemannian metrics corresponding to $s_t=s+t\delta s$ for $t$ sufficiently small. For each such $t$, $E_1^k(s_t)$ is the first $k$-equivariant eigenspace. We extend $f$ to a one-parameter family $f_t$ such that $f_t\in E_1^k(s_t)$ and let $\Lambda_t$ be the eigenvalue corresponding to $f_t$. Then $$ d\lambda_1^k(f,\delta s)=\frac{d\Lambda_{t}}{dt}(0). $$ As we will see ahead, this does not actually depend on $f_t$ but only on $f$. In fact the same phenomenon occurs in the non-equivariant case of subsection \ref{back_crit} and is a manifestation of something more general that is explained and exploited in \cite{mb}. We now use the toric framework to calculate $d\lambda_1^k(f,\delta s).$ \begin{lemma} Let $(M,\omega,\bT^n)$ be toric with moment polytope $P$. Let $s\in \mbox{Spot}(P)$. Given $\delta s\in \mathcal{C}^\infty(P)$ such that $\delta s$ and $d\delta s$ vanish on $\partial P$, and $f\in \mathcal{C}^{\infty}(P)$ corresponding to an eigenfunction of the Laplacian associated to $s$, $$ d\lambda_1^{\bT}(f,\delta s)=-\int_P \frac{\partial^2 \left(s^{il}f_l s^{jr}f_r\right)}{\partial x_i \partial x_j}\delta s\, dx, $$ where we write $f_r$ for $\frac{\partial f}{\partial x_r}$. If furthermore $f=0$ on $\partial P$ then $$ d\lambda_1^k(f,\delta s)=\int_P \left(-\frac{\partial^2 \re\left(s^{il}f_l s^{jr}\bar{f}_r\right)}{\partial x_i \partial x_j}+k^t{\rm{ Hess\,}} |f|^2k\right)\delta s\, dx. $$ \end{lemma} \begin{proof} Consider the path $s_t=s+t\delta s$ in $\mbox{Spot}(P)$, the corresponding path of Riemannian metrics on $M$, which we denote by $g_t$, and a path $f_t$ in $\mathcal{C}(P)$ corresponding to a path of eigenfunctions in $E_1^\bT(g_{t})$, the eigenspace for the smallest invariant eigenvalue of the Laplacian associated with $g_t$, such that $f_0=f.$ We have $\Delta_t f_t={\lambda_1^{\bT}}_t f_t.$ We want to calculate $$ \frac{d}{dt}_{|t=0}\lambda_1^{\bT}(f_t,g_t). $$ We may assume that $\int_P f_t^2dx=1$ for all $t$; taking derivatives, this implies $\int_P f\dot{f}dx=0$, where $\dot{f}=\frac{d f_t}{dt}.$ The quantity $ \frac{d}{dt}_{|t=0}\lambda_1^{\bT}(f_t,g_t)$ is given by \begin{IEEEeqnarray*}{l} \frac{d}{dt}_{|t=0}\int_P |df_t|^2_{g_t}dx\\ =\frac{d}{dt}_{|t=0}\int_P (\partial f_t)^t{\rm{ Hess\,}} ^{-1}(s_t)\partial f_t dx\\ =-\int_P (\partial f)^t{\rm{ Hess\,}} ^{-1}(s)\frac{d{\rm{ Hess\,}} (s_t)}{dt}_{|t=0}{\rm{ Hess\,}} ^{-1}(s)\partial f dx+2\int_P (\partial \dot{f})^t{\rm{ Hess\,}} ^{-1}(s)\partial f dx\\ =-\int_P (\partial f)^t{\rm{ Hess\,}} ^{-1}(s){{\rm{ Hess\,}} (\delta s)}{\rm{ Hess\,}} ^{-1}(s)\partial f dx+2\int_M \langle d(\dot{f}\circ \phi),d(f\circ\phi)\rangle\, d\varpi_g\\ =-\int_P (\partial f)^t{\rm{ Hess\,}} ^{-1}(s){{\rm{ Hess\,}} (\delta s)}{\rm{ Hess\,}} ^{-1}(s)\partial f dx+2\int_M (\dot f\circ \phi)\, \Delta (f\circ\phi)\, d\varpi_g\\ =-\int_P (\partial f)^t{\rm{ Hess\,}} ^{-1}(s){{\rm{ Hess\,}} (\delta s)}{\rm{ Hess\,}} ^{-1}(s)\partial f dx+2\lambda_1^{\bT}\int_P \dot f f dx\\ =-\int_P (\partial f)^t{\rm{ Hess\,}} ^{-1}(s){{\rm{ Hess\,}} (\delta s)}{\rm{ Hess\,}} ^{-1}(s)\partial f dx\\ =- \int_P\left(s^{il}f_l s^{jr}f_r\right)(\delta s)_{ij}dx,\\ \end{IEEEeqnarray*} where we have used $\phi$ to denote the moment map for the torus action on $M$. The conditions that $\delta s$ and $d\delta s$ vanish on $\partial P$ ensure that we can integrate the above by parts without picking up boundary terms, and hence $$ \frac{d}{dt}_{|t=0}\lambda_1^{\bT}(f_t,g_t)=- \int_P\frac{\partial^2\left(s^{il}f_l s^{jr}f_r\right)}{\partial x_i\partial x_j}(\delta s) dx, $$ as claimed.
The $k$-equivariant case is similar: \begin{IEEEeqnarray*}{l} \frac{d}{dt}_{|t=0}\lambda_1^k(f_t,g_t)\\ =\frac{d}{dt}_{|t=0}\int_P |d(e^{{\bf{ i}} k\cdot \theta}f_t)|^2_{g_t}dx\\ =\frac{d}{dt}_{|t=0}\int_P \left(\re\left( (\partial f_t)^t{\rm{ Hess\,}} ^{-1}(s_t)\partial \bar{f}_t \right) +|f_t|^2k^t{\rm{ Hess\,}} (s_t) k\right)dx\\ = \int_P\left(-\re(s^{il}f_l s^{jr}\bar{f}_r)+|f|^2k_ik_j\right)(\delta s)_{ij}dx.\\ \end{IEEEeqnarray*} Integrating by parts we get $$ \frac{d}{dt}_{|t=0}\lambda_1^{k}(f_t,g_t)= \int_P\left(-\frac{\partial^2\re\left(s^{il}f_l s^{jr}\bar{f}_r\right)}{\partial x_i\partial x_j}+k^t{\rm{ Hess\,}} |f|^2 k\right)\delta s\, dx. $$ \end{proof} We are now ready to prove our main characterisation of $\lambda_1^{\bT}$-critical metrics in this section. \begin{proposition}\label{characterisation_crit} In the same setting as above, the symplectic potential $s$ is $\lambda_1^k$-critical iff for all $\delta s\in \mathcal{C}^\infty(\bar{P})$ there are functions on $P$, $\{f_0,\cdots, f_N\}$, corresponding to $k$-equivariant eigenfunctions in $E_1^k(s)$, and $\alpha_0,\cdots, \alpha_N \in [0,1]$ satisfying $$ \sum_{a=0}^N \alpha_a \left(\frac{\partial^2 \re \left(s^{il}f_{a,l} s^{jr}\bar{f}_{a,r}\right)}{\partial x_i \partial x_j}-k^t{\rm{ Hess\,}} |f_a|^2k\right)=0. $$ \end{proposition} Again this proposition has an analogous counterpart in the classical critical first eigenvalue problem, and it is a manifestation of a more general phenomenon which is treated in \cite{mb}. To use Macbeth's results in our setting, we would need to prove that the measure described in the main theorem there is of a special type (the relevant fact being that $E_1^k(s)$ is finite dimensional). We have chosen to derive the results so as to be self-contained. \begin{proof} The condition that $s$ is critical can be rewritten as $$ s \, \mbox{ is critical} \, \iff \forall \delta s\in \mathcal{C}^\infty(\bar{P}),\, \exists \,f,h\in E_1^k(s): d\lambda_1^k(f,\delta s)<0<d\lambda_1^k(h,\delta s). $$ Now fix $\delta s\in \mathcal{C}^\infty(\bar{P})$ and consider $d\lambda_1^k(\cdot,\delta s)$ as a function on the finite dimensional vector space $E_1^k(s)$. By restricting to the sphere in $E_1^k(s)$ with respect to the $\mL^2$ norm we see that $$ s \, \mbox{ is critical} \, \implies \forall \delta s\in \mathcal{C}^\infty(\bar{P}),\, \exists \,f\in E_1^k(s),\,\int_P|f|^2dx=1: d\lambda_1^k(f,\delta s)=0. $$ The relevant thing to note is that multiplying $f$ by a fixed constant changes $d\lambda_1^k(f,\delta s)$ by multiplication by a positive constant. Now assume that $\delta s$ and its derivatives vanish along $\partial P$, so that from the previous lemma $$ d\lambda_1^k(f,\delta s)=\int_P \left(-\frac{\partial^2 \re\left(s^{il}f_l s^{jr}\bar{f}_r\right)}{\partial x_i \partial x_j}+k^t{\rm{ Hess\,}} |f|^2k\right)\delta s\, dx. $$ We set $$ Q_{s}(f)=-\frac{\partial^2 \re\left(s^{il}f_l s^{jr}\bar{f}_r\right)}{\partial x_i \partial x_j}+k^t{\rm{ Hess\,}} |f|^2k, $$ so that $$ d\lambda_1^k(f,\delta s)=\int_P Q_{s}(f)\delta s\, dx. $$ If $s$ is critical, then $$ \forall \delta s\in \mathcal{C}^\infty(\bar{P}) \mbox{ with } \delta s, d(\delta s)=0 \,\mbox{on}\, \partial P,\, \exists \,f\in E_1^k(s): \int_P |f|^2=1,\, \int_P Q_{s}(f)\delta s=0 . $$ We want to prove that $0$ is in the convex hull generated by $\{Q_s(f):\, f\in E_1^k(s)\}$. Let $\mathcal{K}$ be this convex hull. Suppose $0\notin \mathcal{K}$.
By the Hahn-Banach separation theorem applied in $\mL_k^2(M)$ (the $\mL^2$ completion of the space of $k$-equivariant functions on $M$, which we are identifying with a subspace of $\mL^2(P)$) there is a bounded linear functional $x$ on $\mL_k^2(M)$ such that $x_{|\mathcal{K}}>0$. By Riesz's representation theorem there is $\beta\in \mL^2(P)$ such that $$ x(h)=\int_P\beta h\,dx>0, \, \forall h\in \mathcal{K}. $$ Suppose first that $\beta$ and its first order derivatives vanish on $\partial P$. Then, because $s$ is critical, there is $f\in E_1^k(s)$ with $\mL^2$-norm equal to 1 such that $$ \int_P Q_s(f)\beta=0, $$ but by assumption $\int_P Q_s(f)\beta= x(Q_s(f))>0$, because $Q_s(f)\in \mathcal{K}$, and we get a contradiction. Since $\beta$ (or its first order derivatives) may not vanish on $\partial P$, we need a slight modification of the above argument. Consider a smooth bump function $\rho_\epsilon$ which is identically equal to $1$ on $P\setminus \mathcal{V}_\epsilon (\partial P)$, where $\mathcal{V}_\epsilon (\partial P)$ denotes a tubular neighbourhood of radius $\epsilon$ of $\partial P$, and which vanishes on $\partial P$ together with its first order derivatives. Let $\beta_\epsilon$ denote $\rho_\epsilon\beta$. Then, because $s$ is critical, there is $f_\epsilon \in E_1^k(s)$ with $\mL^2$-norm equal to 1 such that $$ \int_P Q_s(f_\epsilon)\beta_\epsilon=0. $$ Now $\{f_\epsilon\}$ is bounded and contained in a finite dimensional space, so it admits a convergent subsequence. Let $f\in E_1^k(s)$ be the limit. Because the subsequence converges in that finite dimensional subspace, $Q_s(f_\epsilon)$ converges to $Q_s(f)$ along the same subsequence. The sequence $\beta_\epsilon$ also converges a.e.\ to $\beta$, so that $Q_s(f_\epsilon)\beta_\epsilon$ has a subsequence that converges a.e.\ to $Q_s(f)\beta$. On the other hand, for that subsequence $|Q_s(f_\epsilon)\beta_\epsilon|\leq C |\beta|$ for some constant $C$; this is because along the subsequence there is a bound on the $\mL^\infty$-norm of $Q_s(f_\epsilon)$. By the dominated convergence theorem $$ \int_P Q_s(f_\epsilon)\beta_\epsilon\rightarrow \int_P Q_s(f)\beta=0. $$ But $\int_P Q_s(f)\beta= x(Q_s(f))>0$ because $Q_s(f)\in \mathcal{K}$, and we get a contradiction. We conclude that $0\in \mathcal{K}$ and the proposition follows. \end{proof} \section{Proof of the main Theorems \ref{s2}, \ref{alld}}\label{proof} The idea is to exploit the characterisation given in Proposition \ref{characterisation_crit} for critical toric K\"ahler metrics to conclude that such metrics do not exist. \subsection{The proof of theorem \ref{s2}} \begin{proof} Let $k\in \bZ$ be fixed. Under the right normalisation, $(S^2,\omega_{FS},S^1)$ is a toric symplectic manifold with moment polytope $]-1,1[$. Any $S^1$-invariant metric on $S^2$ is described by a symplectic potential $s\in \mbox{Spot}(]-1,1[)$. From Proposition \ref{characterisation_crit}, if it is critical for $\lambda_1^k$ then there are functions $\{f_0,\cdots, f_N\}$ and $\alpha_0,\cdots, \alpha_N\in [0,1]$ satisfying \begin{equation}\label{lap_S2} \left(\frac{f_a'}{s''}\right)'=\left(-\lambda+k^2s''\right)f_a \end{equation} and $$ \sum_{a=0}^N\alpha_a\left( \left|\frac{f_a'}{s''}\right|^2-k^2|f_a|^2\right)''=0. $$ As the nonzero $\alpha_a$ are positive (and at most $1$), discarding the vanishing ones, they can be absorbed into the $f_a$'s at the cost of losing the normalisation $\int_P |f_a|^2dx=1$. We write $$ \sum_{a=0}^N \left( \left|\frac{f_a'}{s''}\right|^2-k^2|f_a|^2\right)''=0.
$$ Now $$ \sum_{a=0}^N\left( \left|\frac{f_a'}{s''}\right|^2-k^2|f_a|^2\right)'=2\sum_{a=0}^N\re\left( \left(\frac{f_a'}{s''}\right)' \frac{\bar{f}_a'}{s''}-k^2f_a'\bar{f_a}\right) $$ and replacing in Equality (\ref{lap_S2}) we see that $$ \sum_{a=0}^N\left( \left|\frac{f_a'}{s''}\right|^2-k^2|f_a|^2\right)'=2\sum_{a=0}^N\re\left( \left(-\lambda+k^2s''\right)f_a \frac{\bar{f}_a'}{s''}-k^2f_a'\bar{f_a}\right), $$ that is, $$ \sum_{a=0}^N\left( \left|\frac{f_a'}{s''}\right|^2-k^2|f_a|^2\right)'=-2\lambda \sum_{a=0}^N\frac{\re(f_a\bar{f}_a')}{s''}. $$ This then implies that $$ \sum_{a=0}^N\frac{\re(f_a\bar{f}_a')}{s''} $$ is constant. Because $\frac{1}{s''}$ vanishes at $1$ and $-1$, this constant is actually zero, so $\sum_{a=0}^N{\re(f_a\bar{f}_a')}=0$ and $\sum_{a=0}^N|f_a|^2$ is constant. We look at two cases separately: \begin{itemize} \item In the case where $k\ne 0$, the $f_a$ all vanish at $1$ and $-1$, and so $\sum_{a=0}^N |f_a|^2=0$, so that $f_a=0$ for all $a$; a contradiction. \item In the case when $k=0$ we may assume that the $f_a$ are real. We have $$ \sum_{a=0}^N \left( \left(\frac{f_a'}{s''}\right)^2\right)''=2\sum_{a=0}^N \left( \left(\frac{f_a'}{s''}\right)''\frac{f_a'}{s''}+\left(\left(\frac{f_a'}{s''}\right)'\right)^2\right)=0 $$ and replacing Equality (\ref{lap_S2}) for $k=0$ again we find that $$ \begin{aligned} 0&=2\sum_{a=0}^N \left((-\lambda f_a)'\frac{f_a'}{s''}+\left(\left(\frac{f_a'}{s''}\right)'\right)^2\right)\\ &=2\sum_{a=0}^N \left(-\lambda\frac{(f_a')^2}{s''}+\lambda^2 f_a^2\right),\\ \end{aligned} $$ so $\sum_{a=0}^N \frac{(f_a')^2}{s''}= \lambda \sum_{a=0}^N f_a^2$ and hence it is constant. But, because $\frac{1}{s''}$ vanishes at $1$ and $-1$, $\sum_{a=0}^N \frac{(f_a')^2}{s''}=0$, so each $f_a'$ vanishes, which is also a contradiction. \end{itemize} \end{proof} \subsection{Proof of theorem \ref{alld}} We start with a useful calculation. \begin{lemma} In the same context as above, let $f$ be an invariant eigenfunction for the eigenvalue $\lambda$ of the Laplacian on a toric K\"ahler manifold with symplectic potential $s$. Then \begin{equation}\label{expande_dlambda} \frac{\partial^2 \left(s^{il}f_{l} s^{jr}f_{r}\right)}{\partial x_i \partial x_j}=\lambda^2 f^2-2\lambda\,\partial f^t({\rm{ Hess\,}} s)^{-1}\partial f+{\rm{ Tr \,}} (D(({\rm{ Hess\,}} s)^{-1}\partial f))^2. \end{equation} \end{lemma} \begin{proof} \begin{IEEEeqnarray*}{l} \frac{\partial^2 \left(s^{il}f_{l} s^{jr}f_{r}\right)}{\partial x_i \partial x_j}=\frac{\partial \left(s^{il}f_{l}\right)}{\partial x_i}\frac{\partial\left( s^{jr}f_{r}\right)}{\partial x_j}+2\frac{\partial^2 \left(s^{il}f_{l} \right)}{\partial x_i \partial x_j}s^{jr}f_{r}+ \frac{\partial (s^{il}f_{l})}{\partial x_j}\frac{\partial\left( s^{jr}f_{r}\right)}{\partial x_i} \\ =(-\lambda f)(-\lambda f)+2\frac{\partial (-\lambda f)}{ \partial x_j}s^{jr}f_{r}+\frac{\partial \left(s^{il}f_{l}\right)}{\partial x_j}\frac{\partial\left( s^{jr}f_{r}\right)}{\partial x_i} \\ =\lambda^2 f^2-2\lambda\,\partial f^t({\rm{ Hess\,}} s)^{-1}\partial f+\frac{\partial \left(s^{il}f_{l}\right)}{\partial x_j}\frac{\partial\left( s^{jr}f_{r}\right)}{\partial x_i}, \\ \end{IEEEeqnarray*} where we have used the fact that $$ \frac{\partial (s^{il}f_{l})}{\partial x_i}=-\lambda f. $$ Now $$\frac{\partial \left(s^{il}f_{l}\right)}{\partial x_j}=\left[D\left(({\rm{ Hess\,}} s)^{-1}\partial f\right)\right]_{ij}$$ and the result follows.
\end{proof} As a result of this calculation and of Proposition \ref{characterisation_crit}, it follows that the symplectic potential $s$ is $\lambda_1^{\bT}$-critical iff there are functions on $P$, $\{f_0,\cdots, f_N\}$, corresponding to invariant eigenfunctions in $E_1^\bT(s)$, satisfying $$ \sum_{a=0}^N \left(\lambda^2f_a^2-2\lambda\,\partial f_a^t({\rm{ Hess\,}} s)^{-1}\partial{f}_a+{\rm{ Tr \,}} (D(({\rm{ Hess\,}} s)^{-1}\partial{f}_a))^2\right)=0. $$ We are now ready to prove our main theorem. \begin{proof} Suppose that there exists a $\lambda_1^{\bT}$-critical metric on a toric K\"ahler manifold. We are going to derive a contradiction from this assumption. Let $P$ denote the moment polytope of our toric K\"ahler manifold. Assume without loss of generality that $0$ is a vertex of $P$ and that $P$ is standard at $0$. We can always achieve this by applying an $SL(n,\bZ)$ transformation, which lifts to an equivariant diffeomorphism taking $\lambda_1^{\bT}$-critical symplectic potentials to $\lambda_1^{\bT}$-critical symplectic potentials. We start by showing that \begin{equation}\label{main_crit_relation} \sum_{a=0}^N \left(\lambda^2f_a^2-2\lambda\,\partial f_a^t({\rm{ Hess\,}} s)^{-1}\partial{f}_a+{\rm{ Tr \,}} (D(({\rm{ Hess\,}} s)^{-1}\partial{f}_a))^2\right)=0 \end{equation} implies that $f_a(0)=0, \, \forall a=0,\cdots, N.$ The above relation holds at $x=0$. Now $({\rm{ Hess\,}} s)^{-1}(0)=0$ and we are going to show that $$ \sum_{a=0}^N{\rm{ Tr \,}} (D(({\rm{ Hess\,}} s)^{-1}\partial{f}_a))^2(0)=\sum_{a=0}^N|\partial f_a|^2(0). $$ It then follows that $f_a(0)=0$ and $\partial f_a(0) =0, \, \forall a=0,\cdots, N.$ Because $s\in \mbox{Spot}(P)$, there is $v\in \mathcal{C}^\infty (\bar{P})$ such that $s=s_G+v$, where $s_G=\sum_{k=1}^d \left(\left(x\cdot \nu_k-c_k\right)\log(x\cdot \nu_k-c_k)-\left(x\cdot \nu_k-c_k\right)\right)$ and $$ P=\left\{x\in \bR^n: x\cdot \nu_l-c_l>0, \, l=1,\cdots, d\right\}. $$ It is not hard to see that $$ {\rm{ Hess\,}} s_G=\sum_{l=1}^d \frac{\nu_l\nu_l^t}{x\cdot \nu_l-c_l}. $$ Because $P$ is standard at zero, $\{\nu_1,\cdots,\nu_n\}$ is the canonical basis of $\bR^n$, so that $$ {\rm{ Hess\,}} s_G= \left( \begin{array}{ccc} \frac{1}{x_1} & \cdots &0 \\ \vdots & \ddots &\vdots \\ 0 & \cdots & \frac{1}{x_n} \\ \end{array} \right)+A, $$ where $A$ is smooth on a neighbourhood of $0.$ Hence, on a neighbourhood of $0$, there is a smooth $B$ such that $$ {\rm{ Hess\,}} s=\left( \begin{array}{ccc} \frac{1}{x_1} & \cdots &0 \\ \vdots & \ddots &\vdots \\ 0 & \cdots & \frac{1}{x_n} \\ \end{array} \right)+B. $$ So \begin{equation}\label{hess_0} ({\rm{ Hess\,}} s)^{-1}=\mbox{Diag}(x_1,\cdots, x_n)-\mbox{Diag}(x_1,\cdots, x_n)B\,\mbox{Diag}(x_1,\cdots, x_n)+\cdots \end{equation} and therefore, writing $\partial_lf=f_l,\, l=1,\cdots, n$, $$ ({\rm{ Hess\,}} s)^{-1}\partial f=\left( \begin{array}{c} {x_1}f_1 \\ \vdots \\ {x_n}f_n \\ \end{array} \right)+O(2), $$ where for any positive integer $l$, $O(l)$ denotes a function which vanishes to order at least $l$ at zero, i.e.\ a function which is bounded by $C\|x\|^l$ on some neighbourhood of zero for some constant $C$.
Hence $$ D(({\rm{ Hess\,}} s)^{-1}\partial f)=\left( \begin{array}{ccc} \partial_1({x_1} f_1)& \cdots &x_1f_{1n} \\ \vdots & \ddots &\vdots \\ x_ nf_{1n}& \cdots& \partial_n({x_n} f_n)\\ \end{array} \right)+O(1), $$ where $f_{ij}=\frac{\partial^2f}{\partial x_i\partial x_j}$ for all $i,j=1,\cdots, n$, and $$ {\rm{ Tr \,}} (D(({\rm{ Hess\,}} s)^{-1}\partial f))^2=\sum_{l=1}^n(f_l+x_lf_{ll})^2+\sum_{l,r=1, l\ne r}^n x_lx_rf_{lr}^2+O(1). $$ In particular ${\rm{ Tr \,}} (D(({\rm{ Hess\,}} s)^{-1}\partial f))^2(0)=\sum_{l=1}^n(f_l)^2(0)=|\partial f|^2(0)$, as claimed. Next we want to prove that if we assume that $f_a=O(m)$ for all $a=0,\cdots, N$ and some integer $m>1$, then in fact $f_a=O(m+1).$ Consider the equality $$ \sum_{a=0}^N \left(\lambda^2f_a^2-2\lambda\,\partial f_a^t({\rm{ Hess\,}} s)^{-1}\partial{f}_a+{\rm{ Tr \,}} (D(({\rm{ Hess\,}} s)^{-1}\partial{f}_a))^2\right)=0. $$ \begin{itemize} \item Because $f_a=O(m)$, it follows that $\lambda^2\sum_{a=0}^N f_a^2=O(2m).$ \item It follows from Equation (\ref{hess_0}) that $({\rm{ Hess\,}} s)^{-1}=O(1)$, and since $\partial f_a=O(m-1)$, $\lambda \sum_{a=0}^N \partial f_a^t({\rm{ Hess\,}} s)^{-1}\partial{f}_a=O(2m-1).$ \item As for $\sum_{a=0}^N {\rm{ Tr \,}} (D(({\rm{ Hess\,}} s)^{-1}\partial{f}_a))^2$, to study its asymptotic behaviour near $0$ we essentially need to retrace the steps in the above analysis taking into account that $f_a=O(m)$. If $f=O(m)$ then $$ ({\rm{ Hess\,}} s)^{-1}\partial f=\left( \begin{array}{c} {x_1}f_1 \\ \vdots \\ {x_n}f_n \\ \end{array} \right)+O(m+1), $$ $$ D(({\rm{ Hess\,}} s)^{-1}\partial f)=\left( \begin{array}{ccc} \partial_1({x_1} f_1)& \cdots &x_1f_{1n} \\ \vdots & \ddots &\vdots \\ x_ nf_{1n}& \cdots& \partial_n({x_n} f_n)\\ \end{array} \right)+O(m), $$ and $$ \left( \begin{array}{ccc} \partial_1({x_1} f_1)& \cdots &x_1f_{1n} \\ \vdots & \ddots &\vdots \\ x_ nf_{1n}& \cdots& \partial_n({x_n} f_n)\\ \end{array} \right)=O(m-1), $$ so that $$ {\rm{ Tr \,}} (D(({\rm{ Hess\,}} s)^{-1}\partial f))^2=\sum_{l=1}^n(f_l+x_lf_{ll})^2+\sum_{l,r=1, l\ne r}^n x_lx_rf_{lr}^2+O(2m-1). $$ At this point we may conclude from Equation (\ref{main_crit_relation}) and the analysis above that $$ \sum_{a=0}^N\left( \sum_{l=1}^n(f_{a,l}+x_lf_{a,ll})^2+\sum_{l,r=1, l\ne r}^n x_lx_rf_{a,lr}^2\right)=O(2m-1), $$ when a priori this expression is only $O(2m-2)$. Consider the Taylor expansion of $f_a$ around $0$. We have $f_a=P_a+O(m+1)$, where $P_a$ is a homogeneous polynomial of degree $m$. Therefore $$ \sum_{a=0}^N\left( \sum_{l=1}^n(\partial_l(x_lP_{a,l}))^2+\sum_{l,r=1, l\ne r}^n x_lx_rP_{a,lr}^2\right) $$ must vanish to order $2m-1$ at $0$. Let $v=(x_1,\cdots,x_n)$ be a generic vector in $\{x=(x_1,\cdots,x_n)\in \bR^n: x_1,\cdots,x_n>0\}$; then $$ t^{2m-2}\sum_{a=0}^N\left( \sum_{l=1}^n(\partial_l(x_lP_{a,l}))^2(v)+\sum_{l,r=1, l\ne r}^n x_lx_rP_{a,lr}^2(v)\right) $$ must be of order at least $2m-1$ in $t$, so that $$ \sum_{a=0}^N\left( \sum_{l=1}^n(\partial_l(x_lP_{a,l}))^2(v)+\sum_{l,r=1, l\ne r}^n x_lx_rP_{a,lr}^2(v)\right)=0, $$ and, because all terms in the sum are non-negative, they must all vanish. We conclude that $\partial_l(x_lP_{a,l})\equiv0$ and $P_{a,lr}\equiv0$, so that $P_a$ must be constant for all $a=0,\cdots, N$. Because $P_a$ is homogeneous of degree greater than $1$, it must actually vanish, so that $f_a=O(m+1)$, as claimed. \end{itemize} Since we have proved that $f_a=O(2)$ and that $f_a=O(m)\implies f_a=O(m+1)$, it follows that all derivatives of $f_a$ vanish at zero for all $a=0,\cdots, N$. At this point we use the analyticity hypothesis.
Because our Riemannian metric is analytic, the eigenfunctions for its Laplace operator are analytic as well. This follows from elliptic regularity. We may then conclude that $f_a\equiv 0$ for all $a$, which is impossible. Hence no critical metric exists. \end{proof} \section{Concluding remarks} We would like to be able to use the equations that we derived from $\lambda_1^{\bT}$-criticality on the metric and the corresponding eigenfunctions to conclude that both the metric and the eigenfunctions are analytic. The symplectic potential of a $\lambda_1^{\bT}$-critical metric and its eigenfunctions satisfy the following system of PDEs for functions on $P$: \begin{equation}\label{systemPDE} \begin{cases} \frac{\partial}{\partial x_i}\left(s^{ij}\frac{\partial f_a}{\partial x_j}\right)=-\lambda_1^{\bT} f_a,\, \forall a=0,\cdots, N,\\ \sum_{a=0}^N \frac{\partial^2\left(s^{il}f_{a,l}s^{jr}f_{a,r}\right)}{\partial x_i\partial x_j}=0. \end{cases} \end{equation} This can be written in the form $F(x,s,f, \partial s, \partial f,\cdots)=0$ for an analytic function $F$ (here we write $f=(f_0,\cdots,f_N)$). It would follow from a result of Morrey (see \cite{m}) that if this system were elliptic in some suitable sense then its solutions would be analytic. In fact the system is not elliptic. We will prove this here for the sake of completeness. \begin{lemma} The system (\ref{systemPDE}) is nowhere elliptic. \end{lemma} \begin{proof} This is essentially a matter of chasing through the definition of ellipticity. See \cite{m} for more details. Writing $F=(F_0,\cdots, F_N, F_{N+1})$ with $$ \begin{cases} F_a=\frac{\partial}{\partial x_i}\left(s^{ij}\frac{\partial f_a}{\partial x_j}\right)+\lambda_1^{\bT} f_a,\, \forall a=0,\cdots, N,\\ F_{N+1}=\sum_{a=0}^N \frac{\partial^2\left(s^{il}f_{a,l}s^{jr}f_{a,r}\right)}{\partial x_i\partial x_j}, \end{cases} $$ we essentially want to calculate $\det DF$. We start by calculating each partial derivative. We set $f_{N+1}=s$, and below we will omit the dependence of $F$ on variables that are fixed. \begin{enumerate} \item Given $a=0,\cdots, N$, $$ \frac{d }{dt}_{|t=0}F_a(f_a+tv)=\frac{\partial}{\partial x_i}\left(s^{ij}\frac{\partial v}{\partial x_j}\right)+\lambda_1^{\bT} v, $$ so that $$ L_{aa}(x,D)=D_is^{ij}D_j=D^t({\rm{ Hess\,}} s)^{-1}D, $$ where we have used the notation in \cite{m}. \item Also, given $a,b<N+1$ distinct, $$ \frac{d }{dt}_{|t=0}F_a(f_b+tv)=0, \, a\ne b, $$ so that $$ L_{ab}(x,D)=0, \, a\ne b. $$ \item Now, given $a<N+1$, the derivative of $F_a$ with respect to $s$ is given by $$ \frac{d }{dt}_{|t=0}F_a(s+tv)=-\frac{\partial}{\partial x_i}\left(s^{il}v_{lr}s^{rj}\frac{\partial f_a}{\partial x_j}\right), $$ and $$ L_{a,N+1}(x,D)=-D_is^{il}D_lD_rs^{rj}\frac{\partial f_a}{\partial x_j}=-D^t({\rm{ Hess\,}} s)^{-1}D\,D^t({\rm{ Hess\,}} s)^{-1}\partial f_a. $$ \item As for the derivative of $F_{N+1}$ with respect to $f_a$ for $a<N+1$, $$ \frac{d }{dt}_{|t=0}F_{N+1}(f_a+tv)= 2\frac{\partial^2 \left(s^{il}f_{a,l}s^{jr}v_{r}\right) }{\partial x_i\partial x_j}, $$ and $$ \begin{aligned} L_{N+1,a}(x,D)&= 2D_iD_js^{il}f_{a,l}s^{jr}D_r\\ &=2D^t({\rm{ Hess\,}} s)^{-1}D\,D^t({\rm{ Hess\,}} s)^{-1}\partial f_a. \end{aligned} $$ \item Last, we calculate the derivative of $F_{N+1}$ with respect to $s$: $$ \frac{d }{dt}_{|t=0}F_{N+1}(s+tv)=-2\sum_{a=0}^N \frac{\partial^2\left(s^{iq}v_{qp}s^{pl}f_{a,l}s^{jr}f_{a,r}\right)}{\partial x_i\partial x_j}, $$ and $$ \begin{aligned} L_{N+1,N+1}(x,D)&=-2\sum_{a=0}^N D_iD_js^{iq}D_qD_ps^{pl}f_{a,l}s^{jr}f_{a,r}\\ &=-2D^t({\rm{ Hess\,}} s)^{-1}D \sum_{a=0}^N(D^t({\rm{ Hess\,}} s)^{-1}\partial f_a)^2.
\end{aligned} $$ \end{enumerate} To sum up: \begin{center} \begin{tabular}{|l|l|l| } \hline &$\frac{d }{dt}_{|t=0}F_a(f_b+tv)$ & $L_{ab}(x,D)$ \\ \hline \hline $a=b<N+1$&$\frac{\partial}{\partial x_i}\left(s^{ij}\frac{\partial v}{\partial x_j}\right)+\lambda_1^{\bT} v$ & $D^t({\rm{ Hess\,}} s)^{-1}D$ \\ \hline $a,b<N+1, a\ne b$&$0$ & $0$ \\ \hline $a<N+1, b=N+1$ &$-\frac{\partial}{\partial x_i}\left(s^{il}v_{lr}s^{rj}\frac{\partial f_a}{\partial x_j}\right)$ & $-D^t({\rm{ Hess\,}} s)^{-1}D\,D^t({\rm{ Hess\,}} s)^{-1}\partial f_a$ \\ \hline $a=N+1, b<N+1$ &$2\frac{\partial^2 \left(s^{il}f_{b,l}s^{jr}v_r\right)}{\partial x_i\partial x_j}$ & $2D^t({\rm{ Hess\,}} s)^{-1}D\,D^t({\rm{ Hess\,}} s)^{-1}\partial f_b$ \\ \hline $a=b=N+1$&$-2\sum_{a=0}^N \frac{\partial^2\left(s^{iq}v_{qp}s^{pl}f_{a,l}s^{jr}f_{a,r}\right)}{\partial x_i\partial x_j}$ & $-2D^t({\rm{ Hess\,}} s)^{-1}D \sum_{a=0}^N(D^t({\rm{ Hess\,}} s)^{-1}\partial f_a)^2$ \\ \hline \end{tabular} \end{center} The system is elliptic iff $$ \det DF:=\det(L_{ab}(x,D))_{a,b=0}^{N+1}\ne 0,\, \forall D\ne 0. $$ Now $DF$ is given by $D^t ({\rm{ Hess\,}} s)^{-1}D$ times $$ \left( \begin{array}{cccc} 1&\cdots & 0& -D^t({\rm{ Hess\,}} s)^{-1}\partial f_0\\ \vdots & \ddots &\vdots &\vdots \\ 0& \cdots& 1&-D^t({\rm{ Hess\,}} s)^{-1}\partial f_N\\ 2D^t({\rm{ Hess\,}} s)^{-1}\partial f_0&\cdots&2D^t({\rm{ Hess\,}} s)^{-1}\partial f_N&-2\sum_{a=0}^N(D^t({\rm{ Hess\,}} s)^{-1}\partial f_a)^2\\ \end{array} \right). $$ The matrix above is clearly singular at all points: writing $u_a=D^t({\rm{ Hess\,}} s)^{-1}\partial f_a$, its last row equals $\sum_{a=0}^N 2u_a\cdot(\text{row }a)$, i.e.\ a linear combination of the previous $N+1$ rows. \end{proof}
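For completeness, the singularity of this matrix is also easy to check symbolically. The following small script is a sanity check only, not part of the argument; it uses Python with SymPy, treats $u_a=D^t({\rm{ Hess\,}} s)^{-1}\partial f_a$ as free symbols, and the size $N=3$ is an arbitrary choice.

\begin{verbatim}
# Sanity check: the (N+2)x(N+2) matrix with rows
#   e_a with last entry -u_a (a = 0..N), and
#   (2u_0, ..., 2u_N, -2*sum_a u_a^2)
# is singular: the last row is sum_a 2*u_a*(row a).
import sympy as sp

N = 3  # arbitrary; the identity holds for any N
u = sp.symbols(f'u0:{N + 1}')

rows = []
for a in range(N + 1):
    row = [sp.Integer(0)] * (N + 2)
    row[a] = sp.Integer(1)   # identity block
    row[-1] = -u[a]          # last column: -u_a
    rows.append(row)
rows.append([2 * ua for ua in u] + [-2 * sum(ua**2 for ua in u)])

M = sp.Matrix(rows)
print(sp.expand(M.det()))  # prints 0
\end{verbatim}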
\section{Introduction} Search systems are widely used to help users retrieve information from large and often difficult-to-categorize data sources. Search engines are typically complicated ecosystems that contain many components. As shown in Figure \ref{figure:overview}, after a user issues a query, the language generation modules \cite{bar2011, li2006, cao2008} serve as search assistants to generate a better query. The language understanding modules \cite{kang2003query,guo2009,blanco2015} aim to extract semantic information from the query and documents, such as user intent and named entities. Finally, all the extracted information is used for document retrieval and ranking. A common trait of these search components is that they all deal with large amounts of text data, such as queries, user profiles and documents. Text data is sequential, and understanding such sequential information is a nontrivial task with traditional methods. For example, the systems need to handle (1) synonyms/similar concepts, {\it e.g.}, ``software engineer'' vs ``programmer'', (2) disambiguation, {\it e.g.}, ``job opening'' vs ``restaurant opening'', and (3) word importance weighting, {\it e.g.}, the important words in the query ``looking for a research scientist job'' are ``research scientist'', and so on. Traditional methods rely heavily on sparse features such as unigram features, which do not have strong generalization power to handle these cases. On the other hand, deep learning has shown great success in NLP tasks \cite{lecun2015}, indicating its potential in search systems. In addition to alleviating the language problems mentioned above, there are several other benefits of using deep learning, such as end-to-end training, automatic feature engineering, etc. Developing deep NLP models for search systems requires considering three challenges of the complicated ecosystem of search engines \cite{Croft2010,mitra2018}. Firstly, serving \textbf{latency} constraints preclude complex models from being used in production. In addition, directly trained deep learning models can have \textbf{robustness} issues such as overfitting. The last challenge is \textbf{effectiveness}: production models are often very strong baselines that are trained on millions of examples with many handcrafted features, and have been tweaked for years. In this paper, we focus on developing practical solutions to tackle these three challenges, and share real-world experiences. As shown in Figure \ref{figure:tasks}, we have picked five representative search tasks that cover the classic NLP problems: classification, ranking, sequence tagging, language modeling, and sequence-to-sequence. For each of the tasks, we investigate the unique challenges, provide practical deep NLP solutions, and analyze the offline/online experiment results. By providing a comprehensive study, we hope the readers can not only learn how to handle the different challenges in search systems, but also generalize and apply the lessons to new tasks in other industry applications such as recommender systems. The contributions of the paper are: \begin{itemize} \item To the best of our knowledge, this is the first comprehensive study of applying deep NLP models in production search systems. Five tasks in search systems are selected, which cover the most common NLP problems. For each task, we provide practical solutions, and report experiments in LinkedIn's commercial search engines. \item We go beyond the task boundary and summarize the observations and solutions into lessons.
We believe our experience will be a valuable resource for other vertical searches, as well as other applications such as recommender systems. \item We successfully deploy BERT to two real-world production systems: query intent and document ranking. \end{itemize} \begin{figure}[ht] \centering \includegraphics[scale=0.5]{figures/system-overview.pdf} \caption{Overview of a search system.} \label{figure:overview} \end{figure} \section{Search Systems at LinkedIn} \subsection{LinkedIn Search Systems Overview} LinkedIn provides multiple vertical searches, each corresponding to a document type, e.g., \textit{people}, \textit{job}, \textit{company}, etc. In this paper, the experiments are conducted on 3 vertical searches (\textit{people search, job search, help center search}) and \textit{federated search}. Federated search retrieves documents from all vertical searches and blends them into the same search result page. When users go to LinkedIn, federated search is the default. People search is the most popular search engine, which retrieves member profiles; job search returns job posts for job seekers; help center search provides answers on how to use LinkedIn, with a lot of natural language queries, \textit{e.g.}, ``how to change my account password''. A typical search system is shown in Figure \ref{figure:overview}. There are three main components: (1) \textbf{Language understanding} constructs important features for the other two components; (2) \textbf{Language generation} improves user experience by suggesting queries that are more likely to lead to desirable search results; (3) \textbf{Document retrieval \& ranking} produces the final results of search systems and presents them to users. \subsection{Characteristics of Vertical Search Data} Search data is different from classic NLP task data, mainly in two aspects: data genre and training data size. In classic NLP datasets \cite{pang2005,hu2004}, the data unit is one or several complete sentences with dozens of words in proper grammar. In a search system, queries consist of a few keywords without grammar, which introduces ambiguity. Meanwhile, LinkedIn searches are vertical searches rather than web search such as Google. The vocabulary consists mostly of domain-specific entities, \textit{e.g.}, people names, companies, skills, etc. In addition, the training data size is noteworthy. Classic NLP datasets are human annotated, therefore the size is usually around tens of thousands of sentences, \textit{e.g.}, 11k training examples in the sentiment analysis dataset \cite{pang2005}. However, search training data is usually derived from click-through data, hence contains millions of noisy training examples. \subsection{Challenges of Deep NLP for Search} There are several common challenges in applying deep NLP models to search systems. The first challenge is the online production \textbf{latency}. Deep learning models are known for their large compute time. Assuming the time complexity of a bag-of-words model is $O(n)$, where $n$ is the number of words in a sentence, then an LSTM model \cite{Hochreiter1997} has a time complexity of $O(nd^2)$, where $d$ is the number of dimensions used for word embeddings. In this paper, we analyze the unique latency challenge faced by each task, and provide multiple practical solutions to resolve or mitigate the latency issue. The second challenge is \textbf{robustness}. Deep learning models have many parameters, hence are more likely to overfit the training data and ignore infrequent patterns.
For example, in query tagging, the LSTM models always recognize the query ``linkedin facebook'' as one company, since queries with two companies are rare. In addition, deep NLP models tend to over-generalize word semantics, \textit{e.g.}, in people search, professor profiles are matched to the query ``student''. In this paper, we show several tasks that could lead to overfitting, and provide practical solutions, such as more careful training data creation, to tackle it. The third challenge is \textbf{effectiveness}. The traditional production models are usually optimized over many iterations and are trained with millions of examples, hence hard to beat. In general, we reuse the existing handcrafted features to alleviate the issue. We also analyze why deep learning models do not work well in some scenarios. \begin{figure}[ht] \centering \includegraphics[scale=0.5]{figures/representative-tasks.pdf} \caption{Deep NLP models for representative search tasks.} \label{figure:tasks} \end{figure} \subsection{Search Tasks} The core of deep NLP is sequential modeling with CNN/\-LSTM/\-Transformer \cite{Lecun1995,Hochreiter1997,vaswani2017} networks to generate word or sequence embeddings. By adding specific loss functions on top of these embeddings, they can serve different NLP problems: predicting a sequence-level label is a classification problem; predicting a label for each word is a sequence tagging problem, etc. In total there are five common NLP problems: classification, ranking, sequence tagging, language modeling, and sequence-to-sequence. For each of the NLP problems, we present a corresponding search task in Figure \ref{figure:tasks}. These search tasks cover major components in search systems, ranging from language understanding/generation to document ranking. We also present a fundamental model pre-training task \cite{devlin2019} that can potentially benefit all tasks. \section{Representative Search Tasks} \label{section:five-task} In this section, we introduce each specific task, outline the challenges, show how to overcome the challenges, and analyze the offline/online experiment results. The experiments are conducted on the LinkedIn English market. Offline results are reported on the test set. All reported online metrics are statistically significant with $p < 0.05$. \subsection{Query Intent Prediction} \subsubsection{Introduction} Query intent prediction \cite{kang2003query,hu2009understanding} is an important component in modern search engines. Query intent is used in federated search to predict the probability of a user's intent towards seven search verticals: \textit{people}, \textit{job}, \textit{feed}, \textit{company}, \textit{group}, \textit{school}, \textit{event}. The predicted intent is an important feature leveraged by downstream tasks such as search result blending \cite{li2008learning} to rank documents from relevant search verticals higher. The challenge of this task is that there are very few words in a query, hence it is hard to disambiguate words, e.g., ``michael dell'' (a person name) vs ``dell engineer jobs'' (a company). Deep NLP models can alleviate this issue, especially BERT, which produces contextualized word embeddings \cite{devlin2019} (more details can be found in Section \ref{section:bert-pretraining}). \subsubsection{Approach} The query intent prediction task is modeled as a multi-class classification problem. CNNs have achieved significant performance gains in text classification problems \cite{kalchbrenner2014convolutional,kim2014,hashemi2016}.
In our approach (Figure \ref{figure:qim-cnn}), we combine the extracted text embedding with handcrafted features, and use a hidden layer to enable feature non-linearity. The existing handcrafted features are powerful. For example, the query tagger (Section \ref{section:qt}) feature can identify almost all people names accurately, including those out of the word embedding vocabulary. The member behavior features enable personalization (e.g., whether a user clicks on job postings in a certain period of time). \begin{figure}[ht] \centering \includegraphics[scale=0.5]{figures/qim-cnn.pdf} \caption{\small CNN based query intent prediction.} \label{figure:qim-cnn} \end{figure} \subsubsection{Experiments} The label is inferred from click-through behaviors in the search log: if a user clicked on a document from one of the seven verticals, then the query is assigned the corresponding vertical label. We use 24M queries for training and 50k for dev and test. Besides the production baseline model, another baseline is a bidirectional LSTM \cite{graves2013}. For the CNN and LSTM models, the vocabulary size is 100k with word embedding dimension 64; the word embedding is pre-trained with GloVe \cite{pennington2014};\footnote{In the five tasks except question suggestion, we always pre-train the word embedding on millions of training examples with GloVe. In query tagging, we add additional queries and member profiles to enrich the corpus for pre-training. We find word embedding pre-training always yields comparable or better relevance performance.} the hidden layer dimension is 200 (Figure \ref{figure:qim-cnn}). In the CNN model, 128 filters of size 3 are used to capture word tri-grams. In the LSTM, the hidden state size is 128. \subsubsection{Results} The baseline model is the production model: logistic regression on bag-of-words features and other handcrafted features. The offline relevance and latency performance are shown in Table \ref{table:offline-qim-cnn}. Both the CNN and LSTM models outperform the production model, meaning that the features automatically extracted by CNN/LSTM can capture the query intents. For online experiments (Table \ref{table:qim-cnn-online}), we choose CNN instead of LSTM, since the relevance difference is small but CNN is faster than LSTM. The online results show CNN increases the job document click metrics. \begin{table} \centering \caption{\small Offline comparison w.r.t.\ production baseline.} \label{table:offline-qim-cnn} \begin{tabular}{lcc} \toprule & \textbf{Accuracy} & \textbf{P99 Latency} \\ \midrule LR (baseline) & - & - \\ CNN w/o handcrafted features & $+1.03\%$ & $+0.44$ms \\ CNN & $+1.49\%$ & $+0.45$ms \\ LSTM & $+1.61\%$ & $+0.96$ms \\ \bottomrule \end{tabular} \end{table} \begin{table} \centering \caption{\small Online comparison for the CNN based query intent prediction model vs the production baseline. CTR@5 is the proportion of searches that received a click on the top 5 items. } \label{table:qim-cnn-online} \begin{tabular}{lc} \toprule \textbf{Metrics} & \textbf{Percentage Lift} \\ \midrule CTR@5 of job posts & $+0.43\%$ \\ \bottomrule \end{tabular} \end{table} \subsubsection{Related Work} The traditional methods use many handcrafted features, such as unigrams, language model scores, lexicon matching features, etc.\ \cite{arguello2009sources, cao2009context}. For deep NLP models, offline experiments with CNNs have been conducted on 10k queries \cite{hashemi2016}.
In contrast, our CNN model is trained on millions of queries with many handcrafted features (personalization, query tagger, etc.). We also provide a detailed analysis of latency and online impact for the vertical prediction task. \subsection{Query Tagging} \label{section:qt} \subsubsection{Introduction} The goal of query tagging is to identify the named entities in queries. At LinkedIn, we are interested in 7 types of entities: \textit{first name, last name, company name, school name, geolocation, title, and skill}. After entities are identified, many important features can be constructed for downstream tasks such as query intent prediction or search ranking. Query tagging is not a trivial task. For example, lexicon matching cannot resolve even a simple case such as "research scientist", because three lexicon entries match: "research" as a skill, "scientist" as a title, and "research scientist" as a title. In addition, for ambiguous queries, the query tagger should produce the most probable hypothesis, \textit{e.g.}, "vera wang" as a company rather than a person. \subsubsection{Approach} Query tagging is a named entity recognition task on query data. The production model uses three categories of features: character-based, word-based and lexicon-based, as summarized in Table \ref{table:qt-ftr}. It is worth noting that we are able to extract powerful lexicon features by leveraging a large amount of user-generated data, \textit{i.e.}, collecting the lexicon items from the corresponding fields of 600 million member profiles. Because of this, we choose the semi-Markov conditional random field (SCRF) \cite{sarawagi2005} as the baseline model, since it can better exploit lexicon features than a CRF \cite{lafferty2001}. The bidirectional LSTM-CRF architecture \cite{lample2016} has proven to be a successful model on classic NLP datasets \cite{sang2003}. We further extend it to a bidirectional LSTM-SCRF; essentially, the deep part, the bidirectional LSTM, replaces the word-based features in Table \ref{table:qt-ftr}. \begin{table} \centering \caption{\small Features used in the SCRF based query tagger.} \label{table:qt-ftr} \begin{tabular}{ll} \toprule \textbf{Type} & \textbf{Description}\\ \midrule char based & prefix/suffix features, such as "er", "ist" \\ word based & word \\ & lemma \\ & brown cluster id \cite{brown1992} \\ & bigram with previous word, bigram with next word \\ lexicon & profile lexicon, collected from member profiles \\ & clickthrough lexicon, collected from clickthrough data \\ \bottomrule \end{tabular} \end{table} \subsubsection{Experiments} \label{section:qt-exp} \begin{table} \centering \caption{\small Query tagging results, measured by F1 score.} \label{table:qt-res} \begin{tabular}{lll} \toprule \textbf{Model} & \textbf{Hand-crafted Ftrs} & \textbf{F1}\\ \midrule SCRF (baseline) & char/word/lexicon & - \\ CRF & char/word/lexicon & $-0.6\%$ \\ SCRF-nolex & char/word & $-6.1\%$ \\ \midrule LSTM-SCRF & char/lexicon & $-0.3\%$ \\ LSTM-SCRF-all & char/word/lexicon & $-0.1\%$ \\ \bottomrule \end{tabular} \end{table} Queries from LinkedIn federated search are collected and manually annotated. In addition, a few thousand queries are generated to overcome the robustness problem (explained later). In total, we have 100k training queries, 5k dev and 5k test queries. For all models, the Adagrad optimizer \cite{duchi2011} is used with learning rate $10^{-3}$; batch size is 100; word embedding size is 50. These hyperparameters are tuned on the dev set.
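For reference, the deep part of the tagger described above can be sketched as follows. This is a minimal bidirectional LSTM with a plain CRF output layer rather than the semi-Markov variant; it omits the char and lexicon features, assumes the third-party \texttt{pytorch-crf} package, and all names are illustrative rather than the production implementation.

\begin{verbatim}
import torch
import torch.nn as nn
from torchcrf import CRF  # pip install pytorch-crf

class BiLSTMTagger(nn.Module):
    # n_tags covers the 7 entity types plus "other" under a simple
    # one-tag-per-entity scheme (illustrative).
    def __init__(self, vocab_size=100_000, emb_dim=50, hidden=128,
                 n_tags=8):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True,
                            bidirectional=True)
        self.emit = nn.Linear(2 * hidden, n_tags)  # per-word tag scores
        self.crf = CRF(n_tags, batch_first=True)

    def loss(self, token_ids, tags, mask):
        emissions = self.emit(self.lstm(self.emb(token_ids))[0])
        return -self.crf(emissions, tags, mask=mask)  # neg. log-likelihood

    def decode(self, token_ids, mask):
        emissions = self.emit(self.lstm(self.emb(token_ids))[0])
        return self.crf.decode(emissions, mask=mask)  # best tag sequences
\end{verbatim}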
\subsubsection{Results} The traditional SCRF method achieves the best results in Table \ref{table:qt-res}. LSTM-SCRF uses all the handcrafted features except the word-based ones; however, it cannot outperform the SCRF baseline, and neither can LSTM-SCRF-all with all handcrafted features. Since there is no significant offline gain, these models are not deployed online. We believe the major reason is the strength of the lexicon features, so the LSTM adds little; the performance of SCRF-nolex (without lexicon-based features) also indicates that the lexicon features are the most important ones. Indeed, looking at the data, we found that most entities are already covered by the lexicons, which are built on large-scale data. Another reason could be the data genre: queries are much shorter than natural language sentences, so the LSTM's ability to extract long-distance dependencies is not helpful in this task. \subsubsection{Related Work} Early work uses CRF/SCRF to extract entities in queries \cite{li2009extracting,guo2008unified}. SCRF \cite{sarawagi2005} yields superior performance by better exploiting lexicon features. Deep NLP models such as LSTM-CRF \cite{lample2016,Ma:16} are mainly designed for natural language sentences, where the LSTM is able to generate more powerful features by summarizing long-distance dependencies. In this paper, we presented a strong production baseline, SCRF with lexicons constructed from large-scale datasets, and analyzed why deep NLP models failed to improve its relevance performance. \subsection{Document Ranking} \subsubsection{Introduction} Ranked documents are the final results presented to users. Given a query, a searcher profile and a set of retrieved documents, the goal is to assign a relevance score to each document and generate the ranking. Latency is the biggest challenge for this task: although scoring a single document is comparable in complexity to the query intent/tagging tasks, the data unit of document ranking is a whole set of retrieved documents, so the absolute time cost is not affordable. The other challenge comes from effectiveness: the production model has been optimized over many iterations, with many strong handcrafted features. In this paper, the experiments are conducted on people search and help center search. Table \ref{table:ranking-data} shows the statistics of the two searches. \subsubsection{Approach} The production model uses XGBoost \cite{chen2016} for training, which works well with large-scale training data and is effective in modeling feature non-linearity. In people search, many non-text features are used, such as personalized features based on social network structures and past user behaviors, and document popularity features based on search log statistics. In addition, millions of training examples are available for people search. \begin{table} \centering \footnotesize \caption{\small Statistics of the two document ranking datasets.} \label{table:ranking-data} \begin{tabular}{lp{3cm}p{3cm}} \toprule & \textbf{People Search} & \textbf{Help Center} \\ \midrule \# docs & 600M & 2,700\\ \# training data (queries) & 5M & 340,000\\ \bottomrule \end{tabular} \end{table} \begin{figure}[ht] \centering \includegraphics[scale=0.7]{figures/detext-model-clear.pdf} \caption{Model architecture for the ranking model (the learning-to-rank layer is not shown).} \label{figure:ranking-model} \end{figure} One benefit of a deep learning approach is that we can easily combine the deep NLP based semantic matching with other techniques that prove to be effective. Figure \ref{figure:ranking-model} shows the architecture: the input is a query, multiple document fields, and a handcrafted feature vector. After text embeddings are extracted, a cosine similarity is computed for each query/document-field embedding pair. The cosine similarities are combined with the handcrafted features, and a Multi-Layer Perceptron \cite{pal1992} with a hidden layer is applied, followed by a learning-to-rank layer. In general, we extend previous work \cite{Huang2013,Zamani2018} by combining handcrafted features with the cosine similarities and adding a hidden layer; a sketch of this scorer is given below.
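The sketch abstracts the text encoders as simple bag-of-embeddings encoders; the number of fields, the handcrafted-feature width and all names are illustrative assumptions, not the production implementation.

\begin{verbatim}
import torch
import torch.nn as nn
import torch.nn.functional as F

class FieldRanker(nn.Module):
    def __init__(self, vocab=100_000, dim=64, n_fields=3,
                 n_handcrafted=20, hidden=200):
        super().__init__()
        # Stand-in text encoder: mean of word embeddings.
        self.emb = nn.EmbeddingBag(vocab, dim)
        self.hidden = nn.Linear(n_fields + n_handcrafted, hidden)
        self.out = nn.Linear(hidden, 1)

    def forward(self, query_ids, field_ids_list, handcrafted):
        q = self.emb(query_ids)                       # (batch, dim)
        # One cosine similarity per query/document-field pair.
        sims = [F.cosine_similarity(q, self.emb(f), dim=1)
                for f in field_ids_list]
        x = torch.cat([torch.stack(sims, dim=1), handcrafted], dim=1)
        return self.out(F.relu(self.hidden(x))).squeeze(1)  # doc score

ranker = FieldRanker()
q = torch.randint(0, 100_000, (4, 5))                 # 4 queries
fields = [torch.randint(0, 100_000, (4, 8)) for _ in range(3)]
scores = ranker(q, fields, torch.randn(4, 20))        # (4,)
\end{verbatim}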
\noindent\textbf{Online Production Deployment.} As mentioned before, latency is a major issue for ranking tasks. To reduce latency, we use two different online deployment strategies, depending on the particular production environment. For the help center task, since there are only thousands of documents in total, we pre-compute the document embeddings. By doing this, the online computation is reduced to query embedding extraction and the cosine similarities between the query and document embeddings. Pre-computing the document embeddings would require nontrivial infrastructure changes for people search, where there are 600 million documents (member profiles): it takes a lot of space to store the embeddings, as well as complicated designs to keep them fresh. In the P99 case in people search, there are thousands of retrieved documents on a searcher machine.\footnote{There are many searcher machines, each responsible for a part of the 600M profiles.} Our solution is a two-pass ranking strategy: first apply a lightweight model without deep learning modules (the MLP layer only), then send the top few hundred documents to the deep model for reranking. After applying this change, the P99 latency is significantly reduced. \subsubsection{Experiments} \begin{table} \centering \caption{\small Offline document ranking results (NDCG@10).} \label{table:ranking-offline} \begin{tabular}{lcc} \toprule \textbf{Models} & \textbf{People Search} & \textbf{Help Center}\\ \midrule XGBoost (baseline) & - & - \\ CNN-ranking & $+3.02\%$ & $+11.56\%$\\ CNN-ranking {\footnotesize w/o handcrafted features} & $-4.52\%$ & $+11.07\%$\\ \bottomrule \end{tabular} \end{table} \begin{table} \centering \caption{\small Online document ranking results on the two searches. "Sat Click" is a people search metric: the number of satisfactory searches that include (1) connecting, messaging or following a profile, or (2) viewing a profile with dwell time > 5 seconds. "Happy Path Rate" is a help center search metric: the proportion of users who searched and clicked a document without using help center search again that day.} \label{table:ranking-online} \begin{tabular}{llc} \toprule \textbf{Search} & \textbf{Metrics} & \textbf{Percentage lift} \\ \midrule \multirow{2}{*}{People Search} & CTR@5 & $+1.13\%$\\ & Sat Click & $+0.76\%$\\ \midrule Help Center & Happy Path Rate & $+15.0\%$\\ \bottomrule \end{tabular} \end{table} The experiments are conducted on the two searches (Table \ref{table:ranking-data}). On average each query has around 10 documents. Dev and test sizes are 50k. For all applicable models, the Adam optimizer \cite{kingma2015} is used with learning rate $10^{-3}$; batch size is 256; word embedding size is 64. \subsubsection{Results} We report offline and online results in Tables \ref{table:ranking-offline} and \ref{table:ranking-online}\footnote{The online setting is slightly different in help center search.
Since there are only thousands of documents in total, the production model scores all the documents; therefore, for CNN-ranking we also adopt this setting.}, respectively. In general, the offline and online performance are consistent. The baseline is the production model trained with XGBoost. Our model, \textit{CNN-ranking}, outperforms the baseline by a large margin. To understand the impact of handcrafted features, we perform another experiment using CNN-ranking without any of these features. It turns out that the handcrafted features in people search are more powerful than those in help center search, largely because people search contains many social network features and document clickthrough features. By comparing CNN-ranking vs the baseline across the two searches, it is interesting to see that the impact of deep NLP models is significantly larger in the help center setting. This is again caused by the data genre. In help center, the queries and documents are mostly natural language sentences, such as the query "how to hide my profile updates" against the document "Sharing Profile Changes with Your Network", where CNN can capture the semantic similarity. In people search, it is more important to perform exact matching; for example, the query "facebook" should not return member profiles of people who work at "linkedin". Finally, we present the latency of the different deployment strategies in Table \ref{table:ranking-latency}. For people search, we do not observe a significant difference in online relevance metrics between two-pass ranking and all-decoding. We manually checked the ranking scores and found that the documents discarded in the first-pass ranking usually have very low scores. \begin{table} \centering \footnotesize \caption{\small P99 latency in document ranking.} \label{table:ranking-latency} \begin{tabular}{lccc} \toprule \textbf{Deployment Strategy} & \textbf{Search} & \textbf{\#Docs} & \textbf{P99 latency}\\ \midrule Two-pass ranking & People search & 100-999 & +21ms\\ All-decoding & & 1000-9999 & +55ms \\ \midrule Document pre-computing & Help center & 1000-9999 & +25ms\\ \bottomrule \end{tabular} \end{table} \subsubsection{Related Work} There are many existing works on deep NLP models for search ranking \cite{Huang2013,Shen:14,Palangi:16,guo2016,xiong2017,dai2018}. Our model adopts many designs from previous work to balance efficiency and effectiveness: multiple document fields \cite{Zamani2018} to improve document understanding, text-level interaction instead of word-level interaction \cite{Huang2013}, etc. In addition, we show that combining existing handcrafted features with deep features in an MLP layer optimizes relevance performance. For production deployment, while previous work focuses on embedding pre-computing \cite{ramanath2018, yin2016, Grbovic2018}, we demonstrate that two-pass ranking can work well with significantly fewer infrastructure changes. \subsection{Query Auto Completion} \subsubsection{Introduction} Query auto completion \cite{bar2011} is a language generation task. The input is a prefix typed by a user, and the goal is to return a list of completed queries that match the user's intent. As a search assistance component, it improves the user experience in two ways: (1) it saves user keystrokes and returns search results in less time.
(2) More importantly, it guides users to better queries and search results; \textit{e.g.}, for the prefix \textit{sof}, the query \textit{software engineer} is considered better than \textit{software developer}, since the former is a more common job title that leads to better recall. Query auto completion has a strict latency requirement, since the model needs to return results for each keystroke. \subsubsection{Approach} \label{section:qac-prod} The traditional auto completion system has two separate steps: candidate generation and candidate ranking. \textit{Candidate generation} performs a lookup from a query prefix to completed queries, which is extremely efficient. This is done by memorizing, for each query prefix, the set of completed queries that have been seen in the search log. For unseen query prefixes, a heuristic is used to generate the candidates \cite{mitra2015}. The \textit{candidate ranking} component ranks the completed queries with frequency-based counting or ML models such as XGBoost. Several features are constructed for the completed queries, the most effective one being the query frequency. In summary, both the candidate generation and candidate ranking stages are lightweight and can be finished within several milliseconds. Since query auto completion is a language generation task, an ideal model is a neural language model \cite{Mikolov2010} with beam search decoding \cite{park2017}. This approach achieves impressive relevance results, but at the cost of latency: during beam search decoding, generation and ranking are performed at the same time over many iterations (one iteration per generated token), while in traditional methods there is only one generation and one ranking step. According to previously reported work \cite{park2017}, relevance performance can be increased by $10\%$, while latency can rise to over 1 second \cite{Wang2018}. \begin{figure}[ht] \centering \includegraphics[scale=0.7]{figures/qac-model.pdf} \caption{Model architecture for query auto completion. The learning-to-rank layer is not shown.} \label{figure:qac-model} \end{figure} To reduce latency, instead of an end-to-end neural network approach, we apply deep learning only to the ranking component (Figure \ref{figure:qac-model}). An LSTM-based language model is used to assign a score to each candidate. We notice that the majority of time is spent computing the normalization constant of the word probability, since it sums over the entire vocabulary: \begin{equation} \log{P(w_i|h_i)}=\log{\frac{\exp(v_i^{\top}h_i)}{\sum_j{\exp(v_j^{\top}h_i)}}}=v_i^{\top}h_i-\log{\sum_j{\exp(v_j^{\top}h_i)}} \end{equation} Therefore, following the unnormalized language model approach \cite{sethy2015}, we approximate the log-normalization term by a constant, $\log{\sum_j{\exp(v_j^{\top}h_i)}} \approx b$, where $b$ is another parameter to estimate. The computation time reduction is summarized in Table \ref{table:qac-latency}.
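A minimal sketch of this unnormalized scoring is given below: the log-partition term is replaced by a single learned bias, so scoring a word costs one dot product instead of a $|V|$-way softmax. All names and dimensions are illustrative assumptions, not the production implementation.

\begin{verbatim}
import torch
import torch.nn as nn

class UnnormalizedLM(nn.Module):
    def __init__(self, vocab=100_000, dim=100):
        super().__init__()
        self.emb = nn.Embedding(vocab, dim)
        self.lstm = nn.LSTM(dim, dim, batch_first=True)
        self.v = nn.Embedding(vocab, dim)       # output word vectors v_i
        self.b = nn.Parameter(torch.zeros(1))   # learned constant for log Z

    def score(self, w_ids):                     # (batch, seq_len)
        h, _ = self.lstm(self.emb(w_ids))       # h_i summarizes w_1..w_i
        # log P(w_{i+1} | h_i) ~ v^T h_i - b, summed over the sequence
        logp = (self.v(w_ids[:, 1:]) * h[:, :-1]).sum(-1) - self.b
        return logp.sum(dim=1)                  # score of each candidate

lm = UnnormalizedLM()
scores = lm.score(torch.randint(0, 100_000, (100, 6)))  # 100 candidates
\end{verbatim}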
\begin{table} \centering \caption{\small P99 latency in the query auto completion task. Each prefix has 100 candidates to rank.} \label{table:qac-latency} \begin{tabular}{lc} \toprule \textbf{Models} & \textbf{Latency} \\ \midrule Baseline & - \\ Language Model (ranking only) & +61ms \\ Unnormalized LM (ranking only) & +9ms \\ \bottomrule \end{tabular} \end{table} \subsubsection{Experiments} \begin{table} \centering \small \caption{\small Offline relevance performance in the query auto completion task, measured by mean reciprocal rank (MRR@10 \cite{craswell2009}).} \label{table:qac-offline} \begin{tabular}{lccc} \toprule \textbf{Ranking Models} & \textbf{All} & \textbf{Seen prefix} & \textbf{Unseen prefix}\\ \midrule Baseline & - & - & - \\ Unnormalized LM & $+3.2\%$ & $+0.06\%$ & $+6.0\%$ \\ CLSM \cite{mitra2015} & $+2.1\%$ & $+0.06\%$ & $+3.9\%$\\ \bottomrule \end{tabular} \end{table} \begin{table} \centering \caption{\small Online metrics of query auto completion in job search. "Job Applies" is the number of job posts applied to from search. "Job Views" is the number of job posts clicked from search.} \label{table:qac-online} \begin{tabular}{lc} \toprule \textbf{Metric} & \textbf{Percentage Lift}\\ \midrule Job Applies & +1.45\% \\ Job Views & +0.43\% \\ \bottomrule \end{tabular} \end{table} The model is applied to job search query auto completion. We follow the experiment setting and the train/dev/test splitting scheme defined in \cite{mitra2015}. The LSTM has one layer with a 100-dimensional hidden state. Adagrad is used with learning rate $10^{-3}$. The vocabulary is 100k words with 100-dimensional embeddings. \subsubsection{Results} XGBoost is used as the production baseline for the candidate ranking component. Tables \ref{table:qac-offline} and \ref{table:qac-online} show the offline and online results, respectively. All deep learning models outperform the traditional methods by a large margin. More interestingly, the gain mostly comes from unseen query prefixes; for seen prefixes, the frequency-based methods already do a fairly good job. We also compare with the CLSM model \cite{mitra2015}, which formulates the problem as ranking the completed suffix given a prefix, \textit{i.e.}, the cosine similarity between the prefix and suffix is the ranking score. It performs worse than the language modeling based methods, since the task is naturally about ranking the likelihood of completed queries, while CLSM focuses on extracting n-gram patterns. \subsubsection{Related Work} Traditional query auto completion systems use a two-step framework, candidate generation and ranking \cite{Cai:16}, similar to the production baseline described in Section \ref{section:qac-prod}. The deep NLP models in previous work perform candidate generation and ranking at the same time via beam search \cite{park2017}. This neural beam search framework has been extended with personalization \cite{Jaech2018,fiorini2018,Jiang:18} and time information \cite{fiorini2018}. While impressive relevance performance is achieved, the model latency does not meet industry requirements. In this paper, we proposed an approach that effectively models the query context while meeting the latency requirement. \subsection{Query Suggestion} \subsubsection{Introduction} Query suggestion \cite{fonseca2005,cao2008} is another essential part of our search experience. Many search engines offer such a function, \textit{e.g.}, Google's "Searches related to ..." and LinkedIn's "People also search for", to assist users in seeking relevant information. \subsubsection{Approach} The production baseline is based on frequency counting of search queries.
It collects query pairs and, for each input query, sorts the suggestions by query pair frequency. Heuristics are used to make sure the query pairs are semantically related: (1) the query pairs must occur in the same session, where a session consists of queries separated by no more than 10 minutes; and either (2) the two queries must share a common word, or (3) the two queries co-occur for several distinct users. We formulate the problem as machine translation in the sequence-to-sequence (seq2seq) framework~\cite{Sutskever2014}, where the main benefit is generalization to infrequent and unseen queries. We find that the deep learning model can overfit to reformulation pairs if not handled carefully. When the training data contain query \textit{generalization} pairs, e.g. "research scientist --> scientist", the trained model degrades to mostly deleting words, since such suggestions have low perplexity. We handle this by removing these types of examples from the training data. The seq2seq model has a large latency, which can be an issue in production. In this case, we serve our model in parallel with search result ranking, which gives us plenty of time (more than 100ms) to run the seq2seq model. \subsubsection{Experiments} The seq2seq model is tested in federated search. The training/dev/test data sizes are 300M/50k/50k query pairs. For the seq2seq model, we use a small model to reduce latency (the LSTM has 2 layers with 100-dimensional hidden states). Stochastic gradient descent is used with a learning rate of 1.0. \subsubsection{Results} In our offline experiments in Table \ref{table:qs-offline}, we measure both the relative lift of MRR@10 and the coverage. The MRR of the model is computed based on the position of the gold standard query in the top-10 list, if it appears at all. The coverage indicates whether the model is able to produce any suggestion for the input. The MRR improves significantly over the baseline with the deep learning method. The coverage of the deep learning method is trivially 100\%, although in practice it is lower due to the filtering of unknown words or the occasional blacklisted query. Overall, we can see that query suggestion is an area where deep learning provides great impact over traditional methods. Online experiments (Table \ref{table:qs-online}) display evidence of an enhanced user experience, particularly for finding jobs and the overall proportion of successful LinkedIn use. \begin{table} \centering \caption{\small Offline experiments for the query suggestion task.} \label{table:qs-offline} \begin{tabular}{lcc} \toprule \textbf{Model} & \textbf{MRR@10} & \textbf{Coverage} \\ \midrule Frequency baseline & - & 67.3\% \\ Seq2Seq & +11.1\% & 100\% \\ \bottomrule \end{tabular} \end{table} \begin{table} \centering \caption{\small Query suggestion online results.} \label{table:qs-online} \begin{tabular}{lc} \toprule \textbf{Metric} & \textbf{Percentage Lift} \\ \midrule Job Views & $+0.6\%$ \\ Job Applies & $+1.0\%$ \\ \bottomrule \end{tabular} \end{table} \subsubsection{Related Work} Traditional approaches to this problem are frequency-based counting of search queries or collaborative filtering \cite{paine2007recommending}, both of which only work on previously seen searches. He et al.\ \cite{He2016} applied seq2seq to rewrite queries into document titles, which are later used as features for search ranking. For query suggestion, several session-aware models have been proposed \cite{Dehghani:2017,Ren2018} that exploit the other queries in a search session. However, these works focus on offline results.
In this paper, we outlined a practical solution for the related-search product, from query reformulation collection to model training and online deployment. Our approach is applicable to any search engine with search logs. \subsection{BERT Pre-training} \label{section:bert-pretraining} BERT \cite{devlin2019} is a pre-trained language representation model that has proven beneficial for many NLP tasks. In this paper, our goal is to pre-train a BERT model on LinkedIn data (hence the name LiBERT). The advantages of a LinkedIn-specific BERT model are: (1) better relevance for domain-specific tasks; in Google's pre-trained model, "linkedin" and many other company names are not in the vocabulary. (2) A smaller model structure for ease of deployment; our model has 6 layers and 34M parameters, compared to 12 layers and 110M parameters in Google's BERT-Base model. \subsubsection{Model Pre-training} The original BERT models were pre-trained on Wikipedia and BooksCorpus data, which are widely used datasets in language modeling. The text data in LinkedIn search systems are of a different genre. For pre-training the LiBERT models, we extracted data from different domains at LinkedIn. Table \ref{table:bert-data} shows the sources and statistics of the pre-training data. The large amount of text data from these domains should cover most NLP use cases in search products and is general enough for downstream fine-tuning tasks. The major challenge of productionizing BERT is latency. In this paper, we train a smaller LiBERT on LinkedIn data to minimize latency. We experimented with the original BERT implementation \cite{devlin2019} as well as our more lightweight LiBERT architecture. The model specification is 6 transformer layers (L=6), hidden size 512 (H=512), 8 attention heads (A=8), and 64 positional embeddings (P=64). A much smaller number of positional embeddings is used because search queries are much shorter than the sentences in classic NLP tasks. The resulting model has about 1/3 as many parameters as BERT-Base. \begin{table} \centering \caption{\small Corpus used for LiBERT pre-training.} \label{table:bert-data} \begin{tabular}{llc} \toprule \textbf{Corpus} & \textbf{Description} & \textbf{\#Words}\\ \midrule Search & Search queries & 890M \\ Member profiles & Headlines, summaries, positions & 728M \\ Job listings & Job titles and descriptions & 604M \\ Ads & Ads titles and descriptions & 637M \\ \bottomrule \end{tabular} \end{table} \subsubsection{Experiments on Fine-tuning Tasks} Fine-tuning experiments are conducted on two tasks: help center search ranking and query intent classification. For help center ranking, the deployment strategy is document embedding pre-computing. The CNN module in the classification and ranking models (Figures \ref{figure:qim-cnn} and \ref{figure:ranking-model}) is replaced by our LiBERT model. Since queries are short, we only keep the first 16 words of each query to further reduce P99 latency. In Table \ref{table:bert-offline}, the LiBERT model achieves comparable performance to BERT-Base on both tasks, even though a simpler architecture is used. This is due to the in-domain pre-training data being more relevant to the tasks. A similar trend of performance improvement is observed in the online experiments, as shown in Table \ref{table:bert-online}. We also report the P99 offline latency.
In the help center search task, recall that document pre-computing is used, and all 2,700 document embeddings are directly compared to the query. As shown in Table \ref{table:bert-latency}, LiBERT significantly reduces the computation time compared to BERT-Base. Accordingly, we deployed the LiBERT-based models to production. \begin{table} \centering \footnotesize \caption{\small LiBERT fine-tuning offline performance comparison on the query intent and help center ranking tasks.} \label{table:bert-offline} \begin{tabular}{lcccccc} \toprule \textbf{Model} & \multicolumn{4}{c}{\textbf{BERT HParams}} & \textbf{Query intent} & \textbf{Help center} \\ & $\#L$ & $\#H$ & $\#A$ & $\#P$ & Accuracy & NDCG@10 \\ \midrule CNN & - & - & - & - & - & - \\ BERT-Base & $12$ & $768$ & $12$ & $512$ & $+2.89\%$ & $+2.15\%$\\ LiBERT & $6$ & $512$ & $8$ & $64$ & $+3.28\%$ & $+2.13\%$ \\ \bottomrule \end{tabular} \end{table} \begin{table} \centering \caption{\small Online experiments of LiBERT over the baseline model (CNN) on the query intent prediction and help center ranking tasks.} \label{table:bert-online} \begin{tabular}{llc} \toprule \textbf{Task} & \textbf{Metrics} & \textbf{Percentage lift} \\ \midrule Query intent & CTR@5 & $+0.17\%$\\ & Sat Click & $+1.36\%$\\ \midrule Help center & Happy Path Rate & $+11.3\%$\\ \bottomrule \end{tabular} \end{table} \begin{table} \centering \caption{\small Offline P99 latency of LiBERT on query intent classification and help center ranking.} \label{table:bert-latency} \begin{tabular}{lcc} \toprule \textbf{Model} & \textbf{Classification} & \textbf{Ranking} \\ \midrule CNN & +0.5ms & +25ms\\ BERT-Base & +53ms & +65ms \\ LiBERT & +15ms & +44ms \\ \bottomrule \end{tabular} \end{table} \section{Lessons Learned} The previous section focused on task-specific challenges and solutions for each individual task. In this section, we go beyond task boundaries and generalize our observations into lessons. We believe these lessons are not limited to search systems, but can also be applied to other areas such as recommender systems. \subsection{When is Deep NLP Helpful?} In general, deep NLP models achieve better relevance performance than traditional methods in search tasks, and are particularly powerful in the following scenarios: \begin{itemize} \item \textbf{Language generation tasks}. Based on the offline experiments, query suggestion (Table \ref{table:qs-offline}) benefits most from deep NLP modeling. This is determined by the nature of language generation tasks: in the seq2seq framework, the hidden state summarizes the context well, and the decoder can produce related queries for any input query. \item \textbf{Data with rich paraphrasing}. In document ranking, the improvement brought by CNN is larger in help center than in people search (Tables \ref{table:ranking-offline} and \ref{table:ranking-online}), since help center has many natural language queries while people search queries are combinations of keywords. \end{itemize} \subsection{When is Deep NLP not Helpful?} There are two cases in search systems where no gain is observed from applying deep NLP models: (1) query tagging (Table \ref{table:qt-res}); (2) seen prefixes in query auto completion (Table \ref{table:qac-offline}). There are mainly two reasons for this. The first reason is that, for the particular task, the handcrafted features are powerful enough to solve most of the problem.
In query tagging, the lexicons are built from 600 million member profiles, so they give a fairly accurate estimate of the entity label. Similarly, for seen prefixes in query auto completion, the frequencies of prefix-to-completed-query pairs are collected from millions of entries in the search logs, which makes them reliable. The second reason is the data genre. Firstly, the query tagging dataset contains many people names (over 50\%), for which word embeddings do not help much. In fact, except for query intent and query tagging, the other three tasks work on datasets without people names. Secondly, the data lacks language variation, similar to what we mentioned in the previous paragraph. It is interesting that on classic NLP datasets, e.g. CoNLL'03 \cite{sang2003}, LSTM-CRF \cite{lample2016} outperforms CRF by a large margin. In those datasets, each data unit is a complete sentence, which provides context clues. For example, the headline "Jordan expels Iraqi diplomat" makes it obvious what kind of entity "Jordan" is, but it would be rare to see this in search data. \subsection{Latency is the Biggest Challenge} We found latency to be the biggest challenge in applying deep NLP models to production search systems. Dense matrix multiplication and softmax evaluation are costly, and they are often performed on every word in a text. We summarize below the practical solutions to reduce latency: \begin{itemize} \item \textbf{Algorithm redesign}. In query auto completion, instead of a fully neural approach, we only apply deep learning to candidate ranking. In addition, the unnormalized language model is applied to further reduce latency. \item \textbf{Parallel computing}. In query suggestion, the seq2seq model runs in parallel with search ranking, which leaves a buffer of more than 100ms for the seq2seq model. \item \textbf{Embedding pre-computing}. In help center search ranking, the document embeddings are pre-computed, leaving only query processing to be computed at run-time. \item \textbf{Two-pass ranking}. In people search ranking, a lightweight model is applied to handle all documents, then a deep model reranks the top ones. Compared to embedding pre-computing, two-pass ranking has less of an infrastructure requirement. \end{itemize} It is also worth noting that, depending on the production setting, computation time may not be a blocker. In query intent prediction, the BERT model is computationally heavy, but it only needs to handle the few words in a query (+15ms), so the search system can afford the additional absolute time. \subsection{How to Ensure Robustness?} Robustness proves to be a challenge in multiple tasks, since deep learning models are more likely than traditional methods to overfit the training data and over-generalize word semantics. Our solution is to manipulate the training data and reuse handcrafted features to enforce robustness. \begin{itemize} \item \textbf{Training data manipulation}. In query suggestion, we observe that with generalization pairs ("senior research scientist" --> "research scientist") in the training data, the trained seq2seq model mostly generates queries by deleting words. We address this by removing the generalization pairs from the training data. \item \textbf{Reuse handcrafted features}. In document ranking, the deep NLP model may rank topically related documents higher than keyword-matched documents; for example, in people search, professor profiles may be matched to the query "student".
This can be alleviated by incorporating the existing handcrafted features (\textit{e.g.}, keyword matching features such as cosine similarity) into the neural networks. \end{itemize} \section{Related Work} Task-specific related work is discussed in Section \ref{section:five-task}; in this section, we focus on the common text-based features and machine learning approaches. \subsection{Traditional NLP Approaches} For language understanding tasks, the text-based features are usually sparse features such as unigrams and bigrams \cite{finkel2005incorporating} (classification and sequence tagging tasks), and dense matching features between queries and documents such as BM25 \cite{robertson:09} (ranking tasks). In terms of algorithms, logistic regression, SVMs, neural networks or decision trees \cite{cortes1995,zhang2000neural,burges2010} are used for classification and ranking; the latter three introduce feature non-linearity but are usually not applied to sparse text features. For language generation tasks, there are usually two separate steps: candidate generation and candidate ranking \cite{bar2011,fonseca2005,cao2008,Shokouhi2013,li2006}. The generation is usually done by string matching, without explicitly modeling word semantics; the ranking is usually similar to document ranking. \subsection{Deep NLP Models} CNN/LSTM/Transformer networks \cite{Lecun1995,vaswani2017,Hochreiter1997} are used to generate contextualized embeddings for words and sentences. These embeddings can replace the sparse textual features such as unigrams/bigrams (in classification and sequence tagging) and the textual matching features (in ranking). For language generation tasks, a language model with beam search decoding can perform generation and ranking at the same time. Attention \cite{bahdanau2014neural} is usually used in the sequence-to-sequence framework \cite{Sutskever2014}. Recently, pre-trained language models \cite{peters2018,radford2018,devlin2019} have shown impressive results on many NLP tasks by exploiting unsupervised data. \subsection{Deep NLP for Search} As mentioned in Section \ref{section:five-task}, deep NLP models have been applied to most search tasks \cite{hashemi2016,Huang2013,mitra2015,He2016,guo2016,xiong2017,dai2018}. Additional information has been incorporated into the deep NLP models, such as personalization \cite{Jaech2018,fiorini2018}, session awareness \cite{Dehghani:2017,Ren2018}, etc. In contrast, the focus of this paper is overcoming the challenges of productionizing deep NLP models: latency, robustness and effectiveness \cite{mitra2018}. All the resulting models except query tagging have been deployed in LinkedIn's commercial search engines. For example, an efficient BERT-based ranking model is designed to enable document pre-computing, which is crucial for productionization; in contrast, the existing BERT-based ranking models \cite{dai2019,macavaney2019,nogueira2019} do not allow document pre-computing. Some existing efforts report online experiments; however, most of these models are for the document ranking task \cite{ramanath2018,yin2016,Grbovic2018,li2019}. In this paper, we cover a much broader set of tasks, spanning query understanding and language generation. \section{Conclusions} Industry provides its own set of challenges, different in key ways from classic NLP tasks studied in academia: (1) the amount and type of data, (2) constraints on latency or infrastructure, and (3) the highly optimized production baselines.
This paper focused on how to apply deep NLP models to five representative search tasks in production, illuminating the challenges along the way. All resulting models except query tagging are deployed in LinkedIn's search engines. More importantly, we also summarized the lessons learned across the tasks. We listed the factors to consider when estimating the potential improvement a deep NLP model can bring to a new task. We showed that robustness and overfitting can typically be handled with careful data analysis. Latency, which is almost always a concern, can be addressed creatively in many ways, such as architecture simplification, two-pass systems, embedding pre-computation, or parallel computation. \bibliographystyle{ACM-Reference-Format}
\section{Introduction} If $A$ is an $m\times n$ matrix over a field $F$, then we denote by $\mathfrak{rs}(A)$ the subspace of $F^{n}$ generated by the row vectors of $A$; it is called the row space of $A$. We denote the set of $n\times n$ row echelon forms over $F$ by $\mathscr{E}_{F}(n)$. Also, $\mathscr{E}_{F}(d,n)$ denotes the set of $n\times n$ row echelon forms over $F$ of rank $d$. For the definition of row echelon form (row-reduced echelon matrix), see \cite[Chap. 1, p. 11]{Hoffman}. \\ The set of $d$-dimensional subspaces of an $n$-dimensional vector space over a field $F$ is denoted by $\Gr_{F}(d,n)$, or simply by $\Gr(d,n)$ when there is no confusion about the base field, and it is called the Grassmannian. Grassmannians have rich mathematical structures with wide applications and have been studied in the literature over the years from algebraic, topological and geometric points of view; see e.g. \cite{Fedorov}, \cite{Ghorpade}, \cite{Kleiman} and \cite{Neretin}. \\ If $F$ is a finite field of size $q$, then the well-known Gaussian formula computes the cardinality of the Grassmannian $\Gr(d,n)$ as a rational expression:\\ $$|\Gr(d,n)|= \frac{(q^{n}-1)(q^{n}-q)\cdots(q^{n}-q^{d-1})}{(q^{d}-1)(q^{d}-q)\cdots(q^{d}-q^{d-1})} .$$ \\ In this paper we give a new formula for the size of the Grassmannian. In fact, Theorem \ref{Theorem I} and Theorem \ref{formula} are the main contributions of this paper. Theorem \ref{Theorem I} establishes a one-to-one correspondence between the Grassmannian $\Gr_{F}(d,n)$ and $\mathscr{E}_{F}(d,n)$. In particular, if the base field is finite then we obtain a polynomial-type formula for the cardinality of the Grassmannian $\Gr(d,n)$; see Theorem \ref{formula}. \\ \section{Main Results} \begin{theorem}\label{Theorem I} The map $R\rightsquigarrow \mathfrak{rs}(R)$ is a bijection from $\mathscr{E}_{F}(n)$ onto $\bigcup\limits_{d=0}^{n}\Gr(d,n)$. \\ \end{theorem} {\bf Proof.} Without loss of generality, we may work with the vector space $F^{n}$. First we show that the map is surjective. If $W\in\bigcup\limits_{d=0}^{n}\Gr(d,n)$ then $\dm W= d$ for some $0\leq d\leq n$. Let $\mathcal{B}=\{\alpha_{1},...,\alpha_{d}\}$ be an ordered basis of $W$, where $\alpha_{i}\in F^{n}$. Let $A$ be the $n\times n$ matrix over $F$ whose first $d$ rows are the vectors of $\mathcal{B}$ and whose remaining $n-d$ rows are zero. It is clear that $\mathfrak{rs}(A)=W$. It is well known that there exists an $n\times n$ row echelon form $R$ over $F$ which is row-equivalent to $A$. By \cite[Chap. 2, Theorem 9]{Hoffman}, row-equivalent matrices have the same row spaces. Thus $\mathfrak{rs}(R)=\mathfrak{rs}(A)=W$. Hence the map is surjective. For injectivity, suppose $\mathfrak{rs}(R)=\mathfrak{rs}(R')$ for some $R, R'\in\mathscr{E}_{F}(n)$. We shall prove that $R=R'$. Let $\rho_{1},...,\rho_{d}$ (resp. $\rho'_{1},...,\rho'_{d'}$) be the non-zero row vectors of $R$ (resp. $R'$). Assume that the leading non-zero entry of $\rho_{i}$ (resp. $\rho'_{j}$) occurs in column $k_{i}$ (resp. $k'_{j}$). We have $\dm\big(\mathfrak{rs}(R)\big)=\dm\big(\mathfrak{rs}(R')\big)$, and by \cite[Chap. 2, Theorem 10]{Hoffman}, $d=\dm\big(\mathfrak{rs}(R)\big)$ and $d'=\dm\big(\mathfrak{rs}(R')\big)$; therefore $d=d'$. Now, to prove $R=R'$ it suffices to show that $\rho_{i}=\rho'_{i}$ for all $i$ with $1\leq i\leq d$. If $\beta=(b_{1},...,b_{n})\in\mathfrak{rs}(R)$ then there exist scalars $c_{1},...,c_{d}\in F$ such that $\beta=\sum\limits_{i=1}^{d}c_{i}\rho_{i}$. We have $c_{\ell}=b_{k_{\ell}}$ for all $\ell$.
Indeed, if $\rho_{i}=(R_{i1},...,R_{in})$ then from $\beta=(b_{1},...,b_{n})=\sum\limits_{i=1}^{d}c_{i}\rho_{i}$ we get $b_{k_{\ell}}=\sum\limits_{i=1}^{d}c_{i}R_{i k_{\ell}}=\sum\limits_{i=1}^{d}c_{i}\delta_{i\ell}=c_{\ell}$. Therefore: \begin{equation}\label{equ I} \beta=\sum\limits_{i=1}^{d}b_{k_{i}}\rho_{i}. \end{equation} Expression \eqref{equ I} yields that if $\beta\neq 0$ then the index of the first non-zero component of $\beta$ belongs to the set $\{k_{1},...,k_{d}\}$. Indeed, at least one of $b_{k_{1}},...,b_{k_{d}}$ is non-zero. Suppose $b_{k_{t}}$ is the first non-zero one among them, i.e., $k_{t}$ is the least index among $\{k_{1},...,k_{d}\}$ for which $b_{k_{t}}\neq 0$. Therefore we can write: \begin{equation}\label{equ 2} \beta=\sum\limits_{i=t}^{d}b_{k_{i}}\rho_{i}. \end{equation} We prove that $b_{j}=0$ for all $j$ with $j< k_{t}$. From \eqref{equ 2}, we have $b_{j}=\sum\limits_{i=t}^{d}b_{k_{i}}R_{ij}$. Since $j<k_{t}<k_{t+1}<...<k_{d}$, the definition of row-reduced echelon matrix gives $R_{ij}=0$ for all $i$ with $t\leq i\leq d$. Hence $b_{j}=0$. Therefore $b_{k_{t}}$ is the first non-zero component of $\beta$. On the other hand, since $\rho'_{j}=(R'_{j1},...,R'_{jn})\in\mathfrak{rs}(R')=\mathfrak{rs}(R)$, expression \eqref{equ I} implies that $\rho'_{j}=\sum\limits_{i=1}^{d}R'_{jk_{i}}\rho_{i}$ for all $j$ with $1\leq j\leq d$. By the definition of row-reduced echelon matrix, the first non-zero component of $\rho'_{j}$ occurs in column $k'_{j}$, i.e., $R'_{jk'_{j}}=1$. Now, using what we have just proved about the first non-zero component, we have $k'_{j}\in\{k_{1},...,k_{d}\}$ for all $j$ with $1\leq j\leq d$. Also, since $k_{1}<k_{2}<...<k_{d}$ (resp. $k'_{1}<k'_{2}<...<k'_{d}$), it follows that $k'_{j}=k_{j}$ for all $j$ with $1\leq j\leq d$. Finally, we have $\rho'_{j}=\sum\limits_{i=1}^{d}R'_{jk_{i}}\rho_{i}= \sum\limits_{i=1}^{d}R'_{jk'_{i}}\rho_{i}= \sum\limits_{i=1}^{d}\delta_{ji}\rho_{i}=\rho_{j}$ for all $j$. Therefore $R=R'$. $\Box$ \\ \begin{corollary}\label{Corollary I} There exists a one-to-one correspondence between the Grassmannian $\Gr_{F}(d,n)$ and $\mathscr{E}_{F}(d,n)$. \\ \end{corollary} {\bf Proof.} It is an immediate consequence of Theorem \ref{Theorem I} and \cite[Chap. 2, Theorem 10]{Hoffman}. $\Box$ \\ In Theorem \ref{formula}, by $S^{(d)}$ we mean the set of all $(s_{1},...,s_{d})\in\mathbb{N}^{d}$ such that $1\leq s_{1}\leq n-d+1$ and $s_{i-1}< s_{i}\leq n-d+i$ for all $i$ with $2\leq i\leq d$. \\ \begin{theorem}\label{formula} If $F$ is a finite field with $q$ elements, then: $$|\Gr_{F}(d,n)|=\sum\limits_{(s_{1},..., s_{d})\in S^{(d)}}q^{d(n-d)+\frac{d(d+1)}{2}-\sum\limits_{i=1}^{d}s_{i}}.$$ \\ \end{theorem} {\bf Proof.} For each $\mathbf{s}=(s_{1},...,s_{d})\in S^{(d)}$, consider the subset $\mathcal{R}_{\mathbf{s}}\subseteq\mathscr{E}_{F}(d,n)$ consisting of all $R\in\mathscr{E}_{F}(d,n)$ such that for each $1\leq i\leq d$ the leading non-zero entry of the $i$-th row of $R$ occurs in column $s_{i}$. Clearly the sets $\mathcal{R}_{\mathbf{s}}$ are pairwise disjoint and $\mathscr{E}_{F}(d,n)=\bigcup\limits_{\mathbf{s}\in S^{(d)}}\mathcal{R}_{\mathbf{s}}$. Therefore, by Corollary \ref{Corollary I}, we have $|\Gr(d,n)|=\sum\limits_{\mathbf{s}\in S^{(d)}}|\mathcal{R}_{\mathbf{s}}|$.
It suffices to show that: $$|\mathcal{R}_{\mathbf{s}}|= q^{d(n-d)+\frac{d(d+1)}{2}-\sum\limits_{i=1}^{d}s_{i}}.$$ If $R\in\mathcal{R}_{\mathbf{s}}$ then, for each $1\leq i\leq d$, in the $i$-th row of $R$ exactly $d$ entries are determined by the pivot columns to be $0$ or $1$ (the entry $1$ occurs in column $s_{i}$, and zero entries occur in the columns $s_{j}$, $j\neq i$). Also, in the $i$-th row of $R$, by the definition of row echelon form, we have $R_{ij}=0$ for each $j < s_{i}$; the number of these entries is $s_{i}-1$, of which $i-1$ (those in the pivot columns $s_{1},...,s_{i-1}$) have already been counted above. Therefore, in the $i$-th row of $R$, the total number of entries which are determined to be $0$ or $1$ is $d+(s_{i}-1)-(i-1)=d+s_{i}-i$. Thus in the $i$-th row of $R$, $n-d-(s_{i}-i)$ entries are arbitrary scalars in $F$, and so in the matrix $R$, $\sum\limits_{i=1}^{d}\big(n-d-(s_{i}-i)\big) =d(n-d)+\frac{d(d+1)}{2}-\sum\limits_{i=1}^{d}s_{i}$ entries are arbitrary scalars. Therefore $|\mathcal{R}_{\mathbf{s}}|= q^{d(n-d)+\frac{d(d+1)}{2}-\sum\limits_{i=1}^{d}s_{i}}$. $\Box$ \\ \begin{remark} By Theorem \ref{formula}, we have $|\Gr(1,n)|=\sum\limits_{i=0}^{n-1}q^{i}=q^{n-1}+q^{n-2}+...+1$ and $|\Gr(0,n)|=|\Gr(n,n)|=1$. More generally, $|\Gr(d,n)|=|\Gr(n-d,n)|$. \\ \end{remark} \begin{remark} In order to express the coefficients of the polynomial-type formula of Theorem \ref{formula} more precisely, we proceed as follows. We consider the equivalence relation $\sim$ on $S^{(d)}$ given by $(s_{1},...,s_{d})\sim(s'_{1},...,s'_{d})$ if and only if $\sum\limits_{j=1}^{d}s_{j}=\sum\limits_{j=1}^{d}s'_{j}$. For $(s_{1},...,s_{d})\in S^{(d)}$, let $[s_{1},...,s_{d}]$ denote its equivalence class. Then Theorem \ref{formula} yields that: $$|\Gr(d,n)|=\sum\limits_{\ell=0}^{d(n-d)}c_{\ell}q^{d(n-d)-\ell}= c_{0}q^{d(n-d)}+c_{1}q^{d(n-d)-1}+...+c_{d(n-d)}$$ where for each $0\leq\ell\leq d(n-d)$, $c_{\ell}$ is the cardinality of the class $[s_{1},...,s_{d}]$ whenever $\ell+\frac{d(d+1)}{2}=\sum\limits_{k=1}^{d}s_{k}$ for some $(s_{1},...,s_{d})\in S^{(d)}$. By induction on $\ell$, one can show that each $c_{\ell}\geq 1$. For $\ell=0$, take $(s_{1}, s_{2},...,s_{d})=(1, 2,...,d)$, which belongs to $S^{(d)}$ and satisfies $0+\frac{d(d+1)}{2}=\sum\limits_{i=1}^{d}s_{i}$; thus $c_{0}=|[1,2,...,d]|\geq 1$. Now let $\ell\geq 1$. By the induction hypothesis, there exists $(s_{1},...,s_{d})\in S^{(d)}$ such that $\ell-1=\sum\limits_{i=1}^{d}(s_{i}-i)$. Since $\ell-1<d(n-d)$, there exists some $1\leq j\leq d$ with $s_{j}<n-d+j$. Take $j_{0} :=\max\{j : 1\leq j\leq d,\ s_{j}< n-d+j\}$. Now set $s'_{i}=s_{i}$ for each $i\neq j_{0}$ and $s'_{j_{0}}=s_{j_{0}}+1$. Then $(s'_{1},...,s'_{d})\in S^{(d)}$ and $\ell=\sum\limits_{i=1}^{d}(s'_{i}-i)$. Therefore $c_{\ell}=|[s'_{1},...,s'_{d}]|\geq 1$. Furthermore, it is easy to see that $c_{0}=c_{1}=c_{d(n-d)-1}=c_{d(n-d)}=1$. Therefore $|\Gr(d,n)|=q^{d(n-d)}+q^{d(n-d)-1}+\mathbf{c_{2}}q^{d(n-d)-2}+ \mathbf{c_{3}}q^{d(n-d)-3}+...+\mathbf{c_{d(n-d)-2}}q^{2}+q+1$. \\ We conclude this paper by proposing the following question. Could one describe the coefficients $\mathbf{c_{2}}, \mathbf{c_{3}},...,\mathbf{c_{d(n-d)-2}}$ appearing in the above formula more precisely than we have done here? \\ \end{remark}
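For illustration, take $d=2$ and $n=4$: then $S^{(2)}=\{(1,2),(1,3),(1,4),(2,3),(2,4),(3,4)\}$ and Theorem \ref{formula} gives $|\Gr(2,4)|=q^{4}+q^{3}+2q^{2}+q+1$, in agreement with the Gaussian formula. As a numerical sanity check (not part of the proofs), the following short Python script compares the formula of Theorem \ref{formula} with the classical rational formula for small values of $d$, $n$ and $q$; it uses the fact that $S^{(d)}$ is exactly the set of strictly increasing $d$-tuples in $\{1,...,n\}$.

\begin{verbatim}
from itertools import combinations

def gr_theorem(d, n, q):
    # Formula of Theorem 2: sum over pivot positions in S^(d).
    total = 0
    for s in combinations(range(1, n + 1), d):
        total += q ** (d * (n - d) + d * (d + 1) // 2 - sum(s))
    return total

def gr_gauss(d, n, q):
    # Classical Gaussian formula for |Gr(d, n)| over F_q.
    num = den = 1
    for i in range(d):
        num *= q ** n - q ** i
        den *= q ** d - q ** i
    return num // den

for n in range(1, 8):
    for d in range(n + 1):
        for q in (2, 3, 4, 5):
            assert gr_theorem(d, n, q) == gr_gauss(d, n, q)
print("Theorem 2 agrees with the Gaussian formula on all tested cases.")
\end{verbatim}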